i.MX RT Crossover MCUs Knowledge Base

As we know, the RT series MCUs support XIP (Execute In Place) mode. Because it saves pins, serial NOR flash is the most commonly used boot device, and the FlexSPI module can efficiently fetch code and data from it for the Cortex-M7 to execute. The fetch is implemented with the Quad IO Fast Read command while the serial NOR flash works in SDR (Single Data transfer Rate) mode: the flash receives data on the SCLK rising edge and transmits data on the SCLK falling edge. Compared to SDR mode, DDR (Dual Data transfer Rate) mode has a higher throughput capacity. Can it deliver better XIP performance, and what must be done to make the serial NOR flash work in DDR mode?

SDR & DDR mode

SDR mode: In SDR (Single Data transfer Rate) mode, data is clocked on only one edge of the clock (either the rising or the falling edge). This means that for SDR to transmit data at X Mbps, the clock bit rate needs to be 2X Mbps.

DDR mode: In DDR (Dual Data transfer Rate) mode, also known as DTR (Dual Transfer Rate) mode, data is transferred on both the rising and falling edges of the clock. Transmitting data at X Mbps therefore only requires a clock bit rate of X Mbps, doubling the bandwidth (as Fig 1 shows).

Fig 1

Enable DDR mode

The steps below show how to make the i.MX RT1060 boot from QSPI flash running in DDR mode. Note: the board is the MIMXRT1060-EVK and the IDE is MCUXpresso IDE.

Open a hello_world project as the template, then modify the FDCB (Flash Device Configuration Block):

a) Set the controllerMiscOption parameter to support the DDR read command.
b) Set the serial flash frequency to 60 MHz.
c) Parse the DDR read command into a command sequence. The following table shows a template command sequence for the DDR Quad IO FAST READ instruction; it almost matches the FRQDTR (Fast Read Quad IO DTR) sequence of the IS25WP064 (as Fig 2 shows).

Fig 2 FRQDTR Sequence

d) Adjust the dummy cycles. The dummy cycles must match the chosen serial clock frequency, and the default number of dummy cycles for the FRQDTR sequence command is 6 (as the table below shows). However, when the serial clock frequency is 60 MHz, the dummy cycles should be changed to 4 (as the table below shows). So the [P6:P3] bits of the Read Register (as the table below shows) need to be configured by manually adding a SET READ PARAMETERS command sequence (as Fig 3 shows) to the FDCB.

Fig 3 SET READ PARAMETERS command sequence

Furthermore, in DDR mode the SCLK cycle is double the serial root clock cycle. The operand value should be set to 2*N, 2*N-1, or 2*N+1, depending on how the dummy cycles are defined in the device datasheet. In the end, we get an adjusted FDCB like the one below.
// Set dummy cycles
#define FLASH_DUMMY_CYCLES 8
// Set Read Register command sequence's index in the LUT table
#define CMD_LUT_SEQ_IDX_SET_READ_PARAM 7
// Read, Read Status, Write Enable command sequences' indexes in the LUT table
#define CMD_LUT_SEQ_IDX_READ 0
#define CMD_LUT_SEQ_IDX_READSTATUS 1
#define CMD_LUT_SEQ_IDX_WRITEENABLE 3

const flexspi_nor_config_t qspiflash_config = {
    .memConfig =
        {
            .tag              = FLEXSPI_CFG_BLK_TAG,
            .version          = FLEXSPI_CFG_BLK_VERSION,
            .readSampleClkSrc = kFlexSPIReadSampleClk_LoopbackFromDqsPad,
            .csHoldTime       = 3u,
            .csSetupTime      = 3u,
            // Enable DDR mode
            .controllerMiscOption = kFlexSpiMiscOffset_DdrModeEnable | kFlexSpiMiscOffset_SafeConfigFreqEnable,
            .sflashPadType        = kSerialFlash_4Pads,
            //.serialClkFreq      = kFlexSpiSerialClk_100MHz,
            .serialClkFreq        = kFlexSpiSerialClk_60MHz,
            .sflashA1Size         = 8u * 1024u * 1024u,
            // Enable flash register configuration
            .configCmdEnable      = 1u,
            .configModeType[0]    = kDeviceConfigCmdType_Generic,
            .configCmdSeqs[0] =
                {
                    .seqNum   = 1,
                    .seqId    = CMD_LUT_SEQ_IDX_SET_READ_PARAM,
                    .reserved = 0,
                },
            .lookupTable =
                {
                    // Read LUTs
                    [4 * CMD_LUT_SEQ_IDX_READ] =
                        FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xED, RADDR_DDR, FLEXSPI_4PAD, 0x18),
                    // The MODE8_DDR subsequence costs 2 cycles that are part of the whole dummy cycles
                    [4 * CMD_LUT_SEQ_IDX_READ + 1] =
                        FLEXSPI_LUT_SEQ(MODE8_DDR, FLEXSPI_4PAD, 0x00, DUMMY_DDR, FLEXSPI_4PAD, FLASH_DUMMY_CYCLES - 2),
                    [4 * CMD_LUT_SEQ_IDX_READ + 2] =
                        FLEXSPI_LUT_SEQ(READ_DDR, FLEXSPI_4PAD, 0x04, STOP, FLEXSPI_1PAD, 0x00),
                    // READ STATUS REGISTER
                    [4 * CMD_LUT_SEQ_IDX_READSTATUS] =
                        FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x05, READ_SDR, FLEXSPI_1PAD, 0x01),
                    [4 * CMD_LUT_SEQ_IDX_READSTATUS + 1] =
                        FLEXSPI_LUT_SEQ(STOP, FLEXSPI_1PAD, 0x00, 0, 0, 0),
                    // WRITE ENABLE
                    [4 * CMD_LUT_SEQ_IDX_WRITEENABLE] =
                        FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x06, STOP, FLEXSPI_1PAD, 0x00),
                    // Set Read Register
                    [4 * CMD_LUT_SEQ_IDX_SET_READ_PARAM] =
                        FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x63, WRITE_SDR, FLEXSPI_1PAD, 0x01),
                    [4 * CMD_LUT_SEQ_IDX_SET_READ_PARAM + 1] =
                        FLEXSPI_LUT_SEQ(STOP, FLEXSPI_1PAD, 0x00, 0, 0, 0),
                },
        },
    .pageSize           = 256u,
    .sectorSize         = 4u * 1024u,
    .blockSize          = 64u * 1024u,
    .isUniformBlockSize = false,
};

Is DDR mode really better?

The RT1060 datasheet gives the maximum FlexSPI operating frequencies listed in the table below. The MIMXRT1060-EVK's onboard QSPI flash is an IS25WP064AJBLE, which does not provide a DQS pin, so setting MCR0.RXCLKSRC=1 (internal dummy read strobe, looped back from the DQS pad) is the most optimized option available.

Operation mode | RXCLKSRC=0 | RXCLKSRC=1 | RXCLKSRC=3
SDR            | 60 MHz     | 133 MHz    | 166 MHz
DDR            | 30 MHz     | 66 MHz     | 166 MHz

In other words, this QSPI flash can run at up to 133 MHz in SDR mode versus 66 MHz in DDR mode. From the perspective of throughput capacity, the two are almost the same, as the quick calculation below shows.
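A back-of-the-envelope check of the raw read bandwidth on the 4-bit-wide bus (ignoring command and dummy-cycle overhead):

SDR: 133 MHz x 4 bits per clock edge x 1 edge  = 532 Mbit/s
DDR:  66 MHz x 4 bits per clock edge x 2 edges = 528 Mbit/s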
It seems DDR mode may not be a better option for the IS25WP064AJBLE; the following experiments validate this assumption.

Experiment

mbedtls_benchmark

I use mbedtls_benchmark as the first test demo and run it under three conditions: 100 MHz SDR mode, 133 MHz SDR mode, and 66 MHz DDR mode. From the corresponding printouts (shown below) I built a comparison table and marked the worst performance for each item among the three conditions, as Fig 4 shows.

SDR mode at 100 MHz:

FlexSPI clock source is 3, FlexSPI Div is 6, PllPfd2Clk is 720000000
mbedTLS version 2.16.6
fsys=600000000
Using following implementations:
SHA: DCP HW accelerated
AES: DCP HW accelerated
AES GCM: Software implementation
DES: Software implementation
Asymmetric cryptography: Software implementation

MD5                      : 18139.63 KB/s,  27.10 cycles/byte
SHA-1                    : 44495.64 KB/s,  12.52 cycles/byte
SHA-256                  : 47766.54 KB/s,  11.61 cycles/byte
SHA-512                  :  2190.11 KB/s, 267.88 cycles/byte
3DES                     :  1263.01 KB/s, 462.49 cycles/byte
DES                      :  2962.18 KB/s, 196.33 cycles/byte
AES-CBC-128              : 52883.94 KB/s,  10.45 cycles/byte
AES-GCM-128              :  1755.38 KB/s, 329.33 cycles/byte
AES-CCM-128              :  2081.99 KB/s, 279.72 cycles/byte
CTR_DRBG (NOPR)          :  5897.16 KB/s,  98.15 cycles/byte
CTR_DRBG (PR)            :  4489.58 KB/s, 129.72 cycles/byte
HMAC_DRBG SHA-1 (NOPR)   :  1297.53 KB/s, 448.03 cycles/byte
HMAC_DRBG SHA-1 (PR)     :  1205.51 KB/s, 486.04 cycles/byte
HMAC_DRBG SHA-256 (NOPR) :  1786.18 KB/s, 327.70 cycles/byte
HMAC_DRBG SHA-256 (PR)   :  1779.52 KB/s, 328.93 cycles/byte
RSA-1024                 :  202.33 public/s
RSA-1024                 :    7.00 private/s
DHE-2048                 :    0.40 handshake/s
DH-2048                  :    0.40 handshake/s
ECDSA-secp256r1          :    9.00 sign/s
ECDSA-secp256r1          :    4.67 verify/s
ECDHE-secp256r1          :    5.00 handshake/s
ECDH-secp256r1           :    9.33 handshake/s

DDR mode at 66 MHz:

FlexSPI clock source is 2, FlexSPI Div is 5, PllPfd2Clk is 396000000
mbedTLS version 2.16.6
fsys=600000000
Using following implementations:
SHA: DCP HW accelerated
AES: DCP HW accelerated
AES GCM: Software implementation
DES: Software implementation
Asymmetric cryptography: Software implementation

MD5                      : 16047.13 KB/s,  27.12 cycles/byte
SHA-1                    : 44504.08 KB/s,  12.54 cycles/byte
SHA-256                  : 47742.88 KB/s,  11.62 cycles/byte
SHA-512                  :  2187.57 KB/s, 267.18 cycles/byte
3DES                     :  1262.66 KB/s, 462.59 cycles/byte
DES                      :  2786.81 KB/s, 196.44 cycles/byte
AES-CBC-128              : 52807.92 KB/s,  10.47 cycles/byte
AES-GCM-128              :  1311.15 KB/s, 446.53 cycles/byte
AES-CCM-128              :  2088.84 KB/s, 281.08 cycles/byte
CTR_DRBG (NOPR)          :  5966.92 KB/s,  97.55 cycles/byte
CTR_DRBG (PR)            :  4413.15 KB/s, 130.42 cycles/byte
HMAC_DRBG SHA-1 (NOPR)   :  1291.64 KB/s, 449.47 cycles/byte
HMAC_DRBG SHA-1 (PR)     :  1202.41 KB/s, 487.05 cycles/byte
HMAC_DRBG SHA-256 (NOPR) :  1748.38 KB/s, 328.16 cycles/byte
HMAC_DRBG SHA-256 (PR)   :  1691.74 KB/s, 329.78 cycles/byte
RSA-1024                 :  201.67 public/s
RSA-1024                 :    7.00 private/s
DHE-2048                 :    0.40 handshake/s
DH-2048                  :    0.40 handshake/s
ECDSA-secp256r1          :    8.67 sign/s
ECDSA-secp256r1          :    4.67 verify/s
ECDHE-secp256r1          :    4.67 handshake/s
ECDH-secp256r1           :    9.00 handshake/s

Fig 4 Performance comparison

We can see that most of the implementation items hit their worst performance when the QSPI flash works in DDR mode at 66 MHz.

Coremark demo

The second demo runs CoreMark under the same three conditions; the results are shown below.

SDR mode at 100 MHz:

FlexSPI clock source is 3, FlexSPI Div is 6, PLL3 PFD0 is 720000000
2K performance run parameters for coremark.
CoreMark Size    : 666
Total ticks      : 391889200
Total time (secs): 16.328717
Iterations/Sec   : 2449.671999
Iterations       : 40000
Compiler version : MCUXpresso IDE v11.3.1
Compiler flags   : Optimization most (-O3)
Memory location  : STACK
seedcrc          : 0xe9f5
[0]crclist       : 0xe714
[0]crcmatrix     : 0x1fd7
[0]crcstate      : 0x8e3a
[0]crcfinal      : 0x25b5
Correct operation validated. See readme.txt for run and reporting rules.
CoreMark 1.0 : 2449.671999 / MCUXpresso IDE v11.3.1 Optimization most (-O3) / STACK

SDR mode at 133 MHz:
FlexSPI clock source is 3, FlexSPI Div is 4, PLL3 PFD0 is 664615368
2K performance run parameters for coremark.
CoreMark Size    : 666
Total ticks      : 391888682
Total time (secs): 16.328695
Iterations/Sec   : 2449.675237
Iterations       : 40000
Compiler version : MCUXpresso IDE v11.3.1
Compiler flags   : Optimization most (-O3)
Memory location  : STACK
seedcrc          : 0xe9f5
[0]crclist       : 0xe714
[0]crcmatrix     : 0x1fd7
[0]crcstate      : 0x8e3a
[0]crcfinal      : 0x25b5
Correct operation validated. See readme.txt for run and reporting rules.
CoreMark 1.0 : 2449.675237 / MCUXpresso IDE v11.3.1 Optimization most (-O3) / STACK

DDR mode at 66 MHz:

FlexSPI clock source is 2, FlexSPI Div is 5, PLL3 PFD0 is 396000000
2K performance run parameters for coremark.
CoreMark Size    : 666
Total ticks      : 391890772
Total time (secs): 16.328782
Iterations/Sec   : 2449.662173
Iterations       : 40000
Compiler version : MCUXpresso IDE v11.3.1
Compiler flags   : Optimization most (-O3)
Memory location  : STACK
seedcrc          : 0xe9f5
[0]crclist       : 0xe714
[0]crcmatrix     : 0x1fd7
[0]crcstate      : 0x8e3a
[0]crcfinal      : 0x25b5
Correct operation validated. See readme.txt for run and reporting rules.
CoreMark 1.0 : 2449.662173 / MCUXpresso IDE v11.3.1 Optimization most (-O3) / STACK

Comparing the CoreMark scores, the lowest score occurs when the QSPI flash works in DDR mode at 66 MHz; however, the three scores are actually very close. From these two tests we can conclude that DDR mode may not be the better option, at least for the i.MX RT10xx series MCUs.
Note: for similar EVKs, see: Using J-Link with MIMXRT1060-EVKB or MIMXRT1040-EVK; Using J-Link with MIMXRT1170-EVKB; Using J-Link with MIMXRT1160-EVK or MIMXRT1170-EVK.

This article provides details on using a J-Link debug probe with either of these EVKs. There are two options: the onboard debug circuit can be updated with Segger J-Link firmware, or an external J-Link debug probe can be attached to the EVK. Using the onboard debug circuit is convenient because no other debug probe is required. However, the onboard debug circuit will no longer power the EVK once it is updated with the J-Link firmware. Application note AN13206 has more details on this, along with a comparison of the firmware options for the debug circuit. This article details the steps to use either J-Link option.

Using an external J-Link debug probe

Segger offers several J-Link probe options. To use one of these probes with these EVKs, configure the EVK with these settings:

1. Remove jumpers J47 and J48 to disconnect the SWD signals from the onboard debug circuit. These jumpers are installed by default.
2. Use the default power selection on J1, with pins 5-6 shorted.
3. Connect the J-Link probe to J21, the 20-pin dual-row 0.1" header.
4. Power the EVK with one of the power supply options. Typically USB connector J41 is used to power the board; it also provides a UART/USB bridge through the onboard debug circuit.

Using the onboard debug circuit with J-Link firmware

1. Follow application note AN13206 to program the J-Link firmware into the EVK.
2. Install jumpers J47 and J48 to connect the SWD signals to the onboard debug circuit. These jumpers are installed by default.
3. Plug a USB cable into J41. This provides the connection for the J-Link debugger and the UART/USB bridge. However, with the J-Link firmware, J41 no longer powers the EVK.
4. Power the EVK from another source. Here we will use another USB port: move the jumper on J1 to short pins 3-4 (the default shorts pins 5-6).
5. Connect a second USB cable to J9 to power the EVK. The green LED next to J1 will be lit when the EVK is properly powered.
Created by: jeremyzhou

Introduction

Conventional Cortex-M core-based MCUs generally have built-in parallel NOR flash, which hangs directly on the Cortex-M core's high-performance AHB bus. If a well-known IDE supports the MCU, it integrates the corresponding flash driver algorithm, which enables the developer to program and debug the MCU in the IDE. However, the i.MX RT series MCUs don't contain internal flash, so how do developers debug these MCUs with online XIP (eXecute-In-Place)? Don't worry: the i.MX RT can run XIP from both external parallel NOR and serial NOR flash. Because it saves pins, serial NOR flash is most commonly used, and FlexSPI supports the XIP feature, which makes online debugging available. This article introduces the mechanism of debugging external serial NOR flash with an RT MCU and illustrates the steps to modify a MCUXpresso flash driver algorithm.

CoreSight technology

The i.MX RT series MCUs are based on the Cortex-M core. CoreSight is a debugging architecture launched by ARM in 2004; it is part of the core licensing and supports the debug and trace features of Cortex-M core-based MCUs. CoreSight is very powerful and contains many debugging components (i.e., various protocols). The following figure, taken from the CoreSight Technical Introduction manual, shows the connections between the various debugging components under the CoreSight architecture.

Fig 1 CoreSight

This article does not aim to introduce CoreSight in depth. For our purposes, we only need to know that CoreSight is in charge of the main debugging work and that it can access system memory and peripheral registers over the AMBA bus, through the DAP component, in real time. That definitely includes code in the external serial flash.

FlexSPI module

To debug code in serial flash, the code must execute in place (XIP) in that flash: the CPU must be able to fetch instructions and data from any serial flash address in real time. The serial flash mentioned in this article generally refers to 4-wire SPI NOR flash, and the SPI mode can be Single/Dual/Quad/Octal. Whichever SPI mode is used, the flash is essentially serial: the address and data lines are both shared and serial. Conventionally, to implement XIP, flash should sit on a parallel bus interface hung on AMBA, with independent address and data lines and an address width corresponding to the flash size. So why can the i.MX RT run XIP from serial flash? The answer is the FlexSPI peripheral. Figure 2 is the FlexSPI module block diagram. On the right side of the block diagram is the signal connection between FlexSPI and the external serial flash; on the left side is the connection between FlexSPI and the internal bus of the i.MX RT system. There are two types of bus interface: a 32-bit IPS bus (manually manipulating the FlexSPI registers to send flash read and write commands) and a 64-bit AHB bus (FlexSPI translates the AHB access address and automatically sends the corresponding flash read and write commands). The latter is the key feature that makes XIP possible.

Fig 2 FlexSPI module

The Reference Manual lists detailed information about the AHB bus:
- AHB RX Buffer implemented to reduce read latency. Total AHB RX Buffer size: 128 * 64 bits
- 16 AHB masters supported, with priority for read access
- 4 flexible and configurable buffers in the AHB RX Buffer
- AHB TX Buffer implemented to buffer all write data from one AHB burst. AHB TX Buffer size: 8 * 64 bits
- All AHB masters share this AHB TX Buffer. No AHB master number limitation for write access.

In addition, the AHB bus includes the enhanced features below to optimize reading of the serial flash memory:
- Cacheable and non-cacheable access
- Prefetch enable/disable
- Burst size: 8/16/32/64 bits
- All burst types: SINGLE/INCR/WRAP4/INCR4/WRAP8/INCR8/WRAP16/INCR16
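Because of this AHB path, XIP code and data in the serial flash are simply memory-mapped. As a minimal illustration (assuming an RT10xx part such as the RT1020/RT1060, where the FlexSPI memory-mapped window starts at 0x60000000), a read from flash is just a pointer dereference; FlexSPI generates the flash read commands transparently:

#include <stdint.h>

#define FLEXSPI_AMBA_BASE 0x60000000u /* FlexSPI memory-mapped window on RT1020/RT1060 */

/* Read a word that physically lives in serial NOR flash.
 * No FlexSPI register programming is needed here: the AHB bus
 * translates this load into the corresponding flash read command. */
uint32_t read_flash_word(uint32_t offset)
{
    const volatile uint32_t *p = (const volatile uint32_t *)(FLEXSPI_AMBA_BASE + offset);
    return *p; /* instruction fetches work the same way, which is what enables XIP */
}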
Debugging process of serial flash

Fig 3 illustrates the debugging process of serial flash with an RT series MCU; in essence, the overall flow is not complicated. When you click the IDE's debug icon, the flash driver algorithm (an executable file) pre-installed in the IDE is first downloaded into the internal FlexRAM of the i.MX RT via the debugger. The flash driver algorithm provides FlexSPI initialization, erase and programming APIs, etc. Next, the debugger caches the application code (binary machine code) in FlexRAM in segments before calling the flash programming API to do the actual programming. After the application code has been programmed (from FlexRAM to flash), CoreSight takes over the debugging work. At this point the CPU can access the serial flash connected to the FlexSPI module through the AHB bus; in other words, CoreSight can control and trace code in real time, and single-step debugging is available in the IDE.

Fig 3

Flash drivers in MCUXpresso IDE

The latest version (24.12) of MCUXpresso IDE supports all i.MX RT series MCUs (as the following figure shows).

Fig 4 MCUXpresso IDE supported parts

The developer should select a suitable flash driver file for his board (Fig 5).

Fig 5 Flash driver files

For more details about the flash drivers supported by MCUXpresso IDE, please refer to the MCUXpresso IDE User Guide, specifically the two following sections: "Flash drivers using SFDP (LPC and i.MX RT)" and "i.MX RT QSPI and Hyper Flash drivers".

As mentioned above, the RT series MCUs don't have internal flash, so they must use either external parallel or serial NOR flash. For IDE providers it is too hard to supply flash drivers for every external NOR flash; the workload would be huge. So IDEs generally provide flash driver files for mainstream serial NOR flash, especially 4-wire SPI NOR flash, which means we sometimes need to modify or tune a flash driver to fit a specific application.

Adding a new flash driver to MCUXpresso IDE

Before starting, we should realize that MCUXpresso IDE differs from MDK/IAR. The flash driver algorithms of MDK and IAR are independent of the specific debug tool and can be used with all supported debug tools (J-Link/DAPLink, etc.). The MCUXpresso IDE flash driver algorithms can only be used with CMSIS-DAP type debug tools. For instance, when you use a J-Link with MCUXpresso IDE, it will use J-Link's own flash driver algorithm instead of the IDE's.

Here is a real customer case: the customer designed a new card reader module based on the RT1024 and planned to make a board without external RAM and flash. In other words, he only uses the internal 4 MB flash and the 256 KB FlexRAM, which consists of SRAM_DTC (64 KB), SRAM_ITC (64 KB) and SRAM_OC (128 KB). He wants to configure the whole 256 KB as plain RAM, with none of it allocated to ITCM and DTCM.
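As background for this case: on RT10xx parts, FlexRAM bank allocation is controlled through the IOMUXC_GPR registers. The fragment below is only a sketch of the idea, assuming an RT1020-class part with eight 32 KB banks and the usual 2-bit-per-bank encoding (01 = OCRAM, 10 = DTCM, 11 = ITCM); the exact procedure, including where this code may safely execute from, is covered by the reference manual and the thread mentioned below:

#include "fsl_device_registers.h"

/* Sketch: route all eight 32 KB FlexRAM banks to OCRAM.
 * 0x5555 is the 2-bit code 01b ("OCRAM") repeated for 8 banks. */
void flexram_all_ocram(void)
{
    IOMUXC_GPR->GPR17 = 0x5555u;                                     /* bank configuration */
    IOMUXC_GPR->GPR16 |= IOMUXC_GPR_GPR16_FLEXRAM_BANK_CFG_SEL_MASK; /* take config from GPR17, not fuses */
}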
He followed the thread to reconfigure the FlexRAM, but still encountered the problem below (as Fig 6 shows) when entering debug mode.

Fig 6

According to the debug failure log, we can conclude that the flash driver file MIMXRT1020.cfx needs to be updated; the following steps illustrate how to do it.

a) Select a source project

Flash driver projects in the latest versions of MCUXpresso IDE are delivered in the LinkServer package. You can install LinkServer independently or during the MCUXpresso IDE installation, so if you have already installed MCUXpresso IDE you do not need to install LinkServer manually, unless you want the latest version. If you are using MCUXpresso IDE v24 or above, the flash driver projects can be found in the C:\nxp\LinkServer_24.12.21\Examples\Flashdrivers\NXP subdirectory of the LinkServer installation directory (as Fig 7 shows), and the iMXRT folder contains flash driver projects for external flash parts that work with the RT series MCUs (as Fig 8 shows).

Fig 7

Fig 8

Select the flash driver project closest to the target as a prototype; in this case we select the iMXRT1020_QSPI project. Extract the project files and import them into MCUXpresso IDE (as Fig 9 shows).

Fig 9

b) Modify the pin assignment

The RT1024 integrates a 4 MB QSPI flash as "internal flash", but it is connected to different FlexSPI pins than the defaults of the iMXRT1020_QSPI project, as the table below shows.

FlexSPI pin     | RT1020        | RT1024
FLEXSPI_A_DQS   | GPIO_SD_B1_05 | GPIO_SD_B1_05
FLEXSPI_A_SS0_B | GPIO_SD_B1_11 | GPIO_AD_B1_05
FLEXSPI_A_SCLK  | GPIO_SD_B1_07 | GPIO_AD_B1_01
FLEXSPI_A_DATA0 | GPIO_SD_B1_08 | GPIO_AD_B1_02
FLEXSPI_A_DATA1 | GPIO_SD_B1_10 | GPIO_AD_B1_04
FLEXSPI_A_DATA2 | GPIO_SD_B1_09 | GPIO_AD_B1_03
FLEXSPI_A_DATA3 | GPIO_SD_B1_06 | GPIO_AD_B1_00

So the pin initialization in the BOARD_InitPins() function in pin_mux.c needs to be adjusted.
/* FUNCTION ************************************************************************************************************
 *
 * Function Name : BOARD_InitPins
 * Description   : Configures pin routing and optionally pin electrical features.
 *
 * END ****************************************************************************************************************/
void BOARD_InitPins(void)
{
    CLOCK_EnableClock(kCLOCK_Iomuxc); /* iomuxc clock (iomuxc_clk_enable): 0x03u */

    IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_06_LPUART1_TX, 0U);    /* GPIO_AD_B0_06 is configured as LPUART1_TX */
    IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_07_LPUART1_RX, 0U);    /* GPIO_AD_B0_07 is configured as LPUART1_RX */
    IOMUXC_SetPinMux(IOMUXC_GPIO_SD_B1_05_FLEXSPI_A_DQS, 1U); /* GPIO_SD_B1_05 is configured as FLEXSPI_A_DQS */
    /* RT1020 default FlexSPI pins, disabled for the RT1024: */
//  IOMUXC_SetPinMux(IOMUXC_GPIO_SD_B1_06_FLEXSPI_A_DATA03, 1U);
//  IOMUXC_SetPinMux(IOMUXC_GPIO_SD_B1_07_FLEXSPI_A_SCLK, 1U);
//  IOMUXC_SetPinMux(IOMUXC_GPIO_SD_B1_08_FLEXSPI_A_DATA00, 1U);
//  IOMUXC_SetPinMux(IOMUXC_GPIO_SD_B1_09_FLEXSPI_A_DATA02, 1U);
//  IOMUXC_SetPinMux(IOMUXC_GPIO_SD_B1_10_FLEXSPI_A_DATA01, 1U);
//  IOMUXC_SetPinMux(IOMUXC_GPIO_SD_B1_11_FLEXSPI_A_SS0_B, 1U);
    /* RT1024 FlexSPI pins: */
    IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B1_00_FLEXSPI_A_DATA03, 1U); /* GPIO_AD_B1_00 is configured as FLEXSPI_A_DATA03 */
    IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B1_01_FLEXSPI_A_SCLK, 1U);   /* GPIO_AD_B1_01 is configured as FLEXSPI_A_SCLK */
    IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B1_02_FLEXSPI_A_DATA00, 1U); /* GPIO_AD_B1_02 is configured as FLEXSPI_A_DATA00 */
    IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B1_03_FLEXSPI_A_DATA02, 1U); /* GPIO_AD_B1_03 is configured as FLEXSPI_A_DATA02 */
    IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B1_04_FLEXSPI_A_DATA01, 1U); /* GPIO_AD_B1_04 is configured as FLEXSPI_A_DATA01 */
    IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B1_05_FLEXSPI_A_SS0_B, 1U);  /* GPIO_AD_B1_05 is configured as FLEXSPI_A_SS0_B */

    /* LPUART pads: slow slew, R0/6 drive, 100 MHz speed, keeper enabled, 100k pull-down, hysteresis disabled */
    IOMUXC_SetPinConfig(IOMUXC_GPIO_AD_B0_06_LPUART1_TX, 0x10B0u);
    IOMUXC_SetPinConfig(IOMUXC_GPIO_AD_B0_07_LPUART1_RX, 0x10B0u);
    /* FlexSPI pads: fast slew, R0/6 drive, 200 MHz speed, keeper enabled, 100k pull-down, hysteresis disabled */
    IOMUXC_SetPinConfig(IOMUXC_GPIO_SD_B1_05_FLEXSPI_A_DQS, 0x10F1u);
    IOMUXC_SetPinConfig(IOMUXC_GPIO_SD_B1_06_FLEXSPI_A_DATA03, 0x10F1u);
    IOMUXC_SetPinConfig(IOMUXC_GPIO_SD_B1_07_FLEXSPI_A_SCLK, 0x10F1u);
    IOMUXC_SetPinConfig(IOMUXC_GPIO_SD_B1_08_FLEXSPI_A_DATA00, 0x10F1u);
    IOMUXC_SetPinConfig(IOMUXC_GPIO_SD_B1_09_FLEXSPI_A_DATA02, 0x10F1u);
    IOMUXC_SetPinConfig(IOMUXC_GPIO_SD_B1_10_FLEXSPI_A_DATA01, 0x10F1u);
    IOMUXC_SetPinConfig(IOMUXC_GPIO_SD_B1_11_FLEXSPI_A_SS0_B, 0x10F1u);
}
c) Modify the linker file

According to Fig 3, the flash driver is downloaded into FlexRAM on the target MCU during the debugging process. For the iMXRT1020_QSPI project, the flash driver is loaded into DTCM (0x2000_0000 to 0x2001_0000). However, to meet the customer's requirement, the whole FlexRAM is reconfigured as SRAM_OC in the ResetISR() function. In other words, there is no DTCM area to load the flash driver into, which causes the debug failure above. So we need to use SRAM_OC instead of DTCM to hold the flash driver, as shown below in the FlashDriver_32Kbuffer.ld of the iMXRT1020_QSPI project:

/*
 * Linker script for NXP LPC546xx SPIFI Flash Driver (Messaged)
 */
MEMORY
{
    /*SRAM (rwx) : ORIGIN = 0x20000000, LENGTH = (64 * 1024)*/
    SRAM (rwx) : ORIGIN = 0x20200000, LENGTH = (64 * 1024)
}
/* stack size : multiple of 8 */
__stack_size = (4 * 1024);
/* flash image buffer size : multiple of page size */
__cache_size = (32 * 1024);
/* Supported operations bit map
 *   0x40 = New device info available after Init() call
 * This setting must match the actual target flash driver build!
 */
__opmap_val = 0x1000;
/* Actual placement of flash driver code/data controlled via standard file */
INCLUDE "../../LPCXFlashDriverLib/linker/placement.ld"

d) Recompile

In the LPCXFlashDriverLib project, select the Release_SectorHashing option, then click the Build icon to generate the libLPCXFlashDriverLib.a file (as Fig 10 shows).

Fig 10

Next, in the iMXRT1020_QSPI project, select the MIMXRT1020-EVK_IS25LP064 option (as Fig 11 shows), then click the Build icon to generate a new flash driver file, which resides in the ~\Examples\Flashdrivers\NXP\iMXRT\iMXRT1020_QSPI\iMXRT1020_QSPI\builds directory.

Fig 11

Note: I've attached a test project based on the hello_world demo from the RT1024's SDK library. The attachment also contains the new flash driver and the corresponding debug script files, so please give it a try.

Debugging the flash driver in MCUXpresso IDE

As mentioned in other parts of this document, you can change the pin mux assignments and edit the flash driver. However, before testing your custom flash driver you can do a simple debug pass. All the flash driver projects come with a debug build configuration. Make sure the debug build configuration is enabled, as shown in Fig 12.

Fig 12

After enabling the debug build you can simply trigger a standard debug operation, and the IDE will load a simple test program into SRAM. The test program detects the flash and performs an erase procedure. See Fig 13.

Fig 13

In the terminal you will see the output log from the test program.
Realize a panoramic video layer with OpenGL
RT10xx SAI basics and SD card wave file playback

1. Introduction

The NXP RT10xx audio modules are SAI, SPDIF, and MQS. The SAI module is a synchronous serial interface for audio data transmission. SPDIF is a stereo transceiver that can receive and send digital audio. MQS converts I2S audio data from SAI3 to PWM and can then drive external speakers, though in practice an amplifier circuit is still needed. When we use the SAI module, we also need to deal with audio file playback and how the audio data is obtained. Based on the MIMXRT1060-EVK board, this article presents SAI module basics, the PCM waveform format, audio file cutting and conversion tools, and using the MCUXpresso IDE config tools to create an SAI project that plays audio data. It also shows how to read a wave file from an SD card with the FatFs file system and play it.

2. Basic knowledge and tools

Before going into the project details and testing, here is some background on the SAI module, the wave file format, and audio conversion tools.

2.1 SAI module basics

The RT10xx SAI module supports I2S, AC97, TDM, and codec/DSP interfaces. The SAI module contains a transmitter and a receiver; the related signals are:
    SAI_MCLK: master clock, used to generate the bit clock; master output, slave input.
    SAI_TX_BCLK: transmit bit clock; master output, slave input.
    SAI_TX_SYNC: transmit frame sync; master output, slave input; L/R channel select.
    SAI_TX_DATA[4]: transmit data lines; 1-3 are shared with RX_DATA[1-3].
    SAI_RX_BCLK: receive bit clock.
    SAI_RX_SYNC: receive frame sync.
    SAI_RX_DATA[4]: receive data lines.

SAI module clocks: audio master clock, bus clock, bit clock.

The SAI frame sync has 3 modes:
    1) Transmitter and receiver each use their own BCLK and SYNC.
    2) Transmit async, receive sync: both use the transmit BCLK and SYNC; enable the transmitter first and disable it last.
    3) Transmit sync, receive async: both use the receive BCLK and SYNC; enable the receiver first and disable it last.
A valid frame sync is also ignored (slave mode) or not generated (master mode) for the first four bit-clock cycles after enabling the transmitter or receiver.

Pic 1

SAI module clock structure:

Pic 2

The SAI module has 3 clock sources: PLL3_PFD3, PLL5, PLL4. In the picture above, SAI1_CLK_ROOT can be used as the MCLK, and the BCLK and sample rate follow from:

BCLK = MCLK / ((TCR2[DIV] + 1) * 2)
Sample rate = BCLK / (bit width * channels)

For example, a 16-bit, 2-channel, 16 kHz stream needs BCLK = 16000 * 16 * 2 = 512 kHz.

2.2 Waveform audio file format

A WAVE file stores PCM-encoded data. WAVE uses the RIFF format; the smallest unit in a RIFF file is the CK (chunk) struct, where CKID is the data type and can be "RIFF", "LIST", "fmt", "data", etc. RIFF files are little-endian.

RIFF structure:

typedef unsigned long DWORD; // 4B
typedef unsigned char BYTE;  // 1B
typedef DWORD FOURCC;        // 4B
typedef struct
{
    FOURCC ckID;  // 4B
    DWORD ckSize; // 4B
    union
    {
        FOURCC fccType;      // RIFF form type, 4B
        BYTE ckData[ckSize]; // ckSize * 1B
    } ckData;
} RIFFCK;

Pic 3

Take a 16 kHz, 2-channel wave file as an example:

Pic 4

Yellow: CKID. Green: data length. Purple: data.

The detailed analysis is as follows:

Pic 5

We can see that, excluding the wave header, the real audio data size is 1279860 bytes.
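To make the layout concrete, here is a minimal C sketch of the canonical RIFF/WAVE header fields described above. It is a simplified view that assumes the "fmt " chunk immediately follows the RIFF header; real files may carry extra chunks such as "LIST" in between, which is why the parser in section 5.2 searches for each chunk ID instead of overlaying a fixed struct:

#include <stdint.h>

/* Simplified WAVE header: RIFF chunk plus "fmt " chunk, stored little-endian */
typedef struct
{
    char     riff[4];       /* "RIFF" */
    uint32_t chunk_size;    /* file size - 8 */
    char     wave[4];       /* "WAVE" */
    char     fmt[4];        /* "fmt " */
    uint32_t fmtchunk_size; /* usually 16 for PCM */
    uint16_t audio_format;  /* 1 = PCM */
    uint16_t num_channels;  /* 1 = mono, 2 = stereo */
    uint32_t sample_rate;   /* e.g. 16000 */
    uint32_t byte_rate;     /* sample_rate * num_channels * bps / 8 */
    uint16_t block_align;   /* num_channels * bps / 8 */
    uint16_t bps;           /* bits per sample, e.g. 16 */
} wave_fmt_header_t;        /* the "data" chunk (ID + size) follows the chunks above */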
2.3 Audio file conversion

In practice, the audio file may not have the required channel count or sample rate, its format may not be wave, or it may be too long; in these cases we can use a tool to convert it to the desired format. We can use the ffmpeg tool: https://ffmpeg.org/

For details, check the ffmpeg documentation; we normally use these commands:

Convert an mp3 file to a 16 kHz, 16-bit, 2-channel wave file:
ffmpeg -i test.mp3 -acodec pcm_s16le -ar 16000 -ac 2 test.wav
or:
ffmpeg -i test.mp3 -aq 16 -ar 16000 -ac 2 test.wav

Cut 35 s of test.wav starting from 00:00:00 and save it as test1.wav:
ffmpeg -ss 00:00:00 -i test.wav -t 35.0 -c copy test1.wav

Pic 6

Pic 7

2.4 Obtaining the wave L/R channel audio data

Just like the SDK demo, the L/R audio data is stored directly in an RT RAM array, so we need to extract the audio data from the wav file. We can use Python to read the wav header, get the audio data size, and save the audio data to an array in a .h file (run it, for example, as: python wav2hex.py music1.wav music1.h). The related Python code is:

import sys
import wave

def wav2hex(strWav, strHex):
    with wave.open(strWav, "rb") as fWav:
        wavChannels = fWav.getnchannels()
        wavSampleWidth = fWav.getsampwidth()
        wavFrameRate = fWav.getframerate()
        wavFrameNum = fWav.getnframes()
        wavFrames = fWav.readframes(wavFrameNum)
        wavDuration = wavFrameNum / wavFrameRate
        wafFramebytes = wavFrameNum * wavChannels * wavSampleWidth
        print("Channels: {}".format(wavChannels))
        print("Sample width: {}bits".format(wavSampleWidth * 8))
        print("Sample rate: {}kHz".format(wavFrameRate / 1000))
        print("Frames number: {}".format(wavFrameNum))
        print("Duration: {}s".format(wavDuration))
        print("Frames bytes: {}".format(wafFramebytes))
    with open(strHex, "w") as fHex:
        # Print WAV parameters
        fHex.write("/*\n")
        fHex.write("  Channels: {}\n".format(wavChannels))
        fHex.write("  Sample width: {}bits\n".format(wavSampleWidth * 8))
        fHex.write("  Sample rate: {}kHz\n".format(wavFrameRate / 1000))
        fHex.write("  Frames number: {}\n".format(wavFrameNum))
        fHex.write("  Duration: {}s\n".format(wavDuration))
        fHex.write("  Frames bytes: {}\n".format(wafFramebytes))
        fHex.write("*/\n\n")
        # Print WAV frames
        fHex.write("uint8_t music[] = {\n")
        print("Transferring...")
        i = 0
        while wafFramebytes > 0:
            if wafFramebytes < 16:
                BytesToPrint = wafFramebytes
            else:
                BytesToPrint = 16
            fHex.write("    ")
            for j in range(0, BytesToPrint):
                if j != 0:
                    fHex.write(' ')
                fHex.write("0x{:0>2x},".format(wavFrames[i]))
                i += 1
            fHex.write("\n")
            wafFramebytes -= BytesToPrint
        fHex.write("};\n")
        print("Done!")

wav2hex(sys.argv[1], sys.argv[2])

Take music1.wav as an example:

Pic 8

2.5 Relationship between the audio data and the waveform

The 16-bit data range is -32768 to 32767; the GoldWave tool shows values in a normalized range (-1 to 1). Using GoldWave to open the example music1.wav and checking the data at the 1 s position, the left channel value is -0.08227 and the right channel value is -0.2257.

Pic 9

Pic 10

Now calculate the real L/R sample values and find their position in music1.h.

Pic 11

From Pic 8 we know the real wave L/R data starts at line 11, and each line contains 16 bytes of data. So from the normalized values in music1.wav we can calculate the corresponding raw samples and compare them with the data in the array: they match exactly.
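As a sanity check of that comparison (approximate, since GoldWave's displayed values are rounded): a normalized value v maps to the signed 16-bit sample round(v * 32768), stored little-endian in the array. For the values read at the 1 s position:

Left:  -0.08227 * 32768 ≈ -2696 = 0xF578, stored as 0x78, 0xF5
Right: -0.2257  * 32768 ≈ -7396 = 0xE31C, stored as 0x1C, 0xE3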
3. SAI MCUXpresso project creation

Based on SDK_2.9.2_EVK-MIMXRT1060, create an SAI DMA audio playback project. The audio data can use the music1.h generated above.

Create a bare-metal project with the following components:

Drivers: clock, common, dmamux, edma, gpio, i2c, iomuxc, lpuart, sai, sai_edma, xip_device
Utilities: debug_console, lpuart_adapter, serial_manager, serial_manager_uart
Board components: xip_board
Abstraction layer: codec, codec_wm8960_adapter, lpi2c_adapter
Software components: codec_i2c, lists, wm8960

After creating the project, open the Clocks tool and configure the clocks. The core and FlexSPI clocks can use the defaults; we mainly configure the SAI1-related clocks:

Pic 12

Select PLL4 as the SAI1 clock source, configure PLL4_MAIN_CLK as 786.48 MHz, and configure the SAI1 clock as 6.144375 MHz. After the configuration, update the code.

Open the Pins tool and configure the SAI1-related pins; as the codec also needs I2C, the I2C pin configuration is included.

Pic 13

Update the code. Open the Peripherals tool and configure DMA, SAI, and NVIC.

Pic 14

Pic 15

The DMA configuration is as follows:

Pic 16

After configuration, generate the code. With the above configuration we have finished the SAI DMA transfer setup: SAI master mode, 16 bits, 16 kHz sample rate, 2 channels, DMA transfer, a bit clock of 512 kHz, and a master clock of 6.144375 MHz.

void callback(I2S_Type *base, sai_edma_handle_t *handle, status_t status, void *userData)
{
    if (kStatus_SAI_RxError == status)
    {
    }
    else
    {
        finishIndex++;
        emptyBlock++;
        /* Judge whether the music array is completely transferred. */
        if (MUSIC_LEN / BUFFER_SIZE == finishIndex)
        {
            isFinished  = true;
            finishIndex = 0;
            emptyBlock  = BUFFER_NUM;
            tx_index    = 0;
            cpy_index   = 0;
        }
    }
}

int main(void)
{
    sai_transfer_t xfer;

    /* Init board hardware. */
    BOARD_ConfigMPU();
    BOARD_InitBootPins();
    BOARD_InitBootClocks();
    BOARD_InitBootPeripherals();
#ifndef BOARD_INIT_DEBUG_CONSOLE_PERIPHERAL
    /* Init FSL debug console. */
    BOARD_InitDebugConsole();
#endif

    PRINTF(" SAI wav module test!\n\r");

    /* Use default setting to init codec */
    if (CODEC_Init(&codecHandle, &boardCodecConfig) != kStatus_Success)
    {
        assert(false);
    }
    /* delay for codec output stable */
    DelayMS(DEMO_CODEC_INIT_DELAY_MS);
    CODEC_SetVolume(&codecHandle, 2U, 50); // set 50% volume

    EnableIRQ(DEMO_SAI_IRQ);
    SAI_TxEnableInterrupts(DEMO_SAI, kSAI_FIFOErrorInterruptEnable);
    PRINTF(" MUSIC PLAY Start!\n\r");

    while (1)
    {
        PRINTF(" MUSIC PLAY Again\n\r");
        isFinished = false;
        while (!isFinished)
        {
            if ((emptyBlock > 0U) && (cpy_index < MUSIC_LEN / BUFFER_SIZE))
            {
                /* Fill in the buffers. */
                memcpy((uint8_t *)&buffer[BUFFER_SIZE * (cpy_index % BUFFER_NUM)],
                       (uint8_t *)&music[cpy_index * BUFFER_SIZE],
                       sizeof(uint8_t) * BUFFER_SIZE);
                emptyBlock--;
                cpy_index++;
            }
            if (emptyBlock < BUFFER_NUM)
            {
                /* xfer structure */
                xfer.data     = (uint8_t *)&buffer[BUFFER_SIZE * (tx_index % BUFFER_NUM)];
                xfer.dataSize = BUFFER_SIZE;
                /* Wait for available queue. */
                if (kStatus_Success == SAI_TransferSendEDMA(DEMO_SAI, &SAI1_SAI_Tx_eDMA_Handle, &xfer))
                {
                    tx_index++;
                }
            }
        }
    }
}

4. SAI test results

To check the real L/R data sent out, we modify the first 16 bytes of the music array to:
0x55, 0xaa, 0x01, 0x00, 0x02, 0x00, 0x03, 0x00, 0x04, 0x00, 0x05, 0x00, 0x06, 0x00, 0x07, 0x00
Then we probe the SAI_MCLK, SAI_TX_BCLK, SAI_TX_SYNC, and SAI_TXD pins and compare the waveforms with the defined data. Because the polarity is configured as active low, data is driven on the falling edge and sampled on the rising edge. The test points on the MIMXRT1060-EVK board are at the codec pin positions:

Pic 17

4.1 Logic analyzer waveform

Pic 18

The MCLK frequency is 6.144375 MHz, BCLK is 512 kHz, and SYNC is 16 kHz.
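These measurements agree with the formulas from section 2.1. Assuming the bit-clock divider implied by this setup (TCR2[DIV] = 5, an inference from the measured clocks rather than a value shown in the screenshots):

BCLK = 6.144375 MHz / ((5 + 1) * 2) ≈ 512 kHz
Sample rate = 512 kHz / (16 bits * 2 channels) = 16 kHz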
Pic 19

The first frame of data is 1010101001010101 0000000000000001, i.e. 0xAA55 0x0001. It is the same as the L/R data defined in the array. SYNC low carries the left 16 bits; SYNC high carries the right 16 bits.

4.2 Oscilloscope waveform

Just like the logic analyzer, the oscilloscope shows the same waveform:

Pic 20

Add music1.h to the project and let the main code play the music array in a loop; the music is heard clearly when a headphone or speaker is plugged into onboard connector J12.

5. SAI SD card wave music playback

This part adds the SD card and the FatFs file system to read a 16-bit, 16 kHz, 2-channel wave file from the SD card and play it in a loop.

5.1 Driver additions

The code is based on SDK_2.9.2_EVK-MIMXRT1060. Starting from the previous project, add the SD card and SD FatFs drivers; the bare-metal component list becomes:

Drivers: cache, clock, common, dmamux, edma, gpio, i2c, iomuxc, lpuart, sai, sai_edma, sdhc, xip_device
Utilities: debug_console, lpuart_adapter, serial_manager, serial_manager_uart
Middleware: File System -> FAT File System -> fatfs+sd, Memories
Board components: xip_board
Abstraction layer: codec, codec_wm8960_adapter, lpi2c_adapter
Software components: codec_i2c, lists, wm8960

5.2 WAVE header analysis code

From the earlier content we know the wav header structure. To play a wave file from the SD card, we need to analyze the header to get the audio format and the audio data information. The header analysis code is:

uint8_t Fun_Wave_Header_Analyzer(void)
{
    char *datap;
    uint8_t ErrFlag = 0;

    datap = strstr((char *)Wav_HDBuffer, "RIFF");
    if (datap != NULL)
    {
        wav_header.chunk_size = ((uint32_t)*(Wav_HDBuffer + 4)) +
                                (((uint32_t)*(Wav_HDBuffer + 5)) << 8) +
                                (((uint32_t)*(Wav_HDBuffer + 6)) << 16) +
                                (((uint32_t)*(Wav_HDBuffer + 7)) << 24);
        movecnt += 8;
    }
    else
    {
        ErrFlag = 1;
        return ErrFlag;
    }

    datap = strstr((char *)(Wav_HDBuffer + movecnt), "WAVEfmt");
    if (datap != NULL)
    {
        movecnt += 8;
        wav_header.fmtchunk_size = ((uint32_t)*(Wav_HDBuffer + movecnt + 0)) +
                                   (((uint32_t)*(Wav_HDBuffer + movecnt + 1)) << 8) +
                                   (((uint32_t)*(Wav_HDBuffer + movecnt + 2)) << 16) +
                                   (((uint32_t)*(Wav_HDBuffer + movecnt + 3)) << 24);
        wav_header.audio_format = (uint16_t)*(Wav_HDBuffer + movecnt + 4) +
                                  ((uint16_t)*(Wav_HDBuffer + movecnt + 5) << 8);
        wav_header.num_channels = (uint16_t)*(Wav_HDBuffer + movecnt + 6) +
                                  ((uint16_t)*(Wav_HDBuffer + movecnt + 7) << 8);
        wav_header.sample_rate = ((uint32_t)*(Wav_HDBuffer + movecnt + 8)) +
                                 (((uint32_t)*(Wav_HDBuffer + movecnt + 9)) << 8) +
                                 (((uint32_t)*(Wav_HDBuffer + movecnt + 10)) << 16) +
                                 (((uint32_t)*(Wav_HDBuffer + movecnt + 11)) << 24);
        wav_header.byte_rate = ((uint32_t)*(Wav_HDBuffer + movecnt + 12)) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 13)) << 8) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 14)) << 16) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 15)) << 24);
        wav_header.block_align = (uint16_t)*(Wav_HDBuffer + movecnt + 16) +
                                 ((uint16_t)*(Wav_HDBuffer + movecnt + 17) << 8);
        wav_header.bps = (uint16_t)*(Wav_HDBuffer + movecnt + 18) +
                         ((uint16_t)*(Wav_HDBuffer + movecnt + 19) << 8);
        movecnt += (4 + wav_header.fmtchunk_size);
    }
    else
    {
        ErrFlag = 1;
        return ErrFlag;
    }

    datap = strstr((char *)(Wav_HDBuffer + movecnt), "LIST");
    if (datap != NULL)
    {
        movecnt += 4;
        wav_header.list_size = ((uint32_t)*(Wav_HDBuffer + movecnt + 0)) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 1)) << 8) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 2)) << 16) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 3)) << 24);
        movecnt += (4 + wav_header.list_size);
    } /* LIST is not mandatory */

    datap = strstr((char *)(Wav_HDBuffer + movecnt), "data");
    if (datap != NULL)
    {
        movecnt += 4;
        wav_header.datachunk_size =
            ((uint32_t)*(Wav_HDBuffer + movecnt + 0)) +
            (((uint32_t)*(Wav_HDBuffer + movecnt + 1)) << 8) +
            (((uint32_t)*(Wav_HDBuffer + movecnt + 2)) << 16) +
            (((uint32_t)*(Wav_HDBuffer + movecnt + 3)) << 24);
        movecnt += 4;
        ErrFlag = 0;
    }
    else
    {
        ErrFlag = 1;
        return ErrFlag;
    }

    PRINTF("Wave audio format is %d\r\n", wav_header.audio_format);
    PRINTF("Wave audio channel number is %d\r\n", wav_header.num_channels);
    PRINTF("Wave audio sample rate is %d\r\n", wav_header.sample_rate);
    PRINTF("Wave audio byte rate is %d\r\n", wav_header.byte_rate);
    PRINTF("Wave audio block align is %d\r\n", wav_header.block_align);
    PRINTF("Wave audio bit per sample is %d\r\n", wav_header.bps);
    PRINTF("Wave audio data size is %d\r\n", wav_header.datachunk_size);
    return ErrFlag;
}

The header is mainly divided into four RIFF parts: "RIFF", "fmt", "LIST", and "data". The 4 bytes following "data" give the total audio data size, which FatFs can use to read the audio data. The code above also records the data position, so when using FatFs to read the wave file we can jump to the data area directly.

5.3 SD card wave data playback

Define an array audioBuff[4 * 512] and use it to read out the SD card wave file; send these data to the SAI EDMA and transfer them to the I2S interface until all the data has been transmitted. The callback records each 512-byte block sent and checks whether the transmitted data size has reached the whole wave audio data size.

5.4 SD card wave playback result

Prepare a 16-bit, 16 kHz sample rate, 2-channel wave file named music.wav, put it on a FAT32-formatted SD card, insert the card into MIMXRT1060-EVK connector J39, and run the code. You will get this printout:

Please insert a card into the board.
Card inserted.
Make file system......The time may be long if the card capacity is big.
SAI wav module test!
MUSIC PLAY Start!
Wave audio format is 1
Wave audio channel number is 2
Wave audio sample rate is 16000
Wave audio byte rate is 64000
Wave audio block align is 4
Wave audio bit per sample is 16
Wave audio data size is 2728440
Playback is begin!
Playback is finished!

At the same time, after inserting a headphone or speaker into J12, we can hear the music. The attachment contains the MCUXpresso 10.3.0 project and the wave samples.
There is an issue with the DCD file included in the SDK 2.9.0 release for the i.MX RT1170 processor. When this DCD file is used in a project to configure the SDRAM memory on the EVK, the refresh for the memory is not enabled. This can lead to data corruption/loss over time.

To fix the problem, replace the dcd.c file in your project with the attached file instead.

We are working on a fix, and a new revision of the SDK will be released soon.
Symptoms

Some of us may have experienced the following issue: when we put the heap in DTCM, everything is OK (that is the default setting for the MCUXpresso SDK demos), but when we put the heap in cached memory like OCRAM or SDRAM, much of the middleware does not function correctly. This issue shows up in the USB stack, lwIP and the SD card: USB enumeration fails, Ethernet drops packets, the application no longer writes to the SD card, the system hangs indefinitely on uninitialized semaphores, and so on.

Diagnosis

To understand this issue, we need to understand the i.MX RT L1 cache. AN12042 describes the technology of the i.MX RT cache system.

The i.MX RT series implements the CPU core platform described in Figure 1. The L1 I/D-cache is embedded in the core platform. The data cache is 4-way set-associative and the instruction cache is 2-way set-associative, with a cache line size of 32 bytes. The core platform connects to the SIM_M7 bus fabric master port by an AXI bus. The subsystems of internal/external memory, such as OCRAM (FlexRAM banks configured as OCRAM), FlexSPI (serial NOR/NAND flash and HyperFlash/RAM, etc.) and SEMC (SDRAM, PNOR flash, NAND flash, etc.), are connected to the bus fabric slave ports. The CPU core accesses these subsystems through this bus fabric via the L1 cache. ITCM/DTCM is accessed directly by the CPU core, bypassing the L1 cache.

OCRAM and SDRAM are cacheable by default. The cache brings a great performance boost, but the user must pay attention to cache maintenance for data coherency. The easiest way to avoid data coherency issues is to use non-cacheable buffers. DTCM/ITCM are tightly-coupled memories that the core accesses directly (the cache is not involved), which explains why all the SDK demos work correctly by default.

Solution

Put critical code and data into TCM: it is non-cacheable, and it is the fastest memory for the CPU to access. But forcing all global data into the 128 KB DTCM is too constraining in many cases. Instead, users can split a non-cacheable region off OCRAM or SDRAM and place the DMA buffers into this region via the toolchain's linker.
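The other option, not used in the step-by-step flow below, is to leave the buffers cacheable and maintain coherency by hand with the CMSIS cache operations. A minimal sketch of that idea (dma_buf and dma_send() are hypothetical placeholders; the buffer must be 32-byte aligned and sized in whole cache lines for the maintenance operations to be safe):

#include <stdint.h>
#include "fsl_device_registers.h" /* brings in the CMSIS core header with the SCB_* cache ops */

/* Hypothetical DMA buffer: aligned to the 32-byte cache line and a multiple of it in size */
__attribute__((aligned(32))) static uint8_t dma_buf[512];

extern void dma_send(const uint8_t *data, uint32_t len); /* hypothetical driver call */

void tx_example(void)
{
    /* The CPU filled dma_buf through the cache: flush the lines to RAM before the DMA reads it */
    SCB_CleanDCache_by_Addr((uint32_t *)dma_buf, sizeof(dma_buf));
    dma_send(dma_buf, sizeof(dma_buf));
}

void rx_example(void)
{
    /* The DMA wrote RAM behind the cache's back: discard stale cached lines before the CPU reads */
    SCB_InvalidateDCache_by_Addr((uint32_t *)dma_buf, sizeof(dma_buf));
    /* ... now the CPU can safely parse dma_buf ... */
}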
Next, I will take the evkmimxrt1060_host_msd_command_freertos demo as an example to illustrate how to make the USB host stack run from OCRAM. MCUXpresso IDE 11.2.1 is used for this demo.

1. Buffer definition

In the USB stack, some important data structures are defined with the macros USB_GLOBAL, USB_DMA_DATA_NONINIT_SUB, USB_DMA_DATA_INIT_SUB and USB_CONTROLLER_DATA. These structures are defined in the USB stack by default; we can see them in usb_device_ehci.c and usb_host_ehci.c (taking the USB host as an example).

In usb_device_ehci.c:

/* Apply for QH buffer, 2048-byte alignment */
USB_RAM_ADDRESS_ALIGNMENT(2048)
USB_CONTROLLER_DATA static uint8_t qh_buffer[(USB_DEVICE_CONFIG_EHCI - 1) * 2048 +
                                             2 * USB_DEVICE_CONFIG_ENDPOINTS * 2 * sizeof(usb_device_ehci_qh_struct_t)];
/* Apply for DTD buffer, 32-byte alignment */
USB_RAM_ADDRESS_ALIGNMENT(32)
USB_CONTROLLER_DATA static usb_device_ehci_dtd_struct_t s_UsbDeviceEhciDtd[USB_DEVICE_CONFIG_EHCI]
                                                                          [USB_DEVICE_CONFIG_EHCI_MAX_DTD];

In usb_host_ehci.c:

/* EHCI controller driver instances. */
#if (USB_HOST_CONFIG_EHCI == 1U)
USB_RAM_ADDRESS_ALIGNMENT(4096)
USB_CONTROLLER_DATA static uint8_t s_UsbHostEhciFrameList1[USB_HOST_CONFIG_EHCI_FRAME_LIST_SIZE * 4];
static uint8_t usbHostEhciFramListStatus[1] = {0};

USB_RAM_ADDRESS_ALIGNMENT(64)
USB_CONTROLLER_DATA static usb_host_ehci_data_t s_UsbHostEhciData1;
#elif (USB_HOST_CONFIG_EHCI == 2U)
USB_RAM_ADDRESS_ALIGNMENT(4096)
USB_CONTROLLER_DATA static uint8_t s_UsbHostEhciFrameList1[USB_HOST_CONFIG_EHCI_FRAME_LIST_SIZE * 4];
USB_RAM_ADDRESS_ALIGNMENT(4096)
USB_CONTROLLER_DATA static uint8_t s_UsbHostEhciFrameList2[USB_HOST_CONFIG_EHCI_FRAME_LIST_SIZE * 4];
static uint8_t usbHostEhciFramListStatus[2] = {0, 0};
USB_RAM_ADDRESS_ALIGNMENT(64)
USB_CONTROLLER_DATA static usb_host_ehci_data_t s_UsbHostEhciData1;
USB_RAM_ADDRESS_ALIGNMENT(64)
USB_CONTROLLER_DATA static usb_host_ehci_data_t s_UsbHostEhciData2;
#else
#error "Please increase the instance count."
#endif

2. Linker file: partition a RAM block from OCRAM for non-cacheable buffers

Use the managed linker script to configure memory region RAM2 as a non-cacheable area.

3. MPU configuration (board.c)

The MPU divides the memory map into regions and defines the memory attributes of each region. In this step we need to configure SRAM_OC_NCACHE_128 (RAM2) as non-cacheable:

/* Region 13 setting: memory with non-cacheable attribute */
MPU->RBAR = ARM_MPU_RBAR(13, 0x202a0000);
MPU->RASR = ARM_MPU_RASR(0, ARM_MPU_AP_FULL, 1, 0, 0, 0, 0, ARM_MPU_REGION_SIZE_128KB);

Now SRAM_OC_NCACHE_128 (RAM2) is a non-cacheable section. Variables placed in *(NonCacheable.init) and *(NonCacheable) will go to SRAM_OC_NCACHE_128.

4. Put the USB variables into SRAM_OC_NCACHE_128 (RAM2)

This is done by the following macros:

#define USB_LINK_NONCACHE_NONINIT_DATA _Pragma("location = \"NonCacheable\"")

The related source code is in usb_misc.h:

#if (defined(DATA_SECTION_IS_CACHEABLE) && (DATA_SECTION_IS_CACHEABLE))
#define USB_GLOBAL USB_LINK_NONCACHE_NONINIT_DATA
#define USB_BDT USB_LINK_NONCACHE_NONINIT_DATA
#define USB_DMA_DATA_NONINIT_SUB USB_LINK_NONCACHE_NONINIT_DATA
#define USB_DMA_DATA_INIT_SUB USB_LINK_DMA_INIT_DATA(NonCacheable.init)
#define USB_CONTROLLER_DATA USB_LINK_NONCACHE_NONINIT_DATA
#else
#define USB_GLOBAL USB_LINK_USB_GLOBAL_BSS
#define USB_BDT USB_LINK_USB_BDT_BSS
#define USB_DMA_DATA_NONINIT_SUB
#define USB_DMA_DATA_INIT_SUB
#define USB_CONTROLLER_DATA
#endif

Please put the macro DATA_SECTION_IS_CACHEABLE=1 in the preprocessor defines.

5. Build and run the evkmimxrt1060_host_msd_command_freertos project: success!

Reference: Using the i.MX RT L1 Cache
https://www.nxp.com.cn/docs/en/application-note/AN12042.pdf
[Chinese translation] See the attachment.

Original article: https://community.nxp.com/t5/i-MX-RT-Knowledge-Base/Design-an-IoT-edge-node-for-CV-application-base-on-the-i/ta-p/1127423
[Chinese translation] See the attachment.

Original article: https://community.nxp.com/t5/i-MX-Community-Articles/Effortless-GUI-Development-with-NXP-Microcontrollers/ba-p/1131179
[Chinese translation] See the attachment.

Original article: https://community.nxp.com/docs/DOC-345190
[Chinese translation] See the attachment.

Original article: https://community.nxp.com/t5/eIQ-Machine-Learning-Software/eIQ-on-i-MX-RT1064-EVK/ta-p/1123602
[Chinese translation] See the attachment.

Original article: https://community.nxp.com/t5/i-MX-RT-Knowledge-Base/RT1050-HAB-Encrypted-Image-Generation-and-Analysis/ta-p/1124877
In this tutorial, I'd like to show the steps of deploying an image classification model on the i.MX RT1060, enabling you to classify fashion images and categories. In the first part of this tutorial, we will review the Fashion MNIST dataset, including how to download it. From there we'll define a simple CNN network using TensorFlow. Next, we'll train the CNN model on the Fashion MNIST dataset and review the results. Finally, we'll optimize the model; afterwards the model will be smaller and inference will be faster, which is valuable for resource-limited devices such as MCUs. Let's go ahead and get started!

Fashion MNIST dataset

The Fashion MNIST dataset was created by the e-commerce company Zalando.

Fig 1 Fashion MNIST dataset

As they note on the official GitHub repo for the Fashion MNIST dataset, there are a few problems with the standard MNIST digit recognition dataset:

- It's far too easy for standard machine learning algorithms to obtain 97%+ accuracy.
- It's even easier for deep learning models to achieve 99%+ accuracy.
- The dataset is overused.
- MNIST cannot represent modern computer vision tasks.

Zalando therefore created the Fashion MNIST dataset as a drop-in replacement for MNIST:

- 60,000 training examples
- 10,000 testing examples
- 10 classes: T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Ankle boot
- 28x28 grayscale images

The code below loads the Fashion MNIST dataset using TensorFlow and creates a plot of the first 25 images in the training dataset.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# For easy reset of notebook state.
tf.keras.backend.clear_session()

# load dataset
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

plt.figure(figsize=(8, 8))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.tight_layout()
    plt.imshow(train_images[i])
    plt.xlabel(class_names[train_labels[i]])
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
plt.show()

Fig 2

Running the code loads the Fashion MNIST train and test datasets and prints their shapes.

Fig 3

We can see that there are 60,000 examples in the training dataset and 10,000 in the test dataset, and that the images are indeed square at 28x28 pixels.

Creating the model

We need to define a neural network model for image classification, and the model should have two main parts: the feature extractor and the classifier that makes a prediction.

Defining a simple Convolutional Neural Network (CNN)

For the convolutional front end, we build 3 convolution layers with a small filter size (3,3) and a modest number of filters, followed by max-pooling layers. The last feature map is flattened to provide features to the classifier. As this is a multi-class classification task, the output layer needs 10 nodes to predict the probability distribution of an image belonging to each of the 10 classes, so it uses a softmax activation function. Between the feature extractor and the output layer we add a dense layer to interpret the features. All other layers use the ReLU activation function and the He weight initialization scheme, both best practices.
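As a quick sanity check of this architecture (assuming Keras's default 'valid' padding, so each 3x3 convolution trims 2 pixels and each 2x2 max-pooling halves the size, rounding down), the tensor shapes work out as:

28x28x1 -> Conv 3x3 (16) -> 26x26x16 -> MaxPool 2x2 -> 13x13x16
        -> Conv 3x3 (32) -> 11x11x32 -> MaxPool 2x2 -> 5x5x32
        -> Conv 3x3 (32) -> 3x3x32 -> Flatten -> 288 -> Dense(32) -> Dense(10)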
We will use the Adam optimizer to minimize the sparse_categorical_crossentropy loss function, suitable for multi-class classification, and we will monitor the classification accuracy metric, which is appropriate given that we have the same number of examples in each of the 10 classes. The code below defines the model; running it also prints the structure of the model.

# Define the model
model = tf.keras.models.Sequential()
# First convolution, kernel: 16*3*3
model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu',
                                 kernel_initializer='he_uniform', input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
# Second convolution, kernel: 32*3*3
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
# Third convolution, kernel: 32*3*3
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(32, activation='relu', kernel_initializer='he_uniform'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
# Compile with the Adam optimizer and the sparse categorical cross-entropy loss
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()

Fig 4

Training the model

After the model is defined, we need to train it. The model will be trained using 5-fold cross-validation. The value k=5 was chosen to provide a baseline for repeated evaluation while not being so large as to require a long running time. Each validation set will be 20% of the training dataset, or about 12,000 examples. The training dataset is shuffled prior to being split, and the same shuffling is performed each time, so that any model we train has the same train and validation datasets in each fold, providing an apples-to-apples comparison.

We will train the baseline model for a modest 20 training epochs with a default batch size of 32 examples. The validation set for each fold is used to validate the model during each epoch of the training run, so we can later create learning curves, and at the end of the run we use the test dataset to estimate the performance of the model. As such, we keep track of the resulting history of each run, as well as the classification accuracy of each fold. The train_model() function below implements these behaviors, taking the training images and labels as arguments and returning a list of accuracy scores and training histories that can later be summarized.

from sklearn.model_selection import KFold

# Train a model using k-fold cross-validation
# Note: this reuses the globally defined model across folds.
def train_model(dataX, dataY, n_folds=5):
    scores, histories = list(), list()
    # Prepare cross-validation
    kfold = KFold(n_folds, shuffle=True, random_state=1)
    for train_ix, validate_ix in kfold.split(dataX):
        # Select rows for train and validation
        trainX, trainY = dataX[train_ix], dataY[train_ix]
        validate_X, validate_Y = dataX[validate_ix], dataY[validate_ix]
        # Fit the model
        history = model.fit(trainX, trainY, epochs=20, batch_size=32,
                            validation_data=(validate_X, validate_Y), verbose=0)
        # Evaluate the model
        _, acc = model.evaluate(validate_X, validate_Y, verbose=0)
        print("Accuracy: {:.4f}, total number of figures is {:0>2d}".format(acc * 100.0, len(validate_Y)))
        # Append scores
        scores.append(acc)
        histories.append(history)
    return scores, histories

Model summary

After the model has been trained, we can present the results. There are two key aspects to present: diagnostics of the learning behavior of the model during training, and an estimate of the model's performance. These can be implemented using separate functions.
First, the diagnostics involve creating a line plot showing model performance on the train and validation sets during each fold of the k-fold cross-validation. These plots are valuable for getting an idea of whether the model is overfitting, underfitting, or has a good fit for the dataset. We will create a single figure with two subplots, one for loss and one for accuracy. Blue lines indicate model performance on the training dataset, and orange lines indicate performance on the hold-out validation dataset. The summarize_diagnostics() function below creates and shows this plot given the collected training histories.

# Plot diagnostic learning curves
def summarize_diagnostics(histories):
    for i in range(len(histories)):
        # Plot loss
        plt.subplot(2,1,1)
        plt.title('Cross Entropy Loss')
        plt.plot(histories[i].history['loss'], color='blue', label='train')
        plt.plot(histories[i].history['val_loss'], color='orange', label='validation')
        # Plot accuracy
        plt.subplot(2,1,2)
        plt.title('Classification Accuracy')
        plt.plot(histories[i].history['accuracy'], color='blue', label='train')
        plt.plot(histories[i].history['val_accuracy'], color='orange', label='validation')
    plt.show()

Fig 5

Next, the classification accuracy scores collected during each fold can be summarized by calculating the mean and standard deviation. This provides an estimate of the average expected performance of the model, with an estimate of the variance around that mean. We will also summarize the distribution of scores by creating and showing a box-and-whisker plot. The summarize_performance() function below implements this for a given list of scores collected during model training.

# Summarize model performance
def summarize_performance(scores):
    # Print summary
    print('Accuracy: mean={:.4f} std={:.4f}, n={:0>2d}'.format(
        np.mean(scores)*100, np.std(scores)*100, len(scores)))
    # Box-and-whisker plot of results
    plt.boxplot(scores)
    plt.show()

Fig 6

Verifying predictions

According to the above figure, we see that the final trained model reaches around 87.6% accuracy when predicting the test dataset. With the trained model, running the code below demonstrates the predictions for a few images.

def plot_image(i, predictions_array, true_label, img):
    true_label, img = true_label[i], img[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    plt.imshow(img.reshape(28, 28), cmap=plt.cm.binary)
    predicted_label = np.argmax(predictions_array)
    if predicted_label == true_label:
        color = 'blue'
    else:
        color = 'red'
    plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
                                         100*np.max(predictions_array),
                                         class_names[true_label]), color=color)

def plot_value_array(i, predictions_array, true_label):
    true_label = true_label[i]
    plt.grid(False)
    plt.xticks(range(10))
    plt.yticks([])
    thisplot = plt.bar(range(10), predictions_array, color="#777777")
    plt.ylim([0, 1])
    predicted_label = np.argmax(predictions_array)
    thisplot[predicted_label].set_color('red')
    thisplot[true_label].set_color('blue')

predictions = model.predict(test_images)

# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2*num_cols, 2*i+1)
    plot_image(i, predictions[i], test_labels, test_images)
    plt.subplot(num_rows, 2*num_cols, 2*i+2)
    plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()

Fig 7

Model quantization

Post-training quantization is a conversion technique that can reduce model size while also improving CPU and hardware accelerator latency, with little degradation in model accuracy. It is especially crucial for embedded platforms, which lack compute-intensive performance and have very limited flash and RAM. TensorFlow Lite can convert an already-trained float TensorFlow model to the TensorFlow Lite format. In addition, TensorFlow Lite provides several approaches to optimize the model; among them, integer quantization is an optimization strategy that converts 32-bit floating-point numbers (such as weights and activation outputs) to the nearest 8-bit fixed-point numbers. This results in a smaller model and increased inference speed, which is very valuable for low-power devices such as microcontrollers. The code below shows how to implement integer quantization of the trained model; after running it, we can find that the TensorFlow Lite model is almost 64.9 KB smaller than the original model, shrinking to about 32% of the original size (Fig 8).

import os

# Representative dataset for integer-only quantization
def representative_data_gen():
    for input_value in tf.data.Dataset.from_tensor_slices(
            tf.cast(train_images, tf.float32)).shuffle(500).batch(1).take(150):
        yield [input_value]

# Convert using dynamic range quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model_quant = converter.convert()
# Save the model to disk
open("model_dynamic_range_quantization.tflite", "wb").write(tflite_model_quant)

## Size difference
Dynamic_range_quantization_model_size = os.path.getsize("model_dynamic_range_quantization.tflite")
print("Dynamic range quantization model is %d bytes" % Dynamic_range_quantization_model_size)

# Convert using integer-only quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Ensure that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8 (APIs added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_advanced_quant = converter.convert()

# Save the model to disk
open("model_integer_only_quantization.tflite", "wb").write(tflite_model_advanced_quant)

Integer_only_quantization_model_size = os.path.getsize("model_integer_only_quantization.tflite")
print("Integer_only_quantization_model is %d bytes" % Integer_only_quantization_model_size)
difference = Dynamic_range_quantization_model_size - Integer_only_quantization_model_size
print("Difference is %d bytes" % difference)

Fig 8

Evaluating the TensorFlow Lite model

Now we'll run inferences using the TensorFlow Lite Interpreter to compare the model accuracies.
First, we need a function that runs inference with a given model and images, and then returns the predictions:

# Helper function to run inference on a TFLite model
def run_tflite_model(tflite_file, test_image_indices):
    # Initialize the interpreter
    interpreter = tf.lite.Interpreter(model_path=str(tflite_file))
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()[0]
    output_details = interpreter.get_output_details()[0]
    predictions = np.zeros((len(test_image_indices),), dtype=int)
    for i, test_image_index in enumerate(test_image_indices):
        test_image = test_images[test_image_index]
        test_label = test_labels[test_image_index]
        # If the input type is quantized, rescale input data to uint8
        if input_details['dtype'] == np.uint8:
            input_scale, input_zero_point = input_details["quantization"]
            test_image = test_image / input_scale + input_zero_point
        test_image = np.expand_dims(test_image, axis=0).astype(input_details["dtype"])
        interpreter.set_tensor(input_details["index"], test_image)
        interpreter.invoke()
        output = interpreter.get_tensor(output_details["index"])[0]
        predictions[i] = output.argmax()
    return predictions

Next, we'll compare the performance of the original model and the quantized model on one image. model_basic_quantization.tflite is the original TensorFlow Lite model with floating-point data. model_integer_only_quantization.tflite is the last model we converted using integer-only quantization (it uses uint8 data for input and output). Let's create another function to print our predictions and run it for testing.

import matplotlib.pylab as plt

# Change this to test a different image
test_image_index = 1

## Helper function to test the models on one image
def test_model(tflite_file, test_image_index, model_type):
    global test_labels
    predictions = run_tflite_model(tflite_file, [test_image_index])
    plt.imshow(test_images[test_image_index].reshape(28,28))
    template = model_type + " Model \n True:{true}, Predicted:{predict}"
    _ = plt.title(template.format(true=str(test_labels[test_image_index]), predict=str(predictions[0])))
    plt.grid(False)

Fig 9

Fig 10

Then we evaluate the quantized model using all the test images we loaded at the beginning of this tutorial. After summarizing the prediction results on the test dataset, we can see that the prediction accuracy of the quantized model is only about 7% lower than that of the original model, which is not bad.

# Helper function to evaluate a TFLite model on all images
def evaluate_model(tflite_file, model_type):
    test_image_indices = range(test_images.shape[0])
    predictions = run_tflite_model(tflite_file, test_image_indices)
    accuracy = (np.sum(test_labels == predictions) * 100) / len(test_images)
    print('%s model accuracy is %.4f%% (Number of test samples=%d)' % (
        model_type, accuracy, len(test_images)))

Deploying the model

Converting the TensorFlow Lite model to a C file

The following commands run xxd on the quantized model and write the output to a file called model_quantized.cc; in this file, the model is defined as an array of bytes. The output is very long, so we won't reproduce it all here, but here's a snippet that includes just the beginning and end.

# Save the file as a C source file
xxd -i model_integer_only_quantization.tflite > model_quantized.cc
# Print the source file
cat model_quantized.cc

Fig 11

Deploying the C file to the project

We use the tensorflow_lite_cifar10 demo as a prototype, replace the original model, and make some code modifications; below is the code of the modified main file.
#include "board.h" #include "fsl_debug_console.h" #include "pin_mux.h" #include "timer.h" #include <iomanip> #include <iostream> #include <string> #include <vector> #include "tensorflow/lite/kernels/register.h" #include "tensorflow/lite/model.h" #include "tensorflow/lite/optional_debug_tools.h" #include "tensorflow/lite/string_util.h" #include "get_top_n.h" #include "model.h" #define LOG(x) std::cout // ---------------------------- Application ----------------------------- // Lenet Mnist model input data size (bytes). #define LENET_MNIST_INPUT_SIZE 28*28*sizeof(char) // Lenet Mnist model number of output classes. #define LENET_MNIST_OUTPUT_CLASS 10 // Allocate buffer for input data. This buffer contains the input image // pre-processed and serialized as text to include here. uint8_t imageData[LENET_MNIST_INPUT_SIZE] = { #include "clothes_select.inc" }; /* Tresholds */ #define DETECTION_TRESHOLD 60 /*! * @brief Initialize parameters for inference * * @param reference to flat buffer * @param reference to interpreter * @param pointer to storing input tensor address * @param verbose mode flag. Set true for verbose mode */ void InferenceInit(std::unique_ptr<tflite::FlatBufferModel> &model, std::unique_ptr<tflite::Interpreter> &interpreter, TfLiteTensor** input_tensor, bool isVerbose) { model = tflite::FlatBufferModel::BuildFromBuffer(Fashion_MNIST_model, Fashion_MNIST_model_len); if (!model) { LOG(FATAL) << "Failed to load model\r\n"; return; } tflite::ops::builtin::BuiltinOpResolver resolver; tflite::InterpreterBuilder(*model, resolver)(&interpreter); if (!interpreter) { LOG(FATAL) << "Failed to construct interpreter\r\n"; return; } int input = interpreter->inputs()[0]; const std::vector<int> inputs = interpreter->inputs(); const std::vector<int> outputs = interpreter->outputs(); if (interpreter->AllocateTensors() != kTfLiteOk) { LOG(FATAL) << "Failed to allocate tensors!"; return; } /* Get input dimension from the input tensor metadata assuming one input only */ *input_tensor = interpreter->tensor(input); auto data_type = (*input_tensor)->type; if (isVerbose) { const std::vector<int> inputs = interpreter->inputs(); const std::vector<int> outputs = interpreter->outputs(); LOG(INFO) << "input: " << inputs[0] << "\r\n"; LOG(INFO) << "number of inputs: " << inputs.size() << "\r\n"; LOG(INFO) << "number of outputs: " << outputs.size() << "\r\n"; LOG(INFO) << "tensors size: " << interpreter->tensors_size() << "\r\n"; LOG(INFO) << "nodes size: " << interpreter->nodes_size() << "\r\n"; LOG(INFO) << "inputs: " << interpreter->inputs().size() << "\r\n"; LOG(INFO) << "input(0) name: " << interpreter->GetInputName(0) << "\r\n"; int t_size = interpreter->tensors_size(); for (int i = 0; i < t_size; i++) { if (interpreter->tensor(i)->name) { LOG(INFO) << i << ": " << interpreter->tensor(i)->name << ", " << interpreter->tensor(i)->bytes << ", " << interpreter->tensor(i)->type << ", " << interpreter->tensor(i)->params.scale << ", " << interpreter->tensor(i)->params.zero_point << "\r\n"; } } LOG(INFO) << "\r\n"; } } /*! 
 * @brief Runs inference on the input buffer and prints the result to the console
 *
 * @param pointer to image data
 * @param image data length
 * @param pointer to labels string array
 * @param reference to flat buffer model
 * @param reference to interpreter
 * @param pointer to input tensor
 */
void RunInference(const uint8_t* image, size_t image_len, const std::string* labels, std::unique_ptr<tflite::FlatBufferModel> &model, std::unique_ptr<tflite::Interpreter> &interpreter, TfLiteTensor* input_tensor)
{
    /* Copy image to tensor. */
    memcpy(input_tensor->data.uint8, image, image_len);

    /* Do inference on static image in first loop. */
    auto start = GetTimeInUS();
    if (interpreter->Invoke() != kTfLiteOk)
    {
        LOG(FATAL) << "Failed to invoke tflite!\r\n";
        return;
    }
    auto end = GetTimeInUS();

    const float threshold = (float)DETECTION_TRESHOLD / 100;
    std::vector<std::pair<float, int>> top_results;

    int output = interpreter->outputs()[0];
    TfLiteTensor *output_tensor = interpreter->tensor(output);
    TfLiteIntArray* output_dims = output_tensor->dims;
    // Assume output dims to be something like (1, 1, ..., size)
    auto output_size = output_dims->data[output_dims->size - 1];

    /* Find best image candidates. */
    GetTopN<uint8_t>(interpreter->typed_output_tensor<uint8_t>(0), output_size, 1, threshold, &top_results, false);

    if (!top_results.empty())
    {
        auto result = top_results.front();
        const float confidence = result.first;
        const int index = result.second;
        if (confidence * 100 > DETECTION_TRESHOLD)
        {
            LOG(INFO) << "----------------------------------------\r\n";
            LOG(INFO) << "     Inference time: " << (end - start) / 1000 << " ms\r\n";
            LOG(INFO) << "     Detected: " << std::setw(10) << labels[index] << " (" << (int)(confidence * 100) << "%)\r\n";
            LOG(INFO) << "----------------------------------------\r\n\r\n";
        }
    }
}

/*!
 * @brief Main function
 */
int main(void)
{
    const std::string labels[] = {"T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
                                  "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"};

    /* Init board hardware. */
    BOARD_ConfigMPU();
    BOARD_InitPins();
    BOARD_BootClockRUN();
    BOARD_InitDebugConsole();
    InitTimer();

    std::unique_ptr<tflite::FlatBufferModel> model;
    std::unique_ptr<tflite::Interpreter> interpreter;
    TfLiteTensor* input_tensor = 0;
    InferenceInit(model, interpreter, &input_tensor, false);

    LOG(INFO) << "Fashion MNIST object recognition example using a TensorFlow Lite model.\r\n";
    LOG(INFO) << "Detection threshold: " << DETECTION_TRESHOLD << "%\r\n";

    /* Run inference on the static image. */
    LOG(INFO) << "\r\nStatic data processing:\r\n";
    RunInference((uint8_t*)imageData, (size_t)LENET_MNIST_INPUT_SIZE, labels, model, interpreter, input_tensor);

    while(1) {}
}

Testing result

After deploying the model in the demo project, we'll run this demo on the MIMXRT1060 board (Fig 12) for testing.

Fig 12

1. Run the code below to convert a Fashion MNIST image to text. The process_image() function converts a Fashion MNIST image into an include file of static data; include this file in the demo project (a usage sketch follows the steps below).
def process_image(image, output_path, num_batch=1):
    img_data = np.transpose(image, (2, 0, 1))
    # Repeat image for batch processing (resulting tensor is NCHW or NHWC)
    img_data = np.reshape(img_data, (num_batch, img_data.shape[0], img_data.shape[1], img_data.shape[2]))
    img_data = np.repeat(img_data, num_batch, axis=0)
    img_data = np.reshape(img_data, (num_batch, img_data.shape[1], img_data.shape[2], img_data.shape[3]))
    # Serialize image batch
    img_data_bytes = bytearray(img_data.tobytes(order='C'))
    image_bytes_per_line = 20
    with open(output_path, 'wt') as f:
        idx = 0
        for byte in img_data_bytes:
            f.write('0X%02X, ' % byte)
            if idx % image_bytes_per_line == (image_bytes_per_line - 1):
                f.write('\n')
            idx = idx + 1
    # Return serialized image size
    return len(img_data_bytes)

2. Run the demo project on the board.
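For reference, a hypothetical invocation of process_image() from step 1 might look like the sketch below; the 28x28x1 reshape is an assumption about the HWC layout the function expects, and clothes_select.inc matches the include file used by the main file above.

# Hypothetical usage sketch: serialize one Fashion MNIST test image for the MCU project
img = test_images[1].reshape(28, 28, 1).astype(np.uint8)  # HWC layout assumed
size = process_image(img, "clothes_select.inc")
print("Serialized %d bytes" % size)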
Setting up the RT600 development environment

Watch the RT600 getting-started training video: https://www.nxp.com/document/guide/getting-started-with-i-mx-rt600-evaluation-kit:GS-MIMXRT685-EVK?&tid=vanGS-MIMXRT685-EVK#title2.1

Download the i.MX RT600 SDK: https://mcuxpresso.nxp.com/en/select?device=EVK-MIMXRT685

Download MCUXpresso IDE. Note that MCUXpresso IDE 11.1.1 or later is required. https://www.nxp.com/webapp/swlicensing/sso/downloadSoftware.sp?catid=MCUXPRESSO

Download and install LPCScrypt, which can upgrade the default on-board CMSIS-DAP firmware to J-LINK. Through J-LINK, you can download and debug the HiFi4 DSP firmware. Download link: https://www.nxp.com/design/microcontrollers-developer-resources/lpc-microcontroller-utilities/lpcscrypt-v2-1-1:LPCSCRYPT?&tab=Design_Tools_Tab

Download and install the J-LINK driver: https://www.segger.com/downloads/jlink/

Download and install the Cadence HiFi 4 DSP IDE for MIMXRT600. For the first download, register a user account at https://tensilicatools.com/register/. For users in mainland China: if the human verification widget does not appear on the registration page, your IP is being blocked by the firewall; you will need a proxy or other workaround, otherwise the registration cannot be submitted successfully.

Download the HiFi DSP Development Tools for i.MX RT600: https://tensilicatools.com/download/rt600-download-page/

Apply for the license for the RT600 SDK. Note that when entering the MAC address of the bound network adapter, remove the ':' separators, otherwise the request will fail.

After the application succeeds, you can download the license file.

Start Xplorer 8.0.13, then install the license file via the menu Help -- Xplorer License Keys. A successful installation looks as follows.

Configure the Xplorer download debugger: add the directory containing xt-ocd.exe to the system Path environment variable (one way to do this is shown in the sketch after these steps).

Enable "Use XOCD Manager" and specify the Topology File.

Set Download binary to Always to suppress the prompt dialog before every download, saving download time.

Download the HiFi4 DSP firmware through J-Link; you can then single-step through the code.
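As a sketch, one way to append the xt-ocd.exe directory to the user Path from a Windows command prompt is shown below. The install path is hypothetical; substitute your actual installation directory, and note that setx writes the merged result into the user Path.

:: Hypothetical install path; replace with the actual xt-ocd.exe directory
setx PATH "%PATH%;C:\Program Files (x86)\Tensilica\Xtensa OCD Daemon 14.08"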
MIMXRT1010 EVK (Chinese Version) Design Files and Hardware User's Guide
i.MX RT6xx

The RT6xx is a crossover MCU family and a breakthrough product combining the best of MCU and DSP functionality for ultra-low-power secure Machine Learning (ML) / Artificial Intelligence (AI) edge processing, performance-intensive far-field voice, and immersive 3D audio playback applications.

Fig 1 is the block diagram of the i.MX RT600. It consists of a Cortex-M33 core that runs up to 300 MHz with a 32 KB FlexSPI cache, and an optional HiFi4 DSP that runs up to 600 MHz with a 96 KB DSP cache and 128 KB of DSP TCM. It also contains a cryptography engine and a DSP/math accelerator in the PowerQuad co-processor. The device has 4.5 MB of on-chip SRAM. Key features include rich audio peripherals, high-speed USB with PHY, and advanced on-chip security. There is a Flexcomm peripheral that supports the configuration of numerous UARTs, SPI, I2C, I2S, etc.

Fig 1

Create an eIQ (TensorFlow Lite library) demo

The latest version of the SDK for the i.MX RT600 still doesn't contain any Machine Learning (ML) / Artificial Intelligence (AI) demos, so developers need to create this kind of demo themselves. The quickest way is to port the eIQ demos from the i.MX RT1050/1060 to the i.MX RT685. The steps of creating an eIQ (TensorFlow Lite library) demo are presented below.

Create a new C++ project

Install the SDK library.

Fig 2

Create a new C++ project using the installed SDK part. In the MCUXpresso IDE User Guide, Chapter 5, Creating New Projects using installed SDK Part Support, presents how to create a new project; please refer to it for details.

Porting tensorflow-lite

Copy the TensorFlow Lite library and its corresponding files to the target project.

Fig 3

Add the paths for the above files.

Fig 4

Fig 5

Fig 6

Porting the main code

The main() code is from the post: The "Hello World" of TensorFlow Lite.

Testing

On the MIMXRT685 EVK board (Fig 7), we record the input data x_value and the inferred output data y_value via the serial port (Fig 8).

Fig 7

Fig 8

In addition, we use Excel to plot the received data against our actual values, as the figure below shows.

Fig 9

In general, it replicates the result of The "Hello World" of TensorFlow Lite.

Troubleshooting

By default, the created project doesn't support printing floats, so this feature needs to be enabled by adding the symbols shown below (Fig 10).

Fig 10

When a neural network is executed, the results of one layer are fed into subsequent operations and so must be kept around for some time. The lifetimes of these activation layers vary depending on their position in the graph, and the memory size needed for each is controlled by the shape of the array that a layer writes out. These variations mean that it's necessary to calculate a plan over time to fit all these temporary buffers into as small an area of memory as possible. Currently, this is done when the model is first loaded by the interpreter, so if the area is not big enough, you'll see a crash. For this application demo, the default heap size is 4 KB, which is obviously not big enough to store the model's input, output, and intermediate tensors; the code gets stuck in the hard-fault interrupt handler (Fig 11).

Fig 11

So, how large should we allocate the heap area? That's a good question. Unfortunately, there's not a simple answer. Different model architectures have different sizes and numbers of input, output, and intermediate tensors, so it's difficult to know how much memory we'll need.
The number doesn't need to be exact (we can reserve more memory than we need), but since microcontrollers have limited RAM, we should keep it as small as possible so there's space for the rest of our program. We can find a suitable value through trial and error. For this application demo, the code works well after increasing the heap to ten times its previous size (Fig 12).

Fig 12
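For reference, ten times the 4 KB default is 40 KB (0xA000 bytes). Where this value is set depends on the toolchain: in MCUXpresso IDE with a managed linker script, the heap size is changed in the project's linker settings, which corresponds to a GNU linker-script fragment roughly like the sketch below (the symbol name is illustrative, not necessarily the SDK's actual one).

/* Illustrative GNU ld fragment: grow the heap from the 4 KB default to 40 KB */
_heap_size = 0xA000;   /* 10 x 0x1000 */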
Introduction

A common need for GUI applications is to implement a clock function. Whether it is to create a clock interface for the end user's benefit, or just to time animations or other actions, implementing an accurate clock is a useful and important feature for GUI applications. The aim of this document is to help you implement clock functions in your AppWizard project.

Methods

When implementing a real-time clock, there are a couple of general methods:

Use an independent timer in your MCU
Use animation objects

Each of these methods has its advantages and disadvantages. If you just need a timer that doesn't require extra code and you don't require control or assurance of precision, or maybe you can't spare another timer, using an animation object (method 2) may be a good option. If your application requires an assurance of precision or requires other real-time actions to be performed that AppWizard can't control, it is best to implement an independent timer in your MCU (method 1).

Method 1: Independent MCU Timer

Implementing a timer via an independent MCU timer allows better control and guarantees precision, because it isn't a shared clock and the developer can adjust the interrupt priorities such that the timer interrupt has the highest priority. AppWizard timing uses a common timer and then time-slices activities, similar to how an operating system works. It is for this reason that an independent MCU timer is best when you need control over the precision of the timer or you need other real-time actions to be triggered by it. When implementing a timer using an independent MCU timer (like the RTC module), an understanding of how to interact with Text widgets is needed. Let's look at this first.

Interacting with Text Widgets

Editing Text widgets occurs through the emWin library API (the emWin library is the underlying code that AppWizard builds upon). The Text widget API functions are documented in the emWin Graphic Library User Guide and Reference Manual, UM3001. Most of the Text widget API functions require a Text widget handle. Be sure not to confuse this handle with the AppWizard ID. Imagine a clock example where there are two Text widgets in the interface: one for the minutes and one for the seconds. The AppWizard IDs of these objects might be ID_TEXT_MINS and ID_TEXT_SECONDS respectively (again, these are not to be confused with the handles to the Text widgets used by emWin library functions). The first action software should take is to obtain the handles for the Text widgets. This can be done using the WM_GetDialogItem() function. The code to get the active window handle and the handles for the two Text widgets is shown below:

activeWin = WM_GetActiveWindow();
textBoxMins = WM_GetDialogItem(activeWin, ID_TEXT_MINS);
textBoxSecs = WM_GetDialogItem(activeWin, ID_TEXT_SECONDS);

Note that this function requires the handle to the parent window of the Text widget. If your application has multiple windows or screens, you may need to be creative in how you acquire this handle, but for this example the software can simply call the WM_GetActiveWindow() function (since there is only one screen). When to call these functions can be a bit tricky as well. They can be called before the MainTask() function of the application is called and the application will not crash.
However, the handles won't be correct and the Text widgets will not be updated as expected. It's recommended that these handles be initialized when the screen is initialized. An example of how this would be done is shown below:

void cbID_SCREEN_CLOCK(WM_MESSAGE * pMsg)
{
    extern WM_HWIN activeWin;
    extern WM_HWIN textBoxMins;
    extern WM_HWIN textBoxSecs;
    extern WM_HWIN textBoxDbg;

    if(pMsg->MsgId == WM_INIT_DIALOG)
    {
        activeWin = WM_GetActiveWindow();
        textBoxMins = WM_GetDialogItem(activeWin, ID_TEXT_MINS);
        textBoxSecs = WM_GetDialogItem(activeWin, ID_TEXT_SECONDS);
        textBoxDbg = WM_GetDialogItem(activeWin, ID_TEXT_DBG);
    }
    GUI_USE_PARA(pMsg);
}

Once the Text widget handles have been acquired, the text can be updated using the TEXT_SetText() function, or in this case the TEXT_SetDec() function, because the Text widgets are configured for decimal mode (we want to display numbers). An example of the code to do this is shown below.

/* TEXT_SetDec(Text Widget Handle, Value as Int, Length, Shift, Sign, Leading Spaces) */
if(TEXT_SetDec(textBoxSecs, (int)gSecs, 2, 0, 0, 0))
{
    /* Perform action here if necessary */
}
if(TEXT_SetDec(textBoxMins, (int)gMins, 2, 0, 0, 0))
{
    /* Perform action here if necessary */
}

Method 2: Animation Objects

When implementing a real-time clock using animation objects, it is necessary to implement a loop. This could be done outside of the AppWizard GUI (in your code), but because the timing precision can't be guaranteed, it's just as easy to implement a loop in the AppWizard GUI if you know how (it isn't very intuitive). Before examining the interactions that build this loop, let's look at the variables and objects needed:

ID_VAR_SECS - This variable holds the current seconds value.
ID_VAR_SECS_1 - This variable holds the next seconds value.
ID_TEXT_SECONDS - Text box that displays the current seconds value.
ID_VAR_END_CNT - Variable that holds the value at which the seconds roll over and the minute count increments.
ID_TEXT_MINS - Text box that holds the current minute count.
ID_VAR_MIN_END_CNT - Variable that holds the value at which the minutes roll over (which would also increment the hour count if hours were implemented).
ID_BUTTON_SECS - A hidden button that initiates actions when the seconds variable has reached the end count.

Now, here are the interactions used to implement the clock feature. The heart of the loop is the set of interactions triggered by ID_VAR_SECS:

ID_VAR_SECS -> ID_VAR_SECS_1: When ID_VAR_SECS changes, it adds one to ID_VAR_SECS_1 so that the animation will animate to one second from the current time.
ID_VAR_SECS -> ID_TEXT_SECONDS: When ID_VAR_SECS changes, it also starts the animation from the current value to the next second (ID_VAR_SECS_1).
A very essential part of the loop is ensuring the animation restarts every time, so ID_TEXT_SECONDS changes the value of ID_VAR_SECS when the animation ends. ID_VAR_SECS is changed to the current time value, ID_VAR_SECS_1.
When the ID_TEXT_SECONDS animation ends, it must also decrement the ID_VAR_END_CNT variable. This is analogous to the control variable of a "for" loop being updated. This is done using the ADDVALUE job, adding '-1' to the variable ID_VAR_END_CNT.
When ID_VAR_END_CNT changes, it updates the hidden button, ID_BUTTON_SECS, with the new value.
This is analogous to a "for" loop checking whether its control variable is still within its limits. The interactions in group 5 restart the loop when the seconds reach the count we desire. When the loop is restarted, the following actions must be taken:

Set ID_VAR_SECS and ID_VAR_SECS_1 to the initial value for the next loop ('0' in this case). Note that ID_VAR_SECS_1 MUST be set before ID_VAR_SECS. Additionally, if the loop is to continue, ID_VAR_SECS and ID_VAR_SECS_1 must be set to the same value.
ID_TEXT_SECONDS is set to the initial value. If this isn't done, the text box will try to animate from the final value to the initial value and will look "weird".
ID_VAR_END_CNT is reset to its initial value (60 in this case).

ID_BUTTON_SECS is also responsible for updating the minutes values. In this case, it increments the ID_TEXT_MINS value (counting up in minutes) and decrements ID_VAR_MIN_END_CNT.

Adjusting the time of an animation object

Animation objects (as well as other emWin objects) use the GUI_X_DELAY function for timing. It is up to the host software to implement this function. In the i.MX RT examples, the General Purpose Timer (GPT) is used for this timing, so how the GPT is configured affects the timing of the application and how fast or slow the animations run. The GPT is configured in the function BOARD_InitGPT(), which resides in the main source file. The recommended way to adjust the speed of the timer is by changing the divider value of the GPT.

Conclusion

We have seen two different methods of implementing a real-time clock in an AppWizard GUI application:

Use an independent timer in your MCU
Use animation objects

Using an independent timer in your MCU may be preferred, as it allows better control over the timing, can allow real-time actions to be performed that AppWizard can't control, and provides some assurance of precision (a minimal timer-side sketch for this method is shown below). Using animation objects may be preferred if you just need a quick timer implementation that doesn't require manually adding code to your project or using a second timer.
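To round out method 1, below is a minimal sketch of the timer side, assuming a 1 Hz periodic timer interrupt. The handler name is hypothetical and depends on the MCU and timer module used; gSecs, gMins, textBoxSecs, and textBoxMins are the globals used earlier in this document.

volatile uint32_t gSecs = 0;
volatile uint32_t gMins = 0;

/* Hypothetical 1 Hz timer interrupt handler (e.g. from an RTC or GPT). */
void CLOCK_TIMER_IRQHandler(void)
{
    /* Clear the timer/RTC interrupt flag here (device-specific). */

    if (++gSecs >= 60)
    {
        gSecs = 0;
        if (++gMins >= 60)
        {
            gMins = 0;
        }
    }

    /* Refresh the Text widgets using the handles acquired at screen init.
       In a real application it is often safer to set a flag here and call
       TEXT_SetDec() from the main loop rather than from interrupt context. */
    TEXT_SetDec(textBoxSecs, (int)gSecs, 2, 0, 0, 0);
    TEXT_SetDec(textBoxMins, (int)gMins, 2, 0, 0, 0);
}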
Overview of i.MX RT1050

The i.MX RT1050 is the industry's first crossover processor; it combines the high performance and high level of integration of an applications processor with the ease of use and real-time functionality of a microcontroller. The i.MX RT1050 runs the Arm Cortex-M7 core at 600 MHz, which means it definitely has the ability to handle complicated computing tasks, such as floating-point arithmetic, matrix operations, etc., that general MCUs find hard to conquer. It also has a rich set of peripherals, which makes it suitable for a variety of applications; in this demo, the PXP (Pixel Pipeline), CSI (CMOS Sensor Interface), and eLCDIF (Enhanced LCD Interface) allowed me to build a camera display system easily.

Fig 1 i.MX RT series

Fig 2 i.MX RT1050 Block Diagram

Basic concepts of Computer Vision (CV)

Machine Learning (ML) is moving to the edge for a variety of reasons, such as bandwidth constraints, latency, reliability, security, etc. People want edge computing capability on embedded devices to provide more advanced services, like voice recognition for smart speakers and face detection for surveillance cameras.

Fig 3 Reasons

Convolutional Neural Networks (CNNs) are one of the main ways to do image recognition and image classification. CNNs use a variation of multilayer perceptrons that requires minimal pre-processing, based on their shared-weights architecture and translation-invariance characteristics.

Fig 4 Structure of a typical deep neural network

Above is an example that shows the original image input on the left-hand side and how it progresses through each layer to calculate the probability on the right-hand side.

Hardware

MIMXRT1050 EVK board
RK043FN02H-CT (LCD panel)

Fig 5 MIMXRT1050 EVK board

Reference demo code

emwin_temperature_control: demonstrates the graphical widgets of the emWin library.
cmsis_nn_cifar10: demonstrates a convolutional neural network (CNN) example using the convolution, ReLU activation, pooling, and fully-connected functions from the CMSIS-NN software library. The CNN used in this example is based on the CIFAR-10 example from Caffe. The neural network consists of 3 convolution layers interspersed with ReLU activation and max-pooling layers, followed by a fully-connected layer at the end. The input to the network is a 32x32-pixel color image, which is classified into one of the 10 output classes.

Note: Both of these demo projects are from the SDK library.

Deploying the neural network model

Fig 6 illustrates the steps of deploying the neural network model on the embedded platform. The cmsis_nn_cifar10 demo project provides the quantized parameters for the 3 convolution layers, so in this implementation I use these parameters directly (a sketch of how the demo chains the CMSIS-NN kernels appears at the end of this post). I randomly choose 100 images from the test set as a round of input to evaluate the accuracy of the model, and through several rounds of testing I find the model's accuracy is about 65%, as the figure below shows.
Fig 6 Deploying the neural network model

Fig 7 cmsis_nn_cifar10 demo project test result

The CIFAR-10 dataset is a collection of images commonly used to train ML and computer vision algorithms. It consists of 60000 32x32 color images in 10 classes ("airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"), with 6000 images per class. There are 50000 training images and 10000 test images.

Embedded platform software structure

After POR, various components are initialized: system clock, pin mux, camera, CSI, PXP, LCD, emWin, etc. Then the control GUI shows up on the LCD; pressing the Play button displays the camera video on the LCD, and once an object is in the camera's view, you can press the Capture button to pause the display and run the model to identify the object. Fig 8 presents the software structure of this demo.

Fig 8 Embedded platform software structure

Object identification test

The following three figures present the testing results.

Fig 9

Fig 10

Fig 11

Future work

Use the PyTorch framework to train a better, more sophisticated convolutional network for object recognition.
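As a reference for how the demo drives the CMSIS-NN library, the first convolution/pooling stage of the cmsis_nn_cifar10 pipeline chains the legacy q7 kernels roughly as in the sketch below. The CONV1_*/POOL1_* macros, the weight/bias arrays, and the scratch buffers all come from the demo project's generated parameter header, so this is an outline rather than a complete, buildable listing.

#include "arm_nnfunctions.h"

/* First stage of the CIFAR-10 CNN: convolution on the 32x32 RGB input,
   ReLU activation in place, then max pooling into the second buffer. */
void run_first_stage(q7_t *image_rgb, q7_t *buf_a, q7_t *buf_b, q15_t *col_buffer)
{
    arm_convolve_HWC_q7_RGB(image_rgb, CONV1_IM_DIM, CONV1_IM_CH,
                            conv1_wt, CONV1_OUT_CH, CONV1_KER_DIM,
                            CONV1_PADDING, CONV1_STRIDE,
                            conv1_bias, CONV1_BIAS_LSHIFT, CONV1_OUT_RSHIFT,
                            buf_a, CONV1_OUT_DIM, col_buffer, NULL);
    arm_relu_q7(buf_a, CONV1_OUT_DIM * CONV1_OUT_DIM * CONV1_OUT_CH);
    arm_maxpool_q7_HWC(buf_a, CONV1_OUT_DIM, CONV1_OUT_CH,
                       POOL1_KER_DIM, POOL1_PADDING, POOL1_STRIDE,
                       POOL1_OUT_DIM, NULL, buf_b);
}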
[Chinese translation] See attachment.   Original post: https://community.nxp.com/docs/DOC-341317