i.MX RT Crossover MCUs Knowledge Base

A small project I worked on was to understand how the RT1050 boots from different memory types. I used the led_blinky code from the SDK as a baseline and ran some tests on the EVKB board. The data I gathered is described below, along with the detailed testing procedure.

Testing Procedure
The boot-up time is defined as the time from when the processor first receives power to when it executes the first line of code in the main() function. Time was measured using an oscilloscope (Tektronix TDS 2014) between the rising edge of the POR_B* signal and the following two points:
FlexSPI_CS asserted (first read of the FlexSPI by the ROM)**
GPIO toggle in application code (signals the beginning of code execution)***

*The POR_B signal was available to scope through header J26-1.
**The FlexSPI_CS signal is available through a small pull-up resistor on the board, R356. A small wire was soldered alongside this resistor and probed with the oscilloscope.
***The GPIO pin used was the same one connected to USER_LED (active low). This pin could be scoped through header J22-5.

TP 2, 3, 4, and 5 were used to ground the oscilloscope probe. All of this was done on the EVKB evaluation board. Here are a couple of noteworthy points about the tests run:
This report mostly emphasizes the time between the rise of the POR_B signal and the first line of code execution. There is also a delay between when power is first provided to the board and when POR_B goes high, but that is a matter of power electronics and varies with the user's application and design, so this report does not place much emphasis on it.
The first lines of the application actually configure several pins of the processor. Only after these pins are configured does the GPIO toggle low and the time get taken on the oscilloscope. However, these configuration lines execute so quickly that their time is ignored for the test. A minimal sketch of this GPIO timing marker is shown at the end of this section.

Clock Configurations
The bootable image was flashed to the RT1050 in all three cases. Afterwards, in MCUXpresso, the debugger was configured with "Attach Only" set to true. A debug session was then launched, and after the processor finished executing code, it was paused and the register values were read according to the RT1050 Reference Manual, chapter 18, CCM Block Diagram.

Boot Configuration | Core Clock (MHz)* | FlexSPI Clock (MHz) | SEMC Clock (MHz)
FlexSPI            |                   | 130                 | 99
SDRAM              | 396               | 130                 | 99
SRAM               | 396               | 130                 | 99

*The core clock speed was also verified by configuring CLKO1 as an output carrying the core clock divided by 8. This frequency was measured with the oscilloscope, confirming the 396 MHz core clock.

Results
The time to the chip-select pin represents the moment the first flash read happens on the RT1050. The time to the GPIO output represents the boot-up time.

As expected, XiP from Hyperflash boots faster than the other memories. SRAM and SDRAM images must be copied to executable memory before executing, which takes more time, so they boot more slowly. The sections below give a more thorough explanation of how these tests were run and why Hyperflash XiP is expected to be the fastest.
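For reference, here is a minimal sketch (not the author's exact code) of the kind of GPIO timing marker described above, using the same SDK GPIO calls that appear elsewhere in this knowledge base. The GPIO instance and pin index are assumptions; substitute the actual EVKB USER_LED pin, and note that the pin mux configuration is omitted.

#include "fsl_gpio.h"

int main(void)
{
    /* Timing marker: drive the (active low) USER_LED pin low as the very first
       action in main(), so the oscilloscope captures the moment application
       code starts executing. GPIO1 pin 9 is an assumption for the LED pin. */
    gpio_pin_config_t led_config = {kGPIO_DigitalOutput, 1, kGPIO_NoIntmode};

    GPIO_PinInit(GPIO1, 9U, &led_config);    /* starts high: LED off */
    GPIO_WritePinOutput(GPIO1, 9U, 0U);      /* toggle low: scope trigger point */

    /* ... rest of the led_blinky application ... */
    while (1)
    {
    }
}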
Hyperflash XiP Boot-Up
Below is an outline of what we expect the Hyperflash XiP boot-up process to look like:
Power-on reset (J26-1)
Begin access to flash memory (FlexSPI_SS0)
Execute in place in flash (XiP)
First line of code is executed (USER_LED)
In MCUXpresso, the map file showed the following: The oscilloscope image is below:

SDRAM Boot-Up
The processor boots from ROM, which copies the application image from the serial NOR flash memory to SDRAM (the serial NOR flash uses Hyperflash communication). The RT flashloader tool lets the application be loaded to the flash, configured so that it is copied to SDRAM and executed from there.

Copying to SDRAM is expected to be slower than executing in place from Hyperflash, since an entire copy action must take place first.

The SDRAM boot-up process looks like the following:
Power-on reset (J26-1)
Begin access to flash memory (FlexSPI_SS0)
Copy code to SDRAM
Execute from SDRAM
First line of code is executed (USER_LED)

In MCUXpresso, the map file showed the following:

To run this test, I followed these instructions: https://community.nxp.com/docs/DOC-340655.

SRAM Boot-Up
For SRAM, a process similar to that of SDRAM is expected. The processor first boots from internal ROM and then goes to Hyperflash. It then copies everything from Hyperflash to internal SRAM DTC memory and executes from there.
The SRAM boot-up process is as follows:
Power-on reset (J26-1)
Begin access to flash memory (FlexSPI_SS0)
Copy code to SRAM
Execute from SRAM
First line of code is executed (USER_LED)

In MCUXpresso, the map file showed the following:
RT106X secure JTAG test and IDE debug
1 Introduction
Regarding the usage of RT10XX Secure JTAG, nxp.com has already released a very good application note, AN12419 Secure JTAG for i.MXRT10xx: https://www.nxp.com/docs/en/application-note/AN12419.pdf. The application note explains the principle of Secure JTAG, how to modify the fuses to enable the Secure JTAG function, and the content of the related JLinkScript file, and then shows how to use JLINK Commander to identify the ARM core. Usually, if the ARM core can be identified, the Secure JTAG connection has passed. But in practical usage, I found many customers encounter various issues: for example, Secure JTAG cannot find the ARM core at all, or the core identification is not stable. Some customers have also asked how to add the Secure JTAG function to common IDEs, such as MCUXpresso, IAR, and MDK, to do Secure JTAG debugging.

Testing Secure JTAG also has a cost, because fuses need to be modified. If the wrong fuse position is accidentally modified, it may cause irreversible problems. Because customers' situations differ, I have done more tests, borrowing boards with a chip socket that can take different RT chips; I have tested RT1050, RT1060, and RT1064. In practice, some customers mentioned issues that also reproduce on the EVK, so I tested the Secure JTAG function on the RT1060 and RT1064 EVKs as well.

This article shares all the relevant experience, so that latecomers have a reference when encountering similar problems and can avoid unnecessary minefields. This document used the following platform:
MIMXRT1064-EVK revA (RT1060-EVK and RT1050-EVKB are similar)
SDK_2_13_0_EVK-MIMXRT1064
MCUXpresso IDE v11.7.1_9221
MDK V5.36 (higher revisions are the same)
IAR 9.30.1 (higher revisions are the same)
Segger JLINK Plus, JLINK driver version V788D
NXP-MCUBootUtility-5.1.0

2 RT1064 secure JTAG modification
Under normal circumstances, it is not recommended that customers burn all the related fuses at once and then test. I usually proceed step by step: prepare the hardware layout to ensure it can support JTAG, save the original fuse read-out, burn the JTAG fuse, test JTAG, and finally burn and test the other fuses for Secure JTAG.

2.1 MIMXRT1064-EVK hardware modification
For the RT10XX EVK, the board's default configuration matches the chip's default, which supports SWD. The JTAG pins are connected to other hardware modules on the board, which affects the JTAG function. When the JTAG function is to be used, the circuit needs to be modified, as MIMXRT105060HDUG describes:
(1) Burn fuse DAP_SJC_SWD_SEL from '0' to '1' to choose JTAG.
(2) DNP R323, R309, R152 to isolate the JTAG multiplexed signals.
(3) Keep J47 to J50 off to isolate the board-level debugger.
So, on the MIMXRT1064-EVK board, just remove R323, R309, R152 and disconnect J47, J48, J49, J50 (which disconnects the onboard debugger), then connect the external Segger JLINK JTAG interface to J21 on the MIMXRT1064-EVK.
2.2 Original fuse map read
First, the MIMXRT1064-EVK board enters serial download mode: SW7: 1-OFF, 2-OFF, 3-OFF, 4-ON.
Use the MCUBootUtility tool to connect the EVK and read the initial fuse map; the situation is as follows:
Fig 1
2.3 JTAG modification and test
Modify the fuse to switch from SWD to JTAG: 0X460[19] DAP_SJC_SWD_SEL=1
Fig 2
Use JLINK Commander with the JTAG method to connect the board and find the ARM CM7 core:
Fig 3
If the ARM CM7 core can't be identified, it means the hardware still has issues, or the modified fuse bit is not correct. Double-check, make sure the ARM core can be found, then go to the next steps.
2.4 Secure JTAG modification
Modify the fuse bits to enable Secure JTAG:
0X460[23:22]: JTAG_SMODE = 1
0X460[26]: KTE_FUSE = 1
0X610, 0X600: burn the key 0xedcba987654321. Users can also burn another custom key, but they need to record it, as the JLinkScript needs to use it.
Fig 4
In the above picture, the Secure JTAG fuse and key fuse are finished. Finally, burn fuse 0X400[6]: SJC_RESP_LOCK=1, which closes write and read access to the secret response key:
Fig 5
Here we can see the 0X600, 0X610 key area is shadowed. Now record UUID0 and UUID1; the script will read them out to check whether the UUID is correct.
2.5 Secure JTAG JLINK Commander test
During the Secure JTAG connection process, the JTAG_MOD pin needs to be pulled low and high, so a wire is needed for that. On the MIMXRT1064-EVK, J25_4 provides 3.3 V, and the JTAG_MOD signal is available at test point TP11. By default, JTAG_MOD is pulled low; when it needs to be pulled high, connect it to J25_4.
During the test, the following JLinkScript is needed; see also the attached NXP_RT1064_SecureJTAG.JlinkScript file:

int InitTarget(void) {
  int r;
  int v;
  int Key0;
  int Key1;

  JLINK_SYS_Report("***********************************************");
  JLINK_SYS_Report("J-Link script: InitTarget() *");
  JLINK_SYS_Report("NXP iMXRT, Enable Secure JTAG *");
  JLINK_SYS_Report("***********************************************");
  JLINK_SYS_MessageBox("Set pin JTAG_MOD => 1 and press any key to continue...");
  // Secure response stored @ 0x600, 0x610 in eFUSE region (OTP memory)
  Key0 = 0x87654321;
  Key1 = 0xedcba9;
  JLINK_CORESIGHT_Configure("IRPre=0;DRPre=0;IRPost=0;DRPost=0;IRLenDevice=5");
  CPU = CORTEX_M7;
  JLINK_SYS_Sleep(100);
  JLINK_JTAG_WriteIR(0xC);   // Output Challenge instruction
  // Readback Challenge, shift 64 dummy bits on TDI, receive Challenge bits on TDO
  JLINK_JTAG_StartDR();
  JLINK_SYS_Report("Reading Challenge ID....");
  JLINK_JTAG_WriteDRCont(0xffffffff, 32);  // 32-bit dummy write on TDI / read 32 bits on TDO
  v = JLINK_JTAG_GetU32(0);
  JLINK_SYS_Report1("Challenge UUID0:", v);
  JLINK_JTAG_WriteDREnd(0xffffffff, 32);
  v = JLINK_JTAG_GetU32(0);
  JLINK_SYS_Report1("Challenge UUID1:", v);
  JLINK_JTAG_WriteIR(0xD);   // Output Response instruction
  JLINK_JTAG_StartDR();
  JLINK_JTAG_WriteDRCont(Key0, 32);
  JLINK_JTAG_WriteDREnd(Key1, 24);
  JLINK_SYS_MessageBox("Change pin JTAG_MOD => 0, press any key to continue...");
  return 0;
}

The SecJtag.bat file content is:

jlink.exe -JLinkScriptFile NXP_RT1064_SecureJTAG.JlinkScript -device MIMXRT1064XXX6A -if JTAG -speed 4000 -autoconnect 1 -JTAGConf -1,-1

This command uses JLINK Commander with the JLinkScript to make the Secure JTAG connection. When testing, put the three files SecJtag.bat, JLink.exe, and NXP_RT1064_SecureJTAG.JlinkScript in the same folder. For testing, change the board to internal boot mode: SW7: 1-OFF, 2-OFF, 3-ON, 4-OFF.
Run SecJtag.bat; the test situation is as follows. It prompts to connect JTAG_MOD to a high level:
Fig 6
Here, use the wire to connect J25_4 and TP11, which sets JTAG_MOD=1, then click OK to go to the next step:
Fig 7
It can be seen here that the correct UUID has been recognized, consistent with the UUID read by MCUBootUtility above. Many customers cannot read the correct UUID here, indicating a problem with the hardware modification, the fuse modification, or something else, such as the JTAG pins not being enabled in the app, which will be described in detail later. Now disconnect TP11 from J25_4 (the default is JTAG_MOD=0) and click OK to continue:
Fig 8
Here we can see the ARM CM7 core is found, which means this hardware platform has realized the Secure JTAG connection. Now the IDEs can be used for debugging.
3 Secure JTAG debug function in 3 IDEs
This chapter covers how to use the Secure JTAG function in the three IDEs commonly used with RT10XX (MCUXpresso, IAR, and MDK) to implement Secure JTAG code debugging.
3.1 Software code preparation
This article selects the SDK hello_world project as the test demo: SDK_2_13_0_EVK-MIMXRT1064\boards\evkmimxrt1064\demo_apps\hello_world
Two points should be noted here:
Do not use led_blinky directly, because the LED control pin GPIO_AD_B0_09 used by that code is JTAG_TDI; downloading that code will cause the Secure JTAG connection to fail, because the JTAG pin function has been changed.
Add the pin configuration for JTAG in the app code pinmux.c. Otherwise, due to the missing JTAG pin configuration, an empty RT1064 (a chip that has not yet been programmed) can use the Secure JTAG connection, but once the code is burned, the connection fails. Add the following code to pinmux.c:

IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_11_JTAG_TRSTB, 0U);
IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_06_JTAG_TMS, 0U);
IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_07_JTAG_TCK, 0U);
IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_09_JTAG_TDI, 0U);
IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_10_JTAG_TDO, 0U);

3.2 MCUXpresso Secure JTAG debug
Use the MCUXpresso IDE to import the SDK hello_world demo and modify pinmux.c to add the JTAG pin function configuration.
Configure MCUXpresso IDE's debugger JLinkGDBServerCL.exe version to match the installed JLINK driver version: Window->Preferences
Fig 9
Run->Debug Configurations: configure the interface to JTAG, choose the device MIMXRT1064xxx6A, and add the JLinkScript file.
Fig 10
Fig 11
Connect JTAG_MOD=1 (connect TP11 to J25_4), then click OK.
Fig 12
We can see it already gets the correct UUID. It then requires JTAG_MOD=0; here just leave TP11 floating, then click OK:
Fig 13
It can be seen that it has successfully entered debug mode and debugging can be done. For details, check the MCUXpresso11_7_1_MIMXRT1064_SJTAG.mp4 file in the attachment. The test experience here is that MCUXpresso V11.7.1 is a bit unstable and needs a few more tries, but the higher version V11.8.0 downloads very stably. If you can get a version higher than V11.7.1, it is recommended to use it.
3.3 IAR Secure JTAG debug
Some customers need to use the IAR IDE to debug the Secure JTAG function. You can use the hello_world demo in the SDK and modify pinmux.c to add the JTAG pin configuration code.
The differences are:
(1) Run the JLINK driver tool JLinkDLLUpdater.exe:
Fig 14
This simply refreshes the JLINK driver into the IAR and MDK IDEs.
(2) Rename the JLinkScript file to be consistent with the name of the build configuration and put it under the settings folder of the project folder. For example, the configuration here is hello_world_flexspi_nor_debug, so the required JLinkScript file name is hello_world_flexspi_nor_debug.JlinkScript; IAR will then automatically call the corresponding JLinkScript file.
Fig 15
(3) Configure the IAR debugger as JLINK with the JTAG interface:
Fig 16
Fig 17
Click the debug button to enter debug mode:
Fig 18
It asks to set JTAG_MOD=1; just connect TP11 to J25_4.
Fig 19
It then asks to set JTAG_MOD=0; just leave TP11 floating and click OK to continue.
Fig 20
We can see IAR can now do Secure JTAG debugging.
3.4 MDK Secure JTAG debug
For the MDK Secure JTAG configuration, the basic requirements are:
(1) Modify the pinmux.c code to enable the JTAG pin function.
(2) Run the JLINK driver tool JLinkDLLUpdater.exe to refresh the driver into MDK.
(3) Rename the JLinkScript file to JLinkSettings.JlinkScript and copy it to the MDK project folder; MDK will then call the JLinkScript file automatically.
Fig 21
(4) Change the debugger to JLINK, then change the interface to JTAG.
Fig 22
Fig 23
So far, the Secure JTAG related configuration of MDK has been completed. In theory, it should debug and run directly. But I found some problems after many tests: for code in RAM (hello_world debug), Secure JTAG debug works without problems; but for code in flash (hello_world_flexspi_nor_debug), downloading through Secure JTAG works, yet the program runs abnormally under the debugger, and checking the memory data in the flash also returns wrong data.
Fig 24
We can see the UUID is also correct. Normally this kind of issue is related to the flashloader used during downloading; however, JLINK's own flashloader cannot be modified directly, so I tried to use RT-UFL as the flashloader, and debugging was successful. If customers encounter similar problems when using MDK for Secure JTAG debugging, they can use RT-UFL as the flashloader. The reference documents are:
https://www.cnblogs.com/henjay724/p/13951686.html
https://www.cnblogs.com/henjay724/p/15465655.html
To summarize: copy the iMXRT_UFL files to the JLINK driver folder C:\Program Files\SEGGER\JLINK\Devices\NXP, and copy JLinkDevices.xml to the folder C:\Program Files\SEGGER\JLINK. The JLinkScript file is added the same way as in Figure 21. Modify the JlinkSettings.ini file: the device is MIMXRT1064_UFL, override = 1.
Fig 25
Delete the program algorithm; the RT-UFL algorithm will be used instead.
Fig 26
Uncheck "Update Target before Debugging".
Fig 27
Enter debug mode:
Fig 28
Configure JTAG_MOD=1 (connect TP11 to J25_4), click OK to continue:
Fig 29
Leave TP11 floating, click OK to enter debug mode; the result is:
Fig 30
We can see that after changing the flashloader to RT-UFL, the MDK project Secure JTAG debug also works OK. The attachment also shares the RT-UFL related files.
4 Summary
For Secure JTAG, you need to modify the hardware to support the JTAG function, modify the fuses to support Secure JTAG, and modify the code pins to enable the JTAG function. For IDE debug, you need to configure the relevant interface as JTAG and add the correct JLinkScript file, so that the Secure JTAG function can run successfully and IDE code debugging can be performed.
Attachments:
evkmimxrt1064_hello_world_SJTAG.zip: MCUXpresso project
EVK-MIMXRT1064-hello_world_iar.7z: IAR project
EVK-MIMXRT1064-hello_world_mdk.7z: MDK project
File\NXP_RT1064_SecureJTAG.JlinkScript: JLINK script
File\SecJtag.bat: used together with JLink.exe and NXP_RT1064_SecureJTAG.JlinkScript to make the JLINK Commander connection, which finds the ARM core
File\RT-UFL: RT ultra flashloader algorithm, source: https://github.com/JayHeng/RT-UFL
Many thanks to our expert @juying_zhong for the patient Secure JTAG guidance during my testing!
RT1170 AVB First Taste
1 Abstract
AVB (Audio Video Bridging) is an audio and video bridging technology and a time-sensitive network. It is mainly used to solve audio and video transmission problems within a local area network: delay and synchronization. AVB consists of a series of IEEE standards aimed at efficiently transmitting audio and video data in a local area network. The AVB protocol stack is as follows:
Fig 1
AVB is mainly a link-layer protocol and coexists with the traditional TCP/IP protocol stack.
AVB related protocols include:
gPTP (IEEE 802.1AS-2020): precision time synchronization protocol
AVTP (IEEE 1722-2016): audio and video transport protocol
FQTSS (IEEE 802.1Q-2018, section 34): credit-based shaper protocol
SRP (IEEE 802.1Q-2018, section 35): stream reservation protocol
AVDECC (IEEE 1722.1-2013): audio and video management protocol
EST (IEEE 802.1Qbv-2015)
FP (IEEE 802.3br-2016/IEEE 802.1Qbu-2016)
The topology of AVB is as follows:
Fig 2
End Station: Listener and Talker
Listener: node that receives audio and video data
Talker: node that outputs audio and video data
AV Bridge: AVB bridge
The purpose of this article is not to explain the AVB protocol itself. As a first experience of AVB on the RT1170, it mainly explains how to use the officially provided AVB/TSN protocol stack to implement the AVB audio data transmission function on the NXP MIMXRT1170-EVK board.
2 RT1170 AVB testing
2.1 Hardware preparation
2 x MIMXRT1170-EVK REV C4, one as Talker, the other as Listener.
Pin modification: remove the R228, R234, R232, R229 resistors.
Board default configuration:
J27: 1-2
J5, J6, J7, J8: short connect
J38: 5-6
SW1: 1-OFF, 2-OFF, 3-ON, 4-OFF
SW2: 1-OFF, 2-OFF, 3-OFF, 4-OFF, 5-OFF, 6-OFF, 7-OFF, 8-OFF, 9-OFF, 10-OFF
J11: code downloading and VCOM port
J4: 1G ENET, used for AVB communication
Fig 3
2.2 Software tool preparation
2.2.1 Related code
SDK_2_13_0_MIMXRT1170-EVK: https://mcuxpresso.nxp.com/en/builder?hw=MIMXRT1170-EVK&rel=667
Download the SDK corresponding to the system you will use (Windows/Linux): the Linux version for Linux, the Windows version for Windows. This article takes a Linux build system as the example, so the SDK Linux version is used.
genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_0.zip: https://mcuxpresso.nxp.com/en/dashboard?download=84124a72b3f5916f99168a06ef287f2f
The genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_0 file includes:
Fig 4
Binaries: RT1170 AVB/TSN bin files and RT1050 AVB bin files.
Doc: AVB/TSN stack related documents; very important, recommended reading first.
Genavb-apps-freertos-5_6_0: GenAVB/TSN app examples
genavb-sdk-5_6_0.tar.gz: GenAVB/TSN SDK supported configurations and devices
.patch: GenAVB/TSN patch for the RT1170 and RT1050 SDKs
2.2.2 Related software tools
Linux platform:
CMake (>= 3.10.2, tested with 3.16.3)
make
unzip
patch
Windows platform:
CMake (>= 3.10, tested with 3.22.1)
minGW-w64 (tested with 4.3.5)
7zip
Git with git bash (necessary for the patch utility)
This document uses the Linux Ubuntu platform; the software versions are:
Fig 5
A small experience to share: since the Ubuntu version used here is old, the installed CMake version was also old, and installing CMake 3.16.3 hit some issues. Here is how it was done:
First, download cmake-3.16.3-Linux-x86_64.tar.gz:
https://github.com/Kitware/CMake/releases/download/v3.16.3/cmake-3.16.3-Linux-x86_64.tar.gz
Unzip it to get cmake-3.16.3-Linux-x86_64:
tar -zxvf cmake-3.16.3-Linux-x86_64.tar.gz
Add the link to /usr/bin/cmake:
sudo ln -s /home/nxa07323/TSN_GENAVB/cmake-3.16.3-Linux-x86_64/bin/cmake /usr/bin/cmake
In the /usr/bin/cmake path, use ls -al; we can see the link information:
Fig 6
After the above operations, use cmake --version; we can see the CMake version is 3.16.3, as Fig 5 shows.
2.2.3 Code configuration
Building the AVB code also needs the RT1170 SDK; the steps are:
Unzip genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_0.zip and SDK_2_13_0_MIMXRT1170-EVK_linux.zip
Fig 7
Enter the SDK_2_13_0_MIMXRT1170-EVK_linux folder and apply genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_0/mcuxpresso-sdk-SDK_2_13_0_MIMXRT1170-EVK-5_6_0.patch.
The patch command is:
$ patch -p1 < path/to/mcuxpresso-sdk-SDK_2_13_0_MIMXRT1170-EVK-5_6_0.patch
The actual command used here:
patch -p1 < /home/nxa07323/avbdoc/genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_0/mcuxpresso-sdk-SDK_2_13_0_MIMXRT1170-EVK-5_6_0.patch
Fig 8
Add the links between the SDK and the AVB SDK; the structure is:
genavb-apps-freertos-5_6_0
├── boards
│   ├── evkbimxrt1050
│   │   ├── demo_apps
│   │   └── mcu-sdk -> /path/to/SDK_2_13_0_EVKB-IMXRT1050 (required for RT1052)
│   ├── evkmimxrt1170
│   │   ├── demo_apps
│   │   └── mcu-sdk -> /path/to/SDK_2_13_0_MIMXRT1170-EVK (required for RT1176)
│   └── src
│       └── demo_apps
└── gen_avb -> /path/to/genavb-sdk-5_6_0
For the RT1170, two links are needed:
1) Add the SDK path link at the AVB SDK board level:
$ cd path/to/genavb-apps-freertos-5_6_0/boards/evkmimxrt1170
$ ln -s path/to/SDK_2_13_0_MIMXRT1170-EVK mcu-sdk
The actual usage: for the path /home/nxa07323/avbdoc/genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_0/genavb-apps-freertos-5_6_0/boards/evkmimxrt1170, the SDK path link to add is /home/nxa07323/avbdoc/SDK_2_13_0_MIMXRT1170-EVK_linux.
First, unzip genavb-apps-freertos-5_6_0.tar.gz:
tar -zxvf genavb-apps-freertos-5_6_0.tar.gz
Fig 9
The commands are:
cd /home/nxa07323/avbdoc/genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_0/genavb-apps-freertos-5_6_0/boards/evkmimxrt1170
ln -s /home/nxa07323/avbdoc/SDK_2_13_0_MIMXRT1170-EVK_linux mcu-sdk
Fig 10
We can see that in the path genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_0/genavb-apps-freertos-5_6_0/boards/evkmimxrt1170 the SDK path link has been added, named mcu-sdk.
2) Add the GenAVB/TSN SDK path link at the AVB app top level:
$ cd path/to/genavb-apps-freertos-5_6_0/
$ ln -s path/to/genavb-sdk-5_6_0 gen_avb
For /home/nxa07323/avbdoc/genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_0/genavb-apps-freertos-5_6_0/, the AVB SDK path link to add points to the genavb-sdk-5_6_0 folder.
Unzip genavb-sdk-5_6_0.tar.gz:
tar -zxvf genavb-sdk-5_6_0.tar.gz
Fig 11
Link commands:
cd /home/nxa07323/avbdoc/genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_0/genavb-apps-freertos-5_6_0
ln -s /home/nxa07323/avbdoc/genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_0/genavb-sdk-5_6_0 gen_avb
Fig 12
We can see the link in /home/nxa07323/avbdoc/genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_0/genavb-apps-freertos-5_6_0 has also been added, named gen_avb.
2.3 Code build
The build file path is genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_0/genavb-apps-freertos-5_6_0/boards/evkmimxrt1170/demo_apps/avb_tsn/avb_audio_app/armgcc. Use the build_release.sh file for the Linux version. Command:
./build_release.sh
Fig 13
Result:
Fig 14
We can see avb_app.bin has been generated. File path:
/home/nxa07323/avbdoc/genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_0/genavb-apps-freertos-5_6_0/boards/evkmimxrt1170/demo_apps/avb_tsn/avb_audio_app/armgcc/release
Open the avb_app.bin file:
Fig 15
We can see the generated bin file contains the RT1170 QSPI FCB.
2.4 Code programming
2.4.1 MSD download method
Refer to the doc genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_0\doc\NXP_GenAVB_TSN_Stack_FreeRTOS_Eval_User_s_Guide-5_6_rev0.pdf. It recommends using the MSD method: directly copy avb_app.bin to the MSD drive of the EVK. However, in actual testing, since there is no indication of successful programming progress after copying, it is easy to run into problems. For example, after copying the bin file to the MSD disk (it looks like the copy has completed), resetting or powering off the board before the programming has actually finished means the code has not been successfully burned and AVB cannot run. Therefore, if you use the MSD method, after copying to the MSD it is best to wait for a period of time, such as 30 s, to ensure the code is successfully burned.
Fig 16
Also, after many tests, the MSD method cannot perform a chip mass erase. For example, if some AVB filesystem configuration has been made but the code needs to be re-downloaded for reconfiguration, even after burning avb_app.bin again, the previous AVB configuration still exists. So if you need a fresh filesystem configuration, it is recommended to perform a full chip erase first and then burn with this method, or directly burn in serial download mode.
In addition, some customers may update the EVK's OpenSDA firmware, and sometimes the updated OpenSDA may not have MSD. In that case, it is also recommended to use the following serial download MCUBootUtility method.
2.4.2 MCUBootUtility download method
First, let the EVK enter serial download mode: SW1: 1-OFF, 2-OFF, 3-OFF, 4-ON.
Use two USB cables to connect J11 and the J20 SDP port, then power cycle or reset the board to enter serial download mode.
The MCUBootUtility tool download link:
https://github.com/JayHeng/NXP-MCUBootUtility/releases/tag/v5.3.0
Tool related documentation:
https://github.com/JayHeng/NXP-MCUBootUtility
Use the generated avb_app.bin file and burn it to the MIMXRT1170-EVK board:
Fig 17
Note that, as Figure 15 shows, the bin file stores the FCB starting from offset 0, but for the MIMXRT1170 chip the FCB is stored at offset 0X400, so the programming position of avb_app.bin needs to start from 0X30000400. Follow the 7 steps in Figure 17 to burn the bin file. After burning this way, change SW1 of the EVK to 1-OFF, 2-OFF, 3-ON, 4-OFF, which is the internal boot mode.
AVB testing requires two MIMXRT1170-EVK development boards. The code burned on the two boards is the same: avb_app.bin is burned to both.
After restarting and testing, you will find the old filesystem has also been cleared. At this point you can configure a new filesystem again.
2.5 Talker and Listener configuration
Of the two MIMXRT1170-EVK boards, one acts as the AVB Talker and the other as the AVB Listener. Because the programmed app is the same, the filesystem is used to configure which EVK is the Talker and which is the Listener.
This document uses 2 EVKs connected back to back; the connection is:
Fig 18
Talker: collects the mic audio data, then transfers it to the Listener through the AVB network.
Listener: receives the Talker's audio data and plays it through Audio Out J33.
2.5.1 Talker filesystem configuration
The Talker related configuration commands are:
----------------------------------------
cd ..
ls
mkdir avb_app
write avb_app/mclock_role 0
mkdir avdecc
write avdecc/btb_mode 0
mkdir fgptp
write fgptp/gmCapable 1
mkdir port0
write port0/hw_addr 00:22:33:44:55:66
-----------------------------------------------
The description of the Talker configuration:
avb_app/mclock_role=0: media clock master.
avdecc/btb_mode=0: AVDECC back-to-back mode.
fgptp/gmCapable=1: gPTP grandmaster.
port0/hw_addr=00:22:33:44:55:66: configures the hardware address.
For hw_addr, note that the first byte must be 00. The author previously configured it as 11, and the communication was always unsuccessful.
Configuration is done through the EVK's J11 serial port. After powering on, you can see the terminal print a lot of data. First you need to enter the shell: press the INSERT key, and the terminal will show >>. For the specific shell commands, see the documentation NXP_GenAVB_TSN_Stack_FreeRTOS_Eval_User_s_Guide-5_6_rev0.pdf, chapter 6.1.1 Filesystem commands. The mainly used commands are:
write: write a file with a given string
cat: print the content of a file
ls: list all files and directories in the current directory
rm: remove a file or a directory (if it is empty)
cd: change directory
pwd: print working directory
mkdir: create a directory
The following picture shows the shell entry and the filesystem configuration:
Fig 19
According to the Talker commands above, configure the Talker filesystem:
Fig 20
At this point, the Talker filesystem is configured.
2.5.2 Listener filesystem configuration
The Listener related commands are:
-------------------------------
cd ..
ls
mkdir avb_app
write avb_app/mclock_role 1
mkdir avdecc
write avdecc/btb_mode 1
write avdecc/talker_id 0x00049f4455660000
------------------------------------------
The description of the listener configuration:
avb_app/mclock_role=1: media clock slave.
avdecc/btb_mode=1: AVDECC fast-connect back-to-back mode.
avdecc/talker_id=0x00049f4455660000: for fast-connect mode, configures the Talker entity ID.
The entity ID construction rule is:
eui[0] = 0x00;
eui[1] = 0x04;
eui[2] = 0x9f;
eui[3] = mac_addr[3];
eui[4] = mac_addr[4];
eui[5] = mac_addr[5];
eui[6] = 0x00;
eui[7] = 0x00;
mac_addr is determined by hw_addr. For example, the Talker configures hw_addr to 00:22:33:44:55:66; this is mac_addr[0-5]. Then for the talker_id: eui[3] = mac_addr[3] = 44, eui[4] = mac_addr[4] = 55, eui[5] = mac_addr[5] = 66, so talker_id = 0x00049f4455660000.
Next, configure the Listener filesystem. Press the INSERT key to enter shell mode, and then input the filesystem configuration commands:
Fig 21
At this point, the Listener filesystem configuration is finished.
2.6 Test result
Let's start the connection test. Use a network cable to connect the two MIMXRT1170-EVK boards through the 1G network port J4 of the Talker and the Listener. Earphones are plugged into the Listener's J33 to listen to the microphone audio data sent from the Talker board. The connection picture is as follows:
Fig 22
The test result: when the two boards power on, after a short time sync, the audio collected from the Talker's microphone can be heard in the Listener's headphones, indicating that the RT1170 AVB communication is working. Partial log screenshots are given below; the complete Talker and Listener logs can be viewed in the attachment.
Fig 23
Fig 24
Fig 25
Fig 26
3 Summary
After several rounds of testing, we can realize the RT1170 AVB audio transfer function. If customers don't want to build the project and just want to do simple testing, they can also use the bin file generated with the AVB/TSN stack; the path is:
\genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_0\binaries\genavb-avb_audio_app-evaluation-freertos_rt1176-5_6_0.tar\genavb-avb_audio_app-evaluation-freertos_rt1176-5_6_0\release\avb_app.bin
This document is just a first taste of AVB; deeper knowledge will be studied and shared later.
Issues met during the testing:
1) The CMake installation during the Linux build; see chapter 2.2.2 Related software tools.
2) The printf log was garbled: in Tera Term, at the 115200 baud rate, the default printf log was out of order and it was difficult to read the details, like this:
Fig 27
Solution:
Fig 28
Fig 29
After the above configuration, the log is in order.
3) Board programming: at first the MSD method was used to download the code, but without waiting the 30 s before power off or reset, which always caused MSD burning failure. Also, the generated avb_app.bin does not place the FCB at a 0x400 offset, so when using MCUBootUtility it must be burned from 0x30000400; refer to chapter 2.4 Code programming.
4) Talker filesystem configuration for port0/hw_addr: the first byte should be 0X00. If it is non-zero, there will be AVB communication issues; after modifying the first byte of hw_addr to 0X00, the issue was solved.
The i.MXRT1050 MCU supports a 10M/100M Ethernet MAC. Nowadays, the LAN8720A is a very common PHY used in many networking designs. In this document, I will show you how to use the LAN8720A with the i.MXRT1050.
1. Schematic
In this design example,
ENET_RST is connected to GPIO_AD_B1_04
ENET_INT is connected to GPIO_AD_B0_15
2. Source code modification
In the i.MXRT1050 SDK, the source code files of the PHY driver are fsl_phy.c and fsl_phy.h. The registers of the LAN8720A need to be added to the source code. Below are the registers of the LAN8720A; the details can be found in the LAN8720A datasheet. (The modified fsl_phy.c and fsl_phy.h are attached.)
In pinmux.c, modify the GPIO mux setting of ENET_INT and ENET_RST:

IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B1_04_GPIO1_IO20, 0U);
IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_15_GPIO1_IO15, 0U);
IOMUXC_SetPinConfig(IOMUXC_GPIO_AD_B1_04_GPIO1_IO20, 0xB0A9u);
IOMUXC_SetPinConfig(IOMUXC_GPIO_AD_B0_15_GPIO1_IO15, 0xB0A9u);

This is the part of the source code that resets the PHY in the main() function:

gpio_pin_config_t gpio_config = {kGPIO_DigitalOutput, 0, kGPIO_NoIntmode};
GPIO_PinInit(GPIO1, 20, &gpio_config);   /* ENET_RST pin (GPIO1_IO20) */
GPIO_PinInit(GPIO1, 15, &gpio_config);   /* ENET_INT pin (GPIO1_IO15) */
GPIO_WritePinOutput(GPIO1, 15, 1);       /* drive ENET_INT high */
GPIO_WritePinOutput(GPIO1, 20, 0);       /* assert PHY reset (active low) */
delay();                                 /* hold reset for the required time */
GPIO_WritePinOutput(GPIO1, 20, 1);       /* release PHY reset */

For more example code, please refer to the demo_apps/lwip examples in the i.MXRT SDK package.
Reference:
i.MXRT1050 web page: i.MX RT1050 MCU/Applications Crossover Processor | Arm® Cortex®-M7 @600 MHz, 512KB SRAM | NXP
MCUXpresso SDK web page: MCUXpresso SDK | NXP
Get 500 MHz for just $1 with NXP's new i.MX RT1010 crossover MCU. Targeted at a variety of applications, this video highlights two very popular example use cases for i.MX RT1010: audio and motor control.
Recently, we often encounter customers using i.MXRT for RS-485 communication, mostly with problems switching the transceiver between receive and transmit directions. Taking the i.MXRT1050 and SN65HVD11QDR as examples, this document introduces the LPUART-to-RS485 circuit and the method of transceiver control. The working principle is as follows:
LPUART TXD: Transmit Data
LPUART RXD: Receive Data
LPUART RTS_B: Request To Send
The main control methods are as follows:
1 Use the TXD signal line for automatic hardware transceiver control
According to the UART protocol, when the line is idle, TX is logic high. After the NOT gate, a LOW level is applied to the direction control terminal, so whenever the UART is not transmitting data, the RS-485 transceiver is in the receiving state.
2 Use GPIO control or LPUART_RTS
For more detailed information, users can refer to:
https://www.nxp.com/docs/en/application-note/AN12679.pdf
Note: When using GPIO control, the software needs to judge the timing of receiving and sending. If the control is not handled well, it is easy to lose data. To control it well, the software must respond to the TX FIFO empty interrupt, or query the transmit status register, and accurately time the direction switch, so that sending and receiving stay error free. A minimal sketch of this status-flag approach is shown below.
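The following is a minimal sketch (not from the application note) of the GPIO direction-control approach using the SDK LPUART driver. The direction pin (GPIO1, pin 3) is a hypothetical assignment for illustration:

#include "fsl_lpuart.h"
#include "fsl_gpio.h"

/* Hypothetical wiring: GPIO1 pin 3 drives the SN65HVD11 DE/RE pins
   (high = driver enabled / transmit, low = receiver enabled). */
#define RS485_DIR_GPIO GPIO1
#define RS485_DIR_PIN  3U

static void rs485_send(LPUART_Type *base, const uint8_t *data, size_t length)
{
    GPIO_WritePinOutput(RS485_DIR_GPIO, RS485_DIR_PIN, 1U); /* enter transmit */

    LPUART_WriteBlocking(base, data, length);

    /* Wait until the last bit has left the shift register, not just the FIFO,
       before disabling the driver; switching too early cuts off the final bytes. */
    while (0U == (LPUART_GetStatusFlags(base) & (uint32_t)kLPUART_TransmissionCompleteFlag))
    {
    }

    GPIO_WritePinOutput(RS485_DIR_GPIO, RS485_DIR_PIN, 0U); /* back to receive */
}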
RT10xx image programming: methods to preserve the APP FCB
1. Abstract
RT10XX programming is mainly divided into two categories:
1) Serial download mode with blhost programming
For this method, we can use the MCUBootUtility tool, the blhost+elftosb+sdphost command-line method, or the NXP SPT (MCUXpresso Secure Provisioning Tool). This programming needs the chip to enter serial download mode and then uses the flashloader-supported UART or USB HID interface.
2) Programmer or debugger with flashdriver programming
This method usually goes through the SWD/JTAG download interface combined with a debugger + IDE, or direct software burning. The chip can be in internal boot mode or serial download mode, and a flashloader-generated flash programming algorithm file is used.
Method 2, burning with a debugger tool, usually ensures the burned code is consistent with the original APP.
Method 1, downloading with blhost, usually regenerates an FCB with a full-featured LUT, burns that to the external flash, and then burns the app code with IVT, that is, without the FCB header of the original APP: a blhost-generated FCB header is re-assembled and burned separately. However, for customers who need to read the flash image back after programming and compare it with the original APP image, the common blhost method leaves the FCB area mismatched. If the customer needs to use the blhost burning method in serial download mode, how can the flash image after burning be kept consistent with the original file? This article takes the MIMXRT1060-EVK development board as an example and gives specific methods for the command-line mode and the SPT tool.
2 blhost programming preserving the APP FCB
From the old RT1060 SDK FCB file (SDK versions below 2.12.0), evkmimxrt1060_flexspi_nor_config.c, we can see:

const flexspi_nor_config_t qspiflash_config = {
    .memConfig =
        {
            .tag              = FLEXSPI_CFG_BLK_TAG,
            .version          = FLEXSPI_CFG_BLK_VERSION,
            .readSampleClkSrc = kFlexSPIReadSampleClk_LoopbackFromDqsPad,
            .csHoldTime       = 3u,
            .csSetupTime      = 3u,
            .sflashPadType    = kSerialFlash_4Pads,
            .serialClkFreq    = kFlexSpiSerialClk_100MHz,
            .sflashA1Size     = 8u * 1024u * 1024u,
            .lookupTable =
                {
                    // Read LUTs
                    FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xEB, RADDR_SDR, FLEXSPI_4PAD, 0x18),
                    FLEXSPI_LUT_SEQ(DUMMY_SDR, FLEXSPI_4PAD, 0x06, READ_SDR, FLEXSPI_4PAD, 0x04),
                },
        },
    .pageSize           = 256u,
    .sectorSize         = 4u * 1024u,
    .blockSize          = 64u * 1024u,
    .isUniformBlockSize = false,
};

This FCB LUT contains only the basic read command. Normally, for app booting, the FCB just needs to provide the read command to the ROM, and the app boots normally.
But what actually gets programmed by blhost? Based on the MIMXRT1060-EVK development board, the following shows how to use blhost in command-line mode to burn the SDK led_blinky app, read back the corresponding flash content, and analyze it.
2.1 Normal blhost download command line
This command line is the same as the MCUBootUtility download log; the source code is attached as rt1060 cmd.bat.
elftosb.exe -f imx -V -c imx_application_gen.bd -o ivt_evkmimxrt1060_iled_blinky_FCB.bin evkmimxrt1060_iled_blinky.s19
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- write-file 0x20208200 ivt_flashloader.bin
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- jump-address 0x20208200
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 1 0
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 24 0
blhost.exe -t 5242000 -u 0x15A2,0x0073 -j -- fill-memory 0x20202000 4 0xc0000007 word   // option0
blhost.exe -t 5242000 -u 0x15A2,0x0073 -j -- fill-memory 0x20202004 4 0 word            // option1
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- configure-memory 9 0x20202000
blhost -t 2048000 -u 0x15A2,0x0073 -j -- flash-erase-region 0x60000000 0x8000 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- fill-memory 0x20203000 4 0XF000000F word
blhost -t 50000 -u 0x15A2,0x0073 -j -- configure-memory 9 0x20203000
blhost -t 5242000 -u 0x15A2,0x0073 -j -- write-memory 0x60001000 ivt_evkmimxrt1060_iled_blinky_FCB_nopadding.bin 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 0x8000 flexspiNorCfg.dat 9

Normal blhost programming uses the command line. It takes an app without the FCB header (even if the app has an FCB, the FCB header is excluded first), uses elftosb.exe to generate the app with IVT (e.g. ivt_evkmimxrt1060_iled_blinky_FCB_nopadding.bin), downloads the flashloader file ivt_flashloader to internal RAM and jumps to it, then uses fill-memory to fill option0 and option1 to select the proper external flash, and uses configure-memory to configure the FlexSPI module with the SFDP table obtained from the get configure command, which fills the FlexSPI internal LUT buffer. Next, fill-memory 0x20203000 4 0XF000000F together with configure-memory generates the full FCB header and burns it at flash address 0x60000000. Finally, the app containing the IVT is burned from flash address 0x60001000. This completes the whole app image programming.
Pic 1 shows the comparison between the data read back after programming and the original app data. You can see that the LUT of the FCB actually programmed (left) contains not only read, but also read status, write enable, program, and erase commands. On the right is the original app with FCB, whose LUT contains only the read commands needed for boot.
So, if you want to keep the FCB header of the original APP instead of the header generated and burned by option0/option1 configure-memory, how do you do it? The method: still use option0 and option1 to generate the LUT and fill it into FlexSPI for communication use, but do not burn the generated FCB; just burn the FCB that comes with the original APP.
pic1
2.2 Reuse option0 and option1 to program the original APP FCB
The following commands reuse option0 and option1 to generate the LUT and fill the FlexSPI LUT for connection with the external flash interface, but do not call fill-memory 0x20203000 4 0XF000000F and configure-memory 9 0x20203000, so the generated FCB is not burned to external memory.
The source file is attached as rt1060 cmd_option01.bat.
elftosb.exe -f imx -V -c imx_application_gen.bd -o ivt_evkmimxrt1060_iled_blinky_FCB.bin evkmimxrt1060_iled_blinky.s19
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- write-file 0x20208200 ivt_flashloader.bin
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- jump-address 0x20208200
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 1 0
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 24 0
blhost.exe -t 5242000 -u 0x15A2,0x0073 -j -- fill-memory 0x20202000 4 0xc0000007 word
blhost.exe -t 5242000 -u 0x15A2,0x0073 -j -- fill-memory 0x20202004 4 0 word
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- configure-memory 9 0x20202000
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 1024 flexspiNorCfg.dat 9
blhost -t 2048000 -u 0x15A2,0x0073 -j -- flash-erase-region 0x60000000 0x8000 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 1024 flexspiNorCfg.dat 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- write-memory 0x60000000 evkmimxrt1060_iled_blinky_FCB.bin 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 0x8000 flexspiNorCfg.dat 9

Pic 2 shows the comparison between the data read back after programming and the original programming data. You can see that the FCB programmed this time is exactly the same as the original code's FCB.
Pic 2
2.3 Use a 1-bit FCB file to configure the LUT
The file cfg_fdcb_RTxxx_1bit_sdr_flashA.bin used here is copied from MCUBootUtility: \NXP-MCUBootUtility-3.4.0\src\targets\fdcb_model.
The configuration of option0 and option1 is usually for chips that support the SFDP table, but some flash chips cannot support it. In that case, the full LUT needs to be filled into the FlexSPI LUT manually. A full LUT contains not just read commands, but also erase, program, and so on. In this way, the FlexSPI interface can successfully connect to the external FLASH and carry out the corresponding read, erase, and write functions. The method in this chapter therefore uses single-line (1-bit SPI) commands, which are supported by most chips, to enable the corresponding FlexSPI functions and complete the subsequent APP code programming.
Pic 3
We can see: 03H is read, 05H is read status register, 06H is write enable, D8H is 64K block erase, 02H is page program, and 60H is chip erase. This is the full-function 1-bit SPI LUT, which realizes the chip read, write, and erase functions. A sketch of these entries as C initializers is shown below.
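Purely for reference, the 1-bit LUT entries in that FCB would look something like the following C initializer sketch (the sequence index convention matches the full FCB example later in this article; the authoritative content is the attached cfg_fdcb_RTxxx_1bit_sdr_flashA.bin):

.lookupTable =
    {
        // Read data (03H), 1-pad command, 24-bit address
        FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x03, RADDR_SDR, FLEXSPI_1PAD, 0x18),
        FLEXSPI_LUT_SEQ(READ_SDR, FLEXSPI_1PAD, 0x04, STOP, FLEXSPI_1PAD, 0),
        // Read status (05H)
        [4 * 1] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x05, READ_SDR, FLEXSPI_1PAD, 0x04),
        // Write enable (06H)
        [4 * 3] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x06, STOP, FLEXSPI_1PAD, 0),
        // Block erase 64 KB (D8H)
        [4 * 8] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xD8, RADDR_SDR, FLEXSPI_1PAD, 0x18),
        // Page program (02H)
        [4 * 9]     = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x02, RADDR_SDR, FLEXSPI_1PAD, 0x18),
        [4 * 9 + 1] = FLEXSPI_LUT_SEQ(WRITE_SDR, FLEXSPI_1PAD, 0x04, STOP, FLEXSPI_1PAD, 0),
        // Chip erase (60H)
        [4 * 11] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x60, STOP, FLEXSPI_1PAD, 0),
    },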
The command line is as follows; the source file is attached as rt1060 cmd_fdcb_1bit_sdr_flashA.bat:

elftosb.exe -f imx -V -c imx_application_gen.bd -o ivt_evkmimxrt1060_iled_blinky_FCB.bin evkmimxrt1060_iled_blinky.s19
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- write-file 0x20208200 ivt_flashloader.bin
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- jump-address 0x20208200
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 1 0
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 24 0
blhost -t 5242000 -u 0x15A2,0x0073 -j -- write-memory 0x20202000 cfg_fdcb_RTxxx_1bit_sdr_flashA.bin
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- configure-memory 9 0x20202000
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 1024 flexspiNorCfg.dat 9
blhost -t 2048000 -u 0x15A2,0x0073 -j -- flash-erase-region 0x60000000 0x8000 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 1024 flexspiNorCfg.dat 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- write-memory 0x60000000 evkmimxrt1060_iled_blinky_FCB.bin 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 0x8000 flexspiNorCfg.dat 9

In this command line, instead of filling in option0/option1 data as before, the complete 512-byte FCB LUT bin file is written directly, and then the configure-memory command configures the flashloader's FlexSPI LUT from that FCB file, so that it supports read, write, and erase commands and the subsequent APP programming can complete. The comparison between the flash data read back after burning and the original APP data is shown in Pic 4; the read-out data is exactly the same as the original APP FCB.
Pic 4
3 SPT programming preserving the APP FCB
The officially released NXP MCUXpresso Secure Provisioning Tool can retain the customer's FCB, but the SPT tool currently uses the APP FCB to fill the flashloader's FlexSPI LUT. Therefore, if the customer directly uses an old SDK demo whose LUT contains only the read command to generate an APP with FCB, then uses the SPT tool to burn the flash and chooses to keep the customer FCB, the erase will fail. Analyzing the reason: the FCB on the customer APP side needs to contain the full FCB LUT commands, that is, including read, write, erase, etc.
The following shows how the old original SDK led_blinky generates an image with an FCB header and what happens when it is written with the SPT tool. As you can see in Pic 5, the tool notes that if you use the APP FCB, you need to ensure the FCB LUT contains the read, erase, and program commands. Pic 6 shows programming with an APP FCB LUT that includes only read: it fails at erase, because there are no erase, program, or other commands in the FlexSPI LUT, so the corresponding erase or program operation fails.
Pic 5
Pic 6
Pic 7
Looking at the specific command, as shown in Pic 7, the SPT tool directly uses the FCB header extracted from the APP image to fill the flashloader's FlexSPI LUT, so there are no erase and write commands, and it fails when erasing.
The following shows how to fill in the LUT in the FCB of the SDK. Open evkmimxrt1060_flexspi_nor_config.c and modify the FCB as follows:

const flexspi_nor_config_t qspiflash_config = {
    .memConfig =
        {
            .tag              = FLEXSPI_CFG_BLK_TAG,
            .version          = FLEXSPI_CFG_BLK_VERSION,
            .readSampleClkSrc = kFlexSPIReadSampleClk_LoopbackFromDqsPad,
            .csHoldTime       = 3u,
            .csSetupTime      = 3u,
            .sflashPadType    = kSerialFlash_4Pads,
            .serialClkFreq    = kFlexSpiSerialClk_100MHz,
            .sflashA1Size     = 8u * 1024u * 1024u,
            .lookupTable =
                {
                    // Read LUTs
                    FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xEB, RADDR_SDR, FLEXSPI_4PAD, 0x18),
                    FLEXSPI_LUT_SEQ(DUMMY_SDR, FLEXSPI_4PAD, 0x06, READ_SDR, FLEXSPI_4PAD, 0x04),
                    // Read status
                    [4 * 1] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x05, READ_SDR, FLEXSPI_1PAD, 0x04),
                    // Write enable
                    [4 * 3] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x06, STOP, FLEXSPI_1PAD, 0),
                    // Sector erase
                    [4 * 5] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x20, RADDR_SDR, FLEXSPI_1PAD, 0x18),
                    // Block erase 64 Kbyte
                    [4 * 8] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xD8, RADDR_SDR, FLEXSPI_1PAD, 0x18),
                    // Page program - single mode
                    [4 * 9]     = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x02, RADDR_SDR, FLEXSPI_1PAD, 0x18),
                    [4 * 9 + 1] = FLEXSPI_LUT_SEQ(WRITE_SDR, FLEXSPI_1PAD, 0x04, STOP, FLEXSPI_1PAD, 0x0),
                    // Erase whole chip
                    [4 * 11] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x60, STOP, FLEXSPI_1PAD, 0),
                },
        },
    .pageSize           = 256u,
    .sectorSize         = 4u * 1024u,
    .blockSize          = 64u * 1024u,
    .isUniformBlockSize = false,
};

Please note: after the internal SDK team's modification, starting from SDK_2_12_0_EVK-MIMXRT1060, evkmimxrt1060_flexspi_nor_config.c already extends the LUT to the full FCB LUT function.
Use the above FCB to generate the APP, then use the SPT tool to burn the app with the customer FCB again; the programming works now.
Pic 8
In summary, if you need to preserve the customer FCB, you can use the above methods. If you use the SPT tool, you need to add read, write, and erase commands to the LUT of the code's FCB to ensure FlexSPI can successfully operate the external flash.
Frequently, we receive questions about the DQS pin present on i.MX RT devices: how to use it, what exactly its function is, and why it is important. The goal of this document is to answer common questions and expose the most common mistakes when connecting this pin.
Let's start by defining why the DQS signal is helpful for memory interfaces. DQS stands for data strobe, and it is the clock signal for the data lines, used to solve a timing issue during memory reads: the controller must first transmit the clock to the memory, where it arrives x ns later; then the memory sends data bits back to the controller, which takes another x ns. This round-trip clock skew limits how fast you can transmit.
On the i.MX RT family, DQS is present on the FlexSPI and SEMC interfaces, where multiple memories can be connected. It also allows multiple configurations, since not all memories provide a DQS signal. The next section details the particular configuration of each memory interface; RT1170 data is used, but the information also applies to the RT10xx family.
SEMC
SEMC has two configurations for the DQS pad, selected by the DQSMD register bit.
For DQSMD = 0: We do not have an exact maximum/minimum for the achievable frequency; we only know that with DQSMD = 0 the maximum SEMC frequency on SDRAM cannot be reached, and the achievable frequency can vary in this mode. It is impossible to run at the maximum 200 MHz (1) and still meet the input timing specified in the datasheet, so the clock frequency needs to be decreased to ensure timing is still met; how much depends on the data output delay spec of the memory being used.
For DQSMD = 1: Since the signal delay is measured on the DQS pad, the 200 MHz (1) frequency can be achieved in this mode. Note that the pin needs to be left floating, or extra capacitance applied in special cases, which are discussed below. As SDRAM devices don't output a DQS signal, the controller uses the DQS pad as a loopback, measures the signal delay, and uses that delay to compensate and find the correct data strobe point. This covers most application cases and gives good performance. However, if the external signal delay is large (a complicated topology and long traces), the DQS pad delay cannot compensate the external SDRAM signal delay. There are two methods to adjust the delay: using the Delay Chain Control Register (DCCR), or adding capacitance to the DQS pad. Unfortunately, there is no formula to calculate the register value or capacitance, as they depend on the SDRAM signal layout; different layouts give different signal delays. There have been particular cases where more than 3 SDRAMs were connected to an RT1xxx device; since the combined memory capacitance exceeded the pad capacitance, there were issues running the memory at maximum speed. This was solved by adding extra capacitance to the DQS pin.
FlexSPI
The FlexSPI DQS pin behaves similarly to the one found on SEMC, with some differences in the available configurations and maximum speeds. For FlexSPI there are three different modes, selected by the RXCLKSRC field in the MCR0 register; a register-level sketch of selecting this field follows below.
RXCLKSRC = 0x0 (internal dummy read strobe, looped back internally): In this mode the DQS pin is not used, so an alternative function can be configured on the pin; however, the achieved frequency is the lowest, as the timings for the highest speeds cannot be met.
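Before going through the remaining modes, here is a minimal sketch of how this field is normally selected through the SDK FlexSPI driver (names from fsl_flexspi.h; the base instance is FLEXSPI on RT10xx and FLEXSPI1 on RT1170, and the rest of the init is omitted):

#include "fsl_flexspi.h"

/* Mapping of driver enum values to the RXCLKSRC field in MCR0:
   kFLEXSPI_ReadSampleClkLoopbackInternally      -> RXCLKSRC = 0x0
   kFLEXSPI_ReadSampleClkLoopbackFromDqsPad      -> RXCLKSRC = 0x1
   kFLEXSPI_ReadSampleClkExternalInputFromDqsPad -> RXCLKSRC = 0x3 */
void configure_flexspi_read_sample_clock(void)
{
    flexspi_config_t config;

    FLEXSPI_GetDefaultConfig(&config);
    /* Example: memory provides the read strobe (highest achievable frequency). */
    config.rxSampleClock = kFLEXSPI_ReadSampleClkExternalInputFromDqsPad;

    FLEXSPI_Init(FLEXSPI, &config);
}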
RXCLKSRC = 0x1 (internal dummy read strobe, looped back from the DQS pad): In this mode FlexSPI uses the DQS pin, and it must be configured for the FlexSPI function; it cannot be used for a different purpose. The internally generated read strobe is sent to the DQS pin and sampled at the pin to match the data pin timings more closely. The timing for sampling with the internal dummy read strobe looped back from the pad is very similar to internal loopback, but a higher frequency can be achieved, though not the highest one. Similarly to the SEMC case described above, in special cases where the signal delay is large, the design has a complicated topology, or traces are long, the solution is to add extra capacitance to the DQS pad. As this is design dependent, there is no formula to calculate the needed capacitance.
RXCLKSRC = 0x3 (flash-memory-provided read strobe): In this mode the DQS signal is provided by the connected memory. This mode allows the maximum frequency for the memory, but only certain memories provide this signal. The FlexSPI controller delays the read strobe for one half cycle of the serial root clock (with the DLL), then samples the read data with the delayed strobe.
Conclusion
The i.MX RT family commonly uses external memories to execute code or access important data, where good device performance is needed. To optimize the access speed of the memory, the DQS signal should always be used where possible, as it otherwise limits the achievable speed rate.
(1) Please consult the device-specific datasheet for detailed rates.
i.MX RT1050 is the first set of processors in NXP's crossover processor family, combining the high performance and high level of integration of an applications processor with the ease of use and real-time functionality of a microcontroller. As the first device in a new family, it has seen some learning and improvements along the way. There have been changes and improvements to the processor and also to our enablement for the device. This can result in some revisions of hardware and software not being directly compatible with each other out of the box. In particular, some software that was released for the A0 silicon revision (found on EVK boards) doesn't run on the A1 silicon revision (EVKB boards). To minimize the risk of compatibility issues, we recommend that all customers move to SDK 2.3.1 or higher. SDK 2.3.1 is listed as supporting the EVKB hardware specifically, but the SDK is also compatible with the EVK (non-B) hardware. We also recommend that customers using the DAPLink firmware for the OpenSDA debugging circuit built into the EVK/EVKB update to the latest version available on the www.nxp.com/opensda site. The flashloader package has also been updated; Rev 1.1 or later should be used (Flashloader i.MX-RT1050). There are many application notes available for RT1050. Many of these application notes were written based on the original silicon revision and early releases of enablement software. We are in the process of reviewing the published application notes and application note software to prioritize updating them where needed based on the latest enablement and recommendations. If you are in a situation where you need to use SDK 2.3.0 on A1 silicon, the most likely problem area involves some new clock gate bits that were added in the A1 silicon revision. These bits weren't present on A0 silicon, so SDK 2.3.0 will clear them, which disables the external memory interfaces. Commenting out the call to BOARD_BootClockGate() in the BOARD_BootClockRUN() function (found in the clock_config.c file), as sketched below, should allow the SDK 2.3.0 software to run on A1 silicon/EVKB. For more information: MCUXpresso SDK RT1050 migration app note; i.MX RT1050 CMSIS-DAP drag-and-drop programming
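A minimal sketch of that workaround, assuming the SDK-generated clock_config.c from SDK 2.3.0; everything except the commented-out call is left as generated:

void BOARD_BootClockRUN(void)
{
    /* ... SDK-generated clock tree setup ... */

    /* On A1 silicon / EVKB, skip this call: it clears clock gate bits that
     * did not exist on A0 and thereby disables the external memory
     * interfaces when running SDK 2.3.0. */
    /* BOARD_BootClockGate(); */

    /* ... remainder of the SDK-generated setup ... */
}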
In this tutorial, I'd like to show the steps of deploying an image classification model on i.MX RT1060, enabling you to classify fashion images and categories. In the first part of this tutorial, we will review the Fashion MNIST dataset, including how to download it to your system. From there we'll define a simple CNN network using the TensorFlow platform. Next, we'll train our CNN model on the Fashion MNIST dataset and review the results. Finally, we'll optimize the model; afterwards the model will be smaller and inference will be faster, which is valuable for resource-limited devices such as MCUs. Let's go ahead and get started!
Fashion MNIST dataset
The Fashion MNIST dataset was created by the e-commerce company Zalando.
Fig 1 Fashion MNIST dataset
As they note on their official GitHub repo for the Fashion MNIST dataset, there are a few problems with the standard MNIST digit recognition dataset: It's far too easy for standard machine learning algorithms to obtain 97%+ accuracy. It's even easier for deep learning models to achieve 99%+ accuracy. The dataset is overused. MNIST cannot represent modern computer vision tasks. Zalando therefore created the Fashion MNIST dataset as a drop-in replacement for MNIST. 60,000 training examples 10,000 testing examples 10 classes: T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Ankle boot 28×28 grayscale images
The code below loads the Fashion MNIST dataset using TensorFlow and creates a plot of the first 25 images in the training dataset.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# For easy reset of notebook state.
tf.keras.backend.clear_session()

# load dataset
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

plt.figure(figsize=(8,8))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.tight_layout()
    plt.imshow(train_images[i])
    plt.xlabel(class_names[train_labels[i]])
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
plt.show()

Fig 2
Running the code loads the Fashion MNIST train and test datasets and prints their shapes.
Fig 3
We can see that there are 60,000 examples in the training dataset and 10,000 in the test dataset, and that the images are indeed square with 28×28 pixels.
Creating the model
We need to define a neural network model for image classification, and the model should have two main parts: the feature extractor and the classifier that makes a prediction.
Defining a simple Convolutional Neural Network (CNN)
For the convolutional front-end, we build 3 convolution layers with a small filter size (3,3) and a modest number of filters, followed by max-pooling layers. The last feature map is flattened to provide features to the classifier. As this is a multi-class classification task, we require an output layer with 10 nodes in order to predict the probability distribution of an image belonging to each of the 10 classes; this calls for a softmax activation function. Between the feature extractor and the output layer, we add a dense layer to interpret the features. All layers use the ReLU activation function and the He weight initialization scheme, both best practices.
We will use the Adam optimizer to minimize the sparse_categorical_crossentropy loss function, suitable for multi-class classification, and we will monitor the classification accuracy metric, which is appropriate given that we have the same number of examples in each of the 10 classes. The code below defines the model, and running it prints the structure of the model.

# Define a Model
model = tf.keras.models.Sequential()
# First convolution, kernel: 16*3*3
model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu',
                                 kernel_initializer='he_uniform', input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
# Second convolution, kernel: 32*3*3
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
# Third convolution, kernel: 32*3*3
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(32, activation='relu', kernel_initializer='he_uniform'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))

Fig 4
Training the model
After the model is defined, we need to train it. The model will be trained using 5-fold cross-validation. The value k=5 was chosen to provide a baseline for repeated evaluation while not being so large as to require a long running time. Each validation set will be 20% of the training dataset, or about 12,000 examples. The training dataset is shuffled prior to being split, and the same shuffling is used each time, so that any model we train has the same train and validation datasets in each fold, providing an apples-to-apples comparison. We will train the baseline model for a modest 20 training epochs with a default batch size of 32 examples. The validation set of each fold is used to validate the model during each epoch of the training run, so we can later create learning curves; at the end of the run, we use the test dataset to estimate the performance of the model. As such, we will keep track of the resulting history from each run, as well as the classification accuracy of each fold. The train_model() function below implements these behaviors, taking the training images and labels as arguments, and returning a list of accuracy scores and training histories that can be summarized later.

from sklearn.model_selection import KFold

# train a model using k-fold cross-validation
def train_model(dataX, dataY, n_folds=5):
    scores, histories = list(), list()
    # prepare cross validation
    kfold = KFold(n_folds, shuffle=True, random_state=1)
    for train_ix, validate_ix in kfold.split(dataX):
        # select rows for train and validation
        trainX, trainY, validate_X, validate_Y = dataX[train_ix], dataY[train_ix], dataX[validate_ix], dataY[validate_ix]
        # fit model
        history = model.fit(trainX, trainY, epochs=20, batch_size=32,
                            validation_data=(validate_X, validate_Y), verbose=0)
        # evaluate model
        _, acc = model.evaluate(validate_X, validate_Y, verbose=0)
        print("Accuracy: {:.4f}, total number of figures is {:0>2d}".format(acc * 100.0, len(validate_Y)))
        # append scores
        scores.append(acc)
        histories.append(history)
    return scores, histories

Model summary
After the model has been trained, we can present the results. There are two key aspects to present: the diagnostics of the learning behavior of the model during training, and the estimate of the model performance. These can be implemented using separate functions.
First, the diagnostics involve creating a line plot showing model performance on the train and validation sets during each fold of the k-fold cross-validation. These plots are valuable for getting an idea of whether the model is overfitting, underfitting, or has a good fit for the dataset. We will create a single figure with two subplots, one for loss and one for accuracy. Blue lines indicate model performance on the training dataset, and orange lines indicate performance on the held-out validation dataset. The summarize_diagnostics() function below creates and shows this plot given the collected training histories.

# plot diagnostic learning curves
def summarize_diagnostics(histories):
    for i in range(len(histories)):
        # plot loss
        plt.subplot(2,1,1)
        plt.title('Cross Entropy Loss')
        plt.plot(histories[i].history['loss'], color='blue', label='train')
        plt.plot(histories[i].history['val_loss'], color='orange', label='test')
        # plot accuracy
        plt.subplot(2,1,2)
        plt.title('Classification Accuracy')
        plt.plot(histories[i].history['accuracy'], color='blue', label='train')
        plt.plot(histories[i].history['val_accuracy'], color='orange', label='test')
    plt.show()

Fig 5
Next, the classification accuracy scores collected during each fold can be summarized by calculating the mean and standard deviation. This provides an estimate of the average expected performance of the model, with an estimate of the variance of that mean. We will also summarize the distribution of scores by creating and showing a box-and-whisker plot. The summarize_performance() function below implements this for a given list of scores collected during model training.

# summarize model performance
def summarize_performance(scores):
    # print summary
    print('Accuracy: mean={:.4f} std={:.4f}, n={:0>2d}'.format(np.mean(scores)*100, np.std(scores)*100, len(scores)))
    # box and whisker plot of results
    plt.boxplot(scores)
    plt.show()

Fig 6
Verifying predictions
According to the figure above, we see that the final trained model reaches around 87.6% accuracy when predicting on the test dataset. With the trained model, running the code below demonstrates the prediction results for some images.

def plot_image(i, predictions_array, true_label, img):
    true_label, img = true_label[i], img[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    plt.imshow(img.reshape(28, 28), cmap=plt.cm.binary)
    predicted_label = np.argmax(predictions_array)
    if predicted_label == true_label:
        color = 'blue'
    else:
        color = 'red'
    plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
                                         100*np.max(predictions_array),
                                         class_names[true_label]), color=color)

def plot_value_array(i, predictions_array, true_label):
    true_label = true_label[i]
    plt.grid(False)
    plt.xticks(range(10))
    plt.yticks([])
    thisplot = plt.bar(range(10), predictions_array, color="#777777")
    plt.ylim([0, 1])
    predicted_label = np.argmax(predictions_array)
    thisplot[predicted_label].set_color('red')
    thisplot[true_label].set_color('blue')

predictions = model.predict(test_images)

# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2*num_cols, 2*i+1)
    plot_image(i, predictions[i], test_labels, test_images)
    plt.subplot(num_rows, 2*num_cols, 2*i+2)
    plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()

Fig 7
Model quantization
Post-training quantization is a conversion technique that can reduce model size while also improving CPU and hardware accelerator latency, with little degradation in model accuracy. It is especially crucial for embedded platforms, which lack compute-intensive performance and have very limited flash and RAM. TensorFlow Lite can convert an already-trained float TensorFlow model to the TensorFlow Lite format. In addition, TensorFlow Lite provides several approaches to optimize the model; among them, integer quantization is an optimization strategy that converts 32-bit floating-point numbers (such as weights and activation outputs) to the nearest 8-bit fixed-point numbers. This results in a smaller model and increased inference speed, which is very valuable for low-power devices such as microcontrollers. The code below shows how to implement integer quantization of the trained model; after running it, we find that the size of the TensorFlow Lite model shrinks by almost 64.9 KB versus the original model, to about 32% of the original size (Fig 8).

import os

# Representative dataset for integer-only quantization
def representative_data_gen():
    for input_value in tf.data.Dataset.from_tensor_slices(tf.cast(train_images, tf.float32)).shuffle(500).batch(1).take(150):
        yield [input_value]

# Convert using dynamic range quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model_quant = converter.convert()

# Save the model to disk
open("model_dynamic_range_quantization.tflite", "wb").write(tflite_model_quant)

## Size difference
Dynamic_range_quantization_model_size = os.path.getsize("model_dynamic_range_quantization.tflite")
print("Dynamic range quantization model is %d bytes" % Dynamic_range_quantization_model_size)

# Convert using integer-only quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Ensure that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8 (APIs added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_advanced_quant = converter.convert()

# Save the model to disk
open("model_integer_only_quantization.tflite", "wb").write(tflite_model_advanced_quant)

Integer_only_quantization_model_size = os.path.getsize("model_integer_only_quantization.tflite")
print("Integer_only_quantization_model is %d bytes" % Integer_only_quantization_model_size)
difference = Dynamic_range_quantization_model_size - Integer_only_quantization_model_size
print("Difference is %d bytes" % difference)

Fig 8
Evaluating the TensorFlow Lite model
Now we'll run inference using the TensorFlow Lite Interpreter to compare the model accuracies.
First, we need a function that runs inference with a given model and images, and then returns the predictions:

# Helper function to run inference on a TFLite model
def run_tflite_model(tflite_file, test_image_indices):
    # Initialize the interpreter
    interpreter = tf.lite.Interpreter(model_path=str(tflite_file))
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()[0]
    output_details = interpreter.get_output_details()[0]
    predictions = np.zeros((len(test_image_indices),), dtype=int)
    for i, test_image_index in enumerate(test_image_indices):
        test_image = test_images[test_image_index]
        test_label = test_labels[test_image_index]
        # Check if the input type is quantized, then rescale input data to uint8
        if input_details['dtype'] == np.uint8:
            input_scale, input_zero_point = input_details["quantization"]
            test_image = test_image / input_scale + input_zero_point
        test_image = np.expand_dims(test_image, axis=0).astype(input_details["dtype"])
        interpreter.set_tensor(input_details["index"], test_image)
        interpreter.invoke()
        output = interpreter.get_tensor(output_details["index"])[0]
        predictions[i] = output.argmax()
    return predictions

Next, we'll compare the performance of the original model and the quantized model on one image. model_basic_quantization.tflite is the original TensorFlow Lite model with floating-point data. model_integer_only_quantization.tflite is the last model we converted using integer-only quantization (it uses uint8 data for input and output). Let's create another function to print our predictions and run it for testing.

import matplotlib.pylab as plt

# Change this to test a different image
test_image_index = 1

## Helper function to test the models on one image
def test_model(tflite_file, test_image_index, model_type):
    global test_labels
    predictions = run_tflite_model(tflite_file, [test_image_index])
    plt.imshow(test_images[test_image_index].reshape(28,28))
    template = model_type + " Model \n True:{true}, Predicted:{predict}"
    _ = plt.title(template.format(true=str(test_labels[test_image_index]), predict=str(predictions[0])))
    plt.grid(False)

Fig 9
Fig 10
Then we evaluate the quantized model using all the test images we loaded at the beginning of this tutorial. Summarizing the prediction results on the test dataset, we see that the prediction accuracy of the quantized model decreases by about 7% compared with the original model, which is not bad.

# Helper function to evaluate a TFLite model on all images
def evaluate_model(tflite_file, model_type):
    test_image_indices = range(test_images.shape[0])
    predictions = run_tflite_model(tflite_file, test_image_indices)
    accuracy = (np.sum(test_labels == predictions) * 100) / len(test_images)
    print('%s model accuracy is %.4f%% (Number of test samples=%d)' % (model_type, accuracy, len(test_images)))

Deploying the model
Converting the TensorFlow Lite model to a C file
The following commands run xxd on the quantized model and write the output to a file called model_quantized.cc; in that file, the model is defined as an array of bytes. The output is very long, so we won't reproduce it all here, but here's a snippet that includes just the beginning and end.

# Save the file as a C source file
xxd -i model_integer_only_quantization.tflite > model_quantized.cc
# Print the source file
cat model_quantized.cc

Fig 11
Deploying the C file to the project
We use the tensorflow_lite_cifar10 demo as a prototype, replace the original model, and make some code modifications; below is the code in the modified main file.
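For reference, xxd -i emits a C source file whose array name is derived from the input file name. A trimmed sketch of what model_quantized.cc looks like is shown below; the byte values and the length are placeholders, not the real model contents (in the demo the array is later renamed, e.g. to Fashion_MNIST_model / Fashion_MNIST_model_len as used by InferenceInit() further down):

/* Sketch of xxd -i output; actual bytes elided, values are placeholders. */
unsigned char model_integer_only_quantization_tflite[] = {
  0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, /* "TFL3" flatbuffer tag */
  /* ... thousands of bytes elided ... */
  0x00, 0x00, 0x00, 0x00
};
unsigned int model_integer_only_quantization_tflite_len = 25648; /* placeholder */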
#include "board.h" #include "fsl_debug_console.h" #include "pin_mux.h" #include "timer.h" #include <iomanip> #include <iostream> #include <string> #include <vector> #include "tensorflow/lite/kernels/register.h" #include "tensorflow/lite/model.h" #include "tensorflow/lite/optional_debug_tools.h" #include "tensorflow/lite/string_util.h" #include "get_top_n.h" #include "model.h" #define LOG(x) std::cout // ---------------------------- Application ----------------------------- // Lenet Mnist model input data size (bytes). #define LENET_MNIST_INPUT_SIZE 28*28*sizeof(char) // Lenet Mnist model number of output classes. #define LENET_MNIST_OUTPUT_CLASS 10 // Allocate buffer for input data. This buffer contains the input image // pre-processed and serialized as text to include here. uint8_t imageData[LENET_MNIST_INPUT_SIZE] = { #include "clothes_select.inc" }; /* Tresholds */ #define DETECTION_TRESHOLD 60 /*! * @brief Initialize parameters for inference * * @param reference to flat buffer * @param reference to interpreter * @param pointer to storing input tensor address * @param verbose mode flag. Set true for verbose mode */ void InferenceInit(std::unique_ptr<tflite::FlatBufferModel> &model, std::unique_ptr<tflite::Interpreter> &interpreter, TfLiteTensor** input_tensor, bool isVerbose) { model = tflite::FlatBufferModel::BuildFromBuffer(Fashion_MNIST_model, Fashion_MNIST_model_len); if (!model) { LOG(FATAL) << "Failed to load model\r\n"; return; } tflite::ops::builtin::BuiltinOpResolver resolver; tflite::InterpreterBuilder(*model, resolver)(&interpreter); if (!interpreter) { LOG(FATAL) << "Failed to construct interpreter\r\n"; return; } int input = interpreter->inputs()[0]; const std::vector<int> inputs = interpreter->inputs(); const std::vector<int> outputs = interpreter->outputs(); if (interpreter->AllocateTensors() != kTfLiteOk) { LOG(FATAL) << "Failed to allocate tensors!"; return; } /* Get input dimension from the input tensor metadata assuming one input only */ *input_tensor = interpreter->tensor(input); auto data_type = (*input_tensor)->type; if (isVerbose) { const std::vector<int> inputs = interpreter->inputs(); const std::vector<int> outputs = interpreter->outputs(); LOG(INFO) << "input: " << inputs[0] << "\r\n"; LOG(INFO) << "number of inputs: " << inputs.size() << "\r\n"; LOG(INFO) << "number of outputs: " << outputs.size() << "\r\n"; LOG(INFO) << "tensors size: " << interpreter->tensors_size() << "\r\n"; LOG(INFO) << "nodes size: " << interpreter->nodes_size() << "\r\n"; LOG(INFO) << "inputs: " << interpreter->inputs().size() << "\r\n"; LOG(INFO) << "input(0) name: " << interpreter->GetInputName(0) << "\r\n"; int t_size = interpreter->tensors_size(); for (int i = 0; i < t_size; i++) { if (interpreter->tensor(i)->name) { LOG(INFO) << i << ": " << interpreter->tensor(i)->name << ", " << interpreter->tensor(i)->bytes << ", " << interpreter->tensor(i)->type << ", " << interpreter->tensor(i)->params.scale << ", " << interpreter->tensor(i)->params.zero_point << "\r\n"; } } LOG(INFO) << "\r\n"; } } /*! 
* @brief Runs inference input buffer and print result to console * * @param pointer to image data * @param image data length * @param pointer to labels string array * @param reference to flat buffer model * @param reference to interpreter * @param pointer to input tensor */ void RunInference(const uint8_t* image, size_t image_len, const std::string* labels, std::unique_ptr<tflite::FlatBufferModel> &model, std::unique_ptr<tflite::Interpreter> &interpreter, TfLiteTensor* input_tensor) { /* Copy image to tensor. */ memcpy(input_tensor->data.uint8, image, image_len); /* Do inference on static image in first loop. */ auto start = GetTimeInUS(); if (interpreter->Invoke() != kTfLiteOk) { LOG(FATAL) << "Failed to invoke tflite!\r\n"; return; } auto end = GetTimeInUS(); const float threshold = (float)DETECTION_TRESHOLD /100; std::vector<std::pair<float, int>> top_results; int output = interpreter->outputs()[0]; TfLiteTensor *output_tensor = interpreter->tensor(output); TfLiteIntArray* output_dims = output_tensor->dims; // assume output dims to be something like (1, 1, ... , size) auto output_size = output_dims->data[output_dims->size - 1]; /* Find best image candidates. */ GetTopN<uint8_t>(interpreter->typed_output_tensor<uint8_t>(0), output_size, 1, threshold, &top_results, false); if (!top_results.empty()) { auto result = top_results.front(); const float confidence = result.first; const int index = result.second; if (confidence * 100 > DETECTION_TRESHOLD) { LOG(INFO) << "----------------------------------------\r\n"; LOG(INFO) << " Inference time: " << (end - start) / 1000 << " ms\r\n"; LOG(INFO) << " Detected: " << std::setw(10) << labels[index] << " (" << (int)(confidence * 100) << "%)\r\n"; LOG(INFO) << "----------------------------------------\r\n\r\n"; } } } /*! * @brief Main function */ int main(void) { const std::string labels[] = {"T-shirt/top", "Trouser","Pullover", "Dress", "Coat", "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"}; /* Init board hardware. */ BOARD_ConfigMPU(); BOARD_InitPins(); BOARD_BootClockRUN(); BOARD_InitDebugConsole(); InitTimer(); std::unique_ptr<tflite::FlatBufferModel> model; std::unique_ptr<tflite::Interpreter> interpreter; TfLiteTensor* input_tensor = 0; InferenceInit(model, interpreter, &input_tensor, false); LOG(INFO) << "Fashion MNIST object recognition example using a TensorFlow Lite model.\r\n"; LOG(INFO) << "Detection threshold: " << DETECTION_TRESHOLD << "%\r\n"; /* Run inference on static ship image. */ LOG(INFO) << "\r\nStatic data processing:\r\n"; RunInference((uint8_t*)imageData, (size_t)LENET_MNIST_INPUT_SIZE, labels, model, interpreter, input_tensor); while(1) {} } Testing result After deploying the model in the demo project, then we'll run this demo on the MIMXRT1060 (Fig 12) board for testing. Fig 12 Run the below code to covert the Fashion MNIST image to text The process_image() function can convert a Fashion MNIST image to an include file as static data, then include this file in the demo project. 
def process_image(image, output_path, num_batch=1):
    img_data = np.transpose(image, (2, 0, 1))
    # Repeat image for batch processing (resulting tensor is NCHW or NHWC)
    img_data = np.reshape(img_data, (num_batch, img_data.shape[0], img_data.shape[1], img_data.shape[2]))
    img_data = np.repeat(img_data, num_batch, axis=0)
    img_data = np.reshape(img_data, (num_batch, img_data.shape[1], img_data.shape[2], img_data.shape[3]))
    # Serialize image batch
    img_data_bytes = bytearray(img_data.tobytes(order='C'))
    image_bytes_per_line = 20
    with open(output_path, 'wt') as f:
        idx = 0
        for byte in img_data_bytes:
            f.write('0X%02X, ' % byte)
            if idx % image_bytes_per_line == (image_bytes_per_line - 1):
                f.write('\n')
            idx = idx + 1
    # Return serialized image size
    return len(img_data_bytes)

2. Run the demo project on the board.
The wireless module combinations in Tables 2 and 3 of user manual UM11441 have not been updated for the latest SDK 2.12.1. Major updates in Table 2 and Table 3:
u-blox modules are supported only on i.MX RT1060
Murata modules are only tested with the i.MX RT1060 EVKB and i.MX RT1040 EVK platforms
Murata module names were modified
i.MX RT685S EVK was renamed to IMXRT685-AUD-EVK
Please refer to the attached PDF for the updated information.
The i.MX RT600 crossover MCU combines an ultra-low power MCU with a high performance DSP to enable the next generation of ML/AI, voice and audio applications. Get started today and order your MIMXRT685-EVK.
RT1050 CSI OV7670 camera eLCDIF display
1. Abstract
The OV7670 is a low-cost CMOS VGA image sensor with small size and low operating voltage. It is controlled over the SCCB bus and can output 8-bit image data at various resolutions with a frame rate of up to 30 frames/second. This article implements the use of the CSI interface on the RT10xx to acquire OV7670 camera data and display it with the eLCDIF display module built into the RT10xx. Both camera and display use the RGB565 format. The camera resolution is configured as QVGA 320*240; the LCD is the official NXP EVKB-matching panel RK043FN02H, with a resolution of 480*272 and a frame rate of 30 FPS. This article is based on the official NXP RT1050 SDK example: SDK_2_14_0_EVKB-IMXRT1050\boards\evkbimxrt1050\driver_examples\csi\rgb565 The OV7670 driver is ported in so that the CSI collects OV7670 image data and displays it on the LCD through the eLCDIF module.
2. Principle explanation
Here is a brief explanation of the relevant background.
2.1 RGB565 color mode
RGB565 is a basic color encoding format for images in which each pixel occupies 2 bytes of data; it is commonly used in images and display devices. R is red, G is green, B is blue, and other colors are obtained by mixing the three primary colors. Each pixel can display 65536 (2^16) colors. The bit allocation is as follows:
Fig 1
From the figure above, the 2-byte values for pure red, green, and blue are: Red: 0xF800, Green: 0x07E0, Blue: 0x001F.
2.2 OV7670 camera hardware and waveforms
The OV7670 module used is as follows:
Fig 2
Pin description:
No.  Signal  Description
1    PWDN    Power-down mode select; pull low for normal use
2    RET     Reset; pull high for normal use
3    D0      Data output bit 0
4    D1      Data output bit 1
5    D2      Data output bit 2
6    D3      Data output bit 3
7    D4      Data output bit 4
8    D5      Data output bit 5
9    D6      Data output bit 6
10   D7      Data output bit 7
11   XLK     Clock input signal
12   PLK     Pixel clock output signal
13   HS      Horizontal sync output signal
14   VS      Frame sync output signal
15   SDA     SCCB interface data
16   SCL     SCCB interface clock
17   GND     Ground
18   3.3V    3.3 V power
RGB565 output data timing:
Fig 3
2.3 CSI frame synchronization signal timing
Fig 4
2.4 LCD display waveform
Fig 5
The OV7670 data is therefore acquired through the CSI and stored in a buffer; the eLCDIF then fetches the data from the buffer and shows it on the LCD panel, achieving real-time capture and display of the camera data.
3. Hardware and software implementation
The test platform is the NXP MIMXRT1050-EVKB rev A1 board: https://www.nxp.com/design/development-boards/i-mx-evaluation-and-development-boards/i-mx-rt1050-evaluation-kit:MIMXRT1050-EVK The LCD is: https://www.nxp.com/part/RK043FN02H-CT#/
3.1 Hardware connection
As can be seen in Fig 2, the off-the-shelf module uses a 2.54 mm header, while the CSI interface on the MIMXRT1050-EVKB board is an FPC connector, so an adapter board is required to convert from FPC to the 2.54 mm header. The wiring diagram is as follows:
Fig 6
The actual overall hardware connection looks like this:
Fig 7
3.2 Software preparation
The RT SDK does not directly provide an OV7670 driver, but the FRDM-K82 SDK provides a suitable driver that can be ported to the RT1050 SDK.
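Before getting into the port, a quick illustration of the RGB565 packing described in section 2.1. This is a generic sketch, not part of the SDK; the shifts and masks follow directly from the 5-6-5 bit allocation above.

#include <stdint.h>

/* Pack 8-bit R/G/B components into one RGB565 pixel (5-6-5 bit allocation). */
static inline uint16_t rgb565_pack(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r & 0xF8u) << 8) | ((g & 0xFCu) << 3) | (b >> 3));
}

/* rgb565_pack(255, 0, 0) == 0xF800, rgb565_pack(0, 255, 0) == 0x07E0,
 * rgb565_pack(0, 0, 255) == 0x001F, matching the values in section 2.1. */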
SDK version: SDK_2_14_0_EVKB-IMXRT1050\boards\evkbimxrt1050\driver_examples\csi\rgb565 The code replaces the original OV7725 driver with the OV7670 driver, modifies the OV7670 code to match the RT1050 CSI code, and adds GPIO control of the OV7670 RST and PWDN signals. The reason for adding RST and PWDN control is that some modules failed to capture if the RST pin was not first held low and then released after a delay. With RST and PWDN control added, OV7670 modules from different manufacturers now capture successfully and display stably. For the specific OV7670 code, see the attached source code. The camera initialization code is as follows:

static void APP_InitCamera(void)
{
    const camera_config_t cameraConfig = {
        .pixelFormat = kVIDEO_PixelFormatRGB565,
        .bytesPerPixel = APP_BPP,
        .resolution = FSL_VIDEO_RESOLUTION(320, 240),
        /* Set the camera buffer stride according to panel, so that if the
         * camera resolution is smaller than the display, it can still be
         * shown correctly on the screen. */
        .frameBufferLinePitch_Bytes = DEMO_BUFFER_WIDTH * APP_BPP,
        .interface = kCAMERA_InterfaceGatedClock,
        .controlFlags = DEMO_CAMERA_CONTROL_FLAGS,
        .framePerSec = 30,
    };

    memset(s_frameBuffer, 0, sizeof(s_frameBuffer));

    BOARD_InitCameraResource();

    CAMERA_RECEIVER_Init(&cameraReceiver, &cameraConfig, NULL, NULL);

    if (kStatus_Success != CAMERA_DEVICE_Init(&cameraDevice, &cameraConfig))
    {
        PRINTF("Camera device initialization failed\r\n");
        while (1)
        {
            ;
        }
    }

    CAMERA_DEVICE_Start(&cameraDevice);

    /* Submit the empty frame buffers to the buffer queue. */
    for (uint32_t i = 0; i < APP_FRAME_BUFFER_COUNT; i++)
    {
        CAMERA_RECEIVER_SubmitEmptyBuffer(&cameraReceiver, (uint32_t)(s_frameBuffer[i]));
    }
}

The resolution here is QVGA 320*240, which does not match the LCD's 480*272, but that does not matter: the 320*240 image is simply displayed within the LCD. If you want to scale it up to 480*272, you can resize the image through the PXP. For more code details, see the attached code package.
4. Summary
This article provides a demo of OV7670 acquisition through the CSI and display through the eLCDIF on RT; the finished result can be seen in the attached video. The display is relatively clear, and the refresh performance is also good.
[Chinese translation] See the attachment. Original post: https://community.nxp.com/t5/eIQ-Machine-Learning-Software/eIQ-on-i-MX-RT1064-EVK/ta-p/1123602
There is an issue with the DCD file used in the SDK 2.9.0 release for the i.MX RT1170 processor. When the included DCD file is used in a project to configure the SDRAM memory on the EVK, the refresh for the memory is not enabled. This can lead to corruption/data loss over time.   To fix the problem, replace the dcd.c file in your project with the attached file instead.   We are working on a fix, and a new revision of the SDK will be released soon.
Obtaining the footprint for Kinetis/LPC/i.MX RT part numbers is very straightforward using the Microcontroller Symbols, Footprints and Models Library homepage, at the following link: https://www.nxp.com/design/software/models/microcontroller-symbols-footprints-and-models:MCUCAD?tid=vanMCUCAD What some users may not be aware of is that the BXL file available for NXP Kinetis/LPC/i.MX RT part numbers also contains the 3D model of the package, which is often needed when working on the industrial design of your application. You can follow the steps below to export the 3D model of the package in STEP (Standard for the Exchange of Product Data) format using the Ultra Librarian software, which can be downloaded from the link on the models library homepage. A STEP (.step/.stp) file stores the model in ASCII format. This format can be imported into many CAD suites that work with 3D solids. First, obtain the BXL file for the part number you are interested in, in this example MIMXRT1052CVL5B.bxl. Then, open the Ultra Librarian project, load this file using the "Load Data" button, and select the "3D Step Model" checkbox from the Select Tools options. Finally, select the Export to Select Tools option. Once the export process is finished, the STEP file will be available under the path UltraLibrarian/Library/Exported. The STEP (.stp) file can be opened in CAD suites that support solid 3D objects, such as FreeCAD, which is open source.
Face recognition
Face recognition technology is used in many scenes of our daily life. For instance, when taking pictures with a mobile phone, the camera software automatically recognizes the faces in the frame and focuses on them, and face scanning is used for real-name verification when registering an app, for payment, and so on. The basic steps of face recognition are shown in the figure below. First, the camera captures image data; then, after preprocessing such as noise elimination and image format conversion, the image data is transmitted to the processor for face detection and recognition calculations. After a face is recognized successfully, the follow-up operations continue.
Fig 1 The basic steps of face recognition
i.MX RT106F MCU based solution for face recognition
The figure below is the block diagram of the i.MX RT106F MCU-based face recognition solution provided by NXP. Compared with a general processor (CPU) solution, it has advantages in cost and power consumption. Furthermore, the PCB size is smaller, and the MCU usually boots within a few hundred milliseconds even with an RTOS, versus the roughly 10-second boot time of a processor (CPU) running Linux, so it gives customers a better user experience.
Fig 2 i.MX RT106F MCU based solution for face recognition
Of course, the i.MX RT106F MCU-based face recognition solution is not intended to replace processor (CPU) based solutions. As mentioned above, face recognition technology has many application cases and will certainly be used in more fields in the future, so the MCU-based face recognition solution provides customers and the market with another choice.
i.MX RT106F MCU
The i.MX RT106F face recognition crossover processor is an EdgeReady™ solution-specific variant of the i.MX RT1060 family of crossover processors, targeting face recognition applications. It features NXP's advanced implementation of the Arm Cortex®-M7 core, which operates at speeds up to 600 MHz to provide high CPU performance and the best real-time response. i.MX RT106F based solutions enable system designers to easily and inexpensively add face recognition capabilities to a wide variety of smart appliances, smart homes, smart retail, and smart industrial devices. The i.MX RT106F is licensed to run the OASIS Lite library for face recognition (shown in the figure below), which includes: Face detection Anti-spoofing Face tracking Face alignment Glasses detection Face recognition Confidence measure Face recognition quantified results, etc.
Fig 3 OASIS Recognition Software Pipeline
sln_viznas_iot_elock_oobe
The sln_viznas_iot_elock_oobe project is the application on the SLN-VIZNAS-IOT kit (shown in the figure below; regarding the Bootstrap and Bootloader in the software flowchart, I will introduce them in a future article). The following development work is based on the sln_viznas_iot_elock_oobe project; however, I need to sketch its basic workflow before starting the real development work.
Fig 4 SLN-VIZNAS-IOT software flowchart
sln_viznas_iot_elock_oobe's workflow
1. In the Camera_Start() function, the task (Camera_Init_Task) completes the initialization of the RGB and IR cameras, then creates a task (Camera_Task);
2. In the Display_Start() function, after the task (Display_Init_Task) completes the initialization of the display medium (USB or LCD), it immediately creates the task (Display_Task) and sends the message queue item s_DisplayReqMsg.id = QMSG_DISPLAY_FRAME_REQ to the task (Camera_Task); pDispData then points to the s_BufferLcd[0] array, which stores the image data to be displayed;
3. In the Oasis_Start() function, OASISLT_init() first completes the initialization of the OASIS library, then creates a task (Oasis_Task) that sends the message queue items gFaceDetReqMsg.id = QMSG_FACEREC_FRAME_REQ and gFaceInfoMsg.id = QMSG_FACEREC_INFO_UPDATE to the task (Camera_Task), so that pDetIR and pDetRGB point to the face frames captured by the RGB and IR cameras, and the content pointed to by infoMsgIn is updated;
4. After the cameras are initialized, the RGB camera works first. When image data has been captured, an interrupt is triggered, and the callback function Camera_Callback() sends the message queue item DQMsg.id = QMSG_CAMERA_DQ to the task (Camera_Task), and DQIndex++;
5. CAMERA_RECEIVER_GetFullBuffer() extracts the image data captured by the RGB camera and sends the message queue item DPxpMsg.id = QMSG_PXP_DISPLAY to the task (PXP_Task) created in the APP_PXP_Start() function; EQIndex++ executes, and the camera switches from RGB to IR. After the APP_PXPStartCamera2Display() function in the task (PXP_Task) completes processing, it sends the message queue item s_DResMsg.id = QMSG_PXP_DISPLAY to the task (Camera_Task), which in turn sends DresMsg.id = QMSG_DISPLAY_FRAME_RES to the task (Display_Task). The task (Display_Task) completes the display and then sends s_DisplayReqMsg.id = QMSG_DISPLAY_FRAME_REQ to the task (Camera_Task) to make pDispData point to the s_BufferLcd[1] array;
6. After the IR camera completes its capture, CAMERA_RECEIVER_GetFullBuffer() extracts the image data and sends the message queue item DPxpMsg.id = QMSG_PXP_DISPLAY to the task (PXP_Task) created in the APP_PXP_Start() function; EQIndex++ executes again, the camera switches back to RGB, and step 5 repeats. Finally, the message queue item FPxpMsg.id = QMSG_PXP_FACEREC is sent to the task (PXP_Task) and irReady = true is set. After the task (PXP_Task) receives that item, it calls APP_PXPStartCamera2DetBuf() and, after completing the processing, sends s_FResMsg.id = QMSG_PXP_FACEREC to the task (Camera_Task);
7. CAMERA_RECEIVER_GetFullBuffer() extracts the image data collected by the RGB camera, and step 5 repeats. When the (pDetRGB && irReady) condition is met, the message queue item FPxpMsg.id = QMSG_PXP_FACEREC is sent to the task (PXP_Task), and irReady = false, pDetRGB = NULL, pDetIR = NULL are set. After the task (PXP_Task) receives that item, it calls APP_PXPStartCamera2DetBuf() and, after completing the processing, sends the message queue item s_FResMsg.id = QMSG_PXP_FACEREC to the task (Camera_Task).
At this time, the (!pDetIR && !pDetRGB) condition is met and the Queue message FResMsg.id = QMSG_FACEREC_FRAME_RES is sent to the task (Oasis_Task), run OASISLT_run_extend to perform face recognition calculation, and send the message queue gFaceDetReqMsg.id = QMSG_FACEREC_FRAME_REQ to the task (Camera_Task) to make the pDetIR and pDetRGB point to the face block diagram captured by the RGB and IR cameras again. keep repeat steps 6 and 7; Fig5 sln_viznas_iot_elock_oobe's workflow flow Smart Coffee machine Fig 6 is the workflow of the smart coffee machine that I want to develop for, as there is no LCD board on hand, in the below development process, I will select Win10's camera (as the below figure shows) to output the captured image, further, take advantage of the Shell command to simulate the LCD's touch feature to interact with the board.   Fig6 workflow of the smart coffee machine Fig7 Camera Code modification In the commondef.h, add a new member variable 'uint16_t coffee_taste' in Union FeatureItem to stand for the favorite coffee taste; typedef union { struct { /*put char/unsigned char together to avoid padding*/ unsigned char magic; char name[FEATUREDATA_NAME_MAX_LEN]; int index; // this id identify a feature uniquely,we should use it as a handler for feature add/del/update/rename uint16_t id; uint16_t pad; // Add a new component uint16_t coffee_taste; /*put feature in the last so, we can take it as dynamic, size limitation: * (FEATUREDATA_FLASH_PAGE_SIZE * 2 - 1 - FEATUREDATA_NAME_MAX_LEN - 4 - 4 -2)/4*/ float feature[0]; }; unsigned char raw[FEATUREDATA_FLASH_PAGE_SIZE * 2]; } FeatureItem; // 1kB   In featuredb.h, add two member functions into class FeatureDB:  set_taste()  and  get_taste() , and add the definition of the above two member functions in featuredb.cpp; class FeatureDB { public: FeatureDB(); ~FeatureDB(); int add_feature(uint16_t id, const std::string name, float *feature); int update_feature(uint16_t id, const std::string name, float *feature); int del_feature(uint16_t id, std::string name); int del_feature(const std::string name); int del_feature_all(); std::vector<std::string> get_names(); int get_name(uint16_t id, std::string &name); std::vector<uint16_t> get_ids(); int ren_name(const std::string oldname, const std::string newname); int feature_count(); int get_free(int &index); int database_save(int count); int get_feature(uint16_t id, float *feature); void set_autosave(bool auto_save); bool get_autosave(); //Add two customize member functions int set_taste(const std::string username, uint16_t taste_number); int get_taste(const std::string username); private: bool auto_save; int load_feature(); int erase_feature(int index); int save_feature(int index = 0); int reassign_feature(); int get_free_mapmagic(); int get_remain_map(); }; int FeatureDB::set_taste(const std::string username, uint16_t taste_number) { int index = FEATUREDATA_MAX_COUNT; for (int i = 0; i < FEATUREDATA_MAX_COUNT; i++) { if (s_FeatureData.item[i].magic == FEATUREDATA_MAGIC_VALID) { if (!strcmp(username.c_str(), s_FeatureData.item[i].name)) { index = i; } } } if (index != FEATUREDATA_MAX_COUNT) { s_FeatureData.item[index].coffee_taste = taste_number; return 0; } else { return -1; } } int FeatureDB::get_taste(const std::string username) { int index = FEATUREDATA_MAX_COUNT; int taste_number; for (int i = 0; i < FEATUREDATA_MAX_COUNT; i++) { if (s_FeatureData.item[i].magic == FEATUREDATA_MAGIC_VALID) { if (!strcmp(username.c_str(), s_FeatureData.item[i].name)) { index = i; } } } if (index != 
FEATUREDATA_MAX_COUNT) { taste_number = s_FeatureData.item[index].coffee_taste; return taste_number; } else { return -1; } }   In database.h, add the declarations of  DB_Set_Taste()  and  DB_Get_Taste()  functions, and in database.cpp, add the related codes of the above two functions. These two functions are equivalent to encapsulating the newly added member functions set_taste() and get_taste() of the FeatureDB class; int DB_Del(uint16_t id, std::string name); int DB_Del(string name); int DB_DelAll(); int DB_Ren(const std::string oldname, const std::string newname); int DB_GetFree(int &index); int DB_GetNames(std::vector<std::string> *names); int DB_Count(int *count); int DB_Save(int count); int DB_GetFeature(uint16_t id, float *feature); int DB_Add(uint16_t id, float *feature); int DB_Add(uint16_t id, std::string name, float *feature); int DB_Update(uint16_t id, float *feature); int DB_GetIDs(std::vector<uint16_t> &ids); int DB_GetName(uint16_t id, std::string &names); int DB_GenID(uint16_t *id); int DB_SetAutoSave(bool auto_save); // Add two customize functions int DB_Set_Taste(const std::string username, const uint16_t taste); int DB_Get_Taste(const std::string username); int DB_Set_Taste(const std::string username, const uint16_t taste) { int ret = DB_MGMT_FAILED; ret = DB_Lock(); if (DB_MGMT_OK == ret) { ret = s_DB->set_taste(username, taste); DB_UnLock(); } return ret; } int DB_Get_Taste(const std::string username) { int ret = DB_MGMT_FAILED; ret = DB_Lock(); if (DB_MGMT_OK == ret) { ret = s_DB->get_taste(username); DB_UnLock(); } return ret; } In sln_api.h, add the declarations of the functions  VIZN_SetTaste() ,  VIZN_GetTaste()  and  VIZN_Is_Rec_User() , and add the codes of the above three functions in sln_api.cpp. The VIZN_SetTaste() and VIZN_GetTaste() functions are equivalent to the encapsulation of the DB_Set_Taste() and DB_Get_Taste() functions. Why is it so complicated? To follow the code layering mechanism of the elock_oobe project and reduce the difficulty of code implementation through code layered encapsulation. /** * @brief Set user's favorite coffee taste. * * @Param clientHandle The client handler which required this action * @Param userName Pointer to a buffer which contains the name of the new user. * @Param taste Coffee taste */ vizn_api_status_t VIZN_SetTaste(VIZN_api_client_t *clientHandle, char *UserName, cfg_Coffee_taste taste); /** * @brief Set user's favorite coffee taste. * * @Param clientHandle The client handler which required this action * @Param userName Pointer to a buffer which contains the name of the new user. 
* @Param taste Pointer to the Coffee taste */ vizn_api_status_t VIZN_GetTaste(VIZN_api_client_t *clientHandle, char *UserName, int *taste); vizn_api_status_t VIZN_Is_Rec_User(VIZN_api_client_t *clientHandle, char *UserName); ~~~~~~~~~ vizn_api_status_t VIZN_SetTaste(VIZN_api_client_t *clientHandle, char *UserName, cfg_Coffee_taste taste) { int32_t status; if (!IsValidUserName(UserName)) { return kStatus_API_Layer_RenameUser_InvalidUserName; } status = DB_Set_Taste(std::string(UserName), (uint16_t)taste); if (status == 0) { return kStatus_API_Layer_Success; } else if (status == -1) { return kStatus_API_Layer_SetTaste_Failed; } } vizn_api_status_t VIZN_GetTaste(VIZN_api_client_t *clientHandle, char *UserName, int *taste) { int32_t status; if (!IsValidUserName(UserName)) { return kStatus_API_Layer_RenameUser_InvalidUserName; } *taste = DB_Get_Taste(std::string(UserName)); if (*taste != -1) { return kStatus_API_Layer_Success; } else { return kStatus_API_Layer_GetTaste_Failed; } } vizn_api_status_t VIZN_Is_Rec_User(VIZN_api_client_t *clientHandle, char *UserName) { if (!IsValidUserName(UserName)) { return kStatus_API_Layer_RenameUser_InvalidUserName; } return kStatus_API_Layer_Success; } In sln_api_init.cpp, declare the variable:  std::string Current_User = "" ; which is used to store the name corresponding to the face after recognition, and add the processing function  Coffee_Rec()  after successful face recognition in the structure variable ops2; std::string Current_User = " "; //Add customize function int Coffee_Rec(VIZN_api_client_t *pClient, face_info_t face_info); client_operations_t ops2 = { .detect = NULL, .recognize = Coffee_Rec,//NULL, .enrolment = NULL, }; //Add customize function int Coffee_Rec(VIZN_api_client_t *pClient, face_info_t face_info) { Current_User = face_info.name; return 1; } In sln_timers.h, increase MS_SYSTEM_LOCKED to extend the locked status time to 25 seconds; ~~~~~~~~ #define MS_SYSTEM_LOCKED 25000 //2000 // MS in which the board is in a locked state after a reg/rec. 
~~~~~~~~ In sln_cli.cpp, add three Shell commands: order, set_taste, get_taste to stand for the operations of brewing coffee, setting coffee taste, and checking coffee taste; SHELL_COMMAND_DEFINE(set_taste, (char *)"\r\n\"set_taste username <0|1|2|3|~>\": set user's favorite taste\r\n" "0 - Cappuccino\r\n" "1 - Black Coffee\r\n" "2 - Coffee latte\r\n" "3 - Flat White\r\n" "4 - Cortado\r\n" "5 - Mocha\r\n" "6 - Con Panna\r\n" "7 - Lungo\r\n" "8 - Ristretto\r\n" "9 - Others \r\n", FFI_CLI_SetTasteCommand, SHELL_IGNORE_PARAMETER_COUNT); SHELL_COMMAND_DEFINE(get_taste, (char *)"\r\n\"get_taste username\": return user's favorite taste \r\n", FFI_CLI_GetTasteCommand, SHELL_IGNORE_PARAMETER_COUNT); SHELL_COMMAND_DEFINE(order, (char *)"\r\n\"order <0|1|2|3|~>\": order a favorite taste \r\n", FFI_CLI_OrderCommand, SHELL_IGNORE_PARAMETER_COUNT); ~~~~~~ static shell_status_t FFI_CLI_SetTasteCommand(shell_handle_t shellContextHandle, int32_t argc, char **argv) { if (argc != 3) { SHELL_Printf(shellContextHandle, "Wrong parameters\r\n"); return kStatus_SHELL_Error; } return UsbShell_QueueSendFromISR(shellContextHandle, argc, argv, SHELL_EV_FFI_CLI_SET_TASTE); } static shell_status_t FFI_CLI_GetTasteCommand(shell_handle_t shellContextHandle, int32_t argc, char **argv) { if (argc != 2) { SHELL_Printf(shellContextHandle, "Wrong parameters\r\n"); return kStatus_SHELL_Error; } return UsbShell_QueueSendFromISR(shellContextHandle, argc, argv, SHELL_EV_FFI_CLI_GET_TASTE); } shell_status_t FFI_CLI_OrderCommand(shell_handle_t shellContextHandle, int32_t argc, char **argv) { if (argc > 2) { SHELL_Printf(shellContextHandle, "Wrong parameters\r\n"); return kStatus_SHELL_Error; } return UsbShell_QueueSendFromISR(shellContextHandle, argc, argv, SHELL_EV_FFI_CLI_ORDER); } ~~~~~~ shell_status_t RegisterFFICmds(shell_handle_t shellContextHandle) { SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(list)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(add)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(del)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(rename)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(verbose)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(camera)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(version)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(save)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(updateotw)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(reset)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(emotion)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(liveness)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(detection)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(display)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(wifi)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(app_type)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(low_power)); // Add three Shell commands SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(order)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(set_taste)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(get_taste)); return kStatus_SHELL_Success; } In sln_cli.cpp, it needs to add corresponding codes for handle order, set_taste, get_taste instructions in task UsbShell_CmdProcess_Task else if (queueMsg.shellCommand == SHELL_EV_FFI_CLI_SET_TASTE) { int coffee_taste = atoi(queueMsg.argv[2]); if (coffee_taste >= Cappuccino && coffee_taste <= Others) { status = 
VIZN_SetTaste(&VIZN_API_CLIENT(Shell),(char *)queueMsg.argv[1], (cfg_Coffee_taste)coffee_taste); if (status == kStatus_API_Layer_Success) { SHELL_Printf(shellContextHandle, "User: %s like coffee taste: %s \r\n", queueMsg.argv[1], Coffee_type[coffee_taste]); } else { SHELL_Printf(shellContextHandle, "Cannot set coffee taste\r\n"); } } else { SHELL_Printf(shellContextHandle, "Unsupported coffee taste\r\n"); } } else if (queueMsg.shellCommand == SHELL_EV_FFI_CLI_GET_TASTE) { int get_taste_num = 0; status = VIZN_GetTaste(&VIZN_API_CLIENT(Shell),(char *)queueMsg.argv[1], &get_taste_num); if (status == kStatus_API_Layer_Success) { SHELL_Printf(shellContextHandle, "User: %s like coffee taste: %s \r\n", queueMsg.argv[1], Coffee_type[(cfg_Coffee_taste)(get_taste_num)]); } else { SHELL_Printf(shellContextHandle, "Cannot get coffee taste\r\n"); } } else if (queueMsg.shellCommand == SHELL_EV_FFI_CLI_ORDER) { status = VIZN_Is_Rec_User(&VIZN_API_CLIENT(Shell),(char *)Current_User.c_str()); if (status == kStatus_API_Layer_Success) { if (queueMsg.argc == 1) { int get_taste_num = 0; status = VIZN_GetTaste(&VIZN_API_CLIENT(Shell),(char*)Current_User.c_str(), &get_taste_num); if (status == kStatus_API_Layer_Success) { SHELL_Printf(shellContextHandle, "User: %s order the a cup of %s \r\n", Current_User.c_str(), Coffee_type[(cfg_Coffee_taste)(get_taste_num)]); } else { SHELL_Printf(shellContextHandle, "Sorry, please order again, Current user is %s\r\n",Current_User.c_str()); } } else if(queueMsg.argc == 2) { int coffee_taste = atoi(queueMsg.argv[1]); if (coffee_taste >= Cappuccino && coffee_taste <= Others) { status = VIZN_SetTaste(&VIZN_API_CLIENT(Shell),(char*)Current_User.c_str(), (cfg_Coffee_taste)coffee_taste); if (status == kStatus_API_Layer_Success) { SHELL_Printf(shellContextHandle, "User: %s order a cup of %s \r\n", Current_User.c_str(), Coffee_type[coffee_taste]); } else { SHELL_Printf(shellContextHandle, "Cannot set coffee taste, Current user is %s\r\n",Current_User.c_str()); } } else { SHELL_Printf(shellContextHandle, "Unsupported coffee taste\r\n"); } } } } Use the cafe logo of《Friends》to replace the original Welcome_home picture, use the BmpCvt tool to convert the picture into the corresponding array, and add it to welcomehome_320x122.h. 
static const unsigned short Coffee_shop_320_122[] = { 0x59E6, 0x6227, 0x6247, 0x59C5, 0x59C5, 0x59A5, 0x4103, 0x6A67, 0x6A47, 0x6227, 0x6A47, 0x6A68, 0x7268, 0x6A67, 0x6A67, 0x6A47, 0x72A9, 0x6A68, 0x7268, 0x6A48, 0x5A06, 0x6A88, 0x6A68, 0x6247, 0x6A47, 0x7289, 0x7289, 0x6A47, 0x6A47, 0x6A47, 0x6227, 0x6A68, 0x6206, 0x6A47, 0x5A26, 0x6247, 0x6227, 0x6A27, 0x4924, 0x836D, 0x5207, 0x7BAC, 0x5247, 0x83ED, 0x4A47, 0x2923, 0x7B8C, 0x49E5, 0x49E5, 0x4A05, 0x28C1, 0x5226, 0x6267, 0x6A87, 0x72E9, 0x6267, 0x6AA9, 0x5A27, 0x6AA9, 0x6AA9, 0x5A47, 0x6A88, 0x5A06, 0x5A47, 0x6AA9, 0x5A47, 0x62A9, 0x5206, 0x6288, 0x6268, 0x5A47, 0x5A27, 0x5A47, 0x5A27, 0x49E6, 0x4A07, 0x4A07, 0x5A89, 0x49C6, 0x5A48, 0x5A28, 0x5A47, 0x5226, 0x49E6, 0x49C6, 0x41A6, 0x5208, 0x2082, 0x52A8, 0x6B6B, 0x39A5, 0x39A5, 0x3964, 0x49E7, 0x3104, 0x49C7, 0x3945, 0x41A6, 0x28A2, 0x2061, 0x3965, 0x28E3, 0x1881, 0x3944, 0x3103, 0x3103, 0x3903, 0x4145, 0x51A6, 0x51C6, 0x4985, 0x51E6, 0x51E6, 0x61E7, 0x6A48, 0x6A28, 0x6A28, 0x6A27, 0x61E6, 0x6207, 0x6A68, 0x59E7, 0x4185, 0x51E6, 0x51A6, 0x6228, 0x5A07, 0x6228, 0x5A08, 0x4184, 0x41A5, 0x4164, 0x3944, 0x3944, 0x736B, 0x83ED, 0x41A5, 0x83ED, 0x6288, 0x8BAB, 0x836A, 0x6287, 0x6B2A, 0x5267, 0x83CD, 0x5A68, 0x5228, 0x3986, 0x3985, 0x7B0A, 0x6A67, 0x7267, 0x832B, 0x49A5, 0x6206, 0x8AC9, 0x72A8, 0x82C9, 0x82E9, 0x8309, 0x6A46, 0x8B2B, 0x3860, 0x8329, 0x6A67, 0x7288, 0x7268, 0x61E6, 0x7267, 0x6A67, 0x59C5, 0x51A4, 0x6A46, 0x7AA8, 0x6A26, 0x7287, 0x7AA8, 0x72A8, 0x72A9, 0x51C5, 0x5A27, 0x5A27, 0x3923, 0x ~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~ 0x7B8C, 0x734B, 0x6B0A, 0x83CD, 0x83ED, 0x8C0E, 0x7B8C, 0x7B6C, 0x20C2, 0x5227, 0x83ED, 0x6AE9, 0x734B, 0x62A9, 0x7B6B, 0x7B8C, 0x62E9, 0x7BAC, 0x7B6B, 0x732A, 0x940D, 0x83AC, 0x732A, 0x7309, 0x8BCC, 0x7309, 0x8BCD, 0x83AC, 0x7B6B, 0x940D, 0x3943, 0x942E, 0x7B6B, 0x734A, 0x7B8B, 0x62C8, 0x7B8B, 0x7B6A, 0x7BAB, 0x732A, 0x7B6B, 0x7B6B, 0x83CC, 0x6B09, 0x6AA9, 0x6AE9, 0x7B6B, 0x7B8B, 0x83AC, 0x734B, 0x6AC9, 0x6B0A, 0x734B, 0x734A, 0x62A8, 0x732A, 0x8C0E, 0x8BCD, 0x944F, 0x734B, 0x7B8B, 0x732A, 0x942E, 0x8BCD, 0x83AD, 0x732B, 0x6B0A, 0x6AEA, 0x62C9, 0x9C90, 0x28C2, 0x8BEE, 0x93EE, 0x8BCD, 0x4183, 0x838B, 0x7B6A, 0x6287, 0x8BCB }; Programming the new project After saving the modified code and recompile the sln_viznas_iot_elock_oobe project (as shown in the figure below), then connect the MCU-LINK to J6 on the SLN-VIZNAS-IOT, just like Fig9 shows. Fig8 Recompile code Fig9 MCU-LINK (Note: it needs to reselect the Flash driver, as the below figure shows.) Fig10 Flash driver After that, it's able to program the code project to the on-board Hyperflash. Test & Summary When the new code project boot-up, please refer to Get Started with the SLN-VIZNAS-IOT to use the serial terminal to test the newly added three Shell commands: orders, set_taste, and get_taste. Once a face is successfully recognized, the cafe logo will appear up (as shown in Fig11). Fig11 Cafe logo Definitely, this smart coffee machine seems like a 'toy' demo, and there is a lot of work to improve it. Below is the list of my future work plans, Use the LCD panel instead of USB to display; Connect an external amplifier to enable voice prompt feature; Enable the Wifi feature to connect to the App; Use the GUI library to enhance UI experience; Add a voice recognition feature to control; And I'll be glad to hear any comments from you.    
[Chinese translation] See the attachment. Original link: https://community.nxp.com/t5/i-MX-RT-Knowledge-Base/Design-an-IoT-edge-node-for-CV-application-base-on-the-i/ta-p/1127423
1. Abstract
When using the RT600 SDK USB composite example mimxrt685audevk_dev_composite_hid_audio_unified_bm, a customer found that the default high-speed (HS) interval is 125us. In the actual application, a 125us packet interval puts a large interrupt load on the CPU, so the customer wanted to enlarge the interval, for example to 1 ms. After changing the interval to 1 ms, the data packets still reached the RT chip, but playback on the RT side failed: the speaker produced no sound. A 500us interval can be made to work (in asynchronous mode, as section 2.2.3 shows), but even at 500us the customer's CPU load still exceeded 80%, which leaves little room for further application code, so a working 1 ms configuration was still needed. This article examines UAC transfers at different intervals and provides a solution for a 1 ms interval.
2. Test situation
Board: MIMXRT685-AUD-EVK
SDK: SDK_2_15_000_MIMXRT685-AUD-EVK
USB analyzer: Lecroy USB-TOS2-A01-X
2.1 Platform connection situation
First, the connection between the MIMXRT685-AUD-EVK, the USB analyzer, and the audio source. This article mainly tests the RT685 UAC speaker function: USB carries audio data to the RT685, which plays it through a player (e.g. headphones) or a speaker. The J7 USB port of the EVK is connected to port A of the Lecroy analyzer, and port B of the analyzer is connected to the audio source, which can be a PC or a mobile phone. If a PC is used, "USB AUDIO+HID DEMO" must be selected as the speaker, because that is the name the board enumerates with. The other side of the Lecroy analyzer is connected to a PC that records the USB bus data. The connection diagram is as follows:
Fig 1 EVK USB analyzer connection
2.2 Packet behavior at different audio intervals
This chapter gives the modification for each interval and the corresponding USB analyzer capture. The clock of a synchronous isochronous audio endpoint can be synchronized through the SOF: a USB 1.x full-speed endpoint must lock to the 1 ms SOF frame, while a USB 2.0 high-speed endpoint locks to the 125us SOF microframe. A modified interval is therefore a power-of-two multiple of 125us: 250us, 500us, 1 ms, and so on. The modification is usually made directly in the USB stack's usb_audio_config.h:
#define HS_ISO_OUT_ENDP_INTERVAL (0x01)
The relationship between HS_ISO_OUT_ENDP_INTERVAL and the real interval time is 125us * 2^(HS_ISO_OUT_ENDP_INTERVAL - 1), so:
HS_ISO_OUT_ENDP_INTERVAL = 1 : 125us
HS_ISO_OUT_ENDP_INTERVAL = 2 : 250us
HS_ISO_OUT_ENDP_INTERVAL = 3 : 500us
HS_ISO_OUT_ENDP_INTERVAL = 4 : 1000us
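As a quick sanity check of this formula, the encoding can be expanded in plain C (a standalone sketch, not SDK code):

#include <stdio.h>

/* Expand the HS_ISO_OUT_ENDP_INTERVAL encoding into microseconds:
 * interval = 125 us * 2^(n - 1). */
int main(void)
{
    for (unsigned n = 1; n <= 4; n++)
    {
        unsigned interval_us = 125u << (n - 1);
        printf("HS_ISO_OUT_ENDP_INTERVAL = %u -> %u us\n", n, interval_us);
    }
    return 0; /* prints 125, 250, 500 and 1000 us */
}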
The following sections show the USB analyzer capture for each of the four configurations.
2.2.1 125us interval packet situation
#define HS_ISO_OUT_ENDP_INTERVAL (0x01)
Fig 2 125us interval
With the interval configured to 125us, the SOF interval in front of each OUT packet is 125us, and each OUT data packet is 24 bytes long; the 24-byte payload is real audio data. In this case playback is normal.
2.2.2 250us interval packet situation
#define HS_ISO_OUT_ENDP_INTERVAL (0x02)
The capture for 250us is shown below: the SOF interval between two OUT packets is 250us, and each OUT data packet is 48 bytes long. In other words, as the interval increases, the packet length increases proportionally: each transfer takes longer but carries more data, so the RT side must also provide larger USB receive buffers.
Fig 3 250us interval
In this case the speaker also plays normally.
2.2.3 500us interval packet situation
#define HS_ISO_OUT_ENDP_INTERVAL (0x03)
Fig 4 500us interval SYNC mode
The data packet is now 96 bytes and its content is still normal audio data, but playback fails: no sound can be heard. However, with
#define USB_DEVICE_AUDIO_USE_SYNC_MODE (0U)
that is, asynchronous instead of synchronous mode, the audio can be heard. The capture is:
Fig 5 500us interval no SYNC mode
Asynchronous mode has its own problem, though: after running for a long period it can drift out of sync and playback may stall. A 1 ms interval in synchronous mode is therefore still the goal.
2.2.4 1ms interval packet situation
#define HS_ISO_OUT_ENDP_INTERVAL (0x04)
Fig 6 interval 1ms
The data packet length is now 192 bytes, and the SOF interval in front of each OUT packet is indeed 1 ms. The payload still looks like normal audio data, so the audio is successfully transferred over USB to the RT side. Playback, however, remains broken: no sound can be heard.
3. 1ms interval solution
Now let's debug the project to find the problem. The USB analyzer already shows that the audio data is transmitted on the bus, so the first thing to check, in the kUSB_DeviceAudioEventStreamRecvResponse event of USB_DeviceAudioCompositeCallback in audio_unified.c, is whether the USB stack receives the data packets, whether the packet length is correct, and whether the content looks like audio data. The debug result:
Fig 7 data packet received situation
The USB side does receive the packets, and the data looks like real audio data (abnormal data would be all zeros or irregular values), yet playback still fails. So check the I2S playback path next, the TxCallback function in composite.h:
Fig 8 I2S play data buffer
The data handed to the I2S buffer is always 0, even though g_composite.audioUnified.startPlayFlag is nonzero and the code uses:
s_TxTransfer.dataSize = g_composite.audioUnified.audioPlayTransferSize;
s_TxTransfer.data = audioPlayDataBuff + g_composite.audioUnified.tdReadNumberPlay;
Testing shows that the audioPlayDataBuff contents are also 0. So the problem is on the USB receive side: the data arrives, but saving it into audioPlayDataBuff goes wrong. Back to audio_unified.c:
Fig 9 USB_DeviceAudioCompositeCallback
Fig 10 USB_AudioSpeakerPutBuffer
Testing shows that in Fig 9, audioUnified.tdWriteNumberPlay never becomes larger than g_deviceAudioComposite->audioUnified.audioPlayTransferSize * AUDIO_CLASS_2_0_HS_LOW_LATENCY_TRANSFER_COUNT = 192 * 6:
#define AUDIO_CLASS_2_0_HS_LOW_LATENCY_TRANSFER_COUNT \
    (0x06U) /* 6 means 6 micro frames (6*125us), make sure the latency is smaller than 1ms for sync mode */
This "low latency" count is a delay buffer: the USB receive buffer must be able to hold at least the delayed packets, and one unit represents one interval frame. With the interval now at 1 ms, 6 delay units mean a 6 ms delay.
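Putting numbers on this makes the failure clear (my own arithmetic, derived from the usb_audio_config.h values quoted in this article, not SDK code): one 1 ms transfer at 48 kHz, stereo, 16-bit is 192 bytes, so the 6-unit latency threshold is 1152 bytes, while the default 2-unit play buffer holds only 384 bytes.

#include <stdio.h>

#define BYTES_PER_MS      (48 * 2 * 2) /* 48 kHz * 2 channels * 2 bytes = 192 */
#define LATENCY_UNITS     6            /* AUDIO_CLASS_2_0_HS_LOW_LATENCY_TRANSFER_COUNT */
#define DEFAULT_BUF_UNITS 2            /* default AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT */

int main(void)
{
    printf("latency threshold  : %d bytes\n", BYTES_PER_MS * LATENCY_UNITS);     /* 1152 */
    printf("default play buffer: %d bytes\n", BYTES_PER_MS * DEFAULT_BUF_UNITS); /* 384 */
    /* A 384-byte ring buffer can never let the write index climb past a
     * 1152-byte threshold, so playback never starts and the I2S path
     * keeps sending zeros. */
    return 0;
}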
The receive buffer therefore needs to be enlarged. It is governed by the following definitions:
Fig 11 USB device callback
g_composite.audioUnified.audioPlayBufferSize = AUDIO_PLAY_BUFFER_SIZE_ONE_FRAME * AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT;
#define AUDIO_PLAY_BUFFER_SIZE_ONE_FRAME AUDIO_OUT_TRANSFER_LENGTH_ONE_FRAME
#define AUDIO_OUT_FORMAT_CHANNELS (0x02U)
#define AUDIO_OUT_FORMAT_SIZE (0x02)
#define AUDIO_OUT_SAMPLING_RATE_KHZ (48)
/* transfer length during 1 ms */
#define AUDIO_OUT_TRANSFER_LENGTH_ONE_FRAME \
    (AUDIO_OUT_SAMPLING_RATE_KHZ * AUDIO_OUT_FORMAT_CHANNELS * AUDIO_OUT_FORMAT_SIZE)
#define AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT \
    (2U) /* 2 units size buffer (1 unit means the size to play during 1ms) */
Set AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT larger: the buffer must hold at least 6 ms of data, and since one unit is 1 ms of playback data, 12 units gives a comfortable margin:
#define AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT (12U)
Build and download the project, then capture the USB bus data again:
Fig 12 1ms interval data packet play successfully
This waveform corresponds to data that plays normally. Below is the video of the MIMXRT685-AUD-EVK test. The attachment provides two demos, for the RT685 and the RT1170, both modified to the 1 ms interval; the modification method for the RT1170 is exactly the same.
4. Summary
Whether for the RT600, the RT1170, or indeed the UAC code of the whole RT series, modifying the interval mainly involves the following macros in usb_audio_config.h. The values below are for a 1 ms interval:
#define HS_ISO_OUT_ENDP_INTERVAL (0x04) //(0x01)
#define AUDIO_CLASS_2_0_HS_LOW_LATENCY_TRANSFER_COUNT \
    (0x06U) /* 6 means 6 micro frames (6*125us), make sure the latency is smaller than 1ms for ehci high speed */
#define AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT \
    (12U) //(2U) /* 1 unit means the size to play during 1ms */
Test video: