i.MX RT Crossover MCUs Knowledge Base

1 Introduction

With the rapid development of science and technology, the Internet of Things (IoT) is widely used in areas such as industry, agriculture, environment, transportation, logistics, security, and other infrastructure. IoT makes our lives more colorful and intelligent. The explosive growth of the IoT is inseparable from cloud platforms. There are many cloud services on the market, such as Amazon's AWS, Microsoft's Azure, Google Cloud, China's Alibaba Cloud, Baidu Cloud, OneNET, etc.

Amazon AWS is a professional cloud computing service provided by Amazon. It offers a complete set of infrastructure and cloud solutions to customers in countries and regions around the world, and it is currently one of the most widely used cloud computing platforms. AWS IoT is a managed cloud platform that lets connected devices interact easily and securely with cloud applications and other devices.

The NXP crossover RT MCU product line ships with a series of AWS sample codes. This article uses the remote_control_wifi_nxp example in the official MIMXRT1060-EVK SDK to demonstrate data interaction between the AWS IoT cloud, an Android mobile app, and the MQTT.fx client. The cloud topology of this article is as follows: Fig.1-1

2 AWS cloud operation

2.1 Create an AWS account

Prepare a credit card, then go to the Amazon link below to create an AWS account:
https://console.aws.amazon.com/console/home

2.2 Create a Thing

Open the AWS IoT console: https://console.aws.amazon.com/iot
Choose the Things item under Manage. For first-time use, choose "Register a thing" to create the thing; otherwise, click the "Create" button in the top right corner. Choose "Create a single thing" to create the new thing; see the following pictures for details. Fig. 2-1 Fig. 2-2 Fig. 2-3

2.3 Create a certificate

Create a certificate for the newly created thing by clicking the "Create certificate" button shown in the following picture: Fig. 2-4
After the certificate is built, the page shows the created certificate information, which means the certificate has been generated and can be used. Fig. 2-5
Please note: download the certificate for this thing, the public key, and the private key. They will be used in the MQTT.fx tool configuration. Click "A root CA for AWS IoT Download" to download the root CA for AWS IoT; the MQTT.fx tool setup will also use it. Open the root CA download link to download the CA certificate. RSA 2048-bit key: VeriSign Class 3 Public Primary G5 root CA certificate. Fig. 2-6
At last, we get these files:
7abfd7a350-certificate.pem.crt
7abfd7a350-private.pem.key
7abfd7a350-public.pem.key
AmazonRootCA1.pem
Save them; they will be used later. Click the "Activate" button to activate the certificate, then click "Done". The policy will be attached later.

2.4 Create Policies

Back on the IoT console page: https://console.aws.amazon.com/iot/
Select Policies under the Secure item to create the new policy. Fig. 2-7
Input the policy name. In the Action area fill in: iot:*, and in the Resource ARN area fill in: *. Check the Allow item and click the Create button to finish the new policy creation. Fig. 2-8

2.5 Attach the thing relationships

After the thing, certificate, and policy creation, attach the policy to the certificate, and attach the certificate to the thing. Fig. 2-9
Choose Certificates under the Secure item. On the related certificate, choose "…" to open the drop-down list, click "Attach policy", and choose the newly created policy. Then click "Attach thing" and choose the newly created thing. Fig. 2-10 Fig. 2-11 Fig. 2-12
Now open Things under the Manage item and check the detailed thing-related information. Fig. 2-13
Double click the thing; in the Interact item we can find the REST API endpoint. The RT code and the MQTT.fx tool will use this endpoint to connect to the cloud. Fig. 2-14
Check Security, and you will find the previously created certificate, which means this thing already has the new certificate attached: Fig. 2-15
Until now we have finished the thing-related configuration. It will be used for the MQTT.fx, Android app, and RT EVK board connections and testing. We can also check the communication data directly through the AWS shadow web page.

3 Android related configuration

3.1 AWS Cognito configuration

To use the Android app to communicate with the AWS IoT cloud, the AWS side also needs the Cognito service to authorize access to AWS IoT and the device shadows. First create a new identity pool from the following link: https://console.aws.amazon.com/cognito Fig. 3-1
Click "Manage Identity Pools", then click "Create new identity pool". Fig. 3-2 Fig. 3-3 Fig. 3-4
Here it will generate two roles:
Cognito_PoolNameAuth_Role
Cognito_PoolNameUnauth_Role
Click Allow to finish the identity pool creation. Fig. 3-5
Please record the related identity pool ID; it will be used in the Android app .properties configuration file.

3.2 Create policies in IAM for Cognito

Open https://console.aws.amazon.com/iam
Click the Policies item under Access management. Fig. 3-6
Choose "Create policy" to create an IAM policy, and in the policy JSON area write the following content: Fig. 3-7

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["iot:Connect"],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": ["iot:Publish"],
      "Resource": [
        "arn:aws:iot:us-east-1:965396684474:topic/$aws/things/RTAWSThing/shadow/update",
        "arn:aws:iot:us-east-1:965396684474:topic/$aws/things/RTAWSThing/shadow/get"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["iot:Subscribe", "iot:Receive"],
      "Resource": ["*"]
    }
  ]
}

Please note, in the JSON content:
"arn:aws:iot:<REGION>:<ACCOUNT ID>:topic/$aws/things/<THING NAME>/shadow/update",
"arn:aws:iot:<REGION>:<ACCOUNT ID>:topic/$aws/things/<THING NAME>/shadow/get"
REGION: the us-east-1 shown in Fig. 3-5.
ACCOUNT ID: it can be found in the upper right corner under My Account. Fig. 3-8 Fig. 3-9
After finishing the IAM policy creation, go back to the IAM Policies page and filter policies as "Customer managed"; we can find the newly created customer policy. Fig. 3-10

3.3 Attach the policy to the Cognito role in IAM

In IAM, choose the Roles item: Fig. 3-11
Double click the Cognito_PoolNameUnauth_Role, which was generated when creating the pool in Cognito, click "Attach policies", and select the newly created policy. Fig. 3-12 Fig. 3-13
Until now, we have finished the AWS Cognito configuration.
3.4 Android properties file configuration

Create a .properties file with the content:

customer_specific_endpoint=<REST API ENDPOINT>
cognito_pool_id=<COGNITO POOL ID>
thing_name=<THING NAME>
region=<REGION>

Please fill in the correct content:
REST API ENDPOINT: Fig. 2-14
COGNITO POOL ID: Fig. 3-5
THING NAME: Fig. 2-14, upper left corner
REGION: Fig. 3-5, the region part of the COGNITO POOL ID

As an example, my properties file content is:

customer_specific_endpoint=a215vehc5uw107-ats.iot.us-east-1.amazonaws.com
cognito_pool_id=us-east-1:c5ca6d11-f069-416c-81f9-fc1ec8fd8de5
thing_name=RTAWSThing
region=us-east-1

In real usage, please use your own configured data; otherwise, it will connect to my cloud endpoint.

4 MQTT.fx configuration and testing

MQTT.fx is an MQTT client tool based on Eclipse Paho and written in Java. It supports subscribing and publishing messages through topics. You can download this tool from the following link:
http://mqttfx.jensd.de/index.php/download
The version used here is 1.7.1.

4.1 MQTT.fx configuration

Choose the connection configuration button to enter the connection configuration page: Fig. 4-1
Profile Name: enter the configuration name.
Broker Address: the REST API endpoint.
Broker Port: 8883
Client ID: generate it freely.
CA file: the downloaded CA certificate file.
Client Certificate File: the related certificate file.
Client Key File: the private key file.
Check "PEM formatted". Click Apply and OK to finish the configuration.

4.2 Use the AWS cloud to test the connection

To check whether the device can connect to the cloud, a preliminary connection test can be performed. Open the AWS page: https://console.aws.amazon.com/iot. There is a Test button in this interface, which allows testing against other clients or against itself. Both the AWS cloud test page and MQTT.fx subscribe to the topic:
$aws/things/RTAWSThing/shadow/update
MQTT.fx publishes data to the topic:
$aws/things/RTAWSThing/shadow/update
Both the cloud test page and the MQTT.fx subscriber receive the data: Fig. 4-2
Next, data is published from the cloud side, and both the MQTT.fx subscriber and the cloud subscriber receive it: Fig. 4-3
Until now, the AWS cloud can transfer data between the AWS IoT cloud and the clients.

5 RT1060 and Wi-Fi module configuration

We use the RT1060 SDK 2.8.0 remote_control_wifi_nxp as the RT test code:
SDK_2.8.0_EVK-MIMXRT1060\boards\evkmimxrt1060\aws_examples\remote_control_wifi_nxp
Test platform: MIMXRT1060-EVK with a Panasonic PAN9026 SDIO ADAPTER plus an SD-to-uSD adapter. The project uses the Panasonic PAN9026 SDIO ADAPTER by default.

5.1 Wi-Fi and AWS code configuration

The project needs a working Wi-Fi SSID and password, so prepare a working Wi-Fi network. Then add the SSID and password in aws_clientcredential.h:

#define clientcredentialWIFI_SSID       "Paste WiFi SSID here."
#define clientcredentialWIFI_PASSWORD   "Paste WiFi password here."

The AWS connection settings are also in aws_clientcredential.h:

#define clientcredentialMQTT_BROKER_ENDPOINT "a215vehc5uw107-ats.iot.us-east-1.amazonaws.com"
#define clientcredentialIOT_THING_NAME       "RTAWSThing"
#define clientcredentialMQTT_BROKER_PORT      8883

5.2 Certificate and key configuration

Open the following tool from the SDK:
SDK_2.8.0_EVK-MIMXRT1060\rtos\freertos\tools\certificate_configuration\CertificateConfigurator.html
Fig. 5-1
Generate the new aws_clientcredential_keys.h and replace the old one.
Taking the MCUXpresso IDE project as an example, the file location is: Fig. 5-2
Build the project and download it to the MIMXRT1060-EVK board.

6 Test result

On the Android mobile phone, download and install the APK in this folder:
SDK_2.8.0_EVK-MIMXRT1060\boards\evkmimxrt1060\aws_examples\remote_control_android\AwsRemoteControl.apk
The SDK can be downloaded from this link: Welcome | MCUXpresso SDK Builder
Then we can use the Android app to remotely control the RT EVK on-board LED. The test results follow.

6.1 App and EVK test result

MIMXRT1060-EVK printf information: Fig. 6-1
Turn the LED on and off: Fig. 6-2 Fig. 6-3

6.2 MQTT.fx subscribe result

Turning on the LED, we can see two subscribed messages: Fig. 6-4 Fig. 6-5
Turning off the LED, we also see two subscribed messages: Fig. 6-6 Fig. 6-7
Of the two messages, the first one sets the LED status; the second one is the EVK reporting its LED state back.
MQTT.fx can also use the publish page to publish this data:
{"state":{"desired":{"LEDstate":1}}}
or
{"state":{"desired":{"LEDstate":0}}}
to the topic: $aws/things/RTAWSThing/shadow/update
This also turns the on-board LED on or off.

6.3 AWS cloud shadow display result

Turn on the LED: Fig. 6-8
Turn off the LED: Fig. 6-9

In conclusion, after the above configuration and testing, an Android mobile phone can remotely control the RT EVK on-board LED and retrieve its state. The MQTT.fx client tool and the AWS shadow page can also be used to check the communication data.
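For reference, the two messages seen in section 6.2 form a desired/reported shadow pair. A representative exchange is sketched below; the exact reported payload produced by the demo may carry additional fields, so treat this as illustrative only.

The app or MQTT.fx publishes the desired state:
{"state":{"desired":{"LEDstate":1}}}

The EVK then reports the actual state back:
{"state":{"reported":{"LEDstate":1}}}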
RT106X secure JTAG test and IDE debug

1 Introduction

Regarding the usage of RT10XX Secure JTAG, nxp.com has already released a very good application note, AN12419 "Secure JTAG for i.MX RT10xx":
https://www.nxp.com/docs/en/application-note/AN12419.pdf
This application note explains the principle of Secure JTAG, how to modify the fuses to enable the Secure JTAG function, and the content of the related JLinkScript file, and then shows how to use JLink Commander to identify the ARM core. Usually, if the ARM core can be identified, the Secure JTAG connection has passed. In practical usage, however, many customers encounter different issues: Secure JTAG cannot find the ARM core at all, the core identification is not stable, or they ask how to add Secure JTAG debugging to common IDEs such as MCUXpresso, IAR, and MDK.

Testing Secure JTAG also has a cost, because fuses need to be modified; if the wrong fuse bit is accidentally burned, it may cause irreversible problems. Because customer situations differ, I have done more tests, borrowing boards with a chip socket that can take different RT chips. I have tested RT1050, RT1060, and RT1064, and since some customers reported that the issues also reproduce on the EVK, I tested the Secure JTAG function on the RT1060 and RT1064 EVKs as well.

This article shares all the relevant experience, so that others have a reference when encountering similar problems and can avoid unnecessary minefields. This document uses the following platform:

MIMXRT1064-EVK rev A (RT1060-EVK and RT1050-EVKB are similar)
SDK_2_13_0_EVK-MIMXRT1064
MCUXpresso IDE v11.7.1_9221
MDK V5.36 (higher revisions are the same)
IAR 9.30.1 (higher revisions are the same)
Segger JLink Plus, JLink driver version V788D
NXP-MCUBootUtility-5.1.0

2 RT1064 secure JTAG modification

Under normal circumstances, it is not recommended that customers burn all the related fuses at once and then test. I usually proceed step by step: modify the hardware layout to ensure it supports JTAG, save the original fuse readout, burn the JTAG fuse, test JTAG, and finally burn and test the other fuses for Secure JTAG.

2.1 MIMXRT1064-EVK hardware modification

For the RT10XX EVK, the board default matches the chip default, which supports SWD. The JTAG pins are connected to other hardware modules on the board, which interferes with the JTAG function. When JTAG is to be used, the circuit needs to be modified, just as MIMXRT105060HDUG says:
(1) Burn fuse DAP_SJC_SWD_SEL from '0' to '1' to choose JTAG.
(2) DNP R323, R309, R152 to isolate the JTAG multiplexed signals.
(3) Keep J47 to J50 off to isolate the board-level debugger.
So, on the MIMXRT1064-EVK board, just remove R323, R309, R152 and disconnect J47, J48, J49, J50 (which disconnects the on-board debugger), then use the external Segger JLink JTAG interface to connect to J21 on the MIMXRT1064-EVK.

2.2 Original fuse map read

First, put the MIMXRT1064-EVK board into serial download mode: SW7: 1-OFF, 2-OFF, 3-OFF, 4-ON.
Use the MCUBootUtility tool to connect the EVK and read the initial fuse map; the situation is as follows: Fig 1

2.3 JTAG modification and test

Modify the fuse to switch from SWD to JTAG: 0X460[19] DAP_SJC_SWD_SEL=1. Fig 2
Use JLink Commander with the JTAG interface to connect the board and find the ARM CM7 core: Fig 3
If the ARM CM7 core cannot be identified, the hardware still has issues or the modified fuse bit is not correct. Double check, and make sure the ARM core can be found before going to the next steps.

2.4 Secure JTAG modification

Modify the fuse bits to enable Secure JTAG:
0X460[23:22]: JTAG_SMODE = 1
0X460[26]: KTE_FUSE = 1
0X610, 0X600: burn the key 0xedcba987654321. Users can also burn other custom keys, but must record them, as the JLinkScript needs the key. Fig 4
In the above picture, the Secure JTAG fuses and key fuses are finished. Finally, burn fuse 0X400[6]: SJC_RESP_LOCK=1, which closes write and read access to the secret response key: Fig 5
Here we can see the 0X600, 0X610 key area is shadowed. Now record UUID0 and UUID1; the script will read them out to check whether the UUID is correct.

2.5 Secure JTAG JLink Commander test

During the Secure JTAG connection process, the JTAG_MOD pin needs to be pulled low and high, so a wire is needed for that. On the MIMXRT1064-EVK, J25_4 provides 3.3 V, and the JTAG_MOD signal is available on test point TP11. By default, JTAG_MOD is pulled low; when it needs to be pulled high, connect TP11 to J25_4.

The test uses the following JLinkScript; see also the attached NXP_RT1064_SecureJTAG.JlinkScript file:

int InitTarget(void) {
  int r;
  int v;
  int Key0;
  int Key1;
  JLINK_SYS_Report("***********************************************");
  JLINK_SYS_Report("J-Link script: InitTarget() *");
  JLINK_SYS_Report("NXP iMXRT, Enable Secure JTAG *");
  JLINK_SYS_Report("***********************************************");
  JLINK_SYS_MessageBox("Set pin JTAG_MOD => 1 and press any key to continue...");
  // Secure response stored @ 0x600, 0x610 in eFUSE region (OTP memory)
  Key0 = 0x87654321;
  Key1 = 0xedcba9;
  JLINK_CORESIGHT_Configure("IRPre=0;DRPre=0;IRPost=0;DRPost=0;IRLenDevice=5");
  CPU = CORTEX_M7;
  JLINK_SYS_Sleep(100);
  JLINK_JTAG_WriteIR(0xC); // Output Challenge instruction
  // Readback Challenge, shift 64 dummy bits on TDI, receive Challenge bits on TDO
  JLINK_JTAG_StartDR();
  JLINK_SYS_Report("Reading Challenge ID....");
  JLINK_JTAG_WriteDRCont(0xffffffff, 32); // 32-bit dummy write on TDI / read 32 bits on TDO
  v = JLINK_JTAG_GetU32(0);
  JLINK_SYS_Report1("Challenge UUID0:", v);
  JLINK_JTAG_WriteDREnd(0xffffffff, 32);
  v = JLINK_JTAG_GetU32(0);
  JLINK_SYS_Report1("Challenge UUID1:", v);
  JLINK_JTAG_WriteIR(0xD); // Output Response instruction
  JLINK_JTAG_StartDR();
  JLINK_JTAG_WriteDRCont(Key0, 32);
  JLINK_JTAG_WriteDREnd(Key1, 24);
  JLINK_SYS_MessageBox("Change pin JTAG_MOD => 0, press any key to continue...");
  return 0;
}

The SecJtag.bat file content is:

jlink.exe -JLinkScriptFile NXP_RT1064_SecureJTAG.JlinkScript -device MIMXRT1064XXX6A -if JTAG -speed 4000 -autoconnect 1 -JTAGConf -1,-1

This command uses JLink Commander plus the JLinkScript to realize the Secure JTAG connection. For the test, put the three files SecJtag.bat, JLink.exe, and NXP_RT1064_SecureJTAG.JlinkScript in the same folder. For testing, change the board to internal boot mode: SW7: 1-OFF, 2-OFF, 3-ON, 4-OFF.
Run SecJtag.bat; it asks to connect JTAG_MOD to a high level: Fig 6
Use the wire to connect J25_4 to TP11, which sets JTAG_MOD=1, then click OK to go to the next step: Fig 7
Here the correct UUID has been recognized, consistent with the UUID read by MCUBootUtility above. Many customers cannot read the correct UUID at this step, which indicates a problem with the hardware modification, the fuse modification, or something else, for example the JTAG pins not being enabled in the app, which is described in detail later. Now disconnect TP11 from J25_4 (the default is JTAG_MOD=0) and click OK to continue. Fig 8
Here we can see the ARM CM7 core is found, which means this hardware platform has realized the Secure JTAG connection. Now the IDEs can be used for debugging.

3 Secure JTAG debug function in 3 IDEs

This chapter shows how to use the Secure JTAG function for code debugging in the three IDEs commonly used with RT10XX: MCUXpresso, IAR, and MDK.

3.1 Software code preparation

This article selects the SDK hello_world project as the test demo:
SDK_2_13_0_EVK-MIMXRT1064\boards\evkmimxrt1064\demo_apps\hello_world
Two points should be noted here:
Do not use led_blinky directly, because the LED control pin GPIO_AD_B0_09 used by that code is JTAG_TDI. Downloading that code changes the JTAG pin function and causes the Secure JTAG connection to fail.
Add the pin configuration for JTAG in the app code pin_mux.c. Otherwise, because the JTAG pin configuration is missing, a blank RT1064 (a chip that has not been programmed) can use the Secure JTAG connection, but once code is burned, the connection fails. Add the following code to pin_mux.c:

IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_11_JTAG_TRSTB, 0U);
IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_06_JTAG_TMS, 0U);
IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_07_JTAG_TCK, 0U);
IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_09_JTAG_TDI, 0U);
IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_10_JTAG_TDO, 0U);

3.2 MCUXpresso Secure JTAG debug

Use MCUXpresso IDE to import the SDK hello_world demo, and modify pin_mux.c to add the JTAG pin configuration.
Configure the MCUXpresso IDE debugger's JLinkGDBServerCL.exe version to match your installed JLink driver version: Window->Preferences. Fig 9
Run->Debug Configurations: configure the interface to JTAG, choose the device MIMXRT1064xxx6A, and add the JLinkScript file. Fig 10 Fig 11
Set JTAG_MOD=1 (connect TP11 to J25_4), then click OK. Fig 12
We can see it already gets the correct UUID. It then requires JTAG_MOD=0; just leave TP11 floating, then click OK: Fig 13
It can be seen that it has successfully entered debug mode and debugging works. For details, check the MCUXpresso11_7_1_MIMXRT1064_SJTAG.mp4 file in the attachment. The test experience here is that MCUXpresso v11.7.1 is a bit unstable and needs a few more tries, while the higher version v11.8.0 downloads very stably. If you can get a version higher than v11.7.1, it is recommended to use it.

3.3 IAR Secure JTAG debug

Some customers need to use the IAR IDE to debug with Secure JTAG. You can use the hello_world SDK demo and modify pin_mux.c to add the JTAG pin configuration code.
The differences are:
(1) Run the JLink driver tool JLinkDLLUpdater.exe: Fig 14
This just refreshes the JLink driver into the IAR and MDK IDEs.
(2) Rename the JLinkScript file to be consistent with the name of the build configuration and put it under the settings folder of the project. For example, the configuration here is hello_world_flexspi_nor_debug, so the required JLinkScript file name is hello_world_flexspi_nor_debug.JlinkScript; IAR then calls the corresponding JLinkScript file automatically. Fig 15
(3) Configure the IAR debugger as JLink with the JTAG interface. Fig 16 Fig 17
Click the debug button to enter debug mode: Fig 18
It asks to set JTAG_MOD=1: connect TP11 to J25_4. Fig 19
It then asks to set JTAG_MOD=0: leave TP11 floating and click OK to continue. Fig 20
We can see that IAR can now do Secure JTAG debugging.

3.4 MDK Secure JTAG debug

For the MDK Secure JTAG configuration, the basic requirements are:
(1) Modify the pin_mux.c code to enable the JTAG pin function.
(2) Run the JLink driver tool JLinkDLLUpdater.exe to refresh the driver into MDK.
(3) Rename the JLinkScript file to JLinkSettings.JlinkScript and copy it to the MDK project folder; MDK then calls the JLinkScript file automatically. Fig 21
(4) Set the debugger to JLink, then set the interface to JTAG. Fig 22 Fig 23
So far, the Secure JTAG related configuration of MDK is complete. In theory it should debug directly, but I found some problems after many tests. For code in RAM (hello_world debug), Secure JTAG debug works without problems. For code in flash (hello_world_flexspi_nor_debug), downloading through Secure JTAG works, but the program runs abnormally in debug, and checking the memory data in the flash also returns wrong data. Fig 24
We can see the UUID is also correct. Normally this issue is related to the flashloader used during downloading; however, the JLink flashloader cannot be accessed directly, so I tried RT-UFL as the flashloader, and debugging succeeded. If customers encounter similar problems when using MDK for Secure JTAG debugging, they can use RT-UFL as the flashloader. The reference documents are:
https://www.cnblogs.com/henjay724/p/13951686.html
https://www.cnblogs.com/henjay724/p/15465655.html
To summarize: copy the iMXRT_UFL files to the JLink driver folder C:\Program Files\SEGGER\JLINK\Devices\NXP, and copy JLinkDevices.xml to the folder C:\Program Files\SEGGER\JLINK. The JLinkScript file is added the same way as in Fig 21. Modify the JLinkSettings.ini file: device is MIMXRT1064_UFL, Override = 1. Fig 25
Delete the programming algorithm; the RT-UFL algorithm will be used instead. Fig 26
Uncheck "Update Target before Debugging". Fig 27
Enter debug mode: Fig 28
Set JTAG_MOD=1 (connect TP11 to J25_4) and click OK to continue: Fig 29
Leave TP11 floating and click OK to enter debug mode; the result is: Fig 30
We can see that after changing the flashloader to RT-UFL, the MDK project Secure JTAG debug also works. The attachment also shares the related RT-UFL files.

4 Summary

For Secure JTAG, you need to modify the hardware to support the JTAG function, modify the fuses to support Secure JTAG, and modify the code pins to enable the JTAG function. For IDE debug, you need to configure the relevant interface as JTAG and add the correct JLinkScript file; then the Secure JTAG function runs successfully and IDE code debugging works.
Attachments:
evkmimxrt1064_hello_world_SJTAG.zip: MCUXpresso project
EVK-MIMXRT1064-hello_world_iar.7z: IAR project
EVK-MIMXRT1064-hello_world_mdk.7z: MDK project
File\NXP_RT1064_SecureJTAG.JlinkScript: JLink script
File\SecJtag.bat: used with JLink.exe and NXP_RT1064_SecureJTAG.JlinkScript to make the JLink Commander connection, which finds the ARM core.
File\RT-UFL: RT ultra flashloader algorithm, source: https://github.com/JayHeng/RT-UFL
Many thanks to our expert @juying_zhong for the patient Secure JTAG guidance during my testing!
The newly announced i.MX RT1170 is a dual-core Arm® Cortex®-M based crossover MCU that breaks the gigahertz (GHz) barrier and accelerates advanced machine learning (ML) applications at the edge. Built using advanced 28 nm FD-SOI technology for lower active and static power requirements, the i.MX RT1170 MCU family integrates a GHz Arm Cortex-M7 and a power-efficient Cortex-M4, advanced 2D vector graphics, and NXP's signature EdgeLock security solution. The i.MX RT1170 delivers a total CoreMark score of 6468 and addresses the growing performance needs of edge computing for industrial, Internet of Things (IoT), and automotive applications.
A small project I worked on was to understand how the RT1050 boots from different memory types. I used the led_blinky code from the SDK as a baseline and ran some tests on the EVKB board. The data I gathered is described below, along with the detailed testing procedure.

Testing Procedure

Boot-up time is defined as the time from when the processor first receives power to when it executes the first line of code in main(). Time was measured using an oscilloscope (Tektronix TDS 2014) between the rising edge of the POR_B* signal and the following two points:
FlexSPI_CS asserted (first read of the FlexSPI by the ROM)**
GPIO toggle in application code (signals the beginning of code execution)***

*The POR_B signal is available on header J26-1.
**The FlexSPI_CS signal is available through a small pull-up resistor on the board, R356. A small wire was soldered alongside this resistor and probed on the oscilloscope.
***The GPIO pin used is the one connected to USER_LED (active low). This pin can be probed on header J22-5.

TP 2, 3, 4, and 5 were used to ground the oscilloscope probe. All of this was done on the EVKB evaluation board. Here are a couple of noteworthy points about the tests:
This report emphasizes the time between the rise of POR_B and the first line of code execution. There is also a time between when power is first provided to the board and when POR_B rises, but that is a matter of power electronics and varies with the user's application and design, so this report does not emphasize it.
The first lines of the application actually configure several pins of the processor; only after they execute does the GPIO toggle low and the time get taken on the oscilloscope. These configuration lines execute so rapidly that their time is ignored for the test.

Clock Configurations

The bootable image was flashed to the RT1050 in all three cases. Afterwards, in MCUXpresso, the debugger was configured with "Attach Only" set to true. A debug session was then launched, and after the processor finished executing code, it was paused and the register values were read according to the RT1050 Reference Manual, chapter 18, CCM Block Diagram.

Boot Configuration | Core Clock (MHz)* | FlexSPI Clock (MHz) | SEMC Clock (MHz)
FlexSPI            | 396               | 130                 | 99
SDRAM              | 396               | 130                 | 99
SRAM               | 396               | 130                 | 99

*The core clock speed was also verified by configuring CLKO1 as an output with the clock speed divided by 8. This frequency was measured using an oscilloscope and verified to be 396 MHz.

Results

The time to the chip-select pin represents the moment of the first flash read by the RT1050 processor. The time to the GPIO output represents the boot-up time.

As expected, XiP Hyperflash boots faster than the other memories. SRAM and SDRAM images must be copied to executable memory before executing, which takes more time and therefore boots slower. In the sections below, a more thorough explanation is provided of how these tests were run and why Hyperflash XiP is expected to be the fastest.
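The "GPIO toggle in application code" marker can be reproduced with a few lines at the top of main(). Below is a minimal sketch; the pin and macro names are assumptions based on the SDK led_blinky example for the EVKB, where USER_LED sits on GPIO1 pin 9 and is active low:

#include "fsl_gpio.h"
#include "pin_mux.h"

int main(void)
{
    /* Pin configuration runs first; these few calls are fast enough that
       their execution time is negligible in the measurement. */
    BOARD_InitPins();

    gpio_pin_config_t led_config = {kGPIO_DigitalOutput, 1U, kGPIO_NoIntmode};
    GPIO_PinInit(GPIO1, 9U, &led_config);

    /* USER_LED is active low: the falling edge on J22-5 marks
       "first line of application code" on the oscilloscope. */
    GPIO_PinWrite(GPIO1, 9U, 0U);

    while (1)
    {
    }
}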
Hyperflash XiP Boot Up

Below is an outline of the expected Hyperflash XiP boot-up process:
Power On Reset (J26-1)
Begin access to flash memory (FlexSPI_SS0)
Execute in place in flash (XiP)
First line of code is executed (USER_LED)
In MCUXpresso, the map file showed the following:
The oscilloscope image is below:

SDRAM Boot Up

The processor boots from ROM, which copies the application image from the serial NOR flash (which uses Hyperflash communication) to SDRAM. The RT flashloader tool loads the application into the flash, configured so that the image is copied to SDRAM and executed from there.
Copying to SDRAM is expected to be slower than executing in place from Hyperflash, since an entire copy action must take place first.
The SDRAM boot-up process looks like the following:
Power On Reset (J26-1)
Begin access to flash memory (FlexSPI_SS0)
Copy code to SDRAM
Execute from SDRAM
First line of code is executed (USER_LED)
In MCUXpresso, the map file showed the following:
To run this test, I followed these instructions: https://community.nxp.com/docs/DOC-340655

SRAM Boot Up

For SRAM, a process similar to that of SDRAM is expected. The processor first boots from internal ROM, then goes to Hyperflash, copies everything from Hyperflash to internal SRAM DTC memory, and executes from there.
The SRAM boot-up process is as follows:
Power On Reset (J26-1)
Begin access to flash memory (FlexSPI_SS0)
Copy code to SRAM
Execute from SRAM
First line of code is executed (USER_LED)
In MCUXpresso, the map file showed the following:
The i.MX RT1050 MCU supports a 10M/100M Ethernet MAC. The LAN8720A is a very common PHY used in many networking designs. In this document, I will show you how to use the LAN8720A with the i.MX RT1050.

1. Schematic

In this design example:
ENET_RST is connected to GPIO_AD_B1_04
ENET_INT is connected to GPIO_AD_B0_15

2. Source code modification

In the i.MX RT1050 SDK, the PHY driver source files are fsl_phy.c and fsl_phy.h. The registers of the LAN8720A need to be added to the source code. The register details can be found in the LAN8720A datasheet. (The modified fsl_phy.c and fsl_phy.h are attached.)

In pin_mux.c, modify the GPIO mux settings of ENET_INT and ENET_RST:

IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B1_04_GPIO1_IO20, 0U);
IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_15_GPIO1_IO15, 0U);
IOMUXC_SetPinConfig(IOMUXC_GPIO_AD_B1_04_GPIO1_IO20, 0xB0A9u);
IOMUXC_SetPinConfig(IOMUXC_GPIO_AD_B0_15_GPIO1_IO15, 0xB0A9u);

This is the part of the source code that resets the PHY in the main() function:

gpio_pin_config_t gpio_config = {kGPIO_DigitalOutput, 0, kGPIO_NoIntmode};
GPIO_PinInit(GPIO1, 20, &gpio_config);
GPIO_PinInit(GPIO1, 15, &gpio_config);
GPIO_WritePinOutput(GPIO1, 15, 1);
GPIO_WritePinOutput(GPIO1, 20, 0);
delay();
GPIO_WritePinOutput(GPIO1, 20, 1);

For more example code, please refer to the demo_apps/lwip examples in the i.MX RT SDK package.

Reference:
i.MX RT1050 web page: i.MX RT1050 MCU/Applications Crossover Processor | Arm® Cortex®-M7 @600 MHz, 512KB SRAM | NXP
MCUXpresso SDK web page: MCUXpresso SDK | NXP
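After the reset sequence above releases the PHY, the driver entry point can run. A minimal sketch follows, assuming the modified fsl_phy.c keeps the stock PHY_Init() signature of this SDK generation and that the PHY straps to address 2; verify both against your schematic and SDK version:

#include "fsl_phy.h"
#include "fsl_clock.h"

#define EXAMPLE_PHY_ADDR 0x02U /* assumed strap address; check the board design */

void example_phy_bringup(void)
{
    /* PHY_Init probes the PHY over MDIO and starts auto-negotiation,
       as in the SDK lwip examples. */
    if (PHY_Init(ENET, EXAMPLE_PHY_ADDR, CLOCK_GetFreq(kCLOCK_IpgClk)) != kStatus_Success)
    {
        /* MDIO access failed or the link did not come up:
           re-check the reset timing and the PHY address. */
    }
}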
RT10xx image: methods to reserve the APP FCB

1. Abstract

RT10XX programming is mainly divided into two categories:
1) Serial download mode programming with blhost. For this method we can use the MCUBootUtility tool, the blhost+elftosb+sdphost command-line method, or the NXP SPT (MCUXpresso Secure Provisioning Tool). The chip must enter serial download mode, and the flashloader's UART or USB HID interface is then used.
2) Programmer or debugger programming with a flash driver. This method usually goes through the SWD/JTAG download interface combined with an IDE and debugger, or direct software programming. The chip can be in internal boot or serial download mode, and the flashloader generates the flash programming algorithm file.

Method 2, using a debugger tool, usually guarantees that the programmed image is consistent with the original APP. Method 1, using blhost, usually regenerates an FCB with a full-featured LUT and burns it to the external flash, then burns the app code with IVT, that is, without the FCB header of the original APP; the blhost-generated FCB header is assembled and burned separately. For customers who need to read back the flash image after programming and compare it with the original APP image, the common blhost method therefore leaves a mismatching FCB area. If the customer needs to use blhost programming in serial download mode, how can the programmed flash image be kept consistent with the original file? Taking the MIMXRT1060-EVK development board as an example, this article gives specific methods for the command-line mode and the SPT tool.

2 blhost programming that reserves the APP FCB

From the old RT1060 SDK FCB file (below SDK 2.12.0), evkmimxrt1060_flexspi_nor_config.c, we can see:

const flexspi_nor_config_t qspiflash_config = {
    .memConfig =
        {
            .tag               = FLEXSPI_CFG_BLK_TAG,
            .version           = FLEXSPI_CFG_BLK_VERSION,
            .readSampleClksrc  = kFlexSPIReadSampleClk_LoopbackFromDqsPad,
            .csHoldTime        = 3u,
            .csSetupTime       = 3u,
            .sflashPadType     = kSerialFlash_4Pads,
            .serialClkFreq     = kFlexSpiSerialClk_100MHz,
            .sflashA1Size      = 8u * 1024u * 1024u,
            .lookupTable =
                {
                    // Read LUTs
                    FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xEB, RADDR_SDR, FLEXSPI_4PAD, 0x18),
                    FLEXSPI_LUT_SEQ(DUMMY_SDR, FLEXSPI_4PAD, 0x06, READ_SDR, FLEXSPI_4PAD, 0x04),
                },
        },
    .pageSize           = 256u,
    .sectorSize         = 4u * 1024u,
    .blockSize          = 64u * 1024u,
    .isUniformBlockSize = false,
};

This FCB LUT contains only the basic read sequence. For app booting, the FCB only needs to provide the read command to the ROM, and the app boots normally. But what does the memory downloaded by blhost look like? Based on the MIMXRT1060-EVK board, the following shows how to use the blhost command-line mode to burn the SDK led_blinky app, then read back the programmed flash for analysis.

2.1 Normal blhost download command line

This command line is the same as the MCUBootUtility download log; the source is attached as rt1060 cmd.bat.
elftosb.exe -f imx -V -c imx_application_gen.bd -o ivt_evkmimxrt1060_iled_blinky_FCB.bin evkmimxrt1060_iled_blinky.s19
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- write-file 0x20208200 ivt_flashloader.bin
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- jump-address 0x20208200
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 1 0
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 24 0
blhost.exe -t 5242000 -u 0x15A2,0x0073 -j -- fill-memory 0x20202000 4 0xc0000007 word   //option0
blhost.exe -t 5242000 -u 0x15A2,0x0073 -j -- fill-memory 0x20202004 4 0 word            //option1
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- configure-memory 9 0x20202000
blhost -t 2048000 -u 0x15A2,0x0073 -j -- flash-erase-region 0x60000000 0x8000 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- fill-memory 0x20203000 4 0XF000000F word
blhost -t 50000 -u 0x15A2,0x0073 -j -- configure-memory 9 0x20203000
blhost -t 5242000 -u 0x15A2,0x0073 -j -- write-memory 0x60001000 ivt_evkmimxrt1060_iled_blinky_FCB_nopadding.bin 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 0x8000 flexspiNorCfg.dat 9

Normal blhost programming uses the command line: take an app without the FCB header (even an app with an FCB has the header stripped first), use elftosb.exe to generate the app with IVT, e.g. ivt_evkmimxrt1060_iled_blinky_FCB_nopadding.bin; download the flashloader file ivt_flashloader to internal RAM and jump to it; then use fill-memory to fill option0 and option1 to select the proper external flash, and configure-memory to configure the FlexSPI module with the SFDP table, which fills the internal FlexSPI LUT buffer. Next, fill-memory 0x20203000 4 0XF000000F together with configure-memory generates the full FCB header and burns it at flash address 0x60000000. Finally, the app containing the IVT is burned from flash address 0x60001000. This completes the whole app image programming.

Pic 1 compares the data read back after programming with the original app data. The LUT of the FCB actually programmed (left) contains not only read, but also read status, write enable, program, and erase commands. On the right is the original app with FCB, whose LUT contains only the read commands needed for boot.

Pic 1

So, if you want to keep the FCB header of the original APP instead of the header generated and burned by option0/option1 configure-memory, how do you do it? The method: still use option0 and option1 to generate and fill the FlexSPI LUT for communication, but do not burn the generated FCB; burn only the FCB that comes with the original APP.

2.2 Reuse option0 and option1 to program the original APP LUT

The following commands reuse option0 and option1 to generate and fill the FlexSPI LUT for the external flash interface connection, but do not call fill-memory 0x20203000 4 0XF000000F and configure-memory 9 0x20203000, so the generated FCB is not burned to the external memory.
The source file is attached as rt1060 cmd_option01.bat.
elftosb.exe -f imx -V -c imx_application_gen.bd -o ivt_evkmimxrt1060_iled_blinky_FCB.bin evkmimxrt1060_iled_blinky.s19
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- write-file 0x20208200 ivt_flashloader.bin
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- jump-address 0x20208200
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 1 0
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 24 0
blhost.exe -t 5242000 -u 0x15A2,0x0073 -j -- fill-memory 0x20202000 4 0xc0000007 word
blhost.exe -t 5242000 -u 0x15A2,0x0073 -j -- fill-memory 0x20202004 4 0 word
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- configure-memory 9 0x20202000
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 1024 flexspiNorCfg.dat 9
blhost -t 2048000 -u 0x15A2,0x0073 -j -- flash-erase-region 0x60000000 0x8000 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 1024 flexspiNorCfg.dat 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- write-memory 0x60000000 evkmimxrt1060_iled_blinky_FCB.bin 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 0x8000 flexspiNorCfg.dat 9

Pic 2 compares the data read back after programming with the original programming data. The programmed FCB is now exactly the same as the original code's FCB.

Pic 2

2.3 Use a 1-bit FCB file to configure the LUT

The file cfg_fdcb_RTxxx_1bit_sdr_flashA.bin used here is copied from MCUBootUtility: \NXP-MCUBootUtility-3.4.0\src\targets\fdcb_model. The option0/option1 configuration is intended for chips that support the SFDP table, but some flash chips do not support it. In that case, the full FlexSPI LUT must be filled in manually. A full LUT contains not only read commands but also erase, program, and so on, so that the FlexSPI interface can connect to the external flash and perform the corresponding read, erase, and write functions. The method in this chapter therefore uses single-line commands, which are supported by nearly all chips, to enable the corresponding FlexSPI functions and complete the subsequent APP programming.

Pic 3

We can see: 03h is read, 05h is read status register, 06h is write enable, D8h is 64 KB block erase, 02h is page program, and 60h is chip erase. This is the 1-bit SPI full-function LUT, which realizes the chip read, write, and erase functions.
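Expressed in C rather than as a 512-byte .bin (the file is just a serialized flexspi_nor_config_t), the same single-line full-function LUT might look like the sketch below. The sequence index assignments mirror the full-LUT FCB shown later in section 3; the dummy counts and 24-bit address width are assumptions to be checked against the flash datasheet:

.lookupTable =
    {
        /* Seq 0: Read (03h), 24-bit address, single line */
        FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x03, RADDR_SDR, FLEXSPI_1PAD, 0x18),
        FLEXSPI_LUT_SEQ(READ_SDR, FLEXSPI_1PAD, 0x04, STOP, FLEXSPI_1PAD, 0x00),
        /* Seq 1: Read status register (05h) */
        [4 * 1] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x05, READ_SDR, FLEXSPI_1PAD, 0x04),
        /* Seq 3: Write enable (06h) */
        [4 * 3] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x06, STOP, FLEXSPI_1PAD, 0x00),
        /* Seq 8: Block erase 64 KB (D8h) */
        [4 * 8] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xD8, RADDR_SDR, FLEXSPI_1PAD, 0x18),
        /* Seq 9: Page program (02h) */
        [4 * 9] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x02, RADDR_SDR, FLEXSPI_1PAD, 0x18),
        [4 * 9 + 1] = FLEXSPI_LUT_SEQ(WRITE_SDR, FLEXSPI_1PAD, 0x04, STOP, FLEXSPI_1PAD, 0x00),
        /* Seq 11: Chip erase (60h) */
        [4 * 11] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x60, STOP, FLEXSPI_1PAD, 0x00),
    },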
The command line is as follows; the source file is attached as rt1060 cmd_fdcb_1bit_sdr_flashA.bat:

elftosb.exe -f imx -V -c imx_application_gen.bd -o ivt_evkmimxrt1060_iled_blinky_FCB.bin evkmimxrt1060_iled_blinky.s19
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- write-file 0x20208200 ivt_flashloader.bin
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- jump-address 0x20208200
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 1 0
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 24 0
blhost -t 5242000 -u 0x15A2,0x0073 -j -- write-memory 0x20202000 cfg_fdcb_RTxxx_1bit_sdr_flashA.bin
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- configure-memory 9 0x20202000
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 1024 flexspiNorCfg.dat 9
blhost -t 2048000 -u 0x15A2,0x0073 -j -- flash-erase-region 0x60000000 0x8000 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 1024 flexspiNorCfg.dat 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- write-memory 0x60000000 evkmimxrt1060_iled_blinky_FCB.bin 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 0x8000 flexspiNorCfg.dat 9

In this command line, instead of filling in option0/option1 where they were used before, the complete 512-byte FCB LUT bin file is written directly, and then configure-memory configures the flashloader's FlexSPI LUT from that FCB file, so that read, write, erase, and the other commands are supported. Pic 4 compares the flash data read back after programming with the original APP data: the data read from the flash is exactly the same as the original APP FCB.

Pic 4

3 SPT programming that reserves the APP FCB

The NXP MCUXpresso Secure Provisioning Tool (SPT) supports keeping the customer's FCB, but the SPT tool uses the APP's FCB to fill the flashloader's FlexSPI LUT. Therefore, if a customer builds an APP with an FCB from the old SDK demo, whose LUT contains only the read command, and then programs the flash with the SPT tool with "keep customer FCB" selected, the erase step fails. Analyzing the reason: the FCB in the customer's APP needs to contain the full FCB LUT commands, including read, write, erase, etc. The following shows what happens when the old original SDK led_blinky image with an FCB header is programmed with the SPT tool. As seen in Pic 5, the tool warns that if you use the APP FCB, you need to ensure the FCB LUT contains the read, erase, and program commands. Pic 6 shows programming with an APP FCB LUT that contains only read: it fails at the erase step, because the FlexSPI LUT has no erase or program commands, so the corresponding operations fail.

Pic 5
Pic 6
Pic 7

Looking at the specific commands in Pic 7, the SPT tool directly uses the FCB header extracted from the APP image to fill the flashloader's FlexSPI LUT, so there are no erase and write commands, and erasing fails.
The following shows how to fill in the full LUT in the SDK FCB. Open evkmimxrt1060_flexspi_nor_config.c and modify the FCB as follows:

const flexspi_nor_config_t qspiflash_config = {
    .memConfig =
        {
            .tag              = FLEXSPI_CFG_BLK_TAG,
            .version          = FLEXSPI_CFG_BLK_VERSION,
            .readSampleClksrc = kFlexSPIReadSampleClk_LoopbackFromDqsPad,
            .csHoldTime       = 3u,
            .csSetupTime      = 3u,
            .sflashPadType    = kSerialFlash_4Pads,
            .serialClkFreq    = kFlexSpiSerialClk_100MHz,
            .sflashA1Size     = 8u * 1024u * 1024u,
            .lookupTable =
                {
                    // Read LUTs
                    FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xEB, RADDR_SDR, FLEXSPI_4PAD, 0x18),
                    FLEXSPI_LUT_SEQ(DUMMY_SDR, FLEXSPI_4PAD, 0x06, READ_SDR, FLEXSPI_4PAD, 0x04),
                    // Read status
                    [4 * 1] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x05, READ_SDR, FLEXSPI_1PAD, 0x04),
                    // Write enable
                    [4 * 3] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x06, STOP, FLEXSPI_1PAD, 0),
                    // Sector erase 4 KB
                    [4 * 5] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x20, RADDR_SDR, FLEXSPI_1PAD, 0x18),
                    // Block erase 64 KB
                    [4 * 8] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xD8, RADDR_SDR, FLEXSPI_1PAD, 0x18),
                    // Page program - single mode
                    [4 * 9] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x02, RADDR_SDR, FLEXSPI_1PAD, 0x18),
                    [4 * 9 + 1] = FLEXSPI_LUT_SEQ(WRITE_SDR, FLEXSPI_1PAD, 0x04, STOP, FLEXSPI_1PAD, 0x0),
                    // Erase whole chip
                    [4 * 11] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x60, STOP, FLEXSPI_1PAD, 0),
                },
        },
    .pageSize           = 256u,
    .sectorSize         = 4u * 1024u,
    .blockSize          = 64u * 1024u,
    .isUniformBlockSize = false,
};

Please note: after the internal SDK team's modification, starting from SDK_2_12_0_EVK-MIMXRT1060, evkmimxrt1060_flexspi_nor_config.c already adds the LUT commands for the full FCB LUT function. Use the above FCB to generate the APP, then use the SPT tool to program the app with the customer FCB again; the programming now works. Pic 8

In summary, if you need to reserve the customer FCB, you can use the above methods; if you use the SPT tool, you must add the read, write, and erase commands to the LUT of the code's FCB to ensure that FlexSPI can successfully operate the external flash.
Get 500 MHz for just $1 with NXP's new i.MX RT1010 crossover MCU. Targeted at a variety of applications, this video highlights two very popular example use cases for the i.MX RT1010: audio and motor control.
Recently, we often see customers using the i.MX RT for RS-485 communication, and most questions concern switching between the receive and transmit directions. Taking the i.MX RT1050 and the SN65HVD11QDR as examples, this document introduces an LPUART-to-RS485 circuit and the transceiver direction control methods. The signals involved are:
LPUART TXD: Transmit Data
LPUART RXD: Receive Data
LPUART RTS_B: Request To Send

The main control methods are as follows:

1 Use the TXD signal line for automatic hardware transceiver control
According to the UART protocol, TX is logic high when the line is idle. After an inverter (NOT gate), a low level is applied to the direction control pin, so when the UART is not transmitting data, the RS-485 transceiver is in the receive state.

2 Use GPIO control or LPUART_RTS
For more detailed information, refer to: https://www.nxp.com/docs/en/application-note/AN12679.pdf
Note: with GPIO control, the software must judge the receive/transmit timing. If the control is not handled well, it is easy to lose data. To control it correctly, the software must respond to the TX FIFO "empty" interrupt, or query the transmit status register, and switch direction at exactly the right moment so that no data is lost in either direction; a sketch of the query method is attached at the end of this post.

Combining the above methods, some customers use the following control:

Best Regards
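P.S. Here is a minimal sketch of the status-register query method described in point 2. The GPIO instance and pin driving the transceiver DE/RE pins are placeholders; adapt them to your board:

#include "fsl_lpuart.h"
#include "fsl_gpio.h"

#define RS485_DIR_GPIO GPIO1 /* placeholder: the GPIO wired to the DE/RE pins */
#define RS485_DIR_PIN  20U

void RS485_Send(LPUART_Type *base, const uint8_t *data, size_t length)
{
    GPIO_PinWrite(RS485_DIR_GPIO, RS485_DIR_PIN, 1U); /* driver enable: TX direction */

    LPUART_WriteBlocking(base, data, length); /* push all bytes into the transmitter */

    /* Wait until the last stop bit has physically left the shifter before
       switching direction, otherwise the tail of the frame is cut off. */
    while ((LPUART_GetStatusFlags(base) & (uint32_t)kLPUART_TransmissionCompleteFlag) == 0U)
    {
    }

    GPIO_PinWrite(RS485_DIR_GPIO, RS485_DIR_PIN, 0U); /* back to receive */
}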
Porting JLink RTT to RT595

1. Introduction

For most beginners learning MCUs or embedded systems, the first step often involves a simple task like lighting up an LED or a "Hello World" program. Today, we will discuss a topic closely related to "Hello World". Serial output is a highly effective debugging tool, allowing developers to monitor program state, interact with the program, and diagnose issues. It is a familiar friend to anyone engaged in embedded development.

The most common approach, as seen in the NXP SDK examples, uses a UART peripheral for logging:
Initialize the MCU's UART: configure the clock frequency, pin multiplexing, pin settings, baud rate, etc.
Open a serial tool on the PC and configure the correct baud rate.
Use the UART driver in the project to enable serial logging.
This is the simplest and most widely used method for serial output. However, what if the precious UART resource is already occupied? Here is a great alternative: port SEGGER's RTT (Real-Time Terminal) driver and use the JLink RTT functionality for logging. The biggest advantage of this approach is that it conserves UART resources. Next, let's explore the capabilities of JLink RTT.

2. RTT (Real-Time Terminal)

RTT, developed by SEGGER, is a real-time terminal solution for interactive communication in embedded applications. Beyond conserving UART resources, RTT offers significant advantages over the semihosting methods provided by tools like MCUXpresso IDE. RTT allows high-speed bidirectional data transfer between the MCU and the host without compromising real-time performance. Key features of JLink RTT include:
Low overhead: efficient data transfer mechanisms ensure minimal impact on target system performance.
Real-time capability: developers can output debugging information or receive data from the target system in real time without halting execution.
Flexibility: supports multiple channels for transmitting different types of data, such as debugging logs and performance metrics.
OS independence: unlike traditional printf debugging methods, RTT can be used on embedded systems without an operating system.
JLink RTT typically pairs with JLink debuggers and SEGGER's development tools, providing powerful support for debugging and tracing embedded systems. To try out this functionality, a JLink debugger is essential. Using the classic RT595-EVK as an example, we will demonstrate how to port RTT.

3. Porting

The development environment is the MCUXpresso IDE and the hello_world project from the SDK. The SDK version is not critical.

Steps for Porting

Locate RTT Resources
According to SEGGER's official documentation, the RTT sources can be found in the JLink installation directory:
C:\Program Files\SEGGER\JLink\Samples\RTT

Copy Required Files
Copy the following files to the source folder of the hello_world project:
SEGGER_RTT_Syscalls_GCC.c
SEGGER_RTT_Conf.h
SEGGER_RTT_printf.c
SEGGER_RTT.c
SEGGER_RTT.h

Integrate into Project
If using Keil or IAR, you may need to add header include paths. However, since the RTT files are placed directly in the MCUXpresso project's source folder, you only need to call the relevant RTT functions in hello_world.c.
Initialize and Configure Buffers
Add the following code to initialize RTT and create the up/down buffers. Note that "up" is target-to-host and "down" is host-to-target, so the buffer passed to SEGGER_RTT_ConfigUpBuffer carries the data the target sends out:

SEGGER_RTT_Init();
uint8_t rx_buffer[32], tx_buffer[32];
SEGGER_RTT_ConfigUpBuffer(0, "RTTUP", rx_buffer, sizeof(rx_buffer), SEGGER_RTT_MODE_NO_BLOCK_SKIP);
SEGGER_RTT_ConfigDownBuffer(0, "RTTDOWN", tx_buffer, sizeof(tx_buffer), SEGGER_RTT_MODE_NO_BLOCK_SKIP);

Use RTT for sending:

SEGGER_RTT_SetTerminal(0);
SEGGER_RTT_printf(0, "hello world\r\n");

After porting the files and adding these RTT calls to the hello_world source code, the code side is complete.

Use JLink RTT Viewer
Launch the JLink RTT Viewer program, select the appropriate device, run the program, and open "Terminal 0" to view the output.

4. Conclusion

Compared to traditional UART-based logging, using the debugger's built-in RTT functionality reduces peripheral usage and eliminates the need for UART initialization and configuration. With JLink, RTT is essentially plug-and-play, providing convenient and fast logging and interaction. Beyond the basic functionality, SEGGER offers advanced features such as changing font colors. Explore more on SEGGER's official website: SEGGER RTT Documentation

For the Chinese version and a demo project, please check this link:
https://www.nxpic.org.cn/module/forum/forum.php?mod=viewthread&tid=803638&fromuid=3253523
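One usage note on the down buffer configured above: host-to-target data does not arrive by callback; the application has to poll for it. A minimal sketch using the stock RTT API, with buffer index 0 as configured in the porting steps:

/* Poll the down buffer for bytes typed into the RTT Viewer input box
   and echo them back through the up buffer (Terminal 0). */
char in[16];
unsigned n = SEGGER_RTT_Read(0, in, sizeof(in));
if (n > 0U)
{
    SEGGER_RTT_Write(0, in, n);
}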
1. Abstract

When using the RT600 SDK USB composite example mimxrt685audevk_dev_composite_hid_audio_unified_bm, a customer found that the default HS interval is 125 µs. In their application, the 125 µs packet interval for data transmission places a large interrupt load on the CPU, so they wanted to increase the interval, for example to 1 ms. After changing the interval to 1 ms, the data packets can still be sent to the RT chip, but playback on the RT side fails: the speaker produces no sound. Changing to 500 µs works in asynchronous mode, but at 500 µs the customer's CPU load still exceeds 80%, which leaves no headroom for the application code, so a 1 ms solution is still needed. This article examines the USB transfers at different UAC intervals and provides a solution for the 1 ms interval.

2. Test environment

Board: MIMXRT685-AUD-EVK
SDK: SDK_2_15_000_MIMXRT685-AUD-EVK
USB analyzer: LeCroy USB-TOS2-A01-X

2.1 Platform connections

First, the connections between the MIMXRT685-AUD-EVK, the USB analyzer, and the audio source. This article mainly tests the RT685 UAC speaker function: USB transmits audio data to the RT685, which plays it through a player (headphones) or a speaker. The J7 USB port of the EVK is connected to port A of the LeCroy analyzer, and port B of the analyzer is connected to the audio source, which can be a PC or a mobile phone. If a PC is used, select "USB AUDIO+HID DEMO" as the speaker, because that is the name of the board after UAC enumeration. The other end of the analyzer connects to a PC that captures the USB bus data. The specific connection diagram is as follows:

Fig 1 EVK and USB analyzer connections

2.2 Packets at different audio intervals

This chapter shows the modifications for different intervals and the USB analyzer captures. The clock of a synchronous isochronous audio endpoint can be locked to SOF: a USB 1.0 sampling rate must lock to the 1 ms SOF beat, while a USB 2.0 high-speed endpoint can lock to the 125 µs SOF beat. An interval change is therefore a multiple of 125 µs: 250 µs, 500 µs, 1 ms, etc. The modification is usually made directly in the USB stack's usb_audio_config.h:

#define HS_ISO_OUT_ENDP_INTERVAL (0x01)

The relationship between HS_ISO_OUT_ENDP_INTERVAL and the real interval is:

interval = 125 µs × 2^(HS_ISO_OUT_ENDP_INTERVAL − 1)

So:
HS_ISO_OUT_ENDP_INTERVAL = 1: 125 µs
HS_ISO_OUT_ENDP_INTERVAL = 2: 250 µs
HS_ISO_OUT_ENDP_INTERVAL = 3: 500 µs
HS_ISO_OUT_ENDP_INTERVAL = 4: 1000 µs

The four cases above are each captured with the USB analyzer below.

2.2.1 Packets at a 125 µs interval

#define HS_ISO_OUT_ENDP_INTERVAL (0x01)

Fig 2 125 µs interval

When the interval is configured to 125 µs, the SOF beat in front of each OUT packet is 125 µs apart, and the OUT data packet length is 24 bytes. The 24-byte payload is the actual audio data. In this case, playback is normal.

2.2.2 Packets at a 250 µs interval

#define HS_ISO_OUT_ENDP_INTERVAL (0x02)

The capture for 250 µs is shown in the figure below. The SOF beat interval in front of two OUT packets is 250 µs, and the OUT data packet length is 48 bytes.
In other words, as the interval increases, the packet length grows proportionally: each transfer takes longer but carries more audio data, and the RT side must provide correspondingly larger USB receive buffers.

Fig 3 250us interval
In this case, the speaker plays normally and audio is heard.

2.2.3 500us interval packet situation
#define HS_ISO_OUT_ENDP_INTERVAL (0x03)
Fig 4 500us interval SYNC mode
The data packet has now become 96 bytes and its content is normal audio data, but playback fails and no sound is heard. However, with:

#define USB_DEVICE_AUDIO_USE_SYNC_MODE (0U)

that is, asynchronous instead of synchronous mode, the audio can be heard; the capture is:

Fig 5 500us interval no SYNC mode

Asynchronous mode has its own problems, though: after a long period of operation it may lose synchronization and playback may stall. So the goal remains a 1 ms interval in synchronous mode.

2.2.4 1ms interval packet situation
#define HS_ISO_OUT_ENDP_INTERVAL (0x04)
Fig 6 interval 1ms
The data packet length has become 192 bytes, and the SOF interval in front of each OUT packet has indeed become 1 ms. The packets look like normal audio data, so the audio is successfully transmitted from USB to the RT side. Playback, however, is abnormal and no sound is heard.

3. 1ms interval solution
Now let's debug the project to find the problem. Since the USB analyzer shows that the audio data is actually transmitted on the bus, first check, in the kUSB_DeviceAudioEventStreamRecvResponse event of USB_DeviceAudioCompositeCallback in audio_unified.c, whether the USB side receives the packets, whether the packet length is correct, and whether the content looks like audio data. The debug result:

Fig 7 data packet received situation

So the USB interface receives the packets, and the data looks like real audio data (abnormal data would be all zeros or irregular), yet it cannot be played. Next check the I2S playback path, the TxCallback function in composite.h:

Fig 8 I2S play data buffer

We can see that the data actually handed to the I2S buffer is always 0. Once playback has started, g_composite.audioUnified.startPlayFlag should be non-zero and the callback should instead use:

s_TxTransfer.dataSize = g_composite.audioUnified.audioPlayTransferSize;
s_TxTransfer.data = audioPlayDataBuff + g_composite.audioUnified.tdReadNumberPlay;

But testing shows that audioPlayDataBuff is also all 0. So the USB side receives the data, but something goes wrong when it is stored into audioPlayDataBuff; go back to audio_unified.c:

Fig 9 USB_DeviceAudioCompositeCallback
Fig 10 USB_AudioSpeakerPutBuffer

Testing shows that in Fig 9, audioUnified.tdWriteNumberPlay never exceeds g_deviceAudioComposite->audioUnified.audioPlayTransferSize * AUDIO_CLASS_2_0_HS_LOW_LATENCY_TRANSFER_COUNT = 192 * 6.

#define AUDIO_CLASS_2_0_HS_LOW_LATENCY_TRANSFER_COUNT \
    (0x06U) /* 6 means 6 micro frames (6*125us), make sure the latency is smaller than 1ms for sync mode */

This low-latency count is a delay buffer: the USB-side cache must be able to hold at least the delayed packets. One unit represents one interval frame, so after changing the interval to 1 ms, 6 delay units now mean a 6 ms delay.
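In outline, the gate that releases playback behaves like the following simplified sketch (paraphrased from audio_unified.c; the function name is illustrative, not the SDK's): playback starts only after the write index has passed the low-latency depth, i.e. 192 * 6 = 1152 bytes at a 1 ms interval.

/* Simplified sketch of the start-of-play gate (paraphrased, not verbatim SDK code). */
static void USB_AudioSpeakerStartPlayCheck(usb_device_composite_struct_t *composite)
{
    if ((composite->audioUnified.startPlayFlag == 0U) &&
        (composite->audioUnified.tdWriteNumberPlay >=
         composite->audioUnified.audioPlayTransferSize *
             AUDIO_CLASS_2_0_HS_LOW_LATENCY_TRANSFER_COUNT))
    {
        composite->audioUnified.startPlayFlag = 1U; /* enough data buffered: release I2S playback */
    }
}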
The receive buffer therefore needs to be larger. Its size is governed by the following definitions:

Fig 11 USB device callback

g_composite.audioUnified.audioPlayBufferSize =
    AUDIO_PLAY_BUFFER_SIZE_ONE_FRAME * AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT;

#define AUDIO_PLAY_BUFFER_SIZE_ONE_FRAME AUDIO_OUT_TRANSFER_LENGTH_ONE_FRAME
#define AUDIO_OUT_FORMAT_CHANNELS (0x02U)
#define AUDIO_OUT_FORMAT_SIZE (0x02)
#define AUDIO_OUT_SAMPLING_RATE_KHZ (48)
/* transfer length during 1 ms */
#define AUDIO_OUT_TRANSFER_LENGTH_ONE_FRAME \
    (AUDIO_OUT_SAMPLING_RATE_KHZ * AUDIO_OUT_FORMAT_CHANNELS * AUDIO_OUT_FORMAT_SIZE)
#define AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT \
    (2U) /* 2 units size buffer (1 unit means the size to play during 1ms) */

Enlarge AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT so that the play buffer holds at least 6 ms of data: 1 unit is 1 ms of playback data and more than 6 ms is needed, so set it to 12 units:

#define AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT (12U)

Build and download the project, then capture the USB bus again:

Fig 12 1ms interval data packet play successfully

This waveform is the data that plays normally. Below is the video of the MIMXRT685-AUD-EVK test. The attachment provides two demos, for RT685 and RT1170, both modified to a 1 ms interval; the modification method for RT1170 is exactly the same.

4. Summary
Whether for RT600, RT1170, or more precisely the UAC code of the entire RT series, changing the interval mainly involves the following macros in usb_audio_config.h; the values below are for a 1 ms interval:

#define HS_ISO_OUT_ENDP_INTERVAL (0x04)
#define AUDIO_CLASS_2_0_HS_LOW_LATENCY_TRANSFER_COUNT \
    (0x06U) /* 6 means 6 micro frames (6*125us), make sure the latency is smaller than 1ms for ehci high speed */
#define AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT \
    (12U) /* 12 one-ms units, enough to hold the 6 ms low-latency depth */

Test video:
[Chinese translation] See the attachment.   Original article: https://community.nxp.com/t5/i-MX-RT-Knowledge-Base/Design-an-IoT-edge-node-for-CV-application-base-on-the-i/ta-p/1127423 
i.MX RT1050 is the first set of processors in NXP's crossover processor family, combining the high performance and high level of integration of an applications processor with the ease of use and real-time functionality of a microcontroller. As the first device in a new family, it has seen some learning and improvements along the way. There have been changes and improvements to the processor and also to our enablement for the device, which can result in some revisions of hardware and software not being directly compatible with each other out of the box. In particular, some software released for the A0 silicon revision (found on EVK boards) doesn't run on the A1 silicon revision (EVKB boards). To minimize the risk of compatibility issues, we recommend that all customers move to SDK 2.3.1 or higher. SDK 2.3.1 is listed as supporting the EVKB hardware specifically, but it is also compatible with the EVK (non-B) hardware. We also recommend that customers using the DAPLink firmware for the OpenSDA debugging circuit built into the EVK/EVKB update to the latest version available on the www.nxp.com/opensda site. The flashloader package has also been updated; Rev 1.1 or later should be used (Flashloader i.MX-RT1050). There are many application notes available for RT1050. Many of them were written based on the original silicon revision and early releases of enablement software. We are reviewing the published application notes and application note software to prioritize updates where needed, based on the latest enablement and recommendations. If you are in a situation where you need to use SDK 2.3.0 on A1 silicon, the most likely problem area involves new clock gate bits that were added on the A1 silicon revision. These bits weren't present on A0 silicon, so SDK 2.3.0 clears them, which disables the external memory interfaces. If you comment out the call to BOARD_BootClockGate() in the BOARD_BootClockRUN function (found in the clock_config.c file), SDK 2.3.0 software should run on A1 silicon/EVKB; a sketch of this edit follows at the end of this post. For more information: MCUXpresso SDK RT1050 migration app note  i.MX RT1050 CMSIS-DAP drag-and-drop programming 
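For reference, the workaround amounts to the following edit in clock_config.c. This is only a sketch of the idea; the SDK-generated code around the call is omitted and will differ between SDK drops:

void BOARD_BootClockRUN(void)
{
    /* ... SDK-generated clock tree setup, unchanged ... */
#if 0 /* Workaround for SDK 2.3.0 on A1 silicon: BOARD_BootClockGate() clears
       * clock-gate bits that exist only on A1, which disables the external
       * memory interfaces, so skip the call. */
    BOARD_BootClockGate();
#endif
}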
In this tutorial, I'd like to show the steps of deploying an image classification model on i.MX RT1060, enabling you to classify fashion images and categories. In the first part of this tutorial, we will review the Fashion MNIST dataset, including how to download it to your system. From there we'll define a simple CNN network using the TensorFlow platform. Next, we'll train the CNN model on the Fashion MNIST dataset and review the results. Finally, we'll optimize the model so that it becomes smaller and faster at inference, which is valuable for resource-limited devices such as MCUs. Let's go ahead and get started!

Fashion MNIST dataset
The Fashion MNIST dataset was created by the e-commerce company Zalando.

Fig 1 Fashion MNIST dataset

As noted on the official GitHub repo for the Fashion MNIST dataset, there are a few problems with the standard MNIST digit recognition dataset:
It's far too easy for standard machine learning algorithms to obtain 97%+ accuracy.
It's even easier for deep learning models to achieve 99%+ accuracy.
The dataset is overused.
MNIST cannot represent modern computer vision tasks.
Zalando, therefore, created the Fashion MNIST dataset as a drop-in replacement for MNIST:
60,000 training examples
10,000 testing examples
10 classes: T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Ankle boot
28×28 grayscale images

The code below loads the Fashion MNIST dataset using TensorFlow and creates a plot of the first 25 images in the training dataset.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# For easy reset of notebook state.
tf.keras.backend.clear_session()

# load dataset
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

plt.figure(figsize=(8,8))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.tight_layout()
    plt.imshow(train_images[i])
    plt.xlabel(class_names[train_labels[i]])
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
plt.show()

Fig 2

Running the code loads the Fashion MNIST train and test datasets and prints their shapes.

Fig 3

We can see that there are 60,000 examples in the training dataset and 10,000 in the test dataset, and that the images are indeed square, 28×28 pixels.

Creating model
We need to define a neural network model for image classification, and the model should have two main parts: the feature extractor and the classifier that makes the prediction.

Defining a simple Convolutional Neural Network (CNN)
For the convolutional front-end, we build 3 convolution layers with a small filter size (3,3) and a modest number of filters, followed by a max-pooling layer. The last feature map is flattened to provide features to the classifier. Since this is a multi-class classification task, we require an output layer with 10 nodes to predict the probability distribution of an image belonging to each of the 10 classes, using a softmax activation function. Between the feature extractor and the output layer we add a dense layer to interpret the features. All layers use the ReLU activation function and the He weight initialization scheme, both best practices.
We will use the Adam optimizer to minimize the sparse_categorical_crossentropy loss function, suitable for multi-class classification, and we will monitor the classification accuracy metric, which is appropriate given that we have the same number of examples in each of the 10 classes. The code below defines the model, and running it prints the model structure.

# Define a Model
model = tf.keras.models.Sequential()

# First convolution, kernel: 16*3*3
model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu',
                                 kernel_initializer='he_uniform',
                                 input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))

# Second convolution, kernel: 32*3*3
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu',
                                 kernel_initializer='he_uniform'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))

# Third convolution, kernel: 32*3*3
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu',
                                 kernel_initializer='he_uniform'))

model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(32, activation='relu', kernel_initializer='he_uniform'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))

Fig 4

Training Model
After the model is defined, we need to train it. The model will be trained using 5-fold cross-validation. The value k=5 was chosen to provide a baseline for repeated evaluation without requiring too long a running time. Each validation set is 20% of the training dataset, or about 12,000 examples. The training dataset is shuffled prior to being split with a fixed random seed, so each model sees the same train and validation splits in each fold, providing an apples-to-apples comparison. We train the baseline model for a modest 20 training epochs with a default batch size of 32 examples. The validation set of each fold is used to validate the model during each epoch of the training run, so we can later create learning curves; at the end of the run we use the test dataset to estimate the performance of the model. As such, we keep track of the resulting history from each run, as well as the classification accuracy of each fold. The train_model() function below implements these behaviors, taking the training data as arguments and returning a list of accuracy scores and training histories that can be later summarized.

from sklearn.model_selection import KFold

# train a model using k-fold cross-validation
def train_model(dataX, dataY, n_folds=5):
    scores, histories = list(), list()
    # prepare cross-validation
    kfold = KFold(n_folds, shuffle=True, random_state=1)
    for train_ix, validate_ix in kfold.split(dataX):
        # select rows for train and validation
        trainX, trainY = dataX[train_ix], dataY[train_ix]
        validate_X, validate_Y = dataX[validate_ix], dataY[validate_ix]
        # fit model
        history = model.fit(trainX, trainY, epochs=20, batch_size=32,
                            validation_data=(validate_X, validate_Y), verbose=0)
        # evaluate model
        _, acc = model.evaluate(validate_X, validate_Y, verbose=0)
        print("Accuracy: {:.4f}, total number of figures is {:0>2d}".format(acc * 100.0, len(validate_Y)))
        # append scores
        scores.append(acc)
        histories.append(history)
    return scores, histories

Model Summary
After the model has been trained, we can present the results. There are two key aspects to present: the diagnostics of the learning behavior of the model during training and the estimation of the model performance. These can be implemented using separate functions.
First, the diagnostics involve creating a line plot showing model performance on the train and validation sets during each fold of the k-fold cross-validation. These plots are valuable for getting an idea of whether a model is overfitting, underfitting, or has a good fit for the dataset. We will create a single figure with two subplots, one for loss and one for accuracy. Blue lines indicate model performance on the training dataset and orange lines indicate performance on the hold-out validation dataset. The summarize_diagnostics() function below creates and shows this plot given the collected training histories.

# plot diagnostic learning curves
def summarize_diagnostics(histories):
    for i in range(len(histories)):
        # plot loss
        plt.subplot(2, 1, 1)
        plt.title('Cross Entropy Loss')
        plt.plot(histories[i].history['loss'], color='blue', label='train')
        plt.plot(histories[i].history['val_loss'], color='orange', label='validation')
        # plot accuracy
        plt.subplot(2, 1, 2)
        plt.title('Classification Accuracy')
        plt.plot(histories[i].history['accuracy'], color='blue', label='train')
        plt.plot(histories[i].history['val_accuracy'], color='orange', label='validation')
    plt.show()

Fig 5

Next, the classification accuracy scores collected during each fold can be summarized by calculating the mean and standard deviation. This provides an estimate of the average expected performance of the model on the test dataset, with an estimate of the average variance in the mean. We will also summarize the distribution of scores by creating and showing a box-and-whisker plot. The summarize_performance() function below implements this for a given list of scores collected during model training.

# summarize model performance
def summarize_performance(scores):
    # print summary
    print('Accuracy: mean={:.4f} std={:.4f}, n={:0>2d}'.format(
        np.mean(scores) * 100, np.std(scores) * 100, len(scores)))
    # box and whisker plot of results
    plt.boxplot(scores)
    plt.show()

Fig 6

Verifying predictions
According to the figure above, the final trained model reaches around 87.6% accuracy when predicting the test dataset. With the trained model, running the code below demonstrates predictions for some images.

def plot_image(i, predictions_array, true_label, img):
    true_label, img = true_label[i], img[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    plt.imshow(img.reshape(28, 28), cmap=plt.cm.binary)
    predicted_label = np.argmax(predictions_array)
    if predicted_label == true_label:
        color = 'blue'
    else:
        color = 'red'
    plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
                                         100 * np.max(predictions_array),
                                         class_names[true_label]), color=color)

def plot_value_array(i, predictions_array, true_label):
    true_label = true_label[i]
    plt.grid(False)
    plt.xticks(range(10))
    plt.yticks([])
    thisplot = plt.bar(range(10), predictions_array, color="#777777")
    plt.ylim([0, 1])
    predicted_label = np.argmax(predictions_array)
    thisplot[predicted_label].set_color('red')
    thisplot[true_label].set_color('blue')

predictions = model.predict(test_images)

# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows * num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2*num_cols, 2*i+1)
    plot_image(i, predictions[i], test_labels, test_images)
    plt.subplot(num_rows, 2*num_cols, 2*i+2)
    plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()

Fig 7

Model quantization
Post-training quantization is a conversion technique that can reduce model size while also improving CPU and hardware accelerator latency, with little degradation in model accuracy. This is especially crucial for embedded platforms, which lack compute-intensive performance and have very limited flash and RAM. TensorFlow Lite can convert an already-trained float TensorFlow model to the TensorFlow Lite format, and it provides several approaches to optimize the model. Among them, integer quantization is an optimization strategy that converts 32-bit floating-point numbers (such as weights and activation outputs) to the nearest 8-bit fixed-point numbers. This results in a smaller model and increased inferencing speed, which is very valuable for low-power devices such as microcontrollers. The code below implements integer quantization of the trained model; after running it, we find that the integer-only TensorFlow Lite model is almost 64.9 KB smaller than the original converted model, about 32% of the original size (Fig 8).

import os

# Representative dataset for integer-only quantization
def representative_data_gen():
    for input_value in tf.data.Dataset.from_tensor_slices(
            tf.cast(train_images, tf.float32)).shuffle(500).batch(1).take(150):
        yield [input_value]

# Convert using dynamic range quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model_quant = converter.convert()

# Save the model to disk
open("model_dynamic_range_quantization.tflite", "wb").write(tflite_model_quant)

## Size difference
Dynamic_range_quantization_model_size = os.path.getsize("model_dynamic_range_quantization.tflite")
print("Dynamic range quantization model is %d bytes" % Dynamic_range_quantization_model_size)

# Convert using integer-only quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Ensure that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8 (APIs added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_advanced_quant = converter.convert()

# Save the model to disk
open("model_integer_only_quantization.tflite", "wb").write(tflite_model_advanced_quant)

Integer_only_quantization_model_size = os.path.getsize("model_integer_only_quantization.tflite")
print("Integer_only_quantization_model is %d bytes" % Integer_only_quantization_model_size)
difference = Dynamic_range_quantization_model_size - Integer_only_quantization_model_size
print("Difference is %d bytes" % difference)

Fig 8

Evaluating the TensorFlow Lite model
Now we'll run inferences using the TensorFlow Lite Interpreter to compare the model accuracies.
First, we need a function that runs inference with a given model and images, and then returns the predictions:

# Helper function to run inference on a TFLite model
def run_tflite_model(tflite_file, test_image_indices):
    # Initialize the interpreter
    interpreter = tf.lite.Interpreter(model_path=str(tflite_file))
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()[0]
    output_details = interpreter.get_output_details()[0]

    predictions = np.zeros((len(test_image_indices),), dtype=int)
    for i, test_image_index in enumerate(test_image_indices):
        test_image = test_images[test_image_index]
        test_label = test_labels[test_image_index]
        # If the input type is quantized, rescale input data to uint8
        if input_details['dtype'] == np.uint8:
            input_scale, input_zero_point = input_details["quantization"]
            test_image = test_image / input_scale + input_zero_point
        test_image = np.expand_dims(test_image, axis=0).astype(input_details["dtype"])
        interpreter.set_tensor(input_details["index"], test_image)
        interpreter.invoke()
        output = interpreter.get_tensor(output_details["index"])[0]
        predictions[i] = output.argmax()
    return predictions

Next, we'll compare the performance of the original model and the quantized model on one image. model_basic_quantization.tflite is the original TensorFlow Lite model with floating-point data. model_integer_only_quantization.tflite is the last model we converted using integer-only quantization (it uses uint8 data for input and output). Let's create another function to print our predictions and run it for testing.

import matplotlib.pylab as plt

# Change this to test a different image
test_image_index = 1

## Helper function to test the models on one image
def test_model(tflite_file, test_image_index, model_type):
    global test_labels
    predictions = run_tflite_model(tflite_file, [test_image_index])
    plt.imshow(test_images[test_image_index].reshape(28, 28))
    template = model_type + " Model \n True:{true}, Predicted:{predict}"
    _ = plt.title(template.format(true=str(test_labels[test_image_index]),
                                  predict=str(predictions[0])))
    plt.grid(False)

Fig 9
Fig 10

Then evaluate the quantized model using all the test images we loaded at the beginning of this tutorial. Summarizing the predictions over the test dataset shows that the quantized model's accuracy drops only about 7% relative to the original model, which is not bad.

# Helper function to evaluate a TFLite model on all images
def evaluate_model(tflite_file, model_type):
    test_image_indices = range(test_images.shape[0])
    predictions = run_tflite_model(tflite_file, test_image_indices)
    accuracy = (np.sum(test_labels == predictions) * 100) / len(test_images)
    print('%s model accuracy is %.4f%% (Number of test samples=%d)' % (
        model_type, accuracy, len(test_images)))

Deploying model
Converting the TensorFlow Lite model to a C file
The following commands run xxd on the quantized model, write the output to a file called model_quantized.cc (in which the model is defined as an array of bytes), and print it to the screen. The output is very long, so we won't reproduce it all here; Fig 11 shows a snippet with just the beginning and end.

# Save the file as a C source file
xxd -i model_integer_only_quantization.tflite > model_quantized.cc
# Print the source file
cat model_quantized.cc

Fig 11

Deploying the C file to the project
We use the tensorflow_lite_cifar10 demo as a prototype, replace the original model, and make some code modifications; below is the code of the modified main file.
#include "board.h" #include "fsl_debug_console.h" #include "pin_mux.h" #include "timer.h" #include <iomanip> #include <iostream> #include <string> #include <vector> #include "tensorflow/lite/kernels/register.h" #include "tensorflow/lite/model.h" #include "tensorflow/lite/optional_debug_tools.h" #include "tensorflow/lite/string_util.h" #include "get_top_n.h" #include "model.h" #define LOG(x) std::cout // ---------------------------- Application ----------------------------- // Lenet Mnist model input data size (bytes). #define LENET_MNIST_INPUT_SIZE 28*28*sizeof(char) // Lenet Mnist model number of output classes. #define LENET_MNIST_OUTPUT_CLASS 10 // Allocate buffer for input data. This buffer contains the input image // pre-processed and serialized as text to include here. uint8_t imageData[LENET_MNIST_INPUT_SIZE] = { #include "clothes_select.inc" }; /* Tresholds */ #define DETECTION_TRESHOLD 60 /*! * @brief Initialize parameters for inference * * @param reference to flat buffer * @param reference to interpreter * @param pointer to storing input tensor address * @param verbose mode flag. Set true for verbose mode */ void InferenceInit(std::unique_ptr<tflite::FlatBufferModel> &model, std::unique_ptr<tflite::Interpreter> &interpreter, TfLiteTensor** input_tensor, bool isVerbose) { model = tflite::FlatBufferModel::BuildFromBuffer(Fashion_MNIST_model, Fashion_MNIST_model_len); if (!model) { LOG(FATAL) << "Failed to load model\r\n"; return; } tflite::ops::builtin::BuiltinOpResolver resolver; tflite::InterpreterBuilder(*model, resolver)(&interpreter); if (!interpreter) { LOG(FATAL) << "Failed to construct interpreter\r\n"; return; } int input = interpreter->inputs()[0]; const std::vector<int> inputs = interpreter->inputs(); const std::vector<int> outputs = interpreter->outputs(); if (interpreter->AllocateTensors() != kTfLiteOk) { LOG(FATAL) << "Failed to allocate tensors!"; return; } /* Get input dimension from the input tensor metadata assuming one input only */ *input_tensor = interpreter->tensor(input); auto data_type = (*input_tensor)->type; if (isVerbose) { const std::vector<int> inputs = interpreter->inputs(); const std::vector<int> outputs = interpreter->outputs(); LOG(INFO) << "input: " << inputs[0] << "\r\n"; LOG(INFO) << "number of inputs: " << inputs.size() << "\r\n"; LOG(INFO) << "number of outputs: " << outputs.size() << "\r\n"; LOG(INFO) << "tensors size: " << interpreter->tensors_size() << "\r\n"; LOG(INFO) << "nodes size: " << interpreter->nodes_size() << "\r\n"; LOG(INFO) << "inputs: " << interpreter->inputs().size() << "\r\n"; LOG(INFO) << "input(0) name: " << interpreter->GetInputName(0) << "\r\n"; int t_size = interpreter->tensors_size(); for (int i = 0; i < t_size; i++) { if (interpreter->tensor(i)->name) { LOG(INFO) << i << ": " << interpreter->tensor(i)->name << ", " << interpreter->tensor(i)->bytes << ", " << interpreter->tensor(i)->type << ", " << interpreter->tensor(i)->params.scale << ", " << interpreter->tensor(i)->params.zero_point << "\r\n"; } } LOG(INFO) << "\r\n"; } } /*! 
 * @brief Runs inference on the input buffer and prints the result to the console
 *
 * @param pointer to image data
 * @param image data length
 * @param pointer to labels string array
 * @param reference to flat buffer model
 * @param reference to interpreter
 * @param pointer to input tensor
 */
void RunInference(const uint8_t* image, size_t image_len, const std::string* labels,
                  std::unique_ptr<tflite::FlatBufferModel> &model,
                  std::unique_ptr<tflite::Interpreter> &interpreter,
                  TfLiteTensor* input_tensor)
{
    /* Copy image to tensor. */
    memcpy(input_tensor->data.uint8, image, image_len);

    /* Do inference on static image in first loop. */
    auto start = GetTimeInUS();
    if (interpreter->Invoke() != kTfLiteOk)
    {
        LOG(FATAL) << "Failed to invoke tflite!\r\n";
        return;
    }
    auto end = GetTimeInUS();

    const float threshold = (float)DETECTION_TRESHOLD / 100;
    std::vector<std::pair<float, int>> top_results;

    int output = interpreter->outputs()[0];
    TfLiteTensor *output_tensor = interpreter->tensor(output);
    TfLiteIntArray* output_dims = output_tensor->dims;
    // assume output dims to be something like (1, 1, ..., size)
    auto output_size = output_dims->data[output_dims->size - 1];

    /* Find best image candidates. */
    GetTopN<uint8_t>(interpreter->typed_output_tensor<uint8_t>(0), output_size, 1,
                     threshold, &top_results, false);

    if (!top_results.empty())
    {
        auto result = top_results.front();
        const float confidence = result.first;
        const int index = result.second;
        if (confidence * 100 > DETECTION_TRESHOLD)
        {
            LOG(INFO) << "----------------------------------------\r\n";
            LOG(INFO) << " Inference time: " << (end - start) / 1000 << " ms\r\n";
            LOG(INFO) << " Detected: " << std::setw(10) << labels[index]
                      << " (" << (int)(confidence * 100) << "%)\r\n";
            LOG(INFO) << "----------------------------------------\r\n\r\n";
        }
    }
}

/*!
 * @brief Main function
 */
int main(void)
{
    const std::string labels[] = {"T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
                                  "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"};

    /* Init board hardware. */
    BOARD_ConfigMPU();
    BOARD_InitPins();
    BOARD_BootClockRUN();
    BOARD_InitDebugConsole();
    InitTimer();

    std::unique_ptr<tflite::FlatBufferModel> model;
    std::unique_ptr<tflite::Interpreter> interpreter;
    TfLiteTensor* input_tensor = 0;
    InferenceInit(model, interpreter, &input_tensor, false);

    LOG(INFO) << "Fashion MNIST object recognition example using a TensorFlow Lite model.\r\n";
    LOG(INFO) << "Detection threshold: " << DETECTION_TRESHOLD << "%\r\n";

    /* Run inference on the static image. */
    LOG(INFO) << "\r\nStatic data processing:\r\n";
    RunInference((uint8_t*)imageData, (size_t)LENET_MNIST_INPUT_SIZE, labels,
                 model, interpreter, input_tensor);

    while (1)
    {
    }
}

Testing result
After deploying the model in the demo project, we'll run the demo on the MIMXRT1060 board (Fig 12) for testing.

Fig 12

1. Run the code below to convert a Fashion MNIST image to text. The process_image() function converts a Fashion MNIST image into an include file of static data; include this file in the demo project.
def process_image(image, output_path, num_batch=1):
    img_data = np.transpose(image, (2, 0, 1))

    # Repeat image for batch processing (resulting tensor is NCHW or NHWC)
    img_data = np.reshape(img_data, (num_batch, img_data.shape[0], img_data.shape[1], img_data.shape[2]))
    img_data = np.repeat(img_data, num_batch, axis=0)
    img_data = np.reshape(img_data, (num_batch, img_data.shape[1], img_data.shape[2], img_data.shape[3]))

    # Serialize image batch
    img_data_bytes = bytearray(img_data.tobytes(order='C'))
    image_bytes_per_line = 20
    with open(output_path, 'wt') as f:
        idx = 0
        for byte in img_data_bytes:
            f.write('0X%02X, ' % byte)
            if idx % image_bytes_per_line == (image_bytes_per_line - 1):
                f.write('\n')
            idx = idx + 1

    # Return serialized image size
    return len(img_data_bytes)

2. Run the demo project on board.
The wireless module combinations in Tables 2 and 3 of user manual UM11441 have not been updated for the latest SDK 2.12.1. Major updates in Table 2 and Table 3:
u-blox modules are supported only on i.MX RT1060
Murata modules are only tested with the i.MX RT1060 EVKB and i.MX RT1040 EVK platforms
Murata module names have been modified
i.MX RT685S EVK has been renamed to IMXRT685-AUD-EVK
Please refer to the attached PDF for updated information.
The i.MX RT600 crossover MCU combines an ultra-low power MCU with a high performance DSP to enable the next generation of ML/AI, voice and audio applications. Get started today and order your MIMXRT685-EVK.
RT1050 CSI OV7670 camera eLCDIF display

1. Abstract
OV7670 is a low-cost, small, low-voltage CMOS VGA image sensor. It is controlled over the SCCB bus and can output 8-bit image data at various resolutions with a frame rate of up to 30 frames/second. This article uses the CSI interface on RT10XX to acquire OV7670 camera data and displays it with the eLCDIF display module built into RT10XX. Both the camera and the display use the RGB565 format. The camera is configured for QVGA 320*240; the LCD is the official NXP EVKB panel RK043FN02H, with a resolution of 480*272 and a frame rate of 30 FPS. This article is based on the official NXP RT1050 SDK example:
SDK_2_14_0_EVKB-IMXRT1050\boards\evkbimxrt1050\driver_examples\csi\rgb565
The OV7670 driver is ported so that the CSI collects OV7670 image data and displays it on the LCD through the eLCDIF module.

2. Principle explanation
Here is a brief explanation of the relevant background.

2.1 RGB565 color mode
RGB565 is a basic color encoding format for images in which one pixel occupies 2 bytes; it is commonly used for images and display devices. R is red, G is green, B is blue, and other colors are obtained by mixing the three primaries. Each pixel can represent 65536 (2^16) colors. The bit allocation is as follows:

Fig 1

From the figure above, the 2-byte values for pure red, green, and blue are: red 0xF800, green 0x07E0, blue 0x001F (a small packing helper is sketched in section 3.2 below).

2.2 OV7670 camera hardware and waveforms
The OV7670 module used here is:

Fig 2

Pin assignment:
No | Signal | Description
1  | PWDN   | Power-down select; pull low for normal operation
2  | RST    | Reset; pull high for normal operation
3  | D0     | Data output bit 0
4  | D1     | Data output bit 1
5  | D2     | Data output bit 2
6  | D3     | Data output bit 3
7  | D4     | Data output bit 4
8  | D5     | Data output bit 5
9  | D6     | Data output bit 6
10 | D7     | Data output bit 7
11 | XLK    | Clock input
12 | PLK    | Pixel clock output
13 | HS     | Horizontal sync output
14 | VS     | Frame (vertical) sync output
15 | SDA    | SCCB data
16 | SCL    | SCCB clock
17 | GND    | Ground
18 | 3.3V   | 3.3 V supply

RGB565 output data timing:

Fig 3

2.3 CSI frame synchronization signal timing

Fig 4

2.4 LCD display waveform

Fig 5

The OV7670 data is therefore captured through the CSI and stored in a buffer; the eLCDIF then fetches the data from the buffer and shows it on the LCD, displaying the camera data in real time.

3. Software and hardware implementation
The test platform is based on the NXP MIMXRT1050-EVKB rev A1 board:
https://www.nxp.com/design/development-boards/i-mx-evaluation-and-development-boards/i-mx-rt1050-evaluation-kit:MIMXRT1050-EVK
LCD: https://www.nxp.com/part/RK043FN02H-CT#/

3.1 Hardware connection
As can be seen from Fig 2, the off-the-shelf module uses 2.54 mm through-hole headers, while the CSI interface on the MIMXRT1050-EVKB board is an FPC connector, so an adapter board is needed to convert from FPC to the 2.54 mm headers. The wiring diagram is as follows:

Fig 6

The overall hardware connection looks like this:

Fig 7

3.2 Software preparation
The RT SDK does not directly provide an OV7670 driver, but the FRDM-K82 SDK contains a suitable driver that can be ported to the RT1050 SDK.
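Before diving into the port, here is the small RGB565 packing helper promised in section 2.1. It is only an illustration of the bit layout, not part of the SDK or the attached project:

#include <stdint.h>

/* Pack 8-bit R/G/B into RGB565: RRRRRGGG GGGBBBBB. */
static inline uint16_t rgb565_pack(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((uint16_t)(r & 0xF8u) << 8) |
                      ((uint16_t)(g & 0xFCu) << 3) |
                      ((uint16_t)b >> 3));
}

/* rgb565_pack(255,0,0) == 0xF800, rgb565_pack(0,255,0) == 0x07E0, rgb565_pack(0,0,255) == 0x001F */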
SDK version: SDK_2_14_0_EVKB-IMXRT1050\boards\evkbimxrt1050\driver_examples\csi\rgb565
The code replaces the original OV7725 driver with the OV7670 driver, adapts the OV7670 code to the RT1050 CSI code, and adds IO control for the OV7670 RST and PWDN signals. The reason for adding RST and PWDN control is that some modules fail to capture if the RST pin is not first held low and then released after a delay. With RST and PWDN control added, OV7670 modules from different manufacturers now acquire successfully and display stably (a minimal sequencing sketch is given at the end of this article). For the complete OV7670 code, see the attached source. The camera initialization code is as follows:

static void APP_InitCamera(void)
{
    const camera_config_t cameraConfig = {
        .pixelFormat                = kVIDEO_PixelFormatRGB565,
        .bytesPerPixel              = APP_BPP,
        .resolution                 = FSL_VIDEO_RESOLUTION(320, 240),
        /* Set the camera buffer stride according to panel, so that if
         * camera resolution is smaller than display, it can still be shown
         * correctly on the screen. */
        .frameBufferLinePitch_Bytes = DEMO_BUFFER_WIDTH * APP_BPP,
        .interface                  = kCAMERA_InterfaceGatedClock,
        .controlFlags               = DEMO_CAMERA_CONTROL_FLAGS,
        .framePerSec                = 30,
    };

    memset(s_frameBuffer, 0, sizeof(s_frameBuffer));

    BOARD_InitCameraResource();

    CAMERA_RECEIVER_Init(&cameraReceiver, &cameraConfig, NULL, NULL);

    if (kStatus_Success != CAMERA_DEVICE_Init(&cameraDevice, &cameraConfig))
    {
        PRINTF("Camera device initialization failed\r\n");
        while (1)
        {
        }
    }

    CAMERA_DEVICE_Start(&cameraDevice);

    /* Submit the empty frame buffers to the buffer queue. */
    for (uint32_t i = 0; i < APP_FRAME_BUFFER_COUNT; i++)
    {
        CAMERA_RECEIVER_SubmitEmptyBuffer(&cameraReceiver, (uint32_t)(s_frameBuffer[i]));
    }
}

The resolution here is QVGA 320*240, which does not match the LCD's 480*272, but that is fine: the image is simply shown as a 320*240 region of the LCD. If you want to scale it up to 480*272, the PXP can be used to resize the frames. For more code details, see the attached code package.

4. Summary
This article provides a demo of OV7670 acquisition over CSI with eLCDIF display on RT. See the finished-product video: the display is clear and the refresh effect is good.
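To make the RST/PWDN handling mentioned above concrete, the power-up sequencing can be sketched as follows. The BOARD_CAMERA_* pin macros are placeholders for whichever GPIOs the module is wired to, and the delays are conservative examples; this is an illustration, not the attached project's exact code:

#include "fsl_gpio.h"
#include "fsl_common.h"

/* PWDN low = normal operation; RST low = reset asserted. */
GPIO_PinWrite(BOARD_CAMERA_PWDN_GPIO, BOARD_CAMERA_PWDN_PIN, 0U); /* leave power-down */
GPIO_PinWrite(BOARD_CAMERA_RST_GPIO, BOARD_CAMERA_RST_PIN, 0U);   /* hold sensor in reset */
SDK_DelayAtLeastUs(2000U, SystemCoreClock);                       /* > 1 ms reset pulse */
GPIO_PinWrite(BOARD_CAMERA_RST_GPIO, BOARD_CAMERA_RST_PIN, 1U);   /* release reset */
SDK_DelayAtLeastUs(2000U, SystemCoreClock);                       /* settle before SCCB access */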
There is an issue with the DCD file used in the SDK 2.9.0 release for the i.MX RT1170 processor. When the included DCD file is used in a project to configure the SDRAM memory on the EVK, the refresh for the memory is not enabled. This can lead to corruption/data loss over time.   To fix the problem, replace the dcd.c file in your project with the attached file instead.   We are working on a fix, and a new revision of the SDK will be released soon.
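As a quick way to check whether a running image is affected, the SEMC SDRAM auto-refresh enable bit can be read back at runtime. This is only a sketch: the register and field names (SEMC->SDRAMCR3, REN) are taken from the i.MX RT device headers and should be verified against your SDK version.

#include "fsl_device_registers.h"
#include "fsl_debug_console.h"

/* With the broken DCD, the SDRAM refresh enable (REN) bit stays cleared. */
void CheckSdramRefresh(void)
{
    if ((SEMC->SDRAMCR3 & SEMC_SDRAMCR3_REN_MASK) == 0U)
    {
        PRINTF("SDRAM auto-refresh is NOT enabled - replace dcd.c!\r\n");
    }
}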
1. Introduction
Recently, some customers have asked for an LWIP socket client on the RT1170, so this post shares socket client code based on the RT1170 SDK. It is just a simple demo, together with test results on the official NXP EVKB board.

2. Code modification
Platform: MIMXRT1170-EVKB, SDK_2_13_1_MIMXRT1170-EVKB, MCUXpresso IDE v11.7.1
The code is based on the SDK project lwip_ping_freertos_cm7. This project already includes the socket-related files, so the modification is simple: add the socket-related header and the application function.

The modifications are:

Add the socket server IP address, port, and the message to send out:

#define INIT_THREAD_STACKSIZE 1024
/*! @brief Priority of the temporary lwIP initialization thread. */
#define INIT_THREAD_PRIO DEFAULT_THREAD_PRIO

#define HOST_NAME "192.168.0.100"
#define BUF_LEN 100
uint8_t senddata[] = "Socket client test";
#define PORT 54321
#define IP_ADDR "192.168.0.100"

Comment out the ping call in the stack_init API:

// ping_init(&netif_gw);

Add the socket client thread:

sys_thread_new("socketclient", socketclient_thread, NULL, DEFAULT_THREAD_STACKSIZE, DEFAULT_THREAD_PRIO);

The thread code is:

static void socketclient_thread(void *arg)
{
    int sock = -1, rece;
    struct sockaddr_in client_addr;
    char *host_ip;
    ip4_addr_t dns_ip;
    err_t err;
    uint32_t *pSDRAM = pvPortMalloc(BUF_LEN);

    host_ip = HOST_NAME;
    PRINTF("host name : %s , host_ip : %s\r\n", HOST_NAME, host_ip);

    // while (1)
    // {
    PRINTF("Start server Connect !\r\n");

    /* create connection */
    sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0)
    {
        PRINTF("Socket error\n");
        vTaskDelay(10);
        // continue;
    }

    client_addr.sin_family = AF_INET;
    client_addr.sin_port = htons(PORT);
    client_addr.sin_addr.s_addr = inet_addr(host_ip);
    memset(&(client_addr.sin_zero), 0, sizeof(client_addr.sin_zero));

    if (connect(sock, (struct sockaddr *)&client_addr, sizeof(struct sockaddr)) == -1)
    {
        PRINTF("Connect failed!\r\n");
        closesocket(sock);
        vTaskDelay(10);
        // continue;
    }
    PRINTF("Connect to server successful!\r\n");

    write(sock, senddata, sizeof(senddata));

    while (1)
    {
        /* receive data */
        rece = recv(sock, (uint8_t *)pSDRAM, BUF_LEN, 0);
        if (rece <= 0)
            break;
        PRINTF("recv %d len data\r\n", rece);
        PRINTF("%.*s\r\n", rece, (uint8_t *)pSDRAM);
        write(sock, pSDRAM, rece);
    }

    /* received-data cleanup */
    memset(pSDRAM, 0, BUF_LEN);
    closesocket(sock);
    vTaskDelay(10000); /* about 10 s */
    // }
}

3. Test Result
First, configure the PC Ethernet IP as the server address: 192.168.0.100.

After configuration, a TCP test tool can be used, e.g. USR-TCP232-Test, configured as a TCP server with local IP 192.168.0.100 and host port 54321, then put into listen mode.

After the code is downloaded to the MIMXRT1170-EVKB and run, the server detects the connected client at IP 192.168.0.102. After connecting, the client sends the message "Socket client test"; the server can then send a message back to the client, which prints it over UART and also loops it back to the server. This is the test result video:

Code attached: evkbmimxrt1170_lwip_socket_client_freertos_cm7.7z
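If a command-line stand-in for the USR-TCP232-Test tool is more convenient, a minimal POSIX TCP echo server like the sketch below (not part of the SDK or the attached project) can play the server role when run on the 192.168.0.100 host:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};

    addr.sin_family = AF_INET;
    addr.sin_port = htons(54321);             /* must match PORT on the board */
    addr.sin_addr.s_addr = htonl(INADDR_ANY); /* run on the 192.168.0.100 host */

    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);

    for (;;)
    {
        int cli = accept(srv, NULL, NULL);
        char buf[100];
        ssize_t n;

        while ((n = recv(cli, buf, sizeof(buf), 0)) > 0)
        {
            printf("recv %zd bytes: %.*s\n", n, (int)n, buf); /* show what the board sent */
            send(cli, buf, (size_t)n, 0);                     /* echo it back to the board */
        }
        close(cli);
    }
    return 0;
}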