i.MX RT Knowledge Base

Source code: https://github.com/JayHeng/NXP-MCUBootUtility
【v1.3.0】
Features:
> 1. Can generate a .sb file containing only the custom eFuse burn operations specified in the eFuse operation utility window
Improvements:
> 1. HAB signed mode is not applicable to Non-XIP boot from FlexSPI NOR/SEMC NOR boot devices with the i.MXRT1020/1015 ROM
> 2. HAB encrypted mode is not applicable to boot from FlexSPI NOR/SEMC NOR boot devices with the i.MXRT1020/1015 ROM
> 3. Multiple .sb files (all, flash, efuse) are generated if the All-In-One action contains an eFuse burn operation
> 4. A .sb file can be generated without a board connection when the boot device type is NOR
> 5. The automatic image readback of the one-click action can be disabled to save operation time
> 6. The language option label in the menu bar is now static and easy to understand (Chinese and English shown together)
Bugfixes:
> 1. Could not generate a bootable image when the original image (hex/bin) was larger than 64 KB
> 2. Could not download a very large image file (e.g. 6.8 MB) in some cases
> 3. Some dynamic labels (e.g. the Connect button) were not updated in real time when switching the display language
> 4. Some LED demos under /apps for the RT1050 EVKB board were invalid
【v1.4.0】
Features:
> 1. Support for loading a bootable image into a uSDHC SD/eMMC boot device
> 2. Provide a friendlier way to view and set mixed eFuse fields
Improvements:
> 1. Set the default FlexSPI NOR device to align with the NXP EVK boards
> 2. Enable a real-time progress gauge for Flash Programmer actions
1. Abstract
    When customers use an NXP RT board to debug code, the board sometimes suddenly runs into debug connection issues while the IDE and debugger are downloading code. This happens especially to customers using an NXP RT EVK board with the default on-board CMSIS-DAP debugger; after many failed attempts they may even suspect that the board is broken.
    Debugger connection issues typically occur when the FDCB is wrong, the download process ended abnormally, the wrong flash loader was used, or the application code in flash is abnormal, etc.
    The reported issue looks similar to the following pictures: Fig.1 Fig.2
    The connection log may show:
      No connection to chip’s debug port
      Error: Wire Ack Wait Fault
    This document gives some methods to recover the board from such debugger issues, using the typical MIMXRT1060-EVK board plus MCUXpresso IDE as the test platform. Other platforms are similar, and the same recovery method can be used.
2. RT board recovery method
    The main idea is to switch the RT board to serial download mode, which brings the core to a known state, and then do a mass erase in the IDE or with the MCUXpresso Secure Provisioning Tool (SPT).
    First, let the board enter serial download mode: Fig.3
    Enter serial download mode:
    1) SW7: 1-OFF, 2-OFF, 3-OFF, 4-ON
    2) Power off and power on again, or press the Reset button
2.1 IDE Mass Erase
    In MCUXpresso IDE, choose the related debugger interface, then choose the "erase flash action".
    Here the MIMXRT1060-EVK on-board default CMSIS-DAP debugger is taken as an example: Fig.4 Fig.5
    After this operation, once the board is switched back to internal boot mode, the debugger can download to the flash again.
2.2 SPT Mass Erase
    Customers can also use the NXP MCUXpresso Secure Provisioning Tool to download code or perform a mass erase in serial download mode. This method also recovers the board; in fact, it simply puts the core into a known state.
    SPT tool download link: https://www.nxp.com/design/software/development-software/mcuxpresso-secure-provisioning-tool:MCUXPRESSO-SECURE-PROVISIONING
    After installing the SPT tool, open it.
    1) Create an RT1060 workspace: Fig.6
    2) Connect the board over USB or UART
    Here the USB interface is taken as an example. Fig.7 Fig.8 Fig.9 Fig.10 Fig.11
    At this point the external flash is erased.
    In the SPT, the customer can also double-check the memory, especially whether the FDCB area is erased, as in the following picture: Fig.12
    The customer can now go back to internal boot mode and use the debugger to download code again.
      Internal boot mode:
          SW7: 1-OFF, 2-OFF, 3-ON, 4-OFF
     Press reset, or power off and power on again, to enter internal boot mode, then use the debugger to test again; this is the result: Fig.13
    We can see that the MIMXRT1060-EVK debugger interface is recovered.
3. Conclusion
    When the flash contains an abnormal application (it accesses memory that does not exist, memory is corrupted, the clocks are misconfigured, etc.), the board can end up in an unknown state and the debugger cannot take control of the core. Putting the chip into serial download mode brings the core back to a known state, and the debugger is then able to take control of the core again.
    So, when meeting debugger issues on an RT board, try to mass erase the external flash in serial download mode; this recovers the board's debug connection to a normal situation.
The newly announced i.MX RT1170 is a dual-core Arm® Cortex®-M based crossover MCU that breaks the gigahertz (GHz) barrier and accelerates advanced Machine Learning (ML) applications at the edge. Built using advanced 28nm FD-SOI technology for lower active and static power requirements, the i.MX RT1170 MCU family integrates a GHz Arm Cortex-M7 and a power-efficient Cortex-M4, advanced 2D vector graphics, together with NXP’s signature EdgeLock security solution. The i.MX RT1170 delivers a total CoreMark score of 6468 and addresses the growing performance needs of edge computing for industrial, Internet-of-Things (IoT) and automotive applications.
One-stop secure boot tool: NXP-MCUBootUtility v1.0.0 is released
Source code: https://github.com/JayHeng/NXP-MCUBootUtility
【v1.1.0】
Features:
  1. Support i.MXRT1015
  2. Add Language option in Menu/View and support Chinese
Improvements:
  1. USB device auto-detection can be disabled
  2. Original image can be a bootable image (with IVT & BootData/DCD)
  3. Show boot sequence page dynamically according to action
Interest:
  1. Add sound effect (Mario)
【v1.2.0】
Features:
  1. Can generate .sb file for MfgTool and RT-Flash
  2. Can show elapsed time along with the gauge
Improvements:
  1. Non-XIP image can also be supported for the BEE encryption case
  2. Display gauge in real time
Bugfixes:
  1. Region count cannot be set to more than 1 for the fixed OTPMK key case
  2. Option1 field is not implemented for FlexSPI NOR configuration
Introduction
The NXP i.MX RT1xxx series provides the High Assurance Boot (HAB) feature, which gives the hardware a mechanism to ensure that the software can be trusted: HAB enables the ROM to authenticate the program image using digital signatures, which assures the application image's integrity, authenticity, and non-repudiation. An OEM can therefore use it to make its product reject any system image that is not authorized to run. But what is the trust chain that HAB relies on to achieve this?
How the keys and certificates are generated
In the installation directory of MCUXpresso Secure Provisioning, ~\nxp\MCUX_Provi_v3.1\bin\tools_scripts\keys, there are scripts for generating keys: hab4_pki_tree.sh and hab4_pki_tree.bat (for Linux and Windows respectively). Running either script generates 13 pairs of public and private keys in sequence through OpenSSL, which form the tree structure below.
Fig1 Key Tree structure
The public and private keys generated by OpenSSL come in pairs. Keeping the private key secret and publishing the corresponding public key is what makes asymmetric encryption applications possible. But how can you be sure that the public key you obtained is correct and has not been tampered with? This is where an authority has to step in. Just as everyone can print a resume and claim who they are, only a household registration book stamped by the Public Security Bureau proves that you are you. The document issued by the authority is called a certificate.
What is in a certificate? It contains the public key, which is the most important part; the owner of the certificate, just as the household registration book carries your name and ID number to show that the book is yours; and also the issuer of the certificate and its validity period, a bit like the issuing institution and validity period printed on an ID card. Faking a certificate issued by an authority is like forging an ID card or a household registration book.
To generate a certificate, you initiate a certificate request and send it to an authority for certification, called a CA (Certificate Authority). After receiving the request, the authority signs the certificate. Another question arises: how can the signature be guaranteed to come from the genuine authority? It can only be signed with something that exists solely in the hands of the authority, which is the CA's private key.
The signature algorithm roughly works like this: a hash calculation is performed on the target information to obtain a hash value. This process is irreversible, that is, the original information cannot be recovered from the hash value. When the information is sent out, the hash value is encrypted with the signer's private key and sent together with the information as the signature. The process is as follows.
Fig2 Signature and verification process
Looking at the content of a certificate (as shown below), we find an Issuer, i.e. who issued the certificate; a Subject, i.e. to whom the certificate is issued; a Validity period; the Public-key content; and the related signature algorithm. You will also notice that, in order to verify the certificate, the public key of the CA is required. Then a new question arises.
How can we be sure that the public key of the CA is correct? This requires a superior CA to sign the CA's public key, forming the CA's certificate. If you want to know whether a CA's certificate is reliable, you check whether the public key of the CA's superior certificate can verify the CA's signature. It is just like when you don't trust the District Public Security Bureau: you can call the Municipal Public Security Bureau and ask it to confirm the legitimacy of the District Public Security Bureau. This goes up layer by layer until the root CA makes the final endorsement. Through this layer-by-layer credit endorsement, the normal operation of the asymmetric encryption scheme is guaranteed.
How does the root CA prove itself? The root CA issues another certificate (as shown below), called a self-signed certificate: it signs itself with its own private key, giving a feeling of "I am me, whether you believe it or not". Its format therefore differs slightly from the CA certificate above: its Issuer and Subject are the same, and its own public key can be used for authentication. The certificate authentication process ends here. In this way, besides generating the public and private keys, running the script also makes OpenSSL generate the certificate chain shown below.
Fig3 certificates
Boot flow of the HAB mode
Figure 4 shows the boot flow of the HAB mode. Steps 1, 2, and 3 are essentially the signature verification process.
Fig4 Boot flow of the HAB mode
The verification process (as shown in Figure 2) can detect data integrity, authenticate identity, and provide non-repudiation as long as the public key is trusted. The hab4_pki_tree.sh and hab4_pki_tree.bat scripts ensure that the generated key pairs and certificates are trusted, which closes the loop. However, the application image in Figure 4 is plaintext, so confidentiality of the data is not provided; that is why encrypted boot is always combined with HAB boot, and encrypted boot is an advanced usage of authenticated boot.
Reference
AN4581: i.MX Secure Boot on HABv4 Supported Devices
AN12681: How to use HAB secure boot in i.MX RT10xx
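To make the verification flow of Figure 2 concrete, the sketch below walks through the same three steps in C. The helpers sha256() and rsa_public_decrypt(), and the fixed digest size, are hypothetical placeholders rather than HAB or OpenSSL APIs; a real implementation would call into a crypto library, and on i.MX RT parts the ROM performs these steps internally during HAB boot.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SHA256_SIZE 32u

/* Hypothetical crypto helpers, stand-ins for a real crypto library. */
extern void sha256(const uint8_t *data, size_t len, uint8_t digest[SHA256_SIZE]);
extern int  rsa_public_decrypt(const uint8_t *signature, size_t sig_len,
                               const uint8_t *public_key, size_t key_len,
                               uint8_t *recovered_digest, size_t digest_len);

/* Verify an image the way Figure 2 describes:
 * 1. Hash the received image.
 * 2. Open the signature with the signer's public key to recover the
 *    digest the signer computed.
 * 3. Trust the image only if both digests match.                     */
bool verify_image(const uint8_t *image, size_t image_len,
                  const uint8_t *signature, size_t sig_len,
                  const uint8_t *public_key, size_t key_len)
{
    uint8_t local_digest[SHA256_SIZE];
    uint8_t signed_digest[SHA256_SIZE];

    sha256(image, image_len, local_digest);

    if (rsa_public_decrypt(signature, sig_len, public_key, key_len,
                           signed_digest, sizeof(signed_digest)) != 0)
    {
        return false; /* signature could not be opened with this public key */
    }

    return (memcmp(local_digest, signed_digest, SHA256_SIZE) == 0);
}

The same pattern repeats up the chain: the CA's public key used here is itself trusted only because it was verified, in the same way, against the superior CA's certificate, up to the root CA.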
RT106L_S voice control system based on the Baidu cloud
1 Introduction
    The NXP RT106L and RT106S are voice recognition chips used for offline local voice control. SLN-LOCAL-IOT is based on the RT106L, and SLN-LOCAL2-IOT is a newer local speech recognition board based on the RT106S. The board includes the Murata 1DX Wi-Fi/BLE module, the AFE voice analog front end, the ASR recognition engine, external flash, two microphones, and an analog audio amplifier with speakers. The voice recognition flow of SLN-LOCAL-IOT and SLN-LOCAL2-IOT is different, and the newer SLN-LOCAL2-IOT is recommended.
    This article uses the voice control board SLN-LOCAL/2-IOT to implement the functions in the following block diagram: Pic 1
    The PC-side speech model tool (Cyberon DSMT) is used to generate the WW (wake word) and VC (voice command) voice engine binary files, which are then used by the demo code. The system is mainly used for Chinese word recognition: when the user says the Chinese wake word "小恩小恩", it wakes up the SLN-LOCAL/2-IOT and the board answers "小恩来了,请吩咐". The system then enters the voice recognition stage, where the user can say the voice commands "开红灯", "关红灯", "开绿灯", "关绿灯", "灯闪烁", "开远程灯", "关远程灯"; after recognition the board answers "好的". The five commands "开红灯", "关红灯", "开绿灯", "关绿灯", "灯闪烁" switch the local LEDs, while the two commands "开远程灯", "关远程灯" control the LED of an additional MIMXRT1060-EVK development board through network communication with the Baidu cloud. The SLN-LOCAL/2-IOT accesses the Internet through its Wi-Fi module and communicates with the Baidu cloud over MQTT: when a remote control command is detected, it publishes a JSON packet to the Baidu cloud, while the MIMXRT1060-EVK subscribes to the Baidu cloud data, receives the data from the IoT board, parses it, and controls the EVK board LED. The PC side can use the MQTT.fx software to subscribe to the Baidu cloud data, and it can also send data to the device directly to achieve remote control.
    The following sections describe in detail how to use the SLN-LOCAL/2-IOT SDK demo to realize customized Chinese wake words and voice commands, and how to remotely control the MIMXRT1060-EVK through the Baidu cloud.
2 Platform setup
2.1 Used platform
SLN-LOCAL-IOT/SLN-LOCAL2-IOT
MIMXRT1060-EVK
MQTT.fx
SDK_2_8_0_SLN-LOCAL2-IOT
MCUXpresso IDE
Segger JLINK
Baidu Smart Cloud: Baidu cloud IoT control + TTS
Audacity: audio file format conversion tool
WAVToCode: converts WAV files to C array code, used for playing the demo title
MCUBootUtility: used to burn the feedback audio files to the filesystem
Cyberon DSMT: wake word and voice command generation tool
DSMT is the key tool for wake word and voice command detection; the application flow is: Pic 2
2.2 Baidu Smart cloud
2.2.1 Baidu cloud IoT control system
Enter the IoT Hub: https://cloud.baidu.com/product/iot.html
    Click "use now".
2.2.1.1 Create a device project
Create a project, select the device type, and enter the project name. Device-type projects can use shadows as cloud-side images of the devices, so you can see directly how the data is changing. Once created, an endpoint is generated, along with the corresponding address: Pic 3
2.2.1.2 Create a Thing model
The Thing model mainly establishes the various properties needed in the shadow, such as temperature, humidity and other variables, together with their value types; in fact these are the JSON items used in the actual MQTT communication.
Click the newly created device-type project where you can create a new thing model or shadow: Pic 4    Here create 3 attributes:LEDstatus,humid,temp It is used to represent the led status, humidity, temperature and so on, which is convenient for communication and control between the cloud and RT board. Once created, you get the following picture:   Pic 5   2.2.1.3 Create Thing shadow In the device-type project, you can select the shadow, build your own shadow platform, enter the name, and select the object model as the newly created Thing model containing three properties, after the create, we can get the details of the shadow:   Pic 6 At the same time will also generate the shadow-related address, names and keys, my test platform situation is as follows: TCP Address: tcp://rndrjc9.mqtt.iot.gz.baidubce.com:1883 SSL Address: ssl://rndrjc9.mqtt.iot.gz.baidubce.com:1884 WSS Address: wss://rndrjc9.mqtt.iot.gz.baidubce.com:443 name: rndrjc9/RT1060BTCDShadow key: y92ewvgjz23nzhgn Port 1883, does not support transmission data encryption Port 1884, supports SSL/TLS encrypted transmission Port 8884, which supports wesockets-style connections, also contains SSL encryption. This article uses a 1883 port with no transmission data encryption for easy testing. So far, Baidu cloud device-type cloud shadow has been completed, the following can use MQTTfx tools to connect and test. In practice, it is recommended that customers build their own Baidu cloud connection, the above user key is for reference only.   2.2.2 Online TTS    SLN-LOCAL/2-IOT board recognizes wake-up words, recognition words, or when powering on, you need to add corresponding demo audio, such as: "百度云端语音测试demo ", "小恩来啦!请吩咐“,"好的". These words need to do a text-to-wav audio file synthesis, here is Baidu Smart Cloud's online TTS function, the specific operation can refer to the following documents: https://ai.baidu.com/ai-doc/SPEECH/jk38y8gno   Once the base audio library is opened, use the main.py provided in the link above and modify it to add the Chinese field you want to convert to the file "TEXT" and add the audio file to be converted in "save_file" such as xxx .wav, using the command: python main.py to complete the conversion, and generate the audio format corresponding to the text, such as .mp3, .wav. Pic 7   After getting the wav file, it can’t be used directly, we need to note that for SLN-LOCAL/2-IOT board, you need to identify the audio source of the 48K sample rate with 16bit, so we need to use the Audioacity Audio tool to convert the audio file format to 48K16bit wav. Import 16K16bit wav files generated by Baidu TTS into the Audioacity tool, select project rate of 48Khz, file->export->export as WAV, select encoding as signed 16bit PCM, and regenerate 48Khz16bit wav for use. Pic 8 “百度云端语音测试demo“:Used for power-on broadcasting, demo name broadcasting, it is stored in RT demo code, so you need to convert it to a 16bit C code array and add it to the project. "小恩来啦!请吩咐",“好的“:voice detect feedback, it is saved in the filesystem ZH01,ZH02 area. 2.3 playback audio data prepare and burn   There are two playback audio file, it is "小恩来啦!请吩咐",“好的“,it is saved in the filesystem ZH01,ZH02 area. 
Filesystem memory map like this: Pic 9 So, we need to convert the 48K16bit wav file to the filesystem needed format, we need to use the official tool::Ivaldi_sln_local2_iot Reference document:SLN-LOCAL2-IOT-DG chapter 10.1 Generating filesystem-compatible files Use bash input the commands like the following picture: Pic10 Use the convert command to get the playback bin file: python file_format.py -if xiaoencoming_48k16bit.wav -of xiaoencoming_48k16bit.bin -ft H At last, it will generate the file: "小恩来啦!请吩咐"->xiaoencoming_48k16bit.bin,burn to flash address 0x6184_0000 “好的”->OK_48k16bit.bin, burn to flash address 0x6180_0000 Then, use MCUBootUtility tool burn the above two file to the related images. Here, take OK_48k16bit.bin as an example, demo enter the serial download mode(J27-0), power off and power on. Flash chip need to select hyper flash IS26KSXXS, use the boot device memory windows, write button to burn the .bin file to the related address, length is 0X40000 Pic11 Pic12 xiaoencoming_48k16bit.bin can use the same method to download to 0x6184_0000,Length is 0X40000.   2.4 Demo audio prepare and add The prepared baiduclouddemo_48K16bit.wav(“百度云端语音测试demo “) need to convert to the 16bit C array code, and put to the project code, calls by the code, this is used for the demo mode play. The convert need to use the WAVToCode, the operation like this: Pic 13 The generated baiducloulddemo_48K16bit.c,add it to the demo project C files: sln_local_iot_local_demo->audio->demos->smart_home.c。 2.5 WW and VC prepare Wake-up word are generated through the cyberon DSMT tool, which supports a wide range of language, customers can request the tool through Figure 2. The Chinese wake-up words and voice command words in this article are also generated through DSMT. DSMT can have multiple groups, group1 as a wake-up word configuration, CmdMapID s 1. Other groups act as voice command words, such as CMD-IOT in this article, cmdMapID=2. Pic 14   Pic 15 Wake word continuously detects the input audio stream, uses group1, and if successfully wakes up, will do the voice command detection uses group2, or other identifying groups as well as custom groups. The wake-up words using the DSMT tool, the configuration are as follows: Pic 16 The WW can support more words, customer can add the needed one in the group 1. Use the DSMT configure VC like this: Pic 17 Then, save the file, code used file are: _witMapID.bin, CMD_IOT.xml,WW.xml. In the generated files, CYBase.mod is the base model, WW.mod is the WW model, CMD_IOT.mod is the VC model. After Pic 16,17, it finishes the WW and VC command prepare, we can put the DSMT project to the RT106S demo project folder: sln_local2_iot_local_demo\local_voice\oob_demo_zh 3 Code prepare Based on the official SLN-LOCAL2-IOT SDK local_demo, the code in this article modifies the Chinese wake-up words and recognition words (or you can build a new customer custom group directly), add local voice detect the led status operations, Then feedback Chinese audio, demo Chinese audio, Wifi network communication MQTT protocol code, and Baidu cloud shadow connection publish. 
Source reference code SDK path: SDK_2_8_0_SLN-LOCAL2-IOT\boards\sln_local2_iot\sln_voice_examples\local_demo   SDK_2_8_0_SLN-LOCAL2-IOT\boards\sln_local2_iot\sln_boot_apps SLN-LOCAL2-IOT and SLN-LOCAL-IOT code are nearly the same, the only difference is that the ASR library file is different, for RT106S (SLN-LOCAL2-IOT) using SDK it’s own libsln_asr.a library, for RT106L (SLN-LOCAL-IOT) need to use the corresponding libsln_asr_eval.a library.    Importing code requires three projects: local_demo, bootloader, bootstrap. The three projects store in different spaces. See SLN-LOCAL2-IOT-DG .pdf, chapter 3.3 Device memory map    This is the 3 chip project boot process: Pic 18 This document is for demo testing and requires debug, so this article turns off the encryption mechanism, configures bootloader, bootstrap engineering macro definition: DISABLE_IMAGE_VERIFICATION = 1, and uses JLINK to connect SLN-LOCAL/2-IOT's SWD interface to burn code. The following is to add modification code for app local_demo projects. 3.1 sln-local/2-iot code Sln-local-iot, sln-local2-iot platform, the following modification are the same for the two platform. 3.1.1 Voice recognition related code 1)Demo audio play Play content:“百度云端语音测试demo“ sln_local2_iot_local_demo_xe_ledwifi\audio\demos\ smart_home.c content is replaced by the previously generated baiducloulddemo_48K16bit.C. audio_samples.h,modify: #define SMART_HOME_DEMO_CLIP_SIZE 110733 This code is used for the main.c announce_demo API play:         case ASR_CMD_IOT:             ret = demo_play_clip((uint8_t *)smart_home_demo_clip, sizeof(smart_home_demo_clip));   2)command print information #define NUMBER_OF_IOT_CMDS      7 IndexCommands.h static char *cmd_iot_en[] = {"Red led on", "Red led off", "Green led on", "Green led off",                              "cycle led",        "remote led on",         "remote led off"}; static char *cmd_iot_zh[] = {"开红灯", "关红灯", "开绿灯", "关绿灯", "灯闪烁", "开远程灯", "关远程灯"}; Here is the source code modification using IOT, you can actually add your own speech recognition group directly, and add the relevant command identification.   3)sln_local_voice.c Line757 , add led-related notification information in ASR_CMD_IOT mode. oob_demo_control.ledCmd = g_asrControl.result.keywordID[1];     The code is used to obtain the recognized VC command data, and the value of keywordID[1] represents the number. This number can let the code know which detail voice is detected. so that you can do specific things in the app based on the value of ledcmd. The value of keywordID[1] corresponds to Command List in Figure 17. For example, “开远程灯“, if woke up, and recognized "开远程灯", then keywordID[1] is 5, and will transfer to oob_demo_control.ledCmd, which will be used in the appTask API to realize the detail control. 4) main.c void appTask(void *arg) Under case kCommandGeneric: if the language is Chinese, then add the recognition related control code, at first, it will play the feedback as “好的”. Then, it will check the voice detect value, give the related local led control. 
else if (oob_demo_control.language == ASR_CHINESE) { // play audio "OK" in Chinese #if defined(SLN_LOCAL2_RD) ret = audio_play_clip((uint8_t *)AUDIO_ZH_01_FILE_ADDR, AUDIO_ZH_01_FILE_SIZE); #elif defined(SLN_LOCAL2_IOT) ret = audio_play_clip(AUDIO_ZH_01_FILE); #endif //kerry add operation code==================================================begin RGB_LED_SetColor(LED_COLOR_OFF); if (oob_demo_control.ledCmd == LED_RED_ON) { RGB_LED_SetColor(LED_COLOR_RED); vTaskDelay(5000); } else if (oob_demo_control.ledCmd == LED_RED_OFF) { RGB_LED_SetColor(LED_COLOR_OFF); vTaskDelay(5000); } else if (oob_demo_control.ledCmd == LED_BLUE_ON) { RGB_LED_SetColor(LED_COLOR_BLUE); vTaskDelay(5000); } else if (oob_demo_control.ledCmd == LED_BLUE_OFF) { RGB_LED_SetColor(LED_COLOR_OFF); vTaskDelay(5000); } else if (oob_demo_control.ledCmd == CYCLE_SLOW) { for (int i = 0; i < 3; i++) { RGB_LED_SetColor(LED_COLOR_RED); vTaskDelay(400); RGB_LED_SetColor(LED_COLOR_OFF); RGB_LED_SetColor(LED_COLOR_GREEN); vTaskDelay(400); RGB_LED_SetColor(LED_COLOR_OFF); RGB_LED_SetColor(LED_COLOR_BLUE); vTaskDelay(400); } } … } In addition to local voice recognition control, this article also add remote control functions, mainly through wifi connection, use the mqtt protocol to connect Baidu cloud server, when local speech recognition get the remote control command, it publish the corresponding control message to Baidu cloud, and then the cloud send the message to the client which subscribe this message,  after the client get the message, it will refer to the message content do the related control.   3.1.3 Network connection code 1)sln_local2_iot_local_demo_xe_ledwifi\lwip\src\apps\mqtt     Add mqtt.c 2)sln_local2_iot_local_demo_xe_ledwifi\lwip\src\include\lwip\apps Add mqtt.h, mqtt_opts.h,mqtt_prv.h The related mqtt driver is from the RT1060 sdk, which already added in the attachment project. 3)sln_tcp_server.c   Add MQTT application layer API function code, client ID, server host, MQTT server port number, user name, password, subscription topic, publishing topic and data, etc., more details, check the attachment code.    The MQTT application code is ported from the mqtt project of the RT1060 SDK and added to the sln_tcp_server.c. TCP_OTA_Server function is used to initialize the wifi network, realize wifi connection, connect to the network, resolve Baidu cloud server URL to get IP, and then connect Baidu cloud server through mqtt, after the successful connection, publish the message at first, so that after power-up through mqttfx to see whether the power on network publishing message is successful. 
TCP_OTA_Server function code is as follows: static void TCP_OTA_Server(void *param) //kerry consider add mqtt related code { err_t err = ERR_OK; uint8_t status = kCommon_Failed; #if USE_WIFI_CONNECTION /* Start the WiFi and connect to the network */ APP_NETWORK_Init(); while (status != kCommon_Success) { status_t statusConnect; statusConnect = APP_NETWORK_Wifi_Connect(true, true); if (WIFI_CONNECT_SUCCESS == statusConnect) { status = kCommon_Success; } else if (WIFI_CONNECT_NO_CRED == statusConnect) { APP_NETWORK_Uninit(); /* If there are no credential in flash delete the TPC server task */ vTaskDelete(NULL); } else { status = kCommon_Failed; } } #endif #if USE_ETHERNET_CONNECTION APP_NETWORK_Init(true); #endif /* Wait for wifi/eth to connect */ while (0 == get_connect_state()) { /* Give time to the network task to connect */ vTaskDelay(1000); } configPRINTF(("TCP server start\r\n")); configPRINTF(("MQTT connection start\r\n")); mqtt_client = mqtt_client_new(); if (mqtt_client == NULL) { configPRINTF(("mqtt_client_new() failed.\r\n");) while (1) { } } if (ipaddr_aton(EXAMPLE_MQTT_SERVER_HOST, &mqtt_addr) && IP_IS_V4(&mqtt_addr)) { /* Already an IP address */ err = ERR_OK; } else { /* Resolve MQTT broker's host name to an IP address */ configPRINTF(("Resolving \"%s\"...\r\n", EXAMPLE_MQTT_SERVER_HOST)); err = netconn_gethostbyname(EXAMPLE_MQTT_SERVER_HOST, &mqtt_addr); configPRINTF(("Resolving status: %d.\r\n", err)); } if (err == ERR_OK) { configPRINTF(("connect to mqtt\r\n")); /* Start connecting to MQTT broker from tcpip_thread */ err = tcpip_callback(connect_to_mqtt, NULL); configPRINTF(("connect status: %d.\r\n", err)); if (err != ERR_OK) { configPRINTF(("Failed to invoke broker connection on the tcpip_thread: %d.\r\n", err)); } } else { configPRINTF(("Failed to obtain IP address: %d.\r\n", err)); } int i=0; /* Publish some messages */ for (i = 0; i < 5;) { configPRINTF(("connect status enter: %d.\r\n", connected)); if (connected) { err = tcpip_callback(publish_message_start, NULL); if (err != ERR_OK) { configPRINTF(("Failed to invoke publishing of a message on the tcpip_thread: %d.\r\n", err)); } i++; } sys_msleep(1000U); } vTaskDelete(NULL); } Please note the following published json data, it can’t be publish directly in the code. {   "reported": {     "LEDstatus": false,     "humid": 88,     "temp": 22   } } Which need to use this web https://www.bejson.com/ realize the json data compression and convert: {\"reported\" : {     \"LEDstatus\" : true,     \"humid\" : 88,     \"temp\" : 11    } }   4)main appTask Under case kCommandGeneric: , if the language is Chinese, then add the corresponding voice recognition control code. "开远程灯": turn on the local yellow light, publish the “remote led on” mqtt message to Baidu cloud, control remote 1060EVK board lights on. "关远程灯": turn on the local white light, publish the “remote led off” mqtt message to Baidu cloud, control the remote 1060EVK board light off. 
Related operation code: else if (oob_demo_control.ledCmd == LED_REMOTE_ON) { RGB_LED_SetColor(LED_COLOR_YELLOW); vTaskDelay(5000); err_t err = ERR_OK; err = tcpip_callback(publish_message_on, NULL); if (err != ERR_OK) { configPRINTF(("Failed to invoke publishing of a message on the tcpip_thread: %d.\r\n", err)); } } else if (oob_demo_control.ledCmd == LED_REMOTE_OFF) { RGB_LED_SetColor(LED_COLOR_WHITE); vTaskDelay(5000); err_t err = ERR_OK; err = tcpip_callback(publish_message_off, NULL); if (err != ERR_OK) { configPRINTF(("Failed to invoke publishing of a message on the tcpip_thread: %d.\r\n", err)); } } 3.2 MIMXRT1060-EVK code The main function of the MIMXRT1060-EVK code is to configure another client in the cloud, subscribe to the message published by SLN-LOCAL/2-IOT which detect the remote command, and then the LED on the control board is used to test the voice recognition remote control function, this code is based on Ethernet, through the Ethernet port on the board, to achieve network communication, and then use mqtt to connect baidu cloud, and subscribe the message from local2, This enables the reception and execution of the Local2 command. the network code part is similar to SLN-LOCAL2-IOT board network code, the servers, cloud account passwords, etc. are all the same, the main function is to subscribe messages. See the code from attachment RT1060, lwip_mqtt_freertos.c file. When receives data published by the server, it needs to do a data analysis to get the status of the led light and then control it. Normal data from Baidu cloud shadow sent as follows Received 253 bytes from the topic "$baidu/iot/shadow/RT1060BTCDShadow/update/accepted": "{"requestId":"2fc0ca29-63c0-4200-843f-e279e0f019d3","reported":{"LEDstatus":false,"humid":44,"temp":33},"desired":{},"lastUpdatedTime":{"reported":{"LEDstatus":1635240225296,"humid":1635240225296,"temp":1635240225296},"desired":{}},"profileVersion":159}" Then you need to parse the data of LEDstatus from the received data, whether it is false or true. Because the amount of data is small, there is no json-driven parsing here, just pure data parsing, adding the following parsing code to the mqtt_incoming_data_cb function: mqtt_rec_data.mqttindex = mqtt_rec_data.mqttindex + len; if(mqtt_rec_data.mqttindex >= 250) { PRINTF("kerry test \r\n"); PRINTF("idex= %d", mqtt_rec_data.mqttindex); datap = strstr((char*)mqtt_rec_data.mqttrecdata,"LEDstatus"); if(datap != NULL) { if(!strncmp(datap+11,strtrue,4))//char strtrue[]="true"; { GPIO_PinWrite(GPIO1, 3, 1U); //pull high PRINTF("\r\ntrue"); } else if(!strncmp(datap+11,strfalse,5))//char strfalse[]="false"; { GPIO_PinWrite(GPIO1, 3, 0U); //pull low PRINTF("\r\nfalse"); } } mqtt_rec_data.mqttindex =0; It use the strstr search the “LEDstatus“ in the received data, and get the pointer position, then add the fixed length to get the LED status is true or flash. If it is true, turn on the led, if it is false, turn off the led. 4 Test Result    This section gives the test results and video of the system. Before testing the voice function, first use MQTTfx to test baidu cloud connection, release, subscription is no problem, and then test sln-local2-iot combined with mimxrt1060-evk voice wake-up recognition and remote control functions.    
For SLN-LOCAL2-IOT wifi hotspot join, enter the command in the print terminal: setup AWS kerry123456   4.1 MQTT.fx test baidu cloud connection MQTT.fx is an EclipsePaho-based MQTT client tool written in the Java language that supports subscription and publishing of messages through Topic.    4.1.1 MQTT fx configuration     Download and install the tool, then open it, at first, need to do the configuration, click edit connection: Pic19 Profile name:connect name Profile type: MQTT broker Broker address: It is the baidu could generated broker address, with 1883 no encryption transfer. Broker port:1883 No encryption Client ID: RT1060BTCDShadow, here need to note, this name should be the same as the could shadow name, otherwise, on the baidu webpage, the connection is not be detected. If this Client ID name is the same as the shadow name, then when the MQTT fx connect, the online side also can see the connection is OK. User credentials: add the thing User name and password from the baidu cloud. After the configuration, click connect, and refresh the website. Before conection: Pic 20 After connection: Pic 21 4.1.2 MQTT fx subscribe When it comes to subscription publishing, what is the topic of publishing subscriptions?  Here you can open your thing shadow, select the interaction, and see that the page has given the corresponding topic situation: Pic 22 Subscribe topic is: $baidu/iot/shadow/RT1060BTCDShadow/update/accepted  Publish topic is: $baidu/iot/shadow/RT1060BTCDShadow/update Pic 23 Click subscribe, we can see it already can used to receive the data.   4.1.3 MQTT fx publish Publish need to input the topic: $baidu/iot/shadow/RT1060BTCDShadow/update It also need to input the content, it will use the json content data. Pic 24 Here, we can use this json data: {   "reported" : {     "LEDstatus" : true,     "humid" : 88,     "temp" : 11    } } The json data also can use the website to check the data: https://www.bejson.com/jsonviewernew/ Pic 25 Input the publish data, and click pubish button: Pic 26 4.1.4 Publish data test result   Before publish, clean the website thing data: Pic 27 MQTT fx publish data, then check the subscribe data and the website situation: Pic 28 We can see, the published data also can be see in the website and the mqttfx subscribe area. Until now, the connection, data transfer test is OK.   4.2 Voice recognition and remote control test This is the device connection picture: Pic 29 4.2.1 voice recognition local control Pic 30 This is the SLN-LOCAL2-IOT print information after recognize the voice WW and VC. Red led on: led cycle: 4.2.2 voice recognition remote control   Following test, wakeup + remote on, wakeup+remote off, and also give the print result and the video. Pic 31 remote control:  
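As a footnote to the JSON handling discussed in section 3.1.3: the shadow document cannot be pasted into the C source as-is, the quotes have to be escaped into a string literal. The sketch below shows one way such a reported-state payload could be defined and filled at run time; the buffer size and the build_shadow_payload() name are illustrative assumptions, not the exact demo code.

#include <stdbool.h>
#include <stdio.h>

/* Escaped form of the Baidu IoT shadow "reported" document, ready to be
 * handed to the MQTT publish call. The %s/%d fields are filled at run time. */
static char shadow_payload[128];

static int build_shadow_payload(bool led_on, int humid, int temp)
{
    return snprintf(shadow_payload, sizeof(shadow_payload),
                    "{\"reported\":{\"LEDstatus\":%s,\"humid\":%d,\"temp\":%d}}",
                    led_on ? "true" : "false", humid, temp);
}

The resulting string matches the compressed/escaped form shown earlier and would be published to the $baidu/iot/shadow/<shadow name>/update topic.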
The i.MXRT1050 MCU supports a 10M/100M Ethernet MAC. Nowadays, the LAN8720A is a very common PHY used in many networking designs. In this document, I will show you how to use the LAN8720A with i.MXRT1050.
1. Schematic
In this design example,
ENET_RST is connected to GPIO_AD_B1_04
ENET_INT is connected to GPIO_AD_B0_15
2. Source code modification
In the i.MXRT1050 SDK, the source code files of the PHY driver are fsl_phy.c and fsl_phy.h. The registers of the LAN8720A need to be added into the source code. Below are the registers of the LAN8720A; the details can be found in the LAN8720A datasheet. (The modified fsl_phy.c and fsl_phy.h are attached.)
In pinmux.c, modify the GPIO mux setting of ENET_INT and ENET_RST:
IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B1_04_GPIO1_IO20, 0U);
IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_15_GPIO1_IO15, 0U);
IOMUXC_SetPinConfig(IOMUXC_GPIO_AD_B1_04_GPIO1_IO20, 0xB0A9u);
IOMUXC_SetPinConfig(IOMUXC_GPIO_AD_B0_15_GPIO1_IO15, 0xB0A9u);
This is the part of the source code that resets the PHY in the main() function:
gpio_pin_config_t gpio_config = {kGPIO_DigitalOutput, 0, kGPIO_NoIntmode};
GPIO_PinInit(GPIO1, 20, &gpio_config);
GPIO_PinInit(GPIO1, 15, &gpio_config);
GPIO_WritePinOutput(GPIO1, 15, 1);
GPIO_WritePinOutput(GPIO1, 20, 0);
delay();
GPIO_WritePinOutput(GPIO1, 20, 1);
For more example code, please refer to the demo_apps/lwip examples in the i.MXRT SDK package.
Reference:
i.MXRT1050 web page: i.MX RT1050 MCU/Applications Crossover Processor | Arm® Cortex®-M7 @600 MHz, 512KB SRAM | NXP
MCUXpresso SDK web page: MCUXpresso SDK | NXP
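One note on the reset sequence above: delay() is not defined in the snippet. A minimal sketch of what it could look like is shown below. It assumes SDK_DelayAtLeastUs() from fsl_common.h is available (it is present in recent MCUXpresso SDK versions), and the 10 ms pulse width is an assumption; check the LAN8720A datasheet for the required minimum reset assertion time.

#include "fsl_common.h"

/* Hold the PHY reset line low long enough for the LAN8720A to latch it.
 * 10 ms is an assumed, comfortably long value; verify the minimum reset
 * pulse width against the LAN8720A datasheet for your design.
 * SystemCoreClock must reflect the actual core frequency.              */
static void delay(void)
{
    SDK_DelayAtLeastUs(10000U, SystemCoreClock);
}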
Recently, we have often seen customers using i.MXRT devices for RS-485 communication, and most of the problems are about switching the transceiver between the receive and transmit directions. Taking the i.MXRT1050 and the SN65HVD11QDR as examples, this document introduces an LPUART-to-RS485 circuit and the methods of transceiver direction control. The working principle is as follows:
LPUART TXD: Transmit Data
LPUART RXD: Receive Data
LPUART RTS_B: Request To Send
The main control methods are as follows:
1  Use the TXD signal line for automatic hardware transceiver control
According to the UART protocol, TX is logic high when the line is idle. After the NOT gate, a LOW level is applied to the direction control pin, so when the UART is not transmitting data, the RS-485 transceiver is in the receiving state.
2  Use GPIO control & LPUART_RTS
For more detailed information, users can refer to the link: https://www.nxp.com/docs/en/application-note/AN12679.pdf
Note: When using GPIO control, the software needs to judge the timing of receiving and transmitting. If the switching is not handled well, it is easy to lose data. To control it well, the software must respond to the TX FIFO "empty" interrupt, or poll the transmit status register, and accurately time the direction switch so that sending and receiving work without errors (see the sketch after this article).
Combining the above methods, some customers use the following control scheme:
Best Regards
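Following up on method 2 above, here is a minimal sketch of GPIO-controlled direction switching written against the MCUXpresso SDK LPUART and GPIO drivers. The GPIO instance/pin driving the transceiver DE/RE input and the LPUART instance are assumptions for illustration; replace them with the ones used in your schematic.

#include "fsl_lpuart.h"
#include "fsl_gpio.h"

/* Assumed board wiring for illustration: DE/RE of the RS-485 transceiver
 * driven by GPIO1 pin 20, data on LPUART3. Adjust to your hardware.     */
#define RS485_DIR_GPIO  GPIO1
#define RS485_DIR_PIN   20U
#define RS485_LPUART    LPUART3

/* Send a buffer over RS-485: drive the transceiver to transmit, send,
 * wait until the last bit has physically left the shift register, then
 * switch back to receive so the response is not lost.                  */
void RS485_Send(const uint8_t *data, size_t length)
{
    GPIO_PinWrite(RS485_DIR_GPIO, RS485_DIR_PIN, 1U);   /* driver enabled  */

    LPUART_WriteBlocking(RS485_LPUART, data, length);   /* push data out   */

    /* Wait for Transmission Complete, not just TX-FIFO-empty: TC is set
     * only after the stop bit of the last byte has been shifted out.    */
    while ((LPUART_GetStatusFlags(RS485_LPUART) &
            (uint32_t)kLPUART_TransmissionCompleteFlag) == 0U)
    {
    }

    GPIO_PinWrite(RS485_DIR_GPIO, RS485_DIR_PIN, 0U);   /* back to receive */
}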
There are two new LCD panels that are now available for i.MX RT EVKs: The original RK043FN02H-CT panel is being replaced with the newer RK043FN66HS-CTG panel, which will affect the following EVKs: i.MX RT1050 i.MX RT1060 i.MX RT1064   The original RK055HDMIPI4M panel is being replaced with the newer RK055HDMIPI4MA0 panel, which will affect the following EVKs: i.MX RT595 i.MX RT1160 i.MX RT1170   These changes are due to the previous panels being EOL by the LCD panel manufacturer. These new LCDs have the same dimensions and screen size as their original versions (4.3” 480x272 and 5.5” 720x1280 respectively) and the physical connections are the same. The version name can be found on the back of the LCD. However there are modifications to the software that may need to be made or else the LCD panel will be dark or blank when running MCUXpresso SDK demos on i.MXRT EVKs. This updated code is already available in the latest MCUXpresso SDK and SDK demos are configured by default to use the new panels.   For the i.MX RT1050/1060/1064 panel RK043FN66HS-CTG: The touch controller has changed and the SDK software has been modified to support the new touch controller. The LCD panel also has slightly different specs but the same code used for the original LCD panel will also work with the new LCD panel, so no change is necessary for display-only demos.  LCD demos are configured to support the new panel by default in the latest MCUXpresso SDK. So if you have the original panel you will need to change in the SDK code from      #define DEMO_PANEL  DEMO_PANEL_RK043FN66HS    //new panel (default SDK setting)           to       #define DEMO_PANEL  DEMO_PANEL_RK043FN02H     //older panel   For the i.MX RT595/RT1160/RT1170 panel RK055HDMIPI4MA0: Both the touch and display SDK software had to be updated to support this new panel. LCD demos are configured to support the new panel by default in the latest MCUXpresso SDK. So if you have the original panel you will need to change in the SDK code from:       #define DEMO_PANEL DEMO_PANEL_RK055MHD091    //new panel (default SDK setting)           to       #define DEMO_PANEL DEMO_PANEL_RK055AHD091    //older panel
There is an issue with the DCD file used in the SDK 2.9.0 release for the i.MX RT1170 processor. When the included DCD file is used in a project to configure the SDRAM memory on the EVK, the refresh for the memory is not enabled. This can lead to corruption/data loss over time.   To fix the problem, replace the dcd.c file in your project with the attached file instead.   We are working on a fix, and a new revision of the SDK will be released soon.
i.MX RT1050 is the first set of processors in NXP's crossover processor family, combining the high-performance and high level of integration on an applications processors with the ease of use and real-time functionality of a microcontroller. As the first device in a new family, we have had some learning and improvements that have come along the way. There have been some changes and improvements to the processor and also our enablement for the device. This can result in some revisions of hardware and software not being directly compatible with each other out of the box. In particular, some software that has been released for the A0 silicon revision (found on EVK boards) doesn't run on the A1 silicon revision (EVKB boards). In order to minimize the risk of compatibility issues, we recommend that all customers move to SDK 2.3.1 or higher. The SDK 2.3.1 is listed as supporting the EVKB hardware specifically, but the SDK is compatible with the EVK (non-B) hardware. We also recommend that customers using the DAPLink firmware for the OpenSDA debugging circuit built into the EVK/EVKB update to the latest version available on the www.nxp.com/opensda site. The flashloader package has also been updated. Rev 1.1 or later should be used (Flashloader i.MX-RT1050). There are many application notes available for RT1050. Many of these application notes were written based on the original silicon revision and early releases of enablement software. We are in the process of reviewing the published application notes and application note software to prioritize updating them where needed based on the latest enablement and recommendations. If you are in a situation where you need to use SDK 2.3.0 on A1 silicon, the most likely problem area involves some new clock gate bits that were added on the A1 silicon revision. These bits weren't present on the A0 silicon, so SDK 2.3.0 will clear them which disables external memory interfaces. If you comment out  the call to BOARD_BootClockGate() that is in the BOARD_BootClockRUN function (found in the clock_config.c file), that should allow the SDK 2.3.0 software to run on an A1 silicon/EVKB. For more information: MCUXpresso SDK RT1050 migration app note  i.MX RT1050 CMSIS-DAP drag-and-drop programming 
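For reference, the workaround described above amounts to the change sketched below inside clock_config.c. The surrounding contents of BOARD_BootClockRUN() are generated by the tools and differ per project, so treat this only as an illustration of where the call is commented out, not as the actual generated code.

/* clock_config.c (tool-generated) - illustrative fragment only */
void BOARD_BootClockRUN(void)
{
    /* ... generated clock setup code ... */

    /* Workaround for running SDK 2.3.0 on A1 silicon (EVKB boards):
     * the clock gate bits touched by this call do not exist on A0, and
     * on A1 clearing them disables the external memory interfaces.
     * Commenting the call out lets SDK 2.3.0 run on A1 silicon.        */
    /* BOARD_BootClockGate(); */

    /* ... remaining generated clock setup code ... */
}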
Get 500 MHz for just $1 with NXP's new i.MX RT1010 crossover MCU. Targeted for a variety of applications, this video highlights two very popular example use-cases for i.MX RT1010: audio and motor control.
1 Introduction    With the quick development of science and technology, the Internet of Things(IoT) is widely used in various areas, such as industry, agriculture, environment, transportation, logistics, security, and other infrastructure. IoT usage makes our lives more colorful and intelligent. The explosive development of the IoT cannot be separated from the cloud platform. At present, there are many types of cloud services on the market, such as Amazon's AWS, Microsoft's Azure, google cloud, China's Alibaba Cloud, Baidu Cloud, OneNet, etc.    Amazon AWS Cloud is a professional cloud computing service that is provided by Amazon. It provides a complete set of infrastructure and cloud solutions for customers in various countries and regions around the world. It is currently a cloud computing with a large number of users. AWS IoT is a managed cloud platform that allows connected devices to easily and securely interact with cloud applications and other devices.    NXP crossover MCU RT product has launched a series of AWS sample codes. This article mainly explains the remote_control_wifi_nxp code in the official MIMXRT1060-EVK SDK as an example to realize the data interaction with AWS IoT cloud, Android mobile APP, and MQTTfx client. The cloud topology of this article is as follows: Fig.1-1 2 AWS cloud operation 2.1 Create an AWS account Prepare a credit card, and then go to the below amazon link to create an AWS account:    https://console.aws.amazon.com/console/home   2.2 Create a Thing    Open the AWS IOT link: https://console.aws.amazon.com/iot    Choose the Things item under manage, if it is the first time usage, customer can choose “register a thing” to create the thing. If it is used in the previous time, customers can click the “create” button in the right corner to create the thing. Choose “create a single thing” to create the new thing, more details check the following picture. Fig. 2-1 Fig.2-2 Fig.2-3 2.3 Create certificate    Create a certificate for the newly created thing, click the “create certificate” button under the following picture: Fig.2-4    After the certificate is built, it will have the information about the certificate created, it means the certificate is generated and can be used. Fig. 2-5 Please note, download files: certificate for this thing, public key, private key. It will be used in the mqttfx tool configuration. Click “A root CA for AWS for Download”, download the root CA for AWS IoT, the mqttfx tool setting will also use it. Open the root CA download link, can download the CA certificate. RSA 2048 bit key: VeriSign Class 3 Public Primary G5 root CA certificate Fig. 2-6 At last, we can get these files: 7abfd7a350-certificate.pem.crt 7abfd7a350-private.pem.key 7abfd7a350-public.pem.key AmazonRootCA1.pem Save it, it will be used later. Click “active” button to active the certificate, and click “Done” button. The policy will be attached later.   2.4 Create Policies     Back to the iot view page: https://console.aws.amazon.com/iot/     Select the policies under Secure item, to create the new policies.  Fig. 2-7 Input the policy name, in the action area, fill: iot:*, Resource ARN area fill: * Check Allow item, click the create button to finish the new policy creation. Fig. 2-8 2.5 Things attach relationship     After the thing, certificate, policies creation, then will attach the policy to the certificate, and attach the certificate to the Things. Fig. 
2-9 Choose the certificates under Secure item, in the related certificate item, choose “…”, you will find the down list, click “attach policy”, and choose the newly created policy. Then click attach thing, choose the newly created thing. Fig. 2-10 Fig. 2-11 Fig. 2-12 Now, open the Things under Mange item, check the detail things related information. Fig.2-13 Double click the thing, in the Interact item, we can find the Rest API Endpoint, the RT code and the mqttfx tool will use this endpoint to realize the cloud connection. Fig. 2-14 Check the security, you will find the previously created certificate, it means this thing already attach the new certificates: Fig. 2-15 Until now, we already finish the Things related configuration, then it will be used for the MQTT fx, Android app, RT EVK board connections, and testing, we also can check the communication information through the AWS shadow in the webpage directly.       3 Android related configuration 3.1 AWS cognito configuration    If use the Android app to communicate with the AWS IoT clould, the AWS side still needs to use the cognito service to authorize the AWS IoT, then access the device shadows. Create a new identity pools at first from the following link: https://console.aws.amazon.com/cognitohttps://console.aws.amazon.com/cognito Fig. 3-1 Click “manage Identity pools”, after enter it, then click “create new identity pool” Fig. 3-2 Fig. 3-3 Fig. 3-4 Here, it will generate two Roles: Cognito_PoolNameAuth_Role Cognito_PoolNameUnauth_Role Click Allow, to finish the identity pool creation. Fig. 3-5 Please record the related Identity pool ID, it will be used in the Android app .properties configuration files. 3.2 Create plicies in IAM for cognito   Open https://console.aws.amazon.com/iam   Click the “policies” item under “access management” Fig. 3-6 Choose “create policy”, create a IAM policies, in the Policy JSON area, write the following content: Fig. 3-7 { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "iot:Connect" ], "Resource": [ "*" ] }, { "Effect": "Allow", "Action": [ "iot:Publish" ], "Resource": [ "arn:aws:iot:us-east-1:965396684474:topic/$aws/things/RTAWSThing/shadow/update", "arn:aws:iot:us-east-1:965396684474:topic/$aws/things/RTAWSThing/shadow/get" ] }, { "Effect": "Allow", "Action": [ "iot:Subscribe", "iot:Receive" ], "Resource": [ "*" ] } ] }‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍ Please note, in the JSON content: "arn:aws:iot:<REGION>:<ACCOUNT ID>:topic/$aws/things/<THING NAME>/shadow/update", "arn:aws:iot:<REGION>:<ACCOUNT ID>:topic/$aws/things/<THING NAME>/shadow/get" Region:the us-east-1 inFig. 3-5 ACCOUNT ID, it can be found in the upper right corner my account side. Fig 3-8 Fig 3-9 After finished the IAM policy creation, then back to IAM policies page, choose Filter policies as customer managed, we can find the new created customer’s policy. Fig. 3-10 3.3 Attach policy for the cognito role in IAM   In IAM, choose roles item: Fig. 3-11 Double click the cognito_PoolNameUnauth_Role which is generated when creating the pool in cognito, click attach policies, select the new created policy. Fig. 3-12 Fig. 3-13 Until now, we already finish the AWS cognito configuration.   
3.4  Android properties file configuration Create a file with .properties, the content is:     customer_specific_endpoint=<REST API ENDPOINT>     cognito_pool_id=<COGNITO POOL ID>     thing_name=<THING NAME>     region=<REGION> Please fill the correct content: REST API ENDPOINT:Fig 2-14 COGNITO POOL ID:fig 3-5 THING NAME:fig 2-14,upper left corner REGION:Fig 3-5, the region data in COGNITO POOL ID Take an example, my properties file content is:  customer_specific_endpoint=a215vehc5uw107-ats.iot.us-east-1.amazonaws.com  cognito_pool_id=us-east-1:c5ca6d11-f069-416c-81f9-fc1ec8fd8de5  thing_name=RTAWSThing  region=us-east-1 In the real usage, please use your own configured data, otherwise, it will connect to my cloud endpoint. 4. MQTTfx configuration and testing MQTT.fx is an MQTT client tool which is based on EclipsePaho and written in Java language. It supports subscribe and publish of messages through Topic. You can download this tool from the following link:   http://mqttfx.jensd.de/index.php/download    The new version is:1.7.1.   4.1 MQTT.fx configuration     Choose connect configuration button, then enter the connection configuration page: Fig. 4-1 Profile Name: Enter the configuration name Broker Address: it is REST API ENDPOINT。 Broker Port:8883 Client ID: generate it freely CA file: it is the downloaded CA certificate file Client Certificate File: related certificate file Client key File: private key file Check PEM formatted。 Click apply and OK to finish the configuration. 4.2 Use the AWS cloud to test connection   In order to test whether it can be connected to the event cloud, a preliminary connection test can be performed. Open the aws page: https://console.aws.amazon.com/iot here is a Test button under this interface, which can be tested by other clients or by itself.Both AWS cloud and MQTTfx subscribe topic: $aws/things/RTAWSThing/shadow/update MQTTfx publishes data to the topic: $aws/things/RTAWSThing/shadow/update It can be found that both the cloud test port and the MQTTfx subscribe can receive data: Fig. 4-2 Below, the Publish data is tested by the cloud, and then you can see that both the MQTTFX subscribe and the cloud subscribe can receive data: Fig. 4-3 Until now, the AWS cloud can transfer the data between the AWS iot cloud and the client. 5 RT1060 and wifi module configuration   We mainly use the RT1060 SDK2.8.0 remote_control_wifi_nxp as the RT test code: SDK_2.8.0_EVK-MIMXRT1060\boards\evkmimxrt1060\aws_examples\remote_control_wifi_nxp Test platform is:MIMXRT1060-EVK Panasonic PAN9026 SDIO ADAPTER + SD to uSD adapter The project is using Panasonic PAN9026 SDIO ADAPTER in default. 5.1 WIFI and the AWS code configuration    The project need the working WIFI SSID and the password, so prepare a working WIFI for it. Then add the SSID and the password in the aws_clientcredential.h #define clientcredentialWIFI_SSID       "Paste WiFi SSID here." #define clientcredentialWIFI_PASSWORD   "Paste WiFi password here." The connection for AWS also in file: aws_clientcredential.h #define clientcredentialMQTT_BROKER_ENDPOINT "a215vehc5uw107-ats.iot.us-east-1.amazonaws.com" #define clientcredentialIOT_THING_NAME       "RTAWSThing" #define clientcredentialMQTT_BROKER_PORT      8883   5.2 certificate and the key configuration Open the SDK following link: SDK_2.8.0_EVK-MIMXRT1060\rtos\freertos\tools\certificate_configuration\CertificateConfigurator.html Fig. 5-1 Generate the new aws_clientcredential_keys.h, and replace the old one. 
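For orientation, a freshly generated aws_clientcredential_keys.h typically looks like the sketch below, with the PEM contents of your own certificate and private key pasted in as escaped string macros. The exact macro set depends on the Amazon FreeRTOS version shipped in the SDK, so take the names here as an assumption and keep whatever the generator tool emits.

/* aws_clientcredential_keys.h - illustrative sketch, not real credentials */
#ifndef AWS_CLIENT_CREDENTIAL_KEYS_H
#define AWS_CLIENT_CREDENTIAL_KEYS_H

#define keyCLIENT_CERTIFICATE_PEM \
    "-----BEGIN CERTIFICATE-----\n" \
    "...base64 data from 7abfd7a350-certificate.pem.crt...\n" \
    "-----END CERTIFICATE-----\n"

#define keyCLIENT_PRIVATE_KEY_PEM \
    "-----BEGIN RSA PRIVATE KEY-----\n" \
    "...base64 data from 7abfd7a350-private.pem.key...\n" \
    "-----END RSA PRIVATE KEY-----\n"

/* Only needed for just-in-time registration (JITR); usually left NULL. */
#define keyJITR_DEVICE_CERTIFICATE_AUTHORITY_PEM    NULL

#endif /* AWS_CLIENT_CREDENTIAL_KEYS_H */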
Take the MCUXPresso IDE project as an example, the file location is: Fig. 5-2 Build the project and download it to the MIMXRT1060-EVK board. 6 Test result Androd mobile phone download and install the APK under this folder: SDK_2.8.0_EVK-MIMXRT1060\boards\evkmimxrt1060\aws_examples\remote_control_android\AwsRemoteControl.apk SDK can be downloaded from this link: Welcome | MCUXpresso SDK Builder  Then, we can use the Android app to remote control the RT EVK on board LED, the test result is 6.1 APP and EVK test result MIMXRT1060-EVK printf information: Fig. 6-1 Turn on and turn off the led:   Fig. 6-2                                        Fig. 6-3 6.2 MQTTfx subscribe result MQTTfx subscribe data Turn on the led, we can subscribe two messages: Fig. 6-4 Fig. 6-5   Turn off the led, we also can subscribe two messages: Fig. 6-6 Fig. 6-7 In the two message, the first one is used to set the led status. The second one is the EVK used to report the EVK led information. MQTTfx also can use the publish page, publish this data: {"state":{"desired":{"LEDstate":1}}} or {"state":{"desired":{"LEDstate":0}}} To topic: $aws/things/RTAWSThing/shadow/update It also can realize the on board LED turn on or off. 6.3 AWS cloud shadows display result Turn on the led: Fig. 6-8 Turn off the led: Fig. 6-9 In conclusion, after the above configuration and testing, it can finish the Android mobile phone to remote control the RT EVK on board LED and get the information. Also can use the MQTTFX client tool and the AWS shadow page to check the communication data.
In this tutorial, I'd like to show the steps of deploying an image classification model on i.MX RT1060, enabling you to classify fashion images and categories.
In the first part of this tutorial, we will review the Fashion MNIST dataset, including how to download it to your system. From there we'll define a simple CNN network using the TensorFlow platform. Next, we'll train our CNN model on the Fashion MNIST dataset and review the results. Finally, we'll optimize the model; after that, the model will be smaller and inference will be faster, which is valuable for resource-limited devices such as MCUs.
Let's go ahead and get started!
Fashion MNIST dataset
The Fashion MNIST dataset was created by the e-commerce company Zalando.
Fig 1 Fashion MNIST dataset
As they note on the official GitHub repo for the Fashion MNIST dataset, there are a few problems with the standard MNIST digit recognition dataset:
It's far too easy for standard machine learning algorithms to obtain 97%+ accuracy.
It's even easier for deep learning models to achieve 99%+ accuracy.
The dataset is overused.
MNIST cannot represent modern computer vision tasks.
Zalando therefore created the Fashion MNIST dataset as a drop-in replacement for MNIST:
60,000 training examples
10,000 testing examples
10 classes: T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Ankle boot
28×28 grayscale images
The code below loads the Fashion-MNIST dataset using TensorFlow and creates a plot of the first 25 images in the training dataset.
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# For easy reset of notebook state.
tf.keras.backend.clear_session()

# Load the dataset
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# Plot the first 25 training images with their class names
plt.figure(figsize=(8,8))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.tight_layout()
    plt.imshow(train_images[i])
    plt.xlabel(class_names[train_labels[i]])
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
plt.show()
Fig 2
Running the code loads the Fashion-MNIST train and test datasets and prints their shapes.
Fig 3
We can see that there are 60,000 examples in the training dataset and 10,000 in the test dataset, and that the images are indeed square, 28×28 pixels.
Creating the model
We need to define a neural network model for the image classification task, and the model should have two main parts: the feature extractor and the classifier that makes a prediction.
Defining a simple Convolutional Neural Network (CNN)
For the convolutional front-end, we build 3 convolution layers with a small filter size (3,3) and a modest number of filters, followed by a max-pooling layer. The last feature map is flattened to provide features to the classifier.
Since this is a multi-class classification task, we require an output layer with 10 nodes in order to predict the probability distribution of an image belonging to each of the 10 classes, which calls for a softmax activation function. Between the feature extractor and the output layer, we add a dense layer to interpret the features. All layers use the ReLU activation function and the He weight initialization scheme, both best practices.
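One step that the article does not show explicitly is the input preparation: the network described above expects a (28, 28, 1) input, while load_data() returns (28, 28) integer images. Below is a minimal preparation sketch; the scaling to [0, 1] and the variable names train_images_p / test_images_p are assumptions, kept separate from the original variables so that the later code in this article is unchanged.

# Minimal preprocessing sketch (assumption, not shown in the original article):
# add a channel dimension and scale pixel values to [0, 1] before training.
train_images_p = train_images.reshape((-1, 28, 28, 1)).astype('float32') / 255.0
test_images_p = test_images.reshape((-1, 28, 28, 1)).astype('float32') / 255.0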
We will use the Adam optimizer to optimize the sparse_categorical_crossentropy loss function, suitable for multi-class classification, and we will monitor the classification accuracy metric, which is appropriate given that we have the same number of examples in each of the 10 classes. The code below defines the model; running it prints the structure of the model.
# Define a Model
model = tf.keras.models.Sequential()

# First convolution, kernel: 16*3*3
model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu',
                                 kernel_initializer='he_uniform', input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))

# Second convolution, kernel: 32*3*3
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))

# Third convolution, kernel: 32*3*3
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform'))

model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(32, activation='relu', kernel_initializer='he_uniform'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
Fig 4
Training the model
After the model is defined, we need to train it. The model will be trained using 5-fold cross-validation. The value k=5 was chosen to provide a baseline for repeated evaluation while not being so large as to require a long running time. Each validation set will be 20% of the training dataset, or about 12,000 examples.
The training dataset is shuffled prior to being split, using a fixed random seed, so that every model we train sees the same train and validation split in each fold, providing an apples-to-apples comparison.
We will train the baseline model for a modest 20 training epochs with a default batch size of 32 examples. The validation set for each fold is used to validate the model during each epoch of the training run, so that we can later create learning curves, and at the end of the run we use the test dataset to estimate the performance of the model. As such, we keep track of the resulting history from each run, as well as the classification accuracy of each fold.
The train_model() function below implements these behaviors, taking the training images and labels as arguments and returning a list of accuracy scores and training histories that can be summarized later.
from sklearn.model_selection import KFold

# train a model using k-fold cross-validation
def train_model(dataX, dataY, n_folds=5):
    scores, histories = list(), list()
    # prepare cross-validation
    kfold = KFold(n_folds, shuffle=True, random_state=1)
    for train_ix, validate_ix in kfold.split(dataX):
        # select rows for train and validation
        trainX, trainY = dataX[train_ix], dataY[train_ix]
        validate_X, validate_Y = dataX[validate_ix], dataY[validate_ix]
        # fit model
        history = model.fit(trainX, trainY, epochs=20, batch_size=32,
                            validation_data=(validate_X, validate_Y), verbose=0)
        # evaluate model
        _, acc = model.evaluate(validate_X, validate_Y, verbose=0)
        print("Accuracy: {:.4f}, total number of figures is {:0>2d}".format(acc * 100.0, len(validate_Y)))
        # append scores
        scores.append(acc)
        histories.append(history)
    return scores, histories
Model summary
After the model has been trained, we can present the results. There are two key aspects to present: diagnostics of the learning behavior of the model during training, and an estimate of the model performance. These can be implemented using separate functions.
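Before moving on to those summary functions, here is a minimal sketch of how the pieces above are typically wired together. The compile call is implied by the optimizer and loss discussion but is not shown in the article, and train_images_p / train_labels refer to the earlier preprocessing sketch; both are assumptions.

# Minimal sketch (assumption): compile settings implied by the text above.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Run the 5-fold cross-validation; the returned lists feed the
# summarize_diagnostics()/summarize_performance() helpers defined below.
scores, histories = train_model(train_images_p, train_labels, n_folds=5)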
First, the diagnostics involve creating a line plot showing model performance on the train and validation sets during each fold of the k-fold cross-validation. These plots are valuable for getting an idea of whether a model is overfitting, underfitting, or has a good fit for the dataset.
We will create a single figure with two subplots, one for loss and one for accuracy. Blue lines indicate model performance on the training dataset and orange lines indicate performance on the hold-out validation dataset. The summarize_diagnostics() function below creates and shows this plot given the collected training histories.
# plot diagnostic learning curves
def summarize_diagnostics(histories):
    for i in range(len(histories)):
        # plot loss
        plt.subplot(2, 1, 1)
        plt.title('Cross Entropy Loss')
        plt.plot(histories[i].history['loss'], color='blue', label='train')
        plt.plot(histories[i].history['val_loss'], color='orange', label='validation')
        # plot accuracy
        plt.subplot(2, 1, 2)
        plt.title('Classification Accuracy')
        plt.plot(histories[i].history['accuracy'], color='blue', label='train')
        plt.plot(histories[i].history['val_accuracy'], color='orange', label='validation')
    plt.show()
Fig 5
Next, the classification accuracy scores collected during each fold can be summarized by calculating the mean and standard deviation. This provides an estimate of the average expected performance of the model, with an estimate of the variance around that mean. We will also summarize the distribution of scores by creating and showing a box-and-whisker plot. The summarize_performance() function below implements this for a given list of scores collected during model training.
# summarize model performance
def summarize_performance(scores):
    # print summary
    print('Accuracy: mean={:.4f} std={:.4f}, n={:0>2d}'.format(
        np.mean(scores) * 100, np.std(scores) * 100, len(scores)))
    # box-and-whisker plot of results
    plt.boxplot(scores)
    plt.show()
Fig 6
Verifying predictions
According to the figure above, the final trained model reaches around 87.6% accuracy on the test dataset. With the trained model, running the code below plots the predictions for some of the test images.
def plot_image(i, predictions_array, true_label, img):
    true_label, img = true_label[i], img[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    plt.imshow(img.reshape(28, 28), cmap=plt.cm.binary)
    predicted_label = np.argmax(predictions_array)
    if predicted_label == true_label:
        color = 'blue'
    else:
        color = 'red'
    plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
                                         100 * np.max(predictions_array),
                                         class_names[true_label]), color=color)

def plot_value_array(i, predictions_array, true_label):
    true_label = true_label[i]
    plt.grid(False)
    plt.xticks(range(10))
    plt.yticks([])
    thisplot = plt.bar(range(10), predictions_array, color="#777777")
    plt.ylim([0, 1])
    predicted_label = np.argmax(predictions_array)
    thisplot[predicted_label].set_color('red')
    thisplot[true_label].set_color('blue')

predictions = model.predict(test_images)

# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows * num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2*num_cols, 2*i+1)
    plot_image(i, predictions[i], test_labels, test_images)
    plt.subplot(num_rows, 2*num_cols, 2*i+2)
    plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
Fig 7
Model quantization
Post-training quantization is a conversion technique that can reduce model size while also improving CPU and hardware accelerator latency, with little degradation in model accuracy. It is especially important for embedded platforms, which lack compute-intensive performance and have very limited Flash and RAM.
TensorFlow Lite can convert an already-trained float TensorFlow model to the TensorFlow Lite format. In addition, TensorFlow Lite provides several approaches to optimize the model. Among them, integer quantization is an optimization strategy that converts 32-bit floating-point numbers (such as weights and activation outputs) to the nearest 8-bit fixed-point numbers. This results in a smaller model and increased inferencing speed, which is very valuable for low-power devices such as microcontrollers.
The code below shows how to implement integer quantization of the trained model. After running it, we can see that the size of the TensorFlow Lite model is reduced by almost 64.9 KB compared to the original model, down to about 32% of the original size (Fig 8).
import os

# Representative dataset used for integer-only quantization
def representative_data_gen():
    for input_value in tf.data.Dataset.from_tensor_slices(
            tf.cast(train_images, tf.float32)).shuffle(500).batch(1).take(150):
        yield [input_value]

# Convert using dynamic range quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model_quant = converter.convert()
# Save the model to disk
open("model_dynamic_range_quantization.tflite", "wb").write(tflite_model_quant)

## Size difference
Dynamic_range_quantization_model_size = os.path.getsize("model_dynamic_range_quantization.tflite")
print("Dynamic range quantization model is %d bytes" % Dynamic_range_quantization_model_size)

# Convert using integer-only quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Ensure that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8 (APIs added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_advanced_quant = converter.convert()
# Save the model to disk
open("model_integer_only_quantization.tflite", "wb").write(tflite_model_advanced_quant)

Integer_only_quantization_model_size = os.path.getsize("model_integer_only_quantization.tflite")
print("Integer_only_quantization_model is %d bytes" % Integer_only_quantization_model_size)
difference = Dynamic_range_quantization_model_size - Integer_only_quantization_model_size
print("Difference is %d bytes" % difference)
Fig 8
Evaluating the TensorFlow Lite model
Now we'll run inference using the TensorFlow Lite Interpreter to compare the model accuracies.
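As an optional side check (not part of the original flow), the interpreter's tensor details can confirm that the integer-only model really uses uint8 inputs and outputs, which is what the helper in the next step relies on when it rescales the input data.

# Optional check (assumption, not in the original article): confirm the quantized
# model's input/output types and quantization parameters.
check_interpreter = tf.lite.Interpreter(model_path="model_integer_only_quantization.tflite")
check_interpreter.allocate_tensors()
inp = check_interpreter.get_input_details()[0]
out = check_interpreter.get_output_details()[0]
print("input dtype:", inp['dtype'], "quantization (scale, zero_point):", inp['quantization'])
print("output dtype:", out['dtype'], "quantization (scale, zero_point):", out['quantization'])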
First, we need a function that runs inference with a given model and images, and then returns the predictions:
# Helper function to run inference on a TFLite model
def run_tflite_model(tflite_file, test_image_indices):
    # Initialize the interpreter
    interpreter = tf.lite.Interpreter(model_path=str(tflite_file))
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()[0]
    output_details = interpreter.get_output_details()[0]
    predictions = np.zeros((len(test_image_indices),), dtype=int)
    for i, test_image_index in enumerate(test_image_indices):
        test_image = test_images[test_image_index]
        test_label = test_labels[test_image_index]
        # Check if the input type is quantized, then rescale input data to uint8
        if input_details['dtype'] == np.uint8:
            input_scale, input_zero_point = input_details["quantization"]
            test_image = test_image / input_scale + input_zero_point
        test_image = np.expand_dims(test_image, axis=0).astype(input_details["dtype"])
        interpreter.set_tensor(input_details["index"], test_image)
        interpreter.invoke()
        output = interpreter.get_tensor(output_details["index"])[0]
        predictions[i] = output.argmax()
    return predictions
Next, we'll compare the performance of the original model and the quantized model on one image:
model_basic_quantization.tflite is the original TensorFlow Lite model with floating-point data.
model_integer_only_quantization.tflite is the model we converted using integer-only quantization (it uses uint8 data for input and output).
Let's create another function to print our predictions and run it for testing.
import matplotlib.pylab as plt

# Change this to test a different image
test_image_index = 1

## Helper function to test the models on one image
def test_model(tflite_file, test_image_index, model_type):
    global test_labels
    predictions = run_tflite_model(tflite_file, [test_image_index])
    plt.imshow(test_images[test_image_index].reshape(28, 28))
    template = model_type + " Model \n True:{true}, Predicted:{predict}"
    _ = plt.title(template.format(true=str(test_labels[test_image_index]), predict=str(predictions[0])))
    plt.grid(False)
Fig 9
Fig 10
Then we evaluate the quantized model using all the test images we loaded at the beginning of this tutorial. After summarizing the prediction results on the test dataset, we can see that the prediction accuracy of the quantized model drops by about 7% compared with the original model, which is not bad.
# Helper function to evaluate a TFLite model on all images
def evaluate_model(tflite_file, model_type):
    test_image_indices = range(test_images.shape[0])
    predictions = run_tflite_model(tflite_file, test_image_indices)
    accuracy = (np.sum(test_labels == predictions) * 100) / len(test_images)
    print('%s model accuracy is %.4f%% (Number of test samples=%d)' % (
        model_type, accuracy, len(test_images)))
Deploying the model
Converting the TensorFlow Lite model to a C file
The following commands run xxd on the quantized model and write the output to a file called model_quantized.cc, in which the model is defined as an array of bytes, and then print it to the screen. The output is very long, so we won't reproduce it all here, but here's a snippet that includes just the beginning and end.
# Save the file as a C source file
xxd -i model_integer_only_quantization.tflite > model_quantized.cc
# Print the source file
cat model_quantized.cc
Fig 11
Deploying the C file to the project
We use the tensorflow_lite_cifar10 demo as a prototype, replace the original model, and make some code modifications; below is the code of the modified main file.
#include "board.h" #include "fsl_debug_console.h" #include "pin_mux.h" #include "timer.h" #include <iomanip> #include <iostream> #include <string> #include <vector> #include "tensorflow/lite/kernels/register.h" #include "tensorflow/lite/model.h" #include "tensorflow/lite/optional_debug_tools.h" #include "tensorflow/lite/string_util.h" #include "get_top_n.h" #include "model.h" #define LOG(x) std::cout // ---------------------------- Application ----------------------------- // Lenet Mnist model input data size (bytes). #define LENET_MNIST_INPUT_SIZE 28*28*sizeof(char) // Lenet Mnist model number of output classes. #define LENET_MNIST_OUTPUT_CLASS 10 // Allocate buffer for input data. This buffer contains the input image // pre-processed and serialized as text to include here. uint8_t imageData[LENET_MNIST_INPUT_SIZE] = { #include "clothes_select.inc" }; /* Tresholds */ #define DETECTION_TRESHOLD 60 /*! * @brief Initialize parameters for inference * * @param reference to flat buffer * @param reference to interpreter * @param pointer to storing input tensor address * @param verbose mode flag. Set true for verbose mode */ void InferenceInit(std::unique_ptr<tflite::FlatBufferModel> &model, std::unique_ptr<tflite::Interpreter> &interpreter, TfLiteTensor** input_tensor, bool isVerbose) { model = tflite::FlatBufferModel::BuildFromBuffer(Fashion_MNIST_model, Fashion_MNIST_model_len); if (!model) { LOG(FATAL) << "Failed to load model\r\n"; return; } tflite::ops::builtin::BuiltinOpResolver resolver; tflite::InterpreterBuilder(*model, resolver)(&interpreter); if (!interpreter) { LOG(FATAL) << "Failed to construct interpreter\r\n"; return; } int input = interpreter->inputs()[0]; const std::vector<int> inputs = interpreter->inputs(); const std::vector<int> outputs = interpreter->outputs(); if (interpreter->AllocateTensors() != kTfLiteOk) { LOG(FATAL) << "Failed to allocate tensors!"; return; } /* Get input dimension from the input tensor metadata assuming one input only */ *input_tensor = interpreter->tensor(input); auto data_type = (*input_tensor)->type; if (isVerbose) { const std::vector<int> inputs = interpreter->inputs(); const std::vector<int> outputs = interpreter->outputs(); LOG(INFO) << "input: " << inputs[0] << "\r\n"; LOG(INFO) << "number of inputs: " << inputs.size() << "\r\n"; LOG(INFO) << "number of outputs: " << outputs.size() << "\r\n"; LOG(INFO) << "tensors size: " << interpreter->tensors_size() << "\r\n"; LOG(INFO) << "nodes size: " << interpreter->nodes_size() << "\r\n"; LOG(INFO) << "inputs: " << interpreter->inputs().size() << "\r\n"; LOG(INFO) << "input(0) name: " << interpreter->GetInputName(0) << "\r\n"; int t_size = interpreter->tensors_size(); for (int i = 0; i < t_size; i++) { if (interpreter->tensor(i)->name) { LOG(INFO) << i << ": " << interpreter->tensor(i)->name << ", " << interpreter->tensor(i)->bytes << ", " << interpreter->tensor(i)->type << ", " << interpreter->tensor(i)->params.scale << ", " << interpreter->tensor(i)->params.zero_point << "\r\n"; } } LOG(INFO) << "\r\n"; } } /*! 
 * @brief Runs inference on the input buffer and prints the result to the console
 *
 * @param pointer to image data
 * @param image data length
 * @param pointer to labels string array
 * @param reference to flat buffer model
 * @param reference to interpreter
 * @param pointer to input tensor
 */
void RunInference(const uint8_t* image, size_t image_len, const std::string* labels,
                  std::unique_ptr<tflite::FlatBufferModel> &model,
                  std::unique_ptr<tflite::Interpreter> &interpreter,
                  TfLiteTensor* input_tensor)
{
    /* Copy image to tensor. */
    memcpy(input_tensor->data.uint8, image, image_len);

    /* Do inference on static image in first loop. */
    auto start = GetTimeInUS();
    if (interpreter->Invoke() != kTfLiteOk)
    {
        LOG(FATAL) << "Failed to invoke tflite!\r\n";
        return;
    }
    auto end = GetTimeInUS();

    const float threshold = (float)DETECTION_TRESHOLD / 100;
    std::vector<std::pair<float, int>> top_results;

    int output = interpreter->outputs()[0];
    TfLiteTensor *output_tensor = interpreter->tensor(output);
    TfLiteIntArray* output_dims = output_tensor->dims;
    // assume output dims to be something like (1, 1, ... , size)
    auto output_size = output_dims->data[output_dims->size - 1];

    /* Find best image candidates. */
    GetTopN<uint8_t>(interpreter->typed_output_tensor<uint8_t>(0), output_size,
                     1, threshold, &top_results, false);

    if (!top_results.empty())
    {
        auto result = top_results.front();
        const float confidence = result.first;
        const int index = result.second;
        if (confidence * 100 > DETECTION_TRESHOLD)
        {
            LOG(INFO) << "----------------------------------------\r\n";
            LOG(INFO) << "     Inference time: " << (end - start) / 1000 << " ms\r\n";
            LOG(INFO) << "     Detected: " << std::setw(10) << labels[index]
                      << " (" << (int)(confidence * 100) << "%)\r\n";
            LOG(INFO) << "----------------------------------------\r\n\r\n";
        }
    }
}

/*!
 * @brief Main function
 */
int main(void)
{
    const std::string labels[] = {"T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
                                  "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"};

    /* Init board hardware. */
    BOARD_ConfigMPU();
    BOARD_InitPins();
    BOARD_BootClockRUN();
    BOARD_InitDebugConsole();
    InitTimer();

    std::unique_ptr<tflite::FlatBufferModel> model;
    std::unique_ptr<tflite::Interpreter> interpreter;
    TfLiteTensor* input_tensor = 0;
    InferenceInit(model, interpreter, &input_tensor, false);

    LOG(INFO) << "Fashion MNIST object recognition example using a TensorFlow Lite model.\r\n";
    LOG(INFO) << "Detection threshold: " << DETECTION_TRESHOLD << "%\r\n";

    /* Run inference on the static clothes image. */
    LOG(INFO) << "\r\nStatic data processing:\r\n";
    RunInference((uint8_t*)imageData, (size_t)LENET_MNIST_INPUT_SIZE, labels, model, interpreter, input_tensor);

    while (1)
    {
    }
}
Testing result
After deploying the model in the demo project, we run the demo on the MIMXRT1060 board (Fig 12) for testing.
Fig 12
1. Run the code below to convert a Fashion MNIST image to text. The process_image() function converts a Fashion MNIST image into an include file as static data; include this file in the demo project.
def process_image(image, output_path, num_batch=1):
    img_data = np.transpose(image, (2, 0, 1))

    # Repeat image for batch processing (resulting tensor is NCHW or NHWC)
    img_data = np.reshape(img_data, (num_batch, img_data.shape[0], img_data.shape[1], img_data.shape[2]))
    img_data = np.repeat(img_data, num_batch, axis=0)
    img_data = np.reshape(img_data, (num_batch, img_data.shape[1], img_data.shape[2], img_data.shape[3]))

    # Serialize image batch
    img_data_bytes = bytearray(img_data.tobytes(order='C'))
    image_bytes_per_line = 20
    with open(output_path, 'wt') as f:
        idx = 0
        for byte in img_data_bytes:
            f.write('0X%02X, ' % byte)
            if idx % image_bytes_per_line == (image_bytes_per_line - 1):
                f.write('\n')
            idx = idx + 1

    # Return serialized image size
    return len(img_data_bytes)
2. Run the demo project on board.
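For reference, here is a short usage sketch for process_image(). The output file name matches the clothes_select.inc include in the main file above; the choice of test image and the uint8 cast are assumptions.

# Usage sketch (assumption): serialize one test image into the include file
# referenced by the demo's imageData buffer.
sample = test_images[0].reshape(28, 28, 1).astype(np.uint8)   # HWC layout expected by process_image()
size = process_image(sample, "clothes_select.inc")
print("Serialized %d bytes" % size)                           # 784 bytes = 28*28*1, matching LENET_MNIST_INPUT_SIZE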
On the i.MX RT1050 EVK web page, there is a very nice "Getting Started" page with videos and steps showing how to use the board.
1. Connect the board to your PC with a USB cable.
2. Build and download the SDK.
   a. On the SDK Builder web page, you can customize and download the specific SDK for your board.
   b. On the next page, you can select different OSes and different IDEs. Select "MCUXpresso IDE" for Windows here.
   c. You can add the software components you want.
   d. Request to build the SDK.
   e. When the build request has completed, the SDK is available for download on the SDK Dashboard page.
      - Download icon: download the SDK
      - Rebuild icon: rebuild the SDK with different settings
      - Share icon: share the SDK with others
      - MCUConfigTool icon: run the MCU Configuration Tool to configure the pinmux and clocks for your own board design
      - Remove icon: remove the SDK from the Dashboard
3. Install the MCUXpresso IDE.
   a. Go to the MCUXpresso IDE web page to download the IDE and then install it.
4. Build and run the example on the EVK.
   a. Open the MCUXpresso IDE. Simply drag & drop the SDK zip file onto the "Installed SDKs" view.
   b. Import the SDK examples and then click "Next".
   c. Select "hello_world" under demo_apps.
   d. Click "Build" to build the demo.
   e. Open a terminal program (e.g. PuTTY). The COM port of the console output can be found in Device Manager. The COM settings are 115200, 8, N, 1.
   f. Click the "bug" icon to start debugging.
   g. Click the "Resume All Debug Sessions" icon to run the demo.
   h. "hello world" is printed to the console.
Reference:
i.MXRT1050 web page (contains the datasheet and reference manual of the i.MX RT1050 processor)
i.MXRT1050 EVK web page (contains the user's guides of the i.MX RT1050 EVK)
MCUXpresso IDE web page (contains the user's guides of the MCUXpresso IDE)
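As a small aside, the serial console in step 4.e can also be opened from a script instead of PuTTY. Below is a minimal sketch using the Python pyserial package; 'COM3' is a placeholder for the port shown in Device Manager.

# Minimal serial console sketch (assumes the pyserial package is installed;
# 'COM3' is a placeholder -- use the COM port shown in Device Manager).
import serial

with serial.Serial('COM3', baudrate=115200, bytesize=8, parity='N', stopbits=1, timeout=5) as ser:
    line = ser.readline()                 # expects the "hello world" banner from the demo
    print(line.decode(errors='replace'))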
RT1050 Boundary Scan test based on Lauterbach
1. Abstract
Boundary scan is a method of testing interconnections on circuit boards or internal sub-blocks of circuits. You can also debug and observe the pin status of an integrated circuit, measure voltages or analyze sub-modules inside the integrated circuit, all based on the JTAG interface. NXP provides two good application notes: AN13507 (LPC) and AN12919 (RT).
Based on the test method in those application notes, this article provides boundary scan test results for the NXP MIMXRT1050-EVK rev A1. Lauterbach can connect to the chip and perform a boundary scan to control the external pins. A script file is also provided, which realizes a one-click connection to boundary scan and level control of external pins.
2. RT1050 test details
2.1 Hardware platform
Lauterbach: LA3050
MIMXRT1050-EVK rev A1 hardware modification points are as follows:
(1) Modify fuse bit 0X460[19], which is DAP_SJC_SWD_SEL, from 0 (SWD) to 1 (JTAG). To modify the fuse, you can enter serial download mode and use MCUBootUtility to connect and modify it.
Fig 1
(2) DNP R38, R323, R309, R152, R303.
(3) Connect JTAG_MODE to 3.3V: on the board, connect TP11 to J24_8.
(4) Connect a 100K resistor at R35.
(5) Pull the ONOFF pin up to 3.3V with an external 100K resistor: on the board, connect a 100K resistor from SW2 pin 3 or pin 4 and pull it up to J24_8.
(6) Disconnect J32 and J33, which disconnects the on-board debugger, because this test needs the external Lauterbach.
(7) Connect the external Lauterbach to the JTAG interface J21; the connection looks like this:
Fig 2
2.2 Software operation
Download Lauterbach's supporting software and install it. After installation, open TRACE32 ICD Arm USB. If the Lauterbach device is connected, the interface will open successfully.
Fig 3
At this point, you can enter the relevant commands in the yellow box in the picture above. Here you need the .bsdl file of the chip, which is usually placed on the chip introduction page at nxp.com. For example, the link to the BSDL file of the RT1050 is:
https://www.nxp.com/downloads/en/bsdl/RT1050.bsdl
Copy the RT1050.bsdl file to the Lauterbach installation path: C:\T32
Next, enter the following commands in the window to open the boundary scan window:
SYStem.Mode Down
BSDL.RESet
BSDL.ParkState Select-DR-Scan
BSDL.state
This opens the window:
Fig 4
Click the FILE item and load the downloaded RT1050.bsdl, then enter the following command in the window:
BSDL.SOFTRESET
Fig 5
Click check -> BYPASSall, IDCODEall, SAMPLEall, and make sure all 3 methods pass. Here, the following problem is encountered when clicking IDCODEall:
Fig 6
It reports that the IDCODE read back is 188c301d, but the expected IDCODE is 088c301d. So which IDCODE is correct? Check the RT1050 reference manual:
Fig 7
The currently read 188c301d matches the RM and is correct. Therefore, the BSDL file downloaded from the official website needs to be modified. Open the RT1050.bsdl file:
Fig 8
Modify the version in line 408 from 0000 to 0001; Fig 8 shows the modified result. Save, run the above commands again, and the BYPASSall, IDCODEall, SAMPLEall results are now:
Fig 9
Fig 10
Fig 11
To test the output control, you need to do the following:
BSDL.SET 1.: instructions -> EXTEST, DR mode -> Set Write, Filter data -> uncheck intern
BSDL.state -> Run: check SetAndRun, TwoStepDR, then click the RUN button.
In the BSDL.SET 1.
window, you can control the pin output status; for example, control the GPIO_AD_B1_06 pin (J22_2) output level: 1 = high, 0 = low.
Fig 12
2.3 Automation control command script
As can be seen from section 2.2, single-step operation requires typing commands manually, which is very inefficient in real testing, so a script can be used to automate the command control. Below, we take the RT1050 as an example and control the level of the on-board GPIO_AD_B1_06 / J22_2 pin, using a multimeter to check the high and low levels. This way, once the TRACE32 software is open, you only need to open the script, enter debug mode, and run it to the end with one click to control the pin state.
The script language uses the .cmm suffix. Steps: File -> New Script, then enter the following script commands:
;system setup
SYStem.Mode Down
SYStem.CPU CortexM7
SYSTEM.CONFIG.DEBUGPORTTYPE JTAG
SYStem.JtagClock 1MHz
;BSDL settings
BSDL.RESet
BSDL.ParkState Select-DR-Scan
BSDL.state
;configure boundary scan chain
BSDL.FILE RT1050.bsdl
;check boundary scan chain
BSDL.SOFTRESET
BSDL.BYPASSall
BSDL.IDCODEall
BSDL.SAMPLEall
;perform sample test
BSDL.RUN
BSDL.SetAndRun ON
BSDL.TwoStepDR ON
BSDL.SET 1.
BSDL.SET 1. IR EXTEST
BSDL.RUN
BSDL.SET 1. PORT GPIO_AD_B1_06 0
BSDL.SET 1. PORT GPIO_AD_B1_06 1
BSDL.SET 1. PORT GPIO_AD_B1_06 0
WAIT 6.s
BSDL.SET 1. PORT GPIO_AD_B1_06 1
WAIT 6.s
BSDL.SET 1. PORT GPIO_AD_B1_06 0
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 1
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 0
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 1
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 0
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 1
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 0
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 1
WAIT 2.s
Function: pull the GPIO_AD_B1_06 pin high and low several times, first with no delay, then with 6 s delays, then with 2 s delays. After the script is written, save it and debug it.
Fig 13
This is the video of the test. It can be seen that automatic control of the on-board GPIO_AD_B1_06 / J22_2 pin is achieved, and there is no disconnection issue when the test delay is greater than 5 s, which indicates that the BSDL automatic test is complete. If you encounter problems, make sure that all hardware modification points on the board have been fully applied.
At last, thanks so much to my colleagues @leilei_du and @albert_li for their endless help!
Note: for similar EVKs, see:
Using J-Link with MIMXRT1060-EVKB or MIMXRT1040-EVK
Using J-Link with MIMXRT1060-EVK or MIMXRT1064-EVK
Using J-Link with MIMXRT1160-EVK or MIMXRT1170-EVK
This article provides details on using a J-Link debug probe with this EVK. There are two options: the onboard MCU-Link debug probe can be updated with Segger J-Link firmware, or an external J-Link debug probe can be attached to the EVK. Using the onboard debug circuit is helpful as no other debug probe is required. This article details the steps to use either J-Link option.
MIMXRT1170-EVKB jumper locations
Using an external J-Link debug probe
Segger offers several J-Link probe options. To use one of these probes with these EVKs, configure the EVK with these settings:
1. Install a jumper on JP5 to disconnect the SWD signals from the onboard debug circuit. This jumper is open by default.
2. Power the EVK: the default option is connecting the power supply to barrel jack J43 and setting power switch SW5 to the On position (3-6). The green LED D16 next to SW5 will be lit when the EVK is properly powered.
3. Connect the J-Link probe to J1, the 20-pin dual-row 0.1" header.
Using the onboard MCU-Link with J-Link firmware
1. Install the MCU-Link installer for the drivers and firmware update tool.
2. Disconnect any USB cables from the EVK.
3. Power the EVK: the default option is connecting the power supply to barrel jack J43 and setting power switch SW5 to the On position (3-6). The green LED D16 next to SW5 will be lit when the EVK is properly powered.
4. Install a jumper at JP3 to force the MCU-Link into ISP mode.
5. Connect a USB cable to J86, the MCU-Link debugger port.
6. Go to the scripts directory in the MCU-Link software package installation and run the program_JLINK.cmd (Windows) or program_JLINK (Linux/MacOS) script by double-clicking it. Follow the onscreen instructions. In Windows, this script is typically installed at C:\nxp\MCU-LINK_installer_3.122\scripts\program_JLINK.cmd
7. Unplug the USB cable at J86.
8. Remove the jumper at JP3.
9. Plug the USB cable back into J86. The MCU-Link debugger should now boot as a J-Link.
10. Remove jumper JP5 to connect the SWD signals from the MCU-Link debugger. This jumper is open by default.
The i.MX RT600 crossover MCU combines an ultra-low power MCU with a high performance DSP to enable the next generation of ML/AI, voice and audio applications. Get started today and order your MIMXRT685-EVK.