i.MX RT Crossover MCUs Knowledge Base

A vulnerability (CVE-2022-22819) has been identified on select NXP processors in which a malformed SB2 file header, sent to the device as part of an update or recovery boot, can be used to create a buffer overflow. The buffer overflow can then be used to launch various exploits. Refer to the attached bulletin for more information.
09/26/2022 - Bulletin updated to include fix datecode information.
11/01/2022 - Bulletin updated to clarify that the mixed-datecode note applies to RT600 only.
View full article
RT106L_S voice control system based on the Baidu cloud

1 Introduction
The NXP RT106L and RT106S are voice recognition chips used for offline local voice control. SLN-LOCAL-IOT is based on the RT106L, and SLN-LOCAL2-IOT is a newer local speech recognition board based on the RT106S. The board includes a Murata 1DX Wi-Fi/BLE module, the AFE voice analog front end, the ASR recognition engine, external flash, two microphones, and an analog audio amplifier with speakers. The voice recognition flow differs between SLN-LOCAL-IOT and SLN-LOCAL2-IOT; the newer SLN-LOCAL2-IOT is recommended.
This article is based on the SLN-LOCAL/2-IOT voice control board and implements the functions in the following block diagram: Pic 1
The PC-side speech model tool (Cyberon DSMT) generates the wake word (WW) and voice command (VC) engine binary files, which are then used by the demo code. The system targets Chinese word recognition: when the user says the Chinese wake word "小恩小恩", the SLN-LOCAL/2-IOT wakes up and answers "小恩来了,请吩咐". The system then enters the voice recognition stage, where the user can say one of the commands “开红灯”, “关红灯”, “开绿灯”, “关绿灯”, “灯闪烁”, “开远程灯”, “关远程灯”; after recognition, the board answers "好的". The first five commands (“开红灯”, “关红灯”, “开绿灯”, “关绿灯”, “灯闪烁”) switch the local LEDs, while the last two (“开远程灯”, “关远程灯”) control the LED on an additional MIMXRT1060-EVK board through Baidu cloud. The SLN-LOCAL/2-IOT connects to the Internet through its Wi-Fi module and communicates with Baidu cloud over MQTT: when a remote control command is detected, it publishes a JSON packet to Baidu cloud, while the MIMXRT1060-EVK subscribes to that data, receives it, parses it and drives the EVK LED accordingly. On the PC side, the MQTT.fx tool can subscribe to the Baidu cloud data and can also publish data to the devices to exercise the remote control function directly.
The rest of this article describes in detail how to use the SLN-LOCAL/2-IOT SDK demo to implement customized Chinese wake words and voice commands, and how to remotely control the MIMXRT1060-EVK through Baidu cloud.

2 Platform establish
2.1 Used platform
SLN-LOCAL-IOT/SLN-LOCAL2-IOT
MIMXRT1060-EVK
MQTT.fx
SDK_2_8_0_SLN-LOCAL2-IOT
MCUXpresso IDE
Segger JLINK
Baidu Smart Cloud: Baidu cloud control + TTS
Audacity: audio file format conversion tool
WAVToCode: converts WAV files to C arrays, used for the demo title playback
MCUBootUtility: used to burn the feedback audio files to the filesystem
Cyberon DSMT: wake word and voice command generation tool
DSMT is the key tool for wake word and voice command detection; the process to apply for it is shown in: Pic 2

2.2 Baidu Smart cloud
2.2.1 Baidu cloud IOT control system
Enter the IoT Hub: https://cloud.baidu.com/product/iot.html and click "use now".
2.2.1.1 Create device project
Create a project, select the device type, and enter the project name. Device-type projects can use shadows as cloud-side images of the devices, so you can directly see how the data is changing. Once created, an endpoint is generated along with the corresponding address: Pic 3
2.2.1.2 Create Thing model
The Thing model mainly defines the properties needed in the shadow, such as temperature, humidity and other variables, together with their value types; in fact these are also the JSON items used in the actual MQTT communication.
Click the newly created device-type project, where you can create a new Thing model or shadow: Pic 4
Here, create three attributes: LEDstatus, humid and temp. They represent the LED status, humidity and temperature, which makes communication and control between the cloud and the RT board straightforward. Once created, you get the following picture: Pic 5
2.2.1.3 Create Thing shadow
In the device-type project, select the shadow, build your own shadow instance, enter a name, and select the newly created Thing model containing the three properties as the object model. After creation, the shadow details are shown: Pic 6
At the same time, the shadow-related address, name and key are generated; my test platform is as follows:
TCP Address: tcp://rndrjc9.mqtt.iot.gz.baidubce.com:1883
SSL Address: ssl://rndrjc9.mqtt.iot.gz.baidubce.com:1884
WSS Address: wss://rndrjc9.mqtt.iot.gz.baidubce.com:443
name: rndrjc9/RT1060BTCDShadow
key: y92ewvgjz23nzhgn
Port 1883 does not support transmission data encryption.
Port 1884 supports SSL/TLS encrypted transmission.
Port 8884 supports WebSocket-style connections and also includes SSL encryption.
This article uses port 1883 with no transmission encryption, for easy testing. At this point the Baidu cloud device-type project and cloud shadow are complete, and the MQTT.fx tool can be used to connect and test. In practice it is recommended that customers build their own Baidu cloud connection; the key above is for reference only.

2.2.2 Online TTS
When the SLN-LOCAL/2-IOT board recognizes the wake word or a command word, or when it powers on, it needs to play the corresponding audio, for example: "百度云端语音测试demo", "小恩来啦!请吩咐", "好的". These phrases need text-to-speech (text-to-WAV) synthesis; here Baidu Smart Cloud's online TTS function is used, and the specific operation is described in the following document: https://ai.baidu.com/ai-doc/SPEECH/jk38y8gno
Once the basic audio library is enabled, use the main.py provided in the link above and modify it: put the Chinese text you want to convert in "TEXT" and the output audio file name (e.g. xxx.wav) in "save_file". Run the command: python main.py to complete the conversion and generate the audio file (e.g. .mp3 or .wav) corresponding to the text. Pic 7
The generated WAV file cannot be used directly. Note that the SLN-LOCAL/2-IOT board expects audio sources with a 48 kHz sample rate and 16-bit samples, so the Audacity audio tool is used to convert the file to 48 kHz/16-bit WAV. Import the 16 kHz/16-bit WAV generated by Baidu TTS into Audacity, select a project rate of 48 kHz, choose File->Export->Export as WAV, select "signed 16-bit PCM" encoding, and regenerate a 48 kHz/16-bit WAV for use. Pic 8
"百度云端语音测试demo": used for the power-on and demo-name announcement; it is stored in the RT demo code, so it needs to be converted to a 16-bit C array and added to the project.
"小恩来啦!请吩咐", "好的": voice detection feedback; they are saved in the filesystem ZH01 and ZH02 areas.
2.3 Playback audio data preparation and burning
There are two playback audio files, "小恩来啦!请吩咐" and "好的", saved in the filesystem ZH01 and ZH02 areas.
The filesystem memory map looks like this: Pic 9
So the 48 kHz/16-bit WAV files need to be converted to the format required by the filesystem, using the official tool Ivaldi_sln_local2_iot. Reference document: SLN-LOCAL2-IOT-DG, chapter 10.1 "Generating filesystem-compatible files". Use bash to enter the commands as shown in the following picture: Pic 10
Use the convert command to get the playback bin file:
python file_format.py -if xiaoencoming_48k16bit.wav -of xiaoencoming_48k16bit.bin -ft H
This generates the files:
"小恩来啦!请吩咐" -> xiaoencoming_48k16bit.bin, burned to flash address 0x6184_0000
"好的" -> OK_48k16bit.bin, burned to flash address 0x6180_0000
Then use the MCUBootUtility tool to burn the above two files to the related addresses. Taking OK_48k16bit.bin as an example: put the board into serial download mode (J27-0), then power it off and on. The flash chip needs to be set to hyper flash IS26KSXXS; use the boot device memory window and the write button to burn the .bin file to the related address with length 0x40000. Pic 11 Pic 12
xiaoencoming_48k16bit.bin can be downloaded to 0x6184_0000 with the same method, length 0x40000.

2.4 Demo audio preparation
The prepared baiduclouddemo_48K16bit.wav ("百度云端语音测试demo") needs to be converted into a 16-bit C array and added to the project code, where it is called for demo-mode playback. The conversion uses WAVToCode, operated like this: Pic 13
Add the generated baiducloulddemo_48K16bit.c to the demo project C files: sln_local_iot_local_demo->audio->demos->smart_home.c.

2.5 WW and VC preparation
Wake words are generated with the Cyberon DSMT tool, which supports a wide range of languages; customers can request the tool through the flow in Pic 2. The Chinese wake words and voice commands in this article are also generated with DSMT. A DSMT project can have multiple groups: group 1 is the wake-word configuration with CmdMapID = 1, and the other groups hold the voice commands, such as CMD_IOT in this article with CmdMapID = 2. Pic 14 Pic 15
The wake word engine continuously scans the input audio stream using group 1; after a successful wake-up, voice command detection runs using group 2 (or other recognition groups, including custom groups). The wake word configuration in DSMT is as follows: Pic 16
More wake words can be supported; customers can add the needed ones in group 1. The VC configuration in DSMT looks like this: Pic 17
Then save the project. The files used by the code are _witMapID.bin, CMD_IOT.xml and WW.xml. Among the generated files, CYBase.mod is the base model, WW.mod is the WW model, and CMD_IOT.mod is the VC model. After Pic 16 and 17, the WW and VC command preparation is finished, and the DSMT project can be placed in the RT106S demo project folder: sln_local2_iot_local_demo\local_voice\oob_demo_zh

3 Code prepare
Based on the official SLN-LOCAL2-IOT SDK local_demo, the code in this article modifies the Chinese wake words and recognition words (or a new customer-defined group can be built directly), adds LED operations for the locally detected voice commands, plays the Chinese feedback and demo audio, adds the Wi-Fi MQTT communication code, and publishes to the Baidu cloud shadow.
Source reference code SDK paths:
SDK_2_8_0_SLN-LOCAL2-IOT\boards\sln_local2_iot\sln_voice_examples\local_demo
SDK_2_8_0_SLN-LOCAL2-IOT\boards\sln_local2_iot\sln_boot_apps
The SLN-LOCAL2-IOT and SLN-LOCAL-IOT code are nearly the same; the only difference is the ASR library: the RT106S (SLN-LOCAL2-IOT) SDK uses its own libsln_asr.a library, while the RT106L (SLN-LOCAL-IOT) needs the corresponding libsln_asr_eval.a library.
Importing the code requires three projects: local_demo, bootloader and bootstrap. The three projects are stored in different flash regions; see SLN-LOCAL2-IOT-DG.pdf, chapter 3.3 Device memory map.
This is the boot process of the three projects on the chip: Pic 18
This document is for demo testing and requires debugging, so the encryption mechanism is turned off: configure the bootloader and bootstrap projects with the macro definition DISABLE_IMAGE_VERIFICATION = 1, and use a JLINK connected to the SLN-LOCAL/2-IOT SWD interface to program the code. The following modifications are added to the app local_demo project.

3.1 sln-local/2-iot code
The following modifications are the same for both the sln-local-iot and sln-local2-iot platforms.
3.1.1 Voice recognition related code
1) Demo audio play
Play content: "百度云端语音测试demo"
The content of sln_local2_iot_local_demo_xe_ledwifi\audio\demos\smart_home.c is replaced with the previously generated baiducloulddemo_48K16bit.c. In audio_samples.h, modify:
#define SMART_HOME_DEMO_CLIP_SIZE 110733
This clip is played by the announce_demo API in main.c:
        case ASR_CMD_IOT:
            ret = demo_play_clip((uint8_t *)smart_home_demo_clip, sizeof(smart_home_demo_clip));
2) Command print information
#define NUMBER_OF_IOT_CMDS      7
IndexCommands.h:
static char *cmd_iot_en[] = {"Red led on", "Red led off", "Green led on", "Green led off",
                             "cycle led", "remote led on", "remote led off"};
static char *cmd_iot_zh[] = {"开红灯", "关红灯", "开绿灯", "关绿灯", "灯闪烁", "开远程灯", "关远程灯"};
This article modifies the source code of the existing IOT group; you can instead add your own speech recognition group directly and add the related command identification there.
3) sln_local_voice.c
Around line 757, add the LED-related notification information in ASR_CMD_IOT mode:
oob_demo_control.ledCmd = g_asrControl.result.keywordID[1];
This line captures the recognized VC command: the value of keywordID[1] is the command index, which tells the application exactly which voice command was detected, so specific actions can be taken in the app based on the value of ledCmd. The value of keywordID[1] corresponds to the Command List in Pic 17. For example, if the board is awake and recognizes "开远程灯", keywordID[1] is 5; this value is transferred to oob_demo_control.ledCmd and used in the appTask API to perform the actual control.
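To make this mapping explicit, the relationship between keywordID[1] and the LED command values used in the code below can be written as an enumeration. This is only an illustrative sketch: the real definitions already exist in the SDK demo headers, and the names and values must match the command order configured in the DSMT group (Pic 17), so treat it as an assumption rather than the SDK's actual header.

/* Hypothetical mapping of keywordID[1] (DSMT group 2 command index) to LED commands.
 * The order must match the Command List in Pic 17:
 * 开红灯, 关红灯, 开绿灯, 关绿灯, 灯闪烁, 开远程灯, 关远程灯. */
typedef enum _led_cmd
{
    LED_RED_ON     = 0, /* 开红灯 */
    LED_RED_OFF    = 1, /* 关红灯 */
    LED_BLUE_ON    = 2, /* 开绿灯 (the green command drives the "blue" LED macro in the demo code) */
    LED_BLUE_OFF   = 3, /* 关绿灯 */
    CYCLE_SLOW     = 4, /* 灯闪烁 */
    LED_REMOTE_ON  = 5, /* 开远程灯 */
    LED_REMOTE_OFF = 6, /* 关远程灯 */
} led_cmd_t;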
4) main.c, void appTask(void *arg)
Under case kCommandGeneric:, if the language is Chinese, add the recognition-related control code: first it plays the feedback "好的", then it checks the detected command value and performs the related local LED control.
else if (oob_demo_control.language == ASR_CHINESE)
{
    // play audio "OK" in Chinese
#if defined(SLN_LOCAL2_RD)
    ret = audio_play_clip((uint8_t *)AUDIO_ZH_01_FILE_ADDR, AUDIO_ZH_01_FILE_SIZE);
#elif defined(SLN_LOCAL2_IOT)
    ret = audio_play_clip(AUDIO_ZH_01_FILE);
#endif
    //kerry add operation code==================================================begin
    RGB_LED_SetColor(LED_COLOR_OFF);
    if (oob_demo_control.ledCmd == LED_RED_ON)
    {
        RGB_LED_SetColor(LED_COLOR_RED);
        vTaskDelay(5000);
    }
    else if (oob_demo_control.ledCmd == LED_RED_OFF)
    {
        RGB_LED_SetColor(LED_COLOR_OFF);
        vTaskDelay(5000);
    }
    else if (oob_demo_control.ledCmd == LED_BLUE_ON)
    {
        RGB_LED_SetColor(LED_COLOR_BLUE);
        vTaskDelay(5000);
    }
    else if (oob_demo_control.ledCmd == LED_BLUE_OFF)
    {
        RGB_LED_SetColor(LED_COLOR_OFF);
        vTaskDelay(5000);
    }
    else if (oob_demo_control.ledCmd == CYCLE_SLOW)
    {
        for (int i = 0; i < 3; i++)
        {
            RGB_LED_SetColor(LED_COLOR_RED);
            vTaskDelay(400);
            RGB_LED_SetColor(LED_COLOR_OFF);
            RGB_LED_SetColor(LED_COLOR_GREEN);
            vTaskDelay(400);
            RGB_LED_SetColor(LED_COLOR_OFF);
            RGB_LED_SetColor(LED_COLOR_BLUE);
            vTaskDelay(400);
        }
    }
    …
}
In addition to the local voice recognition control, this article also adds a remote control function. The board connects over Wi-Fi and uses the MQTT protocol to reach the Baidu cloud server: when local speech recognition detects a remote control command, the board publishes the corresponding control message to Baidu cloud, the cloud forwards the message to the client that subscribed to it, and that client performs the related control according to the message content.

3.1.3 Network connection code
1) sln_local2_iot_local_demo_xe_ledwifi\lwip\src\apps\mqtt
Add mqtt.c
2) sln_local2_iot_local_demo_xe_ledwifi\lwip\src\include\lwip\apps
Add mqtt.h, mqtt_opts.h, mqtt_prv.h
The MQTT driver comes from the RT1060 SDK and is already added in the attached project.
3) sln_tcp_server.c
Add the MQTT application-layer API code: client ID, server host, MQTT server port number, user name, password, subscribe topic, publish topic and data, etc.; for more details, check the attached code. The MQTT application code is ported from the mqtt example of the RT1060 SDK and added to sln_tcp_server.c. The TCP_OTA_Server function initializes the Wi-Fi network, establishes the Wi-Fi connection, resolves the Baidu cloud server URL to an IP address, and then connects to the Baidu cloud server over MQTT. After a successful connection it publishes a first message, so that MQTT.fx can be used to check whether the power-on publish over the network succeeded.
The TCP_OTA_Server function code is as follows:
static void TCP_OTA_Server(void *param) //kerry consider add mqtt related code
{
    err_t err = ERR_OK;
    uint8_t status = kCommon_Failed;
#if USE_WIFI_CONNECTION
    /* Start the WiFi and connect to the network */
    APP_NETWORK_Init();
    while (status != kCommon_Success)
    {
        status_t statusConnect;
        statusConnect = APP_NETWORK_Wifi_Connect(true, true);
        if (WIFI_CONNECT_SUCCESS == statusConnect)
        {
            status = kCommon_Success;
        }
        else if (WIFI_CONNECT_NO_CRED == statusConnect)
        {
            APP_NETWORK_Uninit();
            /* If there are no credentials in flash, delete the TCP server task */
            vTaskDelete(NULL);
        }
        else
        {
            status = kCommon_Failed;
        }
    }
#endif
#if USE_ETHERNET_CONNECTION
    APP_NETWORK_Init(true);
#endif
    /* Wait for wifi/eth to connect */
    while (0 == get_connect_state())
    {
        /* Give time to the network task to connect */
        vTaskDelay(1000);
    }
    configPRINTF(("TCP server start\r\n"));
    configPRINTF(("MQTT connection start\r\n"));
    mqtt_client = mqtt_client_new();
    if (mqtt_client == NULL)
    {
        configPRINTF(("mqtt_client_new() failed.\r\n"));
        while (1)
        {
        }
    }
    if (ipaddr_aton(EXAMPLE_MQTT_SERVER_HOST, &mqtt_addr) && IP_IS_V4(&mqtt_addr))
    {
        /* Already an IP address */
        err = ERR_OK;
    }
    else
    {
        /* Resolve MQTT broker's host name to an IP address */
        configPRINTF(("Resolving \"%s\"...\r\n", EXAMPLE_MQTT_SERVER_HOST));
        err = netconn_gethostbyname(EXAMPLE_MQTT_SERVER_HOST, &mqtt_addr);
        configPRINTF(("Resolving status: %d.\r\n", err));
    }
    if (err == ERR_OK)
    {
        configPRINTF(("connect to mqtt\r\n"));
        /* Start connecting to MQTT broker from tcpip_thread */
        err = tcpip_callback(connect_to_mqtt, NULL);
        configPRINTF(("connect status: %d.\r\n", err));
        if (err != ERR_OK)
        {
            configPRINTF(("Failed to invoke broker connection on the tcpip_thread: %d.\r\n", err));
        }
    }
    else
    {
        configPRINTF(("Failed to obtain IP address: %d.\r\n", err));
    }
    int i = 0;
    /* Publish some messages */
    for (i = 0; i < 5;)
    {
        configPRINTF(("connect status enter: %d.\r\n", connected));
        if (connected)
        {
            err = tcpip_callback(publish_message_start, NULL);
            if (err != ERR_OK)
            {
                configPRINTF(("Failed to invoke publishing of a message on the tcpip_thread: %d.\r\n", err));
            }
            i++;
        }
        sys_msleep(1000U);
    }
    vTaskDelete(NULL);
}
Note that the following JSON data cannot be published directly from the code:
{
  "reported": {
    "LEDstatus": false,
    "humid": 88,
    "temp": 22
  }
}
It needs to be compressed and escaped (for example with https://www.bejson.com/) before it can be embedded as a C string:
{\"reported\" : {     \"LEDstatus\" : true,     \"humid\" : 88,     \"temp\" : 11    } }
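The connect_to_mqtt and publish_message_start callbacks invoked above through tcpip_callback() are kept in the attached code. For orientation only, a minimal sketch of what they can look like with the lwIP MQTT application API is shown below; it assumes the mqtt_client, mqtt_addr and connected globals from sln_tcp_server.c above, and it reuses the shadow credentials and topic from section 2.2.1.3 purely as an illustration, not as the exact attachment code.

/* Minimal sketch of the MQTT callbacks used above, based on the lwIP MQTT
 * application API (lwip/apps/mqtt.h). Names and credentials are illustrative. */
#include <string.h>
#include "lwip/apps/mqtt.h"

extern mqtt_client_t *mqtt_client; /* created with mqtt_client_new() in TCP_OTA_Server() */
extern ip_addr_t mqtt_addr;        /* resolved broker address */
extern volatile int connected;     /* flag polled by the publish loop above */

static void mqtt_connection_cb(mqtt_client_t *client, void *arg, mqtt_connection_status_t status)
{
    /* Mark the client as connected once the broker accepts the CONNECT packet */
    connected = (status == MQTT_CONNECT_ACCEPTED);
}

static void mqtt_pub_request_cb(void *arg, err_t result)
{
    configPRINTF(("Publish result: %d\r\n", result));
}

/* Runs in tcpip_thread context via tcpip_callback(connect_to_mqtt, NULL) */
static void connect_to_mqtt(void *ctx)
{
    static const struct mqtt_connect_client_info_t client_info = {
        .client_id   = "RT1060BTCDShadow",         /* must match the shadow name (see section 4.1.1) */
        .client_user = "rndrjc9/RT1060BTCDShadow", /* Baidu cloud shadow name */
        .client_pass = "y92ewvgjz23nzhgn",         /* Baidu cloud shadow key */
        .keep_alive  = 60,
    };

    mqtt_client_connect(mqtt_client, &mqtt_addr, 1883, mqtt_connection_cb, NULL, &client_info);
}

/* Publishes the escaped shadow JSON shown above to the update topic */
static void publish_message_start(void *ctx)
{
    const char *topic   = "$baidu/iot/shadow/RT1060BTCDShadow/update";
    const char *payload = "{\"reported\":{\"LEDstatus\":false,\"humid\":88,\"temp\":22}}";

    mqtt_publish(mqtt_client, topic, payload, strlen(payload), 1 /* qos */, 0 /* retain */,
                 mqtt_pub_request_cb, NULL);
}

The publish_message_on/publish_message_off callbacks used for the remote LED commands in the next section can follow the same pattern with a different payload.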
4) main.c appTask
Under case kCommandGeneric:, if the language is Chinese, add the corresponding voice recognition control code:
"开远程灯": turn on the local yellow LED and publish the "remote led on" MQTT message to Baidu cloud to switch on the LED of the remote MIMXRT1060-EVK board.
"关远程灯": turn on the local white LED and publish the "remote led off" MQTT message to Baidu cloud to switch off the LED of the remote MIMXRT1060-EVK board.
Related operation code:
else if (oob_demo_control.ledCmd == LED_REMOTE_ON)
{
    RGB_LED_SetColor(LED_COLOR_YELLOW);
    vTaskDelay(5000);
    err_t err = ERR_OK;
    err = tcpip_callback(publish_message_on, NULL);
    if (err != ERR_OK)
    {
        configPRINTF(("Failed to invoke publishing of a message on the tcpip_thread: %d.\r\n", err));
    }
}
else if (oob_demo_control.ledCmd == LED_REMOTE_OFF)
{
    RGB_LED_SetColor(LED_COLOR_WHITE);
    vTaskDelay(5000);
    err_t err = ERR_OK;
    err = tcpip_callback(publish_message_off, NULL);
    if (err != ERR_OK)
    {
        configPRINTF(("Failed to invoke publishing of a message on the tcpip_thread: %d.\r\n", err));
    }
}

3.2 MIMXRT1060-EVK code
The MIMXRT1060-EVK code acts as another client of the cloud: it subscribes to the messages published by the SLN-LOCAL/2-IOT when a remote command is detected, and drives the LED on the board, which is used to test the voice recognition remote control function. This code is based on Ethernet: the Ethernet port on the board provides the network connection, MQTT is used to connect to Baidu cloud, and the board subscribes to the messages from the LOCAL2 board, which enables reception and execution of the LOCAL2 commands. The network code is similar to the SLN-LOCAL2-IOT network code; the server, cloud account, password, etc. are all the same, and the main difference is that this client subscribes to messages. See the lwip_mqtt_freertos.c file in the attached RT1060 code.
When data published through the server is received, it needs to be parsed to get the LED status, which then controls the LED. Normal data from the Baidu cloud shadow looks like this:
Received 253 bytes from the topic "$baidu/iot/shadow/RT1060BTCDShadow/update/accepted": "{"requestId":"2fc0ca29-63c0-4200-843f-e279e0f019d3","reported":{"LEDstatus":false,"humid":44,"temp":33},"desired":{},"lastUpdatedTime":{"reported":{"LEDstatus":1635240225296,"humid":1635240225296,"temp":1635240225296},"desired":{}},"profileVersion":159}"
The LEDstatus value (true or false) then needs to be parsed out of the received data. Because the amount of data is small, no JSON library is used here, just plain string parsing; the following code is added to the mqtt_incoming_data_cb function:
mqtt_rec_data.mqttindex = mqtt_rec_data.mqttindex + len;
if (mqtt_rec_data.mqttindex >= 250)
{
    PRINTF("kerry test \r\n");
    PRINTF("idex= %d", mqtt_rec_data.mqttindex);
    datap = strstr((char *)mqtt_rec_data.mqttrecdata, "LEDstatus");
    if (datap != NULL)
    {
        if (!strncmp(datap + 11, strtrue, 4)) //char strtrue[]="true";
        {
            GPIO_PinWrite(GPIO1, 3, 1U); //pull high
            PRINTF("\r\ntrue");
        }
        else if (!strncmp(datap + 11, strfalse, 5)) //char strfalse[]="false";
        {
            GPIO_PinWrite(GPIO1, 3, 0U); //pull low
            PRINTF("\r\nfalse");
        }
    }
    mqtt_rec_data.mqttindex = 0;
}
It uses strstr to search for "LEDstatus" in the received data and get the pointer position, then skips a fixed offset to check whether the LED status is true or false. If it is true, the LED is turned on; if it is false, the LED is turned off.

4 Test Result
This section gives the test results and the video of the system. Before testing the voice function, MQTT.fx is used first to verify that the Baidu cloud connection, publish and subscribe work; then the SLN-LOCAL2-IOT combined with the MIMXRT1060-EVK is used to test voice wake-up, recognition and remote control.
To join the Wi-Fi hotspot with the SLN-LOCAL2-IOT, enter the command in the print terminal: setup AWS kerry123456

4.1 MQTT.fx test of the Baidu cloud connection
MQTT.fx is an Eclipse Paho-based MQTT client tool written in Java that supports subscribing to and publishing messages through topics.
4.1.1 MQTT.fx configuration
Download and install the tool, then open it. First do the configuration by clicking "edit connection": Pic 19
Profile name: connection name
Profile type: MQTT Broker
Broker address: the broker address generated by Baidu cloud, using port 1883 without encryption
Broker port: 1883, no encryption
Client ID: RT1060BTCDShadow. Note that this name should be the same as the cloud shadow name, otherwise the connection is not detected on the Baidu web page. If the Client ID matches the shadow name, then when MQTT.fx connects, the online state can also be seen on the cloud side.
User credentials: add the thing user name and password from the Baidu cloud.
After the configuration, click connect and refresh the website.
Before connection: Pic 20
After connection: Pic 21
4.1.2 MQTT.fx subscribe
For subscribing and publishing, which topics should be used? Open your thing shadow, select "interaction", and the page shows the corresponding topics: Pic 22
Subscribe topic: $baidu/iot/shadow/RT1060BTCDShadow/update/accepted
Publish topic: $baidu/iot/shadow/RT1060BTCDShadow/update
Pic 23
Click subscribe; it is now ready to receive data.
4.1.3 MQTT.fx publish
Publishing needs the topic: $baidu/iot/shadow/RT1060BTCDShadow/update
It also needs the content, which is JSON data. Pic 24
Here this JSON data can be used:
{
  "reported" : {
    "LEDstatus" : true,
    "humid" : 88,
    "temp" : 11
   }
}
The JSON data can also be checked with this website: https://www.bejson.com/jsonviewernew/ Pic 25
Input the publish data and click the publish button: Pic 26
4.1.4 Publish data test result
Before publishing, clear the thing data on the website: Pic 27
Publish the data from MQTT.fx, then check the subscribed data and the website: Pic 28
The published data can be seen both on the website and in the MQTT.fx subscribe area. At this point the connection and data transfer test is passed.

4.2 Voice recognition and remote control test
This is the device connection picture: Pic 29
4.2.1 Voice recognition local control
Pic 30
This is the SLN-LOCAL2-IOT print information after recognizing the voice WW and VC.
Red led on:
led cycle:
4.2.2 Voice recognition remote control
The following tests run wakeup + remote on and wakeup + remote off, and give the print results and the video. Pic 31
remote control:
View full article
Face recognition
Face recognition technology is used in many scenes of our daily life: for instance, when taking pictures with a mobile phone, the camera software automatically recognizes the faces in the lens and focuses on them; faces are scanned for real-name verification when registering an app, for payment, and so on. The basic steps of face recognition are shown in the figure below. First, the camera captures image data; then, after preprocessing such as noise elimination and image format conversion, the image data is passed to the processor for face detection and recognition. After the face is recognized successfully, the follow-up operations continue.
Fig1 The basic steps of face recognition
i.MX RT106F MCU based solution for face recognition
The figure below is the block diagram of the i.MX RT106F MCU-based face recognition solution provided by NXP. Compared with a general processor (CPU) solution, it has advantages in cost and power consumption. Furthermore, the PCB size is smaller, and the MCU usually boots within a few hundred milliseconds even with an RTOS, versus roughly 10 seconds for a processor (CPU) running Linux, which gives customers a better user experience.
Fig2 i.MX RT106F MCU based solution for face recognition
Of course, the i.MX RT106F MCU-based face recognition solution is not intended to replace processor (CPU) based solutions. As mentioned above, face recognition technology has many application cases and will be used in even more fields in the future, so the MCU-based face recognition solution gives customers and the market another choice.
i.MX RT106F MCU
The i.MX RT106F face recognition crossover processor is an EdgeReady™ solution-specific variant of the i.MX RT1060 family of crossover processors, targeting face recognition applications. It features NXP’s advanced implementation of the Arm Cortex®-M7 core, which operates at speeds up to 600 MHz to provide high CPU performance and the best real-time response. i.MX RT106F based solutions enable system designers to easily and inexpensively add face recognition capabilities to a wide variety of smart appliances, smart homes, smart retail, and smart industrial devices. The i.MX RT106F is licensed to run the OASIS Lite library for face recognition (as the figure below shows), which includes:
Face detection
Anti-spoofing
Face tracking
Face alignment
Glass detection
Face recognition
Confidence measure
Face recognition quantified results, etc.
Fig3 OASIS Recognition Software Pipeline
sln_viznas_iot_elock_oobe
The sln_viznas_iot_elock_oobe project is the application running on the SLN-VIZNAS-IOT (as the figure below shows; the Bootstrap and Bootloader in the software flowchart will be introduced in the future). The following development work is based on the sln_viznas_iot_elock_oobe project, but I first need to sketch its basic workflow before starting the real development work.
Fig4 SLN-VIZNAS-IOT software flowchart
sln_viznas_iot_elock_oobe's workflow
1. In the Camera_Start() function, the task (Camera_Init_Task) completes the initialization of the RGB and IR cameras, then creates a task (Camera_Task);
2. In the Display_Start() function, after the task (Display_Init_Task) completes the initialization of the display medium (USB or LCD), it immediately creates the task (Display_Task) and sends the message queue s_DisplayReqMsg.id = QMSG_DISPLAY_FRAME_REQ to the task (Camera_Task); pDispData then points to the s_BufferLcd[0] array, which stores the image data to be displayed;
3. In the Oasis_Start() function, OASISLT_init() first completes the initialization of the OASIS library, then creates a task (Oasis_Task) that sends the message queues gFaceDetReqMsg.id = QMSG_FACEREC_FRAME_REQ and gFaceInfoMsg.id = QMSG_FACEREC_INFO_UPDATE to the task (Camera_Task), so that pDetIR and pDetRGB point to the face frame buffers captured by the RGB and IR cameras, and the content pointed to by infoMsgIn is updated;
4. After the cameras are initialized, the RGB camera works first. After the image data is captured, an interrupt is triggered and the callback function Camera_Callback() sends the message queue DQMsg.id = QMSG_CAMERA_DQ to the task (Camera_Task), and DQIndex++;
5. CAMERA_RECEIVER_GetFullBuffer() extracts the image data captured by the RGB camera, sends the message queue DPxpMsg.id = QMSG_PXP_DISPLAY to the task (PXP_Task) created in the APP_PXP_Start() function and increments EQIndex, meanwhile switching the camera from RGB to IR. After the APP_PXPStartCamera2Display() function in the task (PXP_Task) completes processing, it sends the message queue s_DResMsg.id = QMSG_PXP_DISPLAY to the task (Camera_Task), and the task (Camera_Task) sends the message queue DresMsg.id = QMSG_DISPLAY_FRAME_RES to the task (Display_Task) after receiving it. The task (Display_Task) completes the display, then sends the message queue s_DisplayReqMsg.id = QMSG_DISPLAY_FRAME_REQ to the task (Camera_Task) to make pDispData point to the s_BufferLcd[1] array;
6. After the IR camera completes its capture, CAMERA_RECEIVER_GetFullBuffer() extracts the image data and sends the message queue DPxpMsg.id = QMSG_PXP_DISPLAY to the task (PXP_Task) created in the APP_PXP_Start() function, EQIndex is incremented again, the camera is switched back to RGB, and step 5 is repeated. Finally, the message queue FPxpMsg.id = QMSG_PXP_FACEREC is sent to the task (PXP_Task) and irReady is set to true. After the task (PXP_Task) receives this message queue, it calls APP_PXPStartCamera2DetBuf() and, after completing the processing, sends the message queue s_FResMsg.id = QMSG_PXP_FACEREC to the task (Camera_Task);
7. CAMERA_RECEIVER_GetFullBuffer() extracts the image data collected by the RGB camera and step 5 is repeated; when the (pDetRGB && irReady) condition is met, the message queue FPxpMsg.id = QMSG_PXP_FACEREC is sent to the task (PXP_Task) and irReady = false, pDetRGB = NULL, pDetIR = NULL are set. After the task (PXP_Task) receives this message queue, it calls APP_PXPStartCamera2DetBuf() and, after completing the processing, sends the message queue s_FResMsg.id = QMSG_PXP_FACEREC to the task (Camera_Task).
At this time, the (!pDetIR && !pDetRGB) condition is met and the Queue message FResMsg.id = QMSG_FACEREC_FRAME_RES is sent to the task (Oasis_Task), run OASISLT_run_extend to perform face recognition calculation, and send the message queue gFaceDetReqMsg.id = QMSG_FACEREC_FRAME_REQ to the task (Camera_Task) to make the pDetIR and pDetRGB point to the face block diagram captured by the RGB and IR cameras again. keep repeat steps 6 and 7; Fig5 sln_viznas_iot_elock_oobe's workflow flow Smart Coffee machine Fig 6 is the workflow of the smart coffee machine that I want to develop for, as there is no LCD board on hand, in the below development process, I will select Win10's camera (as the below figure shows) to output the captured image, further, take advantage of the Shell command to simulate the LCD's touch feature to interact with the board.   Fig6 workflow of the smart coffee machine Fig7 Camera Code modification In the commondef.h, add a new member variable 'uint16_t coffee_taste' in Union FeatureItem to stand for the favorite coffee taste; typedef union { struct { /*put char/unsigned char together to avoid padding*/ unsigned char magic; char name[FEATUREDATA_NAME_MAX_LEN]; int index; // this id identify a feature uniquely,we should use it as a handler for feature add/del/update/rename uint16_t id; uint16_t pad; // Add a new component uint16_t coffee_taste; /*put feature in the last so, we can take it as dynamic, size limitation: * (FEATUREDATA_FLASH_PAGE_SIZE * 2 - 1 - FEATUREDATA_NAME_MAX_LEN - 4 - 4 -2)/4*/ float feature[0]; }; unsigned char raw[FEATUREDATA_FLASH_PAGE_SIZE * 2]; } FeatureItem; // 1kB   In featuredb.h, add two member functions into class FeatureDB:  set_taste()  and  get_taste() , and add the definition of the above two member functions in featuredb.cpp; class FeatureDB { public: FeatureDB(); ~FeatureDB(); int add_feature(uint16_t id, const std::string name, float *feature); int update_feature(uint16_t id, const std::string name, float *feature); int del_feature(uint16_t id, std::string name); int del_feature(const std::string name); int del_feature_all(); std::vector<std::string> get_names(); int get_name(uint16_t id, std::string &name); std::vector<uint16_t> get_ids(); int ren_name(const std::string oldname, const std::string newname); int feature_count(); int get_free(int &index); int database_save(int count); int get_feature(uint16_t id, float *feature); void set_autosave(bool auto_save); bool get_autosave(); //Add two customize member functions int set_taste(const std::string username, uint16_t taste_number); int get_taste(const std::string username); private: bool auto_save; int load_feature(); int erase_feature(int index); int save_feature(int index = 0); int reassign_feature(); int get_free_mapmagic(); int get_remain_map(); }; int FeatureDB::set_taste(const std::string username, uint16_t taste_number) { int index = FEATUREDATA_MAX_COUNT; for (int i = 0; i < FEATUREDATA_MAX_COUNT; i++) { if (s_FeatureData.item[i].magic == FEATUREDATA_MAGIC_VALID) { if (!strcmp(username.c_str(), s_FeatureData.item[i].name)) { index = i; } } } if (index != FEATUREDATA_MAX_COUNT) { s_FeatureData.item[index].coffee_taste = taste_number; return 0; } else { return -1; } } int FeatureDB::get_taste(const std::string username) { int index = FEATUREDATA_MAX_COUNT; int taste_number; for (int i = 0; i < FEATUREDATA_MAX_COUNT; i++) { if (s_FeatureData.item[i].magic == FEATUREDATA_MAGIC_VALID) { if (!strcmp(username.c_str(), s_FeatureData.item[i].name)) { index = i; } } } if (index != 
FEATUREDATA_MAX_COUNT) { taste_number = s_FeatureData.item[index].coffee_taste; return taste_number; } else { return -1; } }   In database.h, add the declarations of  DB_Set_Taste()  and  DB_Get_Taste()  functions, and in database.cpp, add the related codes of the above two functions. These two functions are equivalent to encapsulating the newly added member functions set_taste() and get_taste() of the FeatureDB class; int DB_Del(uint16_t id, std::string name); int DB_Del(string name); int DB_DelAll(); int DB_Ren(const std::string oldname, const std::string newname); int DB_GetFree(int &index); int DB_GetNames(std::vector<std::string> *names); int DB_Count(int *count); int DB_Save(int count); int DB_GetFeature(uint16_t id, float *feature); int DB_Add(uint16_t id, float *feature); int DB_Add(uint16_t id, std::string name, float *feature); int DB_Update(uint16_t id, float *feature); int DB_GetIDs(std::vector<uint16_t> &ids); int DB_GetName(uint16_t id, std::string &names); int DB_GenID(uint16_t *id); int DB_SetAutoSave(bool auto_save); // Add two customize functions int DB_Set_Taste(const std::string username, const uint16_t taste); int DB_Get_Taste(const std::string username); int DB_Set_Taste(const std::string username, const uint16_t taste) { int ret = DB_MGMT_FAILED; ret = DB_Lock(); if (DB_MGMT_OK == ret) { ret = s_DB->set_taste(username, taste); DB_UnLock(); } return ret; } int DB_Get_Taste(const std::string username) { int ret = DB_MGMT_FAILED; ret = DB_Lock(); if (DB_MGMT_OK == ret) { ret = s_DB->get_taste(username); DB_UnLock(); } return ret; } In sln_api.h, add the declarations of the functions  VIZN_SetTaste() ,  VIZN_GetTaste()  and  VIZN_Is_Rec_User() , and add the codes of the above three functions in sln_api.cpp. The VIZN_SetTaste() and VIZN_GetTaste() functions are equivalent to the encapsulation of the DB_Set_Taste() and DB_Get_Taste() functions. Why is it so complicated? To follow the code layering mechanism of the elock_oobe project and reduce the difficulty of code implementation through code layered encapsulation. /** * @brief Set user's favorite coffee taste. * * @Param clientHandle The client handler which required this action * @Param userName Pointer to a buffer which contains the name of the new user. * @Param taste Coffee taste */ vizn_api_status_t VIZN_SetTaste(VIZN_api_client_t *clientHandle, char *UserName, cfg_Coffee_taste taste); /** * @brief Set user's favorite coffee taste. * * @Param clientHandle The client handler which required this action * @Param userName Pointer to a buffer which contains the name of the new user. 
* @Param taste Pointer to the Coffee taste */ vizn_api_status_t VIZN_GetTaste(VIZN_api_client_t *clientHandle, char *UserName, int *taste); vizn_api_status_t VIZN_Is_Rec_User(VIZN_api_client_t *clientHandle, char *UserName); ~~~~~~~~~ vizn_api_status_t VIZN_SetTaste(VIZN_api_client_t *clientHandle, char *UserName, cfg_Coffee_taste taste) { int32_t status; if (!IsValidUserName(UserName)) { return kStatus_API_Layer_RenameUser_InvalidUserName; } status = DB_Set_Taste(std::string(UserName), (uint16_t)taste); if (status == 0) { return kStatus_API_Layer_Success; } else if (status == -1) { return kStatus_API_Layer_SetTaste_Failed; } } vizn_api_status_t VIZN_GetTaste(VIZN_api_client_t *clientHandle, char *UserName, int *taste) { int32_t status; if (!IsValidUserName(UserName)) { return kStatus_API_Layer_RenameUser_InvalidUserName; } *taste = DB_Get_Taste(std::string(UserName)); if (*taste != -1) { return kStatus_API_Layer_Success; } else { return kStatus_API_Layer_GetTaste_Failed; } } vizn_api_status_t VIZN_Is_Rec_User(VIZN_api_client_t *clientHandle, char *UserName) { if (!IsValidUserName(UserName)) { return kStatus_API_Layer_RenameUser_InvalidUserName; } return kStatus_API_Layer_Success; } In sln_api_init.cpp, declare the variable:  std::string Current_User = "" ; which is used to store the name corresponding to the face after recognition, and add the processing function  Coffee_Rec()  after successful face recognition in the structure variable ops2; std::string Current_User = " "; //Add customize function int Coffee_Rec(VIZN_api_client_t *pClient, face_info_t face_info); client_operations_t ops2 = { .detect = NULL, .recognize = Coffee_Rec,//NULL, .enrolment = NULL, }; //Add customize function int Coffee_Rec(VIZN_api_client_t *pClient, face_info_t face_info) { Current_User = face_info.name; return 1; } In sln_timers.h, increase MS_SYSTEM_LOCKED to extend the locked status time to 25 seconds; ~~~~~~~~ #define MS_SYSTEM_LOCKED 25000 //2000 // MS in which the board is in a locked state after a reg/rec. 
~~~~~~~~ In sln_cli.cpp, add three Shell commands: order, set_taste, get_taste to stand for the operations of brewing coffee, setting coffee taste, and checking coffee taste; SHELL_COMMAND_DEFINE(set_taste, (char *)"\r\n\"set_taste username <0|1|2|3|~>\": set user's favorite taste\r\n" "0 - Cappuccino\r\n" "1 - Black Coffee\r\n" "2 - Coffee latte\r\n" "3 - Flat White\r\n" "4 - Cortado\r\n" "5 - Mocha\r\n" "6 - Con Panna\r\n" "7 - Lungo\r\n" "8 - Ristretto\r\n" "9 - Others \r\n", FFI_CLI_SetTasteCommand, SHELL_IGNORE_PARAMETER_COUNT); SHELL_COMMAND_DEFINE(get_taste, (char *)"\r\n\"get_taste username\": return user's favorite taste \r\n", FFI_CLI_GetTasteCommand, SHELL_IGNORE_PARAMETER_COUNT); SHELL_COMMAND_DEFINE(order, (char *)"\r\n\"order <0|1|2|3|~>\": order a favorite taste \r\n", FFI_CLI_OrderCommand, SHELL_IGNORE_PARAMETER_COUNT); ~~~~~~ static shell_status_t FFI_CLI_SetTasteCommand(shell_handle_t shellContextHandle, int32_t argc, char **argv) { if (argc != 3) { SHELL_Printf(shellContextHandle, "Wrong parameters\r\n"); return kStatus_SHELL_Error; } return UsbShell_QueueSendFromISR(shellContextHandle, argc, argv, SHELL_EV_FFI_CLI_SET_TASTE); } static shell_status_t FFI_CLI_GetTasteCommand(shell_handle_t shellContextHandle, int32_t argc, char **argv) { if (argc != 2) { SHELL_Printf(shellContextHandle, "Wrong parameters\r\n"); return kStatus_SHELL_Error; } return UsbShell_QueueSendFromISR(shellContextHandle, argc, argv, SHELL_EV_FFI_CLI_GET_TASTE); } shell_status_t FFI_CLI_OrderCommand(shell_handle_t shellContextHandle, int32_t argc, char **argv) { if (argc > 2) { SHELL_Printf(shellContextHandle, "Wrong parameters\r\n"); return kStatus_SHELL_Error; } return UsbShell_QueueSendFromISR(shellContextHandle, argc, argv, SHELL_EV_FFI_CLI_ORDER); } ~~~~~~ shell_status_t RegisterFFICmds(shell_handle_t shellContextHandle) { SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(list)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(add)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(del)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(rename)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(verbose)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(camera)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(version)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(save)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(updateotw)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(reset)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(emotion)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(liveness)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(detection)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(display)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(wifi)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(app_type)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(low_power)); // Add three Shell commands SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(order)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(set_taste)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(get_taste)); return kStatus_SHELL_Success; } In sln_cli.cpp, it needs to add corresponding codes for handle order, set_taste, get_taste instructions in task UsbShell_CmdProcess_Task else if (queueMsg.shellCommand == SHELL_EV_FFI_CLI_SET_TASTE) { int coffee_taste = atoi(queueMsg.argv[2]); if (coffee_taste >= Cappuccino && coffee_taste <= Others) { status = 
VIZN_SetTaste(&VIZN_API_CLIENT(Shell),(char *)queueMsg.argv[1], (cfg_Coffee_taste)coffee_taste); if (status == kStatus_API_Layer_Success) { SHELL_Printf(shellContextHandle, "User: %s like coffee taste: %s \r\n", queueMsg.argv[1], Coffee_type[coffee_taste]); } else { SHELL_Printf(shellContextHandle, "Cannot set coffee taste\r\n"); } } else { SHELL_Printf(shellContextHandle, "Unsupported coffee taste\r\n"); } } else if (queueMsg.shellCommand == SHELL_EV_FFI_CLI_GET_TASTE) { int get_taste_num = 0; status = VIZN_GetTaste(&VIZN_API_CLIENT(Shell),(char *)queueMsg.argv[1], &get_taste_num); if (status == kStatus_API_Layer_Success) { SHELL_Printf(shellContextHandle, "User: %s like coffee taste: %s \r\n", queueMsg.argv[1], Coffee_type[(cfg_Coffee_taste)(get_taste_num)]); } else { SHELL_Printf(shellContextHandle, "Cannot get coffee taste\r\n"); } } else if (queueMsg.shellCommand == SHELL_EV_FFI_CLI_ORDER) { status = VIZN_Is_Rec_User(&VIZN_API_CLIENT(Shell),(char *)Current_User.c_str()); if (status == kStatus_API_Layer_Success) { if (queueMsg.argc == 1) { int get_taste_num = 0; status = VIZN_GetTaste(&VIZN_API_CLIENT(Shell),(char*)Current_User.c_str(), &get_taste_num); if (status == kStatus_API_Layer_Success) { SHELL_Printf(shellContextHandle, "User: %s order the a cup of %s \r\n", Current_User.c_str(), Coffee_type[(cfg_Coffee_taste)(get_taste_num)]); } else { SHELL_Printf(shellContextHandle, "Sorry, please order again, Current user is %s\r\n",Current_User.c_str()); } } else if(queueMsg.argc == 2) { int coffee_taste = atoi(queueMsg.argv[1]); if (coffee_taste >= Cappuccino && coffee_taste <= Others) { status = VIZN_SetTaste(&VIZN_API_CLIENT(Shell),(char*)Current_User.c_str(), (cfg_Coffee_taste)coffee_taste); if (status == kStatus_API_Layer_Success) { SHELL_Printf(shellContextHandle, "User: %s order a cup of %s \r\n", Current_User.c_str(), Coffee_type[coffee_taste]); } else { SHELL_Printf(shellContextHandle, "Cannot set coffee taste, Current user is %s\r\n",Current_User.c_str()); } } else { SHELL_Printf(shellContextHandle, "Unsupported coffee taste\r\n"); } } } } Use the cafe logo of《Friends》to replace the original Welcome_home picture, use the BmpCvt tool to convert the picture into the corresponding array, and add it to welcomehome_320x122.h. 
static const unsigned short Coffee_shop_320_122[] = { 0x59E6, 0x6227, 0x6247, 0x59C5, 0x59C5, 0x59A5, 0x4103, 0x6A67, 0x6A47, 0x6227, 0x6A47, 0x6A68, 0x7268, 0x6A67, 0x6A67, 0x6A47, 0x72A9, 0x6A68, 0x7268, 0x6A48, 0x5A06, 0x6A88, 0x6A68, 0x6247, 0x6A47, 0x7289, 0x7289, 0x6A47, 0x6A47, 0x6A47, 0x6227, 0x6A68, 0x6206, 0x6A47, 0x5A26, 0x6247, 0x6227, 0x6A27, 0x4924, 0x836D, 0x5207, 0x7BAC, 0x5247, 0x83ED, 0x4A47, 0x2923, 0x7B8C, 0x49E5, 0x49E5, 0x4A05, 0x28C1, 0x5226, 0x6267, 0x6A87, 0x72E9, 0x6267, 0x6AA9, 0x5A27, 0x6AA9, 0x6AA9, 0x5A47, 0x6A88, 0x5A06, 0x5A47, 0x6AA9, 0x5A47, 0x62A9, 0x5206, 0x6288, 0x6268, 0x5A47, 0x5A27, 0x5A47, 0x5A27, 0x49E6, 0x4A07, 0x4A07, 0x5A89, 0x49C6, 0x5A48, 0x5A28, 0x5A47, 0x5226, 0x49E6, 0x49C6, 0x41A6, 0x5208, 0x2082, 0x52A8, 0x6B6B, 0x39A5, 0x39A5, 0x3964, 0x49E7, 0x3104, 0x49C7, 0x3945, 0x41A6, 0x28A2, 0x2061, 0x3965, 0x28E3, 0x1881, 0x3944, 0x3103, 0x3103, 0x3903, 0x4145, 0x51A6, 0x51C6, 0x4985, 0x51E6, 0x51E6, 0x61E7, 0x6A48, 0x6A28, 0x6A28, 0x6A27, 0x61E6, 0x6207, 0x6A68, 0x59E7, 0x4185, 0x51E6, 0x51A6, 0x6228, 0x5A07, 0x6228, 0x5A08, 0x4184, 0x41A5, 0x4164, 0x3944, 0x3944, 0x736B, 0x83ED, 0x41A5, 0x83ED, 0x6288, 0x8BAB, 0x836A, 0x6287, 0x6B2A, 0x5267, 0x83CD, 0x5A68, 0x5228, 0x3986, 0x3985, 0x7B0A, 0x6A67, 0x7267, 0x832B, 0x49A5, 0x6206, 0x8AC9, 0x72A8, 0x82C9, 0x82E9, 0x8309, 0x6A46, 0x8B2B, 0x3860, 0x8329, 0x6A67, 0x7288, 0x7268, 0x61E6, 0x7267, 0x6A67, 0x59C5, 0x51A4, 0x6A46, 0x7AA8, 0x6A26, 0x7287, 0x7AA8, 0x72A8, 0x72A9, 0x51C5, 0x5A27, 0x5A27, 0x3923, 0x ~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~ 0x7B8C, 0x734B, 0x6B0A, 0x83CD, 0x83ED, 0x8C0E, 0x7B8C, 0x7B6C, 0x20C2, 0x5227, 0x83ED, 0x6AE9, 0x734B, 0x62A9, 0x7B6B, 0x7B8C, 0x62E9, 0x7BAC, 0x7B6B, 0x732A, 0x940D, 0x83AC, 0x732A, 0x7309, 0x8BCC, 0x7309, 0x8BCD, 0x83AC, 0x7B6B, 0x940D, 0x3943, 0x942E, 0x7B6B, 0x734A, 0x7B8B, 0x62C8, 0x7B8B, 0x7B6A, 0x7BAB, 0x732A, 0x7B6B, 0x7B6B, 0x83CC, 0x6B09, 0x6AA9, 0x6AE9, 0x7B6B, 0x7B8B, 0x83AC, 0x734B, 0x6AC9, 0x6B0A, 0x734B, 0x734A, 0x62A8, 0x732A, 0x8C0E, 0x8BCD, 0x944F, 0x734B, 0x7B8B, 0x732A, 0x942E, 0x8BCD, 0x83AD, 0x732B, 0x6B0A, 0x6AEA, 0x62C9, 0x9C90, 0x28C2, 0x8BEE, 0x93EE, 0x8BCD, 0x4183, 0x838B, 0x7B6A, 0x6287, 0x8BCB }; Programming the new project After saving the modified code and recompile the sln_viznas_iot_elock_oobe project (as shown in the figure below), then connect the MCU-LINK to J6 on the SLN-VIZNAS-IOT, just like Fig9 shows. Fig8 Recompile code Fig9 MCU-LINK (Note: it needs to reselect the Flash driver, as the below figure shows.) Fig10 Flash driver After that, it's able to program the code project to the on-board Hyperflash. Test & Summary When the new code project boot-up, please refer to Get Started with the SLN-VIZNAS-IOT to use the serial terminal to test the newly added three Shell commands: orders, set_taste, and get_taste. Once a face is successfully recognized, the cafe logo will appear up (as shown in Fig11). Fig11 Cafe logo Definitely, this smart coffee machine seems like a 'toy' demo, and there is a lot of work to improve it. Below is the list of my future work plans, Use the LCD panel instead of USB to display; Connect an external amplifier to enable voice prompt feature; Enable the Wifi feature to connect to the App; Use the GUI library to enhance UI experience; Add a voice recognition feature to control; And I'll be glad to hear any comments from you.    
View full article
When: TUESDAY, SEPTEMBER 14TH AT 11 AM EST
Click here to register today.
Topic of discussion
From consumer to industrial devices, a paradigm shift has already begun. Our everyday experiences with smartphones are driving the demand for higher performance, more connectivity, and an exceptional user experience as the cornerstones of the embedded products we use. But how can you make it easier to take your product to the next level? Join NXP and Crank Software to learn why the NXP i.MX RT1170 crossover MCU is the right embedded hardware to build on, how it can help lower development risks, and how developing engaging user experiences can easily become part of your development workflow.
During this session, you’ll learn:
About optimizing power and performance with i.MX RT1170 crossover MCUs
How embedded GUI development can be a collaborative experience between development and design
How Storyboard’s Rapid Design and Iteration technology embraces UI design changes during development
What integrated capabilities can help leverage the hardware’s full potential
How easy it is to develop GUI apps, via a live demo of Storyboard
View full article
Introduction
The NXP i.MX RT106x has two USB 2.0 OTG instances, and the RT1060 EVK exposes both USB interfaces on the board. However, the RT1060 SDK only provides single-host USB examples. Although the RT1060 USB host stack supports multiple devices, a USB hub is still needed when the user wants to connect two devices to one port. This article shows how to make both USB instances work as hosts.
The RT1060 SDK has single-host examples which support multiple device classes, like host_hid_mouse_keyboard_bm, but this application does not reuse those examples. Instead, MCUXpresso Config Tools is used to build the demo from the beginning. The Config Tool is a very powerful tool which can configure clocks, pins and peripherals, and especially USB; in this application demo it saves about 95% of the coding work.
Hardware and software tools
RT1060 EVK
MCUXpresso 11.4.0
MIMXRT1060 SDK 2.9.1
Step 1
This project supports a USB HID mouse and USB CDC. First, create an empty project named MIMXRT1062_usb_host_dual_port. When selecting SDK components, select "USB host CDC" and "USB host HID" in the Middleware tab. The IDE selects the other necessary components automatically.
After creating the empty project, the clocks should be configured first. Both USB PHYs need a 480 MHz clock.
Step 2
The next step is to configure the USB host in the peripheral config tool. Due to a limitation of the config tool, only one host instance of the USB component is allowed, so in this project CDC VCOM is added first.
Step 3
After these settings, click "Update Code" in the control bar. This turns all the configurations into code and merges them into the project. Then click the "copy to clipboard" button; this copies the host task call. Paste it into the forever while loop in the project's main(). Besides that, a call to BOARD_InitBootPeripherals() needs to be added in main(). At this point, USB VCOM is ready. The tool not only copies the files and configures USB, it also creates a basic implementation framework. If the project is compiled and downloaded to the RT1060 EVK, it can enumerate a USB CDC VCOM device on USB1. If characters are sent from the CDC device, the project forwards them to the DAPLink UART port so that they can be seen in a terminal on the computer.
Step 4
To get the USB HID mouse code, another USB HID project needs to be created. The workflow is similar to the first project. Here is a screenshot of the USB HID configuration.
Click "Update code", and the HID mouse code is generated. The config tool generates two files, usb_host_interface_0_hid_mouse.c and usb_host_interface_0_hid_mouse.h. Copy them to the "source" folder of the dual-host project.
Step 5
The next step is to modify some USB macro definitions.
<usb_host_config.h>
#define USB_HOST_CONFIG_EHCI 2 /* there are two host instances */
#define USB_HOST_CONFIG_MAX_HOST 2 /* the USB driver can support two EHCI controllers */
#define USB_HOST_CONFIG_HID (1U) /* for the mouse */
The next step is to merge usb_host_app.c. The project initializes the USB hardware and software in USB_HostApplicationInit().
usb_status_t USB_HostApplicationInit(void) { usb_status_t status; USB_HostClockInit(kUSB_ControllerEhci0); USB_HostClockInit(kUSB_ControllerEhci1); #if ((defined FSL_FEATURE_SOC_SYSMPU_COUNT) && (FSL_FEATURE_SOC_SYSMPU_COUNT)) SYSMPU_Enable(SYSMPU, 0); #endif /* FSL_FEATURE_SOC_SYSMPU_COUNT */ status = USB_HostInit(kUSB_ControllerEhci0, &g_HostHandle[0], USB_HostEvent); status = USB_HostInit(kUSB_ControllerEhci1, &g_HostHandle[1], USB_HostEvent); /*each usb instance have a g_HostHandle*/ if (status != kStatus_USB_Success) { return status; } else { USB_HostInterface0CicVcomInit(); USB_HostInterface0HidMouseInit(); } USB_HostIsrEnable(); return status; } In USB_HostIsrEnable(), add code to enable USB2 interrupt.   irqNumber = usbHOSTEhciIrq[1]; NVIC_SetPriority((IRQn_Type)irqNumber, USB_HOST_INTERRUPT_PRIORITY); EnableIRQ((IRQn_Type)irqNumber); Then add and modify USB interrupt handler. void USB_OTG1_IRQHandler(void) { USB_HostEhciIsrFunction(g_HostHandle[0]); } void USB_OTG2_IRQHandler(void) { USB_HostEhciIsrFunction(g_HostHandle[1]); } Since both USB instance share the USB stack, When USB event come, all the event will call USB_HostEvent() in usb_host_app.c. HID code should also be merged into this function. static usb_status_t USB_HostEvent(usb_device_handle deviceHandle, usb_host_configuration_handle configurationHandle, uint32_t eventCode) { usb_status_t status1; usb_status_t status2; usb_status_t status = kStatus_USB_Success; /* Used to prevent from multiple processing of one interface; * e.g. when class/subclass/protocol is the same then one interface on a device is processed only by one interface on host */ uint8_t processedInterfaces[USB_HOST_CONFIG_CONFIGURATION_MAX_INTERFACE] = {0}; switch (eventCode & 0x0000FFFFU) { case kUSB_HostEventAttach: status1 = USB_HostInterface0CicVcomEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces); status2 = USB_HostInterface0HidMouseEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces); if ((status1 == kStatus_USB_NotSupported) && (status2 == kStatus_USB_NotSupported)) { status = kStatus_USB_NotSupported; } break; case kUSB_HostEventNotSupported: usb_echo("Device not supported.\r\n"); break; case kUSB_HostEventEnumerationDone: status1 = USB_HostInterface0CicVcomEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces); status2 = USB_HostInterface0HidMouseEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces); if ((status1 != kStatus_USB_Success) && (status2 != kStatus_USB_Success)) { status = kStatus_USB_Error; } break; case kUSB_HostEventDetach: status1 = USB_HostInterface0CicVcomEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces); status2 = USB_HostInterface0HidMouseEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces); if ((status1 != kStatus_USB_Success) && (status2 != kStatus_USB_Success)) { status = kStatus_USB_Error; } break; case kUSB_HostEventEnumerationFail: usb_echo("Enumeration failed\r\n"); break; default: break; } return status; } USB_HostTasks() is used to deal with all the USB messages in the main loop. At last, HID work should also be added in this function. void USB_HostTasks(void) { USB_HostTaskFn(g_HostHandle[0]); USB_HostTaskFn(g_HostHandle[1]); USB_HostInterface0CicVcomTask(); USB_HostInterface0HidMouseTask(); }   After all these steps, the dual USB function is ready. User can insert USB mouse and USB CDC device into any of the two USB port simultaneously. 
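Putting the pieces together, the application entry point only has to run the generated initialization and then service the host stack in the forever loop described in Step 3. The following is a minimal sketch, assuming the usual names generated by MCUXpresso Config Tools and the standard RT1060 board support calls (header names and BOARD_* functions are assumptions; the exact names in a generated project may differ).

/* Minimal sketch of main() for the dual-port host demo; not the exact project code. */
#include "board.h"          /* assumed standard SDK board support header */
#include "pin_mux.h"        /* generated by the pins tool */
#include "clock_config.h"   /* generated by the clocks tool */
#include "peripherals.h"    /* generated by the peripherals tool */

int main(void)
{
    BOARD_ConfigMPU();            /* standard RT1060 MPU setup */
    BOARD_InitBootPins();
    BOARD_InitBootClocks();
    BOARD_InitBootPeripherals();  /* generated peripheral init, including the USB host setup */
    BOARD_InitDebugConsole();

    while (1)
    {
        /* Services both EHCI instances plus the CDC and HID class drivers */
        USB_HostTasks();
    }
}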
Conclusion
All RT/LPC/Kinetis devices with two OTG or host instances can support dual USB host in the same way. With the help of the MCUXpresso Config Tool, this function is easy to implement.
View full article
Obtaining the footprint for Kinetis/LPC/i.MXRT part numbers is very straightforward using the Microcontroller Symbols, Footprints and Models Library homepage, at the following link: https://www.nxp.com/design/software/models/microcontroller-symbols-footprints-and-models:MCUCAD?tid=vanMCUCAD
What some users may not be aware of is that the BXL file available for NXP Kinetis/LPC/i.MXRT part numbers also contains the 3D model of the package, which is often needed when working on the industrial design of your application. You may follow the steps below to export the 3D model of the package in STEP (Standard for the Exchange of Product Data) format using the Ultra Librarian software, which can be downloaded from the link on the models library homepage. A STEP (.step, .stp) file stores the model in ASCII format. This format can be imported into many CAD suites that can work with 3D solids.
First, obtain the BXL file for the part number you are interested in, in this example MIMXRT1052CVL5B.bxl.
Then, open the Ultra Librarian project and load this file using the “Load Data” button, and select the “3D Step Model” checkbox from the Select Tools options. Finally, select the Export to Select Tools option. Once the exporting process is finished, the STEP (.stp) file will be available in the path UltraLibrarian/Library/Exported.
The STEP (.stp) file can be opened in CAD suites that support solid 3D objects, like FreeCAD, which is open source.
View full article
As we know, the RT series MCUs support XIP (Execute In Place) mode and, because it saves pins, serial NOR flash is the memory most commonly used for it: the FlexSPI module can efficiently fetch code and data from the serial NOR flash for the Cortex-M7 to execute. The fetch is implemented with the Quad IO Fast Read command, and the serial NOR flash works in SDR (Single Data transfer Rate) mode, receiving data on the SCLK rising edge and transmitting data on the SCLK falling edge. Compared to SDR mode, DDR (Dual Data transfer Rate) mode has a higher throughput capability. Can it provide better XIP performance, and what do we have to do to make the serial NOR flash work in DDR mode?

SDR & DDR mode
SDR mode: In SDR (Single Data transfer Rate) mode, data is clocked on only one edge of the clock (either the rising or the falling edge). This means that for SDR to transmit data at X Mbps, the clock bit rate needs to be 2X Mbps.
DDR mode: In DDR (Dual Data transfer Rate) mode, also known as DTR (Dual Transfer Rate) mode, data is transferred on both the rising and falling edges of the clock. This means that transmitting data at X Mbps only requires a clock bit rate of X Mbps, hence doubling the bandwidth (as Fig 1 shows).
Fig 1

Enable DDR mode
The steps below illustrate how to make the i.MX RT1060 boot from QSPI flash working in DDR mode. Note: the board is the MIMXRT1060-EVK and the IDE is MCUXpresso IDE.
Open a hello_world project as the template.
Modify the FDCB (Flash Device Configuration Block):
a) Set the controllerMiscOption parameter to support the DDR read command.
b) Set the serial flash frequency to 60 MHz.
c) Parse the DDR read command into a command sequence. The following table shows a template command sequence for the DDR Quad IO FAST READ instruction, and it almost matches the FRQDTR (Fast Read Quad IO DTR) sequence of the IS25WP064 (as Fig 2 shows).
Fig 2 FRQDTR Sequence
d) Adjust the dummy cycles. The dummy cycles must match the specific serial clock frequency; the default number of dummy cycles for the FRQDTR sequence command is 6 (as the table below shows).
However, when the serial clock frequency is 60 MHz, the dummy cycles should be changed to 4 (as the table below shows).
So the [P6:P3] bits of the Read Register need to be configured (as the table below shows) by manually adding the SET READ PARAMETERS command sequence (as Fig 3 shows) to the FDCB.
Fig 3 SET READ PARAMETERS command sequence
Furthermore, in DDR mode the SCLK cycle is double the serial root clock cycle, so the dummy operand value should be set to 2N, 2N-1 or 2N+1, depending on how the dummy cycles are defined in the device datasheet. In the end, we get an adjusted FDCB like the one below.
// Set dummy cycles
#define FLASH_DUMMY_CYCLES 8
// Set Read Register command sequence's index in the LUT table
#define CMD_LUT_SEQ_IDX_SET_READ_PARAM 7
// Read, Read Status, Write Enable command sequences' indexes in the LUT table
#define CMD_LUT_SEQ_IDX_READ 0
#define CMD_LUT_SEQ_IDX_READSTATUS 1
#define CMD_LUT_SEQ_IDX_WRITEENABLE 3

const flexspi_nor_config_t qspiflash_config = {
    .memConfig =
        {
            .tag = FLEXSPI_CFG_BLK_TAG,
            .version = FLEXSPI_CFG_BLK_VERSION,
            .readSampleClkSrc = kFlexSPIReadSampleClk_LoopbackFromDqsPad,
            .csHoldTime = 3u,
            .csSetupTime = 3u,
            // Enable DDR mode
            .controllerMiscOption = kFlexSpiMiscOffset_DdrModeEnable | kFlexSpiMiscOffset_SafeConfigFreqEnable,
            .sflashPadType = kSerialFlash_4Pads,
            //.serialClkFreq = kFlexSpiSerialClk_100MHz,
            .serialClkFreq = kFlexSpiSerialClk_60MHz,
            .sflashA1Size = 8u * 1024u * 1024u,
            // Enable flash register configuration
            .configCmdEnable = 1u,
            .configModeType[0] = kDeviceConfigCmdType_Generic,
            .configCmdSeqs[0] =
                {
                    .seqNum = 1,
                    .seqId = CMD_LUT_SEQ_IDX_SET_READ_PARAM,
                    .reserved = 0,
                },
            .lookupTable =
                {
                    // Read LUTs
                    [4 * CMD_LUT_SEQ_IDX_READ] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xED, RADDR_DDR, FLEXSPI_4PAD, 0x18),
                    // The MODE8_DDR sub-sequence costs 2 cycles that are part of the whole dummy cycles
                    [4 * CMD_LUT_SEQ_IDX_READ + 1] = FLEXSPI_LUT_SEQ(MODE8_DDR, FLEXSPI_4PAD, 0x00, DUMMY_DDR, FLEXSPI_4PAD, FLASH_DUMMY_CYCLES - 2),
                    [4 * CMD_LUT_SEQ_IDX_READ + 2] = FLEXSPI_LUT_SEQ(READ_DDR, FLEXSPI_4PAD, 0x04, STOP, FLEXSPI_1PAD, 0x00),

                    // READ STATUS REGISTER
                    [4 * CMD_LUT_SEQ_IDX_READSTATUS] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x05, READ_SDR, FLEXSPI_1PAD, 0x01),
                    [4 * CMD_LUT_SEQ_IDX_READSTATUS + 1] = FLEXSPI_LUT_SEQ(STOP, FLEXSPI_1PAD, 0x00, 0, 0, 0),

                    // WRITE ENABLE
                    [4 * CMD_LUT_SEQ_IDX_WRITEENABLE] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x06, STOP, FLEXSPI_1PAD, 0x00),

                    // Set Read Register
                    [4 * CMD_LUT_SEQ_IDX_SET_READ_PARAM] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x63, WRITE_SDR, FLEXSPI_1PAD, 0x01),
                    [4 * CMD_LUT_SEQ_IDX_SET_READ_PARAM + 1] = FLEXSPI_LUT_SEQ(STOP, FLEXSPI_1PAD, 0x00, 0, 0, 0),
                },
        },
    .pageSize = 256u,
    .sectorSize = 4u * 1024u,
    .blockSize = 64u * 1024u,
    .isUniformBlockSize = false,
};

Is DDR mode really better?
According to the RT1060 datasheet, the table below lists the maximum FlexSPI operating frequencies. The MIMXRT1060-EVK's onboard QSPI flash is the IS25WP064AJBLE, which has no DQS pin, so setting MCR0.RXCLKSRC=1 (internal dummy read strobe, looped back from the DQS pad) is the most optimized option available.

Operation mode | RXCLKsrc=0 | RXCLKsrc=1 | RXCLKsrc=3
SDR            | 60 MHz     | 133 MHz    | 166 MHz
DDR            | 30 MHz     | 66 MHz     | 166 MHz

In other words, this QSPI flash can run at up to 133 MHz in SDR mode versus 66 MHz in DDR mode. From the perspective of throughput capacity, the two are almost the same, so DDR mode does not look like a better option for the IS25WP064AJBLE; the following experiments validate that assumption.

Experiment
mbedtls_benchmark
I use mbedtls_benchmark as the first test demo and run it under the conditions below:
100 MHz, SDR mode;
133 MHz, SDR mode;
66 MHz, DDR mode;
According to the corresponding printout information (shown below), I made a comparison table and marked the worst performance for each item among the three conditions, as Fig 4 shows.
SDR Mode run at 100 MHz.
FlexSPI clock source is 3, FlexSPI Div is 6, PllPfd2Clk is 720000000 mbedTLS version 2.16.6 fsys=600000000 Using following implementations: SHA: DCP HW accelerated AES: DCP HW accelerated AES GCM: Software implementation DES: Software implementation Asymmetric cryptography: Software implementation MD5 : 18139.63 KB/s, 27.10 cycles/byte SHA-1 : 44495.64 KB/s, 12.52 cycles/byte SHA-256 : 47766.54 KB/s, 11.61 cycles/byte SHA-512 : 2190.11 KB/s, 267.88 cycles/byte 3DES : 1263.01 KB/s, 462.49 cycles/byte DES : 2962.18 KB/s, 196.33 cycles/byte AES-CBC-128 : 52883.94 KB/s, 10.45 cycles/byte AES-GCM-128 : 1755.38 KB/s, 329.33 cycles/byte AES-CCM-128 : 2081.99 KB/s, 279.72 cycles/byte CTR_DRBG (NOPR) : 5897.16 KB/s, 98.15 cycles/byte CTR_DRBG (PR) : 4489.58 KB/s, 129.72 cycles/byte HMAC_DRBG SHA-1 (NOPR) : 1297.53 KB/s, 448.03 cycles/byte HMAC_DRBG SHA-1 (PR) : 1205.51 KB/s, 486.04 cycles/byte HMAC_DRBG SHA-256 (NOPR) : 1786.18 KB/s, 327.70 cycles/byte HMAC_DRBG SHA-256 (PR) : 1779.52 KB/s, 328.93 cycles/byte RSA-1024 : 202.33 public/s RSA-1024 : 7.00 private/s DHE-2048 : 0.40 handshake/s DH-2048 : 0.40 handshake/s ECDSA-secp256r1 : 9.00 sign/s ECDSA-secp256r1 : 4.67 verify/s ECDHE-secp256r1 : 5.00 handshake/s ECDH-secp256r1 : 9.33 handshake/s   DDR Mode run at 66 MHz. FlexSPI clock source is 2, FlexSPI Div is 5, PllPfd2Clk is 396000000 mbedTLS version 2.16.6 fsys=600000000 Using following implementations: SHA: DCP HW accelerated AES: DCP HW accelerated AES GCM: Software implementation DES: Software implementation Asymmetric cryptography: Software implementation MD5 : 16047.13 KB/s, 27.12 cycles/byte SHA-1 : 44504.08 KB/s, 12.54 cycles/byte SHA-256 : 47742.88 KB/s, 11.62 cycles/byte SHA-512 : 2187.57 KB/s, 267.18 cycles/byte 3DES : 1262.66 KB/s, 462.59 cycles/byte DES : 2786.81 KB/s, 196.44 cycles/byte AES-CBC-128 : 52807.92 KB/s, 10.47 cycles/byte AES-GCM-128 : 1311.15 KB/s, 446.53 cycles/byte AES-CCM-128 : 2088.84 KB/s, 281.08 cycles/byte CTR_DRBG (NOPR) : 5966.92 KB/s, 97.55 cycles/byte CTR_DRBG (PR) : 4413.15 KB/s, 130.42 cycles/byte HMAC_DRBG SHA-1 (NOPR) : 1291.64 KB/s, 449.47 cycles/byte HMAC_DRBG SHA-1 (PR) : 1202.41 KB/s, 487.05 cycles/byte HMAC_DRBG SHA-256 (NOPR) : 1748.38 KB/s, 328.16 cycles/byte HMAC_DRBG SHA-256 (PR) : 1691.74 KB/s, 329.78 cycles/byte RSA-1024 : 201.67 public/s RSA-1024 : 7.00 private/s DHE-2048 : 0.40 handshake/s DH-2048 : 0.40 handshake/s ECDSA-secp256r1 : 8.67 sign/s ECDSA-secp256r1 : 4.67 verify/s ECDHE-secp256r1 : 4.67 handshake/s ECDH-secp256r1 : 9.00 handshake/s   Fig 4 Performance comparison We can find that most of the implementation items are achieve the worst performance when QSPI works in DDR mode with 66 MHz. Coremark demo The second demo is running the Coremark demo under the above three conditions and the result is illustrated below. SDR Mode run at 100 MHz. FlexSPI clock source is 3, FlexSPI Div is 6, PLL3 PFD0 is 720000000 2K performance run parameters for coremark. CoreMark Size : 666 Total ticks : 391889200 Total time (secs): 16.328717 Iterations/Sec : 2449.671999 Iterations : 40000 Compiler version : MCUXpresso IDE v11.3.1 Compiler flags : Optimization most (-O3) Memory location : STACK seedcrc : 0xe9f5 [0]crclist : 0xe714 [0]crcmatrix : 0x1fd7 [0]crcstate : 0x8e3a [0]crcfinal : 0x25b5 Correct operation validated. See readme.txt for run and reporting rules. CoreMark 1.0 : 2449.671999 / MCUXpresso IDE v11.3.1 Optimization most (-O3) / STACK   SDR Mode run at 133 MHz. 
FlexSPI clock source is 3, FlexSPI Div is 4, PLL3 PFD0 is 664615368
2K performance run parameters for coremark.
CoreMark Size      : 666
Total ticks        : 391888682
Total time (secs)  : 16.328695
Iterations/Sec     : 2449.675237
Iterations         : 40000
Compiler version   : MCUXpresso IDE v11.3.1
Compiler flags     : Optimization most (-O3)
Memory location    : STACK
seedcrc            : 0xe9f5
[0]crclist         : 0xe714
[0]crcmatrix       : 0x1fd7
[0]crcstate        : 0x8e3a
[0]crcfinal        : 0x25b5
Correct operation validated. See readme.txt for run and reporting rules.
CoreMark 1.0 : 2449.675237 / MCUXpresso IDE v11.3.1 Optimization most (-O3) / STACK

DDR Mode run at 66 MHz.
FlexSPI clock source is 2, FlexSPI Div is 5, PLL3 PFD0 is 396000000
2K performance run parameters for coremark.
CoreMark Size      : 666
Total ticks        : 391890772
Total time (secs)  : 16.328782
Iterations/Sec     : 2449.662173
Iterations         : 40000
Compiler version   : MCUXpresso IDE v11.3.1
Compiler flags     : Optimization most (-O3)
Memory location    : STACK
seedcrc            : 0xe9f5
[0]crclist         : 0xe714
[0]crcmatrix       : 0x1fd7
[0]crcstate        : 0x8e3a
[0]crcfinal        : 0x25b5
Correct operation validated. See readme.txt for run and reporting rules.
CoreMark 1.0 : 2449.662173 / MCUXpresso IDE v11.3.1 Optimization most (-O3) / STACK

Comparing the CoreMark scores, the lowest score is obtained when QSPI works in DDR mode at 66 MHz, although the three results are actually very close. From these two tests we can conclude that DDR mode may not be a better option, at least for the i.MX RT10xx series MCUs.
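To see why the scores end up so close, it helps to compare the peak read bandwidth the two configurations can deliver on the 4-bit bus. The short sketch below is just back-of-the-envelope arithmetic (command, address and dummy-cycle overhead is ignored), not a measurement:

#include <stdio.h>

/* Rough peak read bandwidth of a quad (4-bit) bus:
 *   SDR: 4 bits transferred per SCLK cycle
 *   DDR: 8 bits transferred per SCLK cycle (both edges)
 * MHz * bits-per-cycle / 8 gives MB/s, ignoring protocol overhead. */
static double quad_read_mbytes_per_s(double sclk_mhz, int ddr)
{
    double bits_per_cycle = ddr ? 8.0 : 4.0;
    return sclk_mhz * bits_per_cycle / 8.0;
}

int main(void)
{
    printf("SDR @ 133 MHz: %.1f MB/s peak\n", quad_read_mbytes_per_s(133.0, 0)); /* ~66.5 MB/s */
    printf("DDR @  66 MHz: %.1f MB/s peak\n", quad_read_mbytes_per_s(66.0, 1));  /* ~66.0 MB/s */
    printf("SDR @ 100 MHz: %.1f MB/s peak\n", quad_read_mbytes_per_s(100.0, 0)); /* ~50.0 MB/s */
    return 0;
}

With the IS25WP064AJBLE limited to 66 MHz in DDR mode, the raw bandwidth is essentially the same as 133 MHz SDR, which matches the benchmark results above.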
View full article
i.MX RT1170 crossover MCUs are a new-generation product in NXP's RT family, with 1 GHz speed and rich on-chip peripherals. Within the RT1170 sub-family, RT1173/RT1175/RT1176 are dual-core devices: one Cortex-M7 core runs at 1 GHz and one Cortex-M4 core runs at 400 MHz. The two cores can be debugged through one SWD port.

On the MIMXRT1170-EVK, the Freelink debug interface uses CMSIS-DAP as the debug probe by default. When debugging a dual-core project, for example the evkmimxrt1170_hello_world_cm7 and evkmimxrt1170_hello_world_cm4 projects, just click the debug button in the CM7 project; after the CM7 project enters the debug state, the CM4 project starts debugging automatically. But if a developer wants to use J-Link as the debug probe, the CM4 project will not start automatically, and starting CM4 debugging manually will fail. Can J-Link debug both cores simultaneously? Yes, it can, but some additional settings are needed.

IDE and SDK: MCUXpresso IDE 11.3, MIMXRT1170-EVK SDK 2.9.1, J-Link probe version 9 or above (or change the Freelink application firmware to J-Link), Segger J-Link software JLink_Windows_V698a.

Import the SDK example; here we select multicore_examples/evkmimxrt1170_hello_world_cm7. MCUXpresso IDE imports both the CM4 and CM7 projects automatically. Compile both projects.
Debug the CM7 project first. Then switch to the CM4 project and also click the debug button. The CM4 project will not debug properly, so we exit debug. With this step, the IDE has created two debug configurations in RUN -> Debug Configurations.
Click evkmimxrt1170_hello_world_cm4 JLink Debug, click the JLink Debugger tab, and add evkmimxrt1170_connect_cm4_cm4side.jlinkscript. Then unselect the "Attach to a running target" checkbox.
Set a breakpoint at the start of the main() function of the CM4 project. This is because sometimes the IDE cannot suspend at the start of main() when debugging starts, so a second breakpoint can be helpful. Take care to set the breakpoint on BOARD_ConfigMPU() or later code; do not set the breakpoint on "gpio_pin_config_t led_config…", otherwise debugging will fail.
Now we can start to debug the CM7 project. Check RUN -> Debug Configurations -> evkmimxrt1170_hello_world_cm4 JLink Debug again, because the IDE may re-enable "Attach to a running target" automatically; we must disable it again.
When the CM7 debug session is ready, switch to the CM4 project and click the "debug" button. Then resume the CM7 project. The CM4 project will start debugging and suspend at the breakpoint.

Notes: If you follow this guide but still can't debug both cores, please try to erase the whole chip and try again. If the CM7 project fails in MCMGR_Init(), please check the boot configuration pins; they should be set to Internal Boot mode.
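For context on why the boot mode matters in that last note: in the SDK multicore examples the CM7 application is responsible for releasing the CM4 core through the Multicore Manager (MCMGR), so if the CM7 image never runs, the CM4 side has nothing to start or attach to. Below is a minimal sketch of that startup sequence following the usual SDK multicore example conventions; CORE1_BOOT_ADDRESS is a placeholder that must match the address your CM4 image is actually linked to, and the surrounding board initialization is omitted.

#include "mcmgr.h"

/* Hypothetical CM4 boot address - take the real value from the CM4 project's
 * linker settings / the CM7 project's core1 image definitions. */
#define CORE1_BOOT_ADDRESS ((void *)0x20200000U)

int main(void)
{
    /* Board init (pins, clocks, console) omitted for brevity. */

    /* Initialize the Multicore Manager and kick off the secondary (CM4) core. */
    (void)MCMGR_Init();
    (void)MCMGR_StartCore(kMCMGR_Core1, CORE1_BOOT_ADDRESS, 0, kMCMGR_Start_Synchronous);

    for (;;)
    {
        /* CM7 application code runs here while the CM4 core executes its own image. */
    }
}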
View full article
There is an issue with the DCD file used in the SDK 2.9.0 release for the i.MX RT1170 processor. When the included DCD file is used in a project to configure the SDRAM memory on the EVK, the refresh for the memory is not enabled. This can lead to corruption/data loss over time.   To fix the problem, replace the dcd.c file in your project with the attached file instead.   We are working on a fix, and a new revision of the SDK will be released soon.
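In the meantime, if you want to confirm whether the DCD you are booting with leaves SDRAM refresh disabled, a crude retention test is usually enough to expose the problem: fill a region of SDRAM, wait, and read it back. The sketch below is only illustrative; the base address 0x80000000 and the test size are assumptions based on the EVK's SDRAM mapping, and the busy-wait should be replaced with a timer-based delay in real code.

#include <stdint.h>

#define SDRAM_TEST_BASE  ((volatile uint32_t *)0x80000000U) /* assumed EVK SDRAM base address */
#define SDRAM_TEST_WORDS (1024U * 1024U)                    /* test 4 MB of the array */

static uint32_t sdram_retention_test(void)
{
    uint32_t errors = 0U;

    /* Write a known pattern. */
    for (uint32_t i = 0U; i < SDRAM_TEST_WORDS; i++)
    {
        SDRAM_TEST_BASE[i] = i ^ 0xA5A5A5A5U;
    }

    /* Crude wait of several seconds so the cells can decay if auto-refresh is not running. */
    for (volatile uint32_t d = 0U; d < 600000000U; d++)
    {
    }

    /* Read the pattern back and count mismatches. */
    for (uint32_t i = 0U; i < SDRAM_TEST_WORDS; i++)
    {
        if (SDRAM_TEST_BASE[i] != (i ^ 0xA5A5A5A5U))
        {
            errors++;
        }
    }
    return errors;
}

A non-zero error count after the wait strongly suggests that refresh is not enabled by the DCD in use.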
View full article
[Chinese translated version] See the attachment.   Original article link: https://community.nxp.com/t5/i-MX-RT-Knowledge-Base/Design-an-IoT-edge-node-for-CV-application-base-on-the-i/ta-p/1127423 
View full article
[Chinese translated version] See the attachment.   Original article link: https://community.nxp.com/t5/i-MX-Community-Articles/Effortless-GUI-Development-with-NXP-Microcontrollers/ba-p/1131179  
View full article
[Chinese translated version] See the attachment.   Original article link: https://community.nxp.com/docs/DOC-345190  
View full article
[Chinese translated version] See the attachment.   Original article link: https://community.nxp.com/t5/eIQ-Machine-Learning-Software/eIQ-on-i-MX-RT1064-EVK/ta-p/1123602 
View full article
[Chinese translated version] See the attachment.   Original article link: https://community.nxp.com/t5/i-MX-RT-Knowledge-Base/RT1050-HAB-Encrypted-Image-Generation-and-Analysis/ta-p/1124877  
View full article
In this tutorial, I'd like to show the steps of deploying an image classification model on the i.MX RT1060, enabling you to classify fashion images into categories. In the first part of this tutorial, we will review the Fashion MNIST dataset, including how to download it to your system. From there we’ll define a simple CNN network using the TensorFlow platform. Next, we’ll train the CNN model on the Fashion MNIST dataset and review the results. Finally, we'll optimize the model; after that, the model will be smaller and inference will be faster, which is valuable for resource-limited devices such as MCUs. Let’s go ahead and get started!

Fashion MNIST dataset
The Fashion MNIST dataset was created by the e-commerce company Zalando.
Fig 1 Fashion MNIST dataset
As they note on their official GitHub repo for the Fashion MNIST dataset, there are a few problems with the standard MNIST digit recognition dataset:
It’s far too easy for standard machine learning algorithms to obtain 97%+ accuracy.
It’s even easier for deep learning models to achieve 99%+ accuracy.
The dataset is overused.
MNIST cannot represent modern computer vision tasks.
Zalando, therefore, created the Fashion MNIST dataset as a drop-in replacement for MNIST.
60,000 training examples
10,000 testing examples
10 classes: T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Ankle boot
28×28 grayscale images
The code below loads the Fashion-MNIST dataset using TensorFlow and creates a plot of the first 25 images in the training dataset.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# For easy reset of notebook state.
tf.keras.backend.clear_session()

# load dataset
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

plt.figure(figsize=(8,8))
for i in range(25):
    plt.subplot(5,5,i+1,)
    plt.tight_layout()
    plt.imshow(train_images[i])
    plt.xlabel(class_names[train_labels[i]])
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
plt.show()

Fig 2
Running the code loads the Fashion-MNIST train and test datasets and prints their shapes.
Fig 3
We can see that there are 60,000 examples in the training dataset and 10,000 in the test dataset, and that the images are indeed square with 28×28 pixels.

Creating the model
We need to define a neural network model for the image classification purpose, and the model should have two main parts: the feature extractor and the classifier that makes a prediction.

Defining a simple Convolutional Neural Network (CNN)
For the convolutional front-end, we build 3 convolution layers with a small filter size (3,3) and a modest number of filters, followed by a max-pooling layer. The last feature map is flattened to provide features to the classifier. As we know, it's a multi-class classification task, so we require an output layer with 10 nodes in order to predict the probability distribution of an image belonging to each of the 10 classes. In this case, we require the use of a softmax activation function. Between the feature extractor and the output layer, we add a dense layer to interpret the features. All layers will use the ReLU activation function and the He weight initialization scheme, both best practices. 
We will use the Adam optimizer to optimize the sparse_categorical_crossentropy loss function, suitable for multi-class classification, and we will monitor the classification accuracy metric, which is appropriate given we have the same number of examples in each of the 10 classes. The code below defines the model, and running it shows the structure of the model.

# Define a Model
model = tf.keras.models.Sequential()
# First convolution, kernel: 16*3*3
model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu',
                                 kernel_initializer='he_uniform', input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
# Second convolution, kernel: 32*3*3
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
# Third convolution, kernel: 32*3*3
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(32, activation='relu', kernel_initializer='he_uniform'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))

Fig 4

Training the model
After the model is defined, we need to train it. The model will be trained using 5-fold cross-validation. The value of k=5 was chosen to provide a baseline for repeated evaluation while not being so large as to require a long running time. Each validation set will be 20% of the training dataset, or about 12,000 examples. The training dataset is shuffled prior to being split, and the sample shuffling is performed each time, so that any model we train will have the same train and validation datasets in each fold, providing an apples-to-apples comparison. We will train the baseline model for a modest 20 training epochs with a default batch size of 32 examples. The validation set for each fold will be used to validate the model during each epoch of the training run, so we can later create learning curves, and at the end of the run we use the test dataset to estimate the performance of the model. As such, we will keep track of the resulting history from each run, as well as the classification accuracy of each fold. The train_model() function below implements these behaviors, taking the training images and labels as arguments and returning a list of accuracy scores and training histories that can later be summarized.

from sklearn.model_selection import KFold

# train a model using k-fold cross-validation
def train_model(dataX, dataY, n_folds=5):
    scores, histories = list(), list()
    # prepare cross validation
    kfold = KFold(n_folds, shuffle=True, random_state=1)
    for train_ix, validate_ix in kfold.split(dataX):
        # select rows for train and validation
        trainX, trainY = dataX[train_ix], dataY[train_ix]
        validate_X, validate_Y = dataX[validate_ix], dataY[validate_ix]
        # fit model
        history = model.fit(trainX, trainY, epochs=20, batch_size=32,
                            validation_data=(validate_X, validate_Y), verbose=0)
        # evaluate model
        _, acc = model.evaluate(validate_X, validate_Y, verbose=0)
        print("Accuracy: {:.4f}, Total number of figures is {:0>2d}".format(acc * 100.0, len(validate_Y)))
        # append scores
        scores.append(acc)
        histories.append(history)
    return scores, histories

Model summary
After the model has been trained, we can present the results. There are two key aspects to present: the diagnostics of the learning behavior of the model during training, and the estimation of the model performance. These can be implemented using separate functions. 
First, the diagnostics involve creating a line plot showing model performance on the train and validate set during each fold of the k-fold cross-validation. These plots are valuable for getting an idea of whether a model is overfitting, underfitting, or has a good fit for the dataset. We will create a single figure with two subplots, one for loss and one for accuracy. Blue lines will indicate model performance on the training dataset and orange lines will indicate performance on the hold-out validate dataset. The summarize_diagnostics() function below creates and shows this plot given the collected training histories. # plot diagnostic learning curves def summarize_diagnostics(histories): for i in range(len(histories)): # plot loss plt.subplot(2,1,1) plt.title('Cross Entropy Loss') plt.plot(histories[i].history['loss'], color='blue', label='train') plt.plot(histories[i].history['val_loss'], color='orange', label='test') # plot accuracy plt.subplot(2,1,2) plt.title('Classification Accuracy') plt.plot(histories[i].history['accuracy'], color='blue', label='train') plt.plot(histories[i].history['val_accuracy'], color='orange', label='test') plt.show() Fig 5 Next, the classification accuracy scores collected during each fold can be summarized by calculating the mean and standard deviation. This provides an estimate of the average expected performance of the model trained on the test dataset, with an estimate of the average variance in the mean. We will also summarize the distribution of scores by creating and showing a box and whisker plot. The summarize_performance() function below implements this for a given list of scores collected during model training. # summarize model performance def summarize_performance(scores): # print summary print('Accuracy: mean={:.4f} std={:.4f}, n={:0>2d}'.format(np.mean(trained_scores)*100, np.std(trained_scores)*100, len(scores))) # box and whisker plots of results plt.boxplot(scores) plt.show()   Fig 6 Verifying predictions According to the above figure, we see that the final trained model can get up to around 87.6% accuracy when predicting the test dataset. And with the trained model, running the below code will demonstrate the result of predictions about some images. def plot_image(i, predictions_array, true_label, img): true_label, img = true_label[i], img[i] plt.grid(False) plt.xticks([]) plt.yticks([]) plt.imshow(img.reshape(28, 28), cmap=plt.cm.binary) predicted_label = np.argmax(predictions_array) if predicted_label == true_label: color = 'blue' else: color = 'red' plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_label]), color=color) def plot_value_array(i, predictions_array, true_label): true_label = true_label[i] plt.grid(False) plt.xticks(range(10)) plt.yticks([]) thisplot = plt.bar(range(10), predictions_array, color="#777777") plt.ylim([0, 1]) predicted_label = np.argmax(predictions_array) thisplot[predicted_label].set_color('red') thisplot[true_label].set_color('blue') predictions = model.predict(test_images) # Plot the first X test images, their predicted labels, and the true labels. # Color correct predictions in blue and incorrect predictions in red. 
num_rows = 5 num_cols = 3 num_images = num_rows*num_cols plt.figure(figsize=(2*2*num_cols, 2*num_rows)) for i in range(num_images): plt.subplot(num_rows, 2*num_cols, 2*i+1) plot_image(i, predictions[i], test_labels, test_images) plt.subplot(num_rows, 2*num_cols, 2*i+2) plot_value_array(i, predictions[i], test_labels) plt.tight_layout() plt.show()   Fig 7 Model quantization Post-training quantization is a conversion technique that can reduce model size while also improving CPU and hardware accelerator latency, with little degradation in model accuracy, especially it's crucial to embedded platforms, as it lacks the compute-intensive performance, the Flash and RAM memory is also very limited. TensorFlow Lite is able to be used to convert an already-trained float TensorFlow model to the TensorFlow Lite format. In addition, the TensorFlow Lite provides several approaches to optimize the mode, among these ways, Integer quantization is an optimization strategy that converts 32-bit floating-point numbers (such as weights and activation outputs) to the nearest 8-bit fixed-point numbers. This results in a smaller model and increased inferencing speed, which is very valuable for low-power devices such as microcontrollers. The below codes show how to implement the Integer quantization of the trained model, and after running these codes, we can find that the size of Tensorflow Lite mode reduces almost 64.9 KB versus the original model, becomes about 32% of the original size(Fig 8). import os # Convert using integer-only quantization def representative_data_gen(): for input_value in tf.data.Dataset.from_tensor_slices(tf.cast(train_images,tf.float32)).shuffle(500).batch(1).take(150): yield [input_value] # Convert using dynamic range quantization converter = tf.lite.TFLiteConverter.from_keras_model(model) converter.optimizations = [tf.lite.Optimize.DEFAULT] tflite_model_quant = converter.convert() # Save the model to disk open("model_dynamic_range_quantization.tflite", "wb").write(tflite_model_quant) ## Size difference Dynamic_range_quantization_model_size = os.path.getsize("model_dynamic_range_quantization.tflite") print("Dynamic range quantization model is %d bytes" % Dynamic_range_quantization_model_size) converter = tf.lite.TFLiteConverter.from_keras_model(model) converter.optimizations = [tf.lite.Optimize.DEFAULT] converter.representative_dataset = representative_data_gen # Ensure that if any ops can't be quantized, the converter throws an error converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8] # Set the input and output tensors to uint8 (APIs added in r2.3) converter.inference_input_type = tf.uint8 converter.inference_output_type = tf.uint8 tflite_model_advanced_quant = converter.convert() # Save the model to disk open("model_integer_only_quantization.tflite", "wb").write(tflite_model_advanced_quant) Integer_only_quantization_model_size = os.path.getsize("model_integer_only_quantization.tflite") print("Integer_only_quantization_model is %d bytes" % Integer_only_quantization_model_size) difference = Dynamic_range_quantization_model_size - Integer_only_quantization_model_size print("Difference is %d bytes" % difference) Fig 8 Evaluating the TensorFlow Lite model Now we'll run inferences using the TensorFlow Lite Interpreter to compare the model accuracies. 
First, we need a function that runs inference with a given model and images, and then returns the predictions: # Helper function to run inference on a TFLite model def run_tflite_model(tflite_file, test_image_indices): # Initialize the interpreter interpreter = tf.lite.Interpreter(model_path=str(tflite_file)) interpreter.allocate_tensors() input_details = interpreter.get_input_details()[0] output_details = interpreter.get_output_details()[0] predictions = np.zeros((len(test_image_indices),), dtype=int) for i, test_image_index in enumerate(test_image_indices): test_image = test_images[test_image_index] test_label = test_labels[test_image_index] # Check if the input type is quantized, then rescale input data to uint8 if input_details['dtype'] == np.uint8: input_scale, input_zero_point = input_details["quantization"] test_image = test_image / input_scale + input_zero_point test_image = np.expand_dims(test_image, axis=0).astype(input_details["dtype"]) interpreter.set_tensor(input_details["index"], test_image) interpreter.invoke() output = interpreter.get_tensor(output_details["index"])[0] predictions[i] = output.argmax() return predictions Next, we'll compare the performance of the original model and the quantized model on one image. model_basic_quantization.tflite is the original TensorFlow Lite model with floating-point data. model_integer_only_quantization.tflite is the last model we converted using integer-only quantization (it uses uint8 data for input and output). Let's create another function to print our predictions and run it for testing. import matplotlib.pylab as plt # Change this to test a different image test_image_index = 1 ## Helper function to test the models on one image def test_model(tflite_file, test_image_index, model_type): global test_labels predictions = run_tflite_model(tflite_file, [test_image_index]) plt.imshow(test_images[test_image_index].reshape(28,28)) template = model_type + " Model \n True:{true}, Predicted:{predict}" _ = plt.title(template.format(true= str(test_labels[test_image_index]), predict=str(predictions[0]))) plt.grid(False) Fig 9 Fig 10 Then evaluate the quantized model by using all the test images we loaded at the beginning of this tutorial. After summarizing the prediction result of the test dataset, we can see that the prediction accuracy of the quantized model decrease 7% less than the original model, it's not bad. # Helper function to evaluate a TFLite model on all images def evaluate_model(tflite_file, model_type): test_image_indices = range(test_images.shape[0]) predictions = run_tflite_model(tflite_file, test_image_indices) accuracy = (np.sum(test_labels== predictions) * 100) / len(test_images) print('%s model accuracy is %.4f%% (Number of test samples=%d)' % ( model_type, accuracy, len(test_images))) Deploying model Converting TensorFlow Lite model to C file The following code runs xxd on the quantized model, writes the output to a file called model_quantized.cc, in the file, the model is defined as an array of bytes, and prints it to the screen. The output is very long, so we won’t reproduce it all here, but here’s a snippet that includes just the beginning and end. # Save the file as a C source file xxd -i model_integer_only_quantization.tflite > model_quantized.cc # Print the source file cat model_quantized.cc Fig 11 Deploying the C file to project We use the tensorflow_lite_cifar10 demo as a prototype, then replace the original model and do some code modification, below is the code in the modified main file. 
#include "board.h" #include "fsl_debug_console.h" #include "pin_mux.h" #include "timer.h" #include <iomanip> #include <iostream> #include <string> #include <vector> #include "tensorflow/lite/kernels/register.h" #include "tensorflow/lite/model.h" #include "tensorflow/lite/optional_debug_tools.h" #include "tensorflow/lite/string_util.h" #include "get_top_n.h" #include "model.h" #define LOG(x) std::cout // ---------------------------- Application ----------------------------- // Lenet Mnist model input data size (bytes). #define LENET_MNIST_INPUT_SIZE 28*28*sizeof(char) // Lenet Mnist model number of output classes. #define LENET_MNIST_OUTPUT_CLASS 10 // Allocate buffer for input data. This buffer contains the input image // pre-processed and serialized as text to include here. uint8_t imageData[LENET_MNIST_INPUT_SIZE] = { #include "clothes_select.inc" }; /* Tresholds */ #define DETECTION_TRESHOLD 60 /*! * @brief Initialize parameters for inference * * @param reference to flat buffer * @param reference to interpreter * @param pointer to storing input tensor address * @param verbose mode flag. Set true for verbose mode */ void InferenceInit(std::unique_ptr<tflite::FlatBufferModel> &model, std::unique_ptr<tflite::Interpreter> &interpreter, TfLiteTensor** input_tensor, bool isVerbose) { model = tflite::FlatBufferModel::BuildFromBuffer(Fashion_MNIST_model, Fashion_MNIST_model_len); if (!model) { LOG(FATAL) << "Failed to load model\r\n"; return; } tflite::ops::builtin::BuiltinOpResolver resolver; tflite::InterpreterBuilder(*model, resolver)(&interpreter); if (!interpreter) { LOG(FATAL) << "Failed to construct interpreter\r\n"; return; } int input = interpreter->inputs()[0]; const std::vector<int> inputs = interpreter->inputs(); const std::vector<int> outputs = interpreter->outputs(); if (interpreter->AllocateTensors() != kTfLiteOk) { LOG(FATAL) << "Failed to allocate tensors!"; return; } /* Get input dimension from the input tensor metadata assuming one input only */ *input_tensor = interpreter->tensor(input); auto data_type = (*input_tensor)->type; if (isVerbose) { const std::vector<int> inputs = interpreter->inputs(); const std::vector<int> outputs = interpreter->outputs(); LOG(INFO) << "input: " << inputs[0] << "\r\n"; LOG(INFO) << "number of inputs: " << inputs.size() << "\r\n"; LOG(INFO) << "number of outputs: " << outputs.size() << "\r\n"; LOG(INFO) << "tensors size: " << interpreter->tensors_size() << "\r\n"; LOG(INFO) << "nodes size: " << interpreter->nodes_size() << "\r\n"; LOG(INFO) << "inputs: " << interpreter->inputs().size() << "\r\n"; LOG(INFO) << "input(0) name: " << interpreter->GetInputName(0) << "\r\n"; int t_size = interpreter->tensors_size(); for (int i = 0; i < t_size; i++) { if (interpreter->tensor(i)->name) { LOG(INFO) << i << ": " << interpreter->tensor(i)->name << ", " << interpreter->tensor(i)->bytes << ", " << interpreter->tensor(i)->type << ", " << interpreter->tensor(i)->params.scale << ", " << interpreter->tensor(i)->params.zero_point << "\r\n"; } } LOG(INFO) << "\r\n"; } } /*! 
* @brief Runs inference input buffer and print result to console * * @param pointer to image data * @param image data length * @param pointer to labels string array * @param reference to flat buffer model * @param reference to interpreter * @param pointer to input tensor */ void RunInference(const uint8_t* image, size_t image_len, const std::string* labels, std::unique_ptr<tflite::FlatBufferModel> &model, std::unique_ptr<tflite::Interpreter> &interpreter, TfLiteTensor* input_tensor) { /* Copy image to tensor. */ memcpy(input_tensor->data.uint8, image, image_len); /* Do inference on static image in first loop. */ auto start = GetTimeInUS(); if (interpreter->Invoke() != kTfLiteOk) { LOG(FATAL) << "Failed to invoke tflite!\r\n"; return; } auto end = GetTimeInUS(); const float threshold = (float)DETECTION_TRESHOLD /100; std::vector<std::pair<float, int>> top_results; int output = interpreter->outputs()[0]; TfLiteTensor *output_tensor = interpreter->tensor(output); TfLiteIntArray* output_dims = output_tensor->dims; // assume output dims to be something like (1, 1, ... , size) auto output_size = output_dims->data[output_dims->size - 1]; /* Find best image candidates. */ GetTopN<uint8_t>(interpreter->typed_output_tensor<uint8_t>(0), output_size, 1, threshold, &top_results, false); if (!top_results.empty()) { auto result = top_results.front(); const float confidence = result.first; const int index = result.second; if (confidence * 100 > DETECTION_TRESHOLD) { LOG(INFO) << "----------------------------------------\r\n"; LOG(INFO) << " Inference time: " << (end - start) / 1000 << " ms\r\n"; LOG(INFO) << " Detected: " << std::setw(10) << labels[index] << " (" << (int)(confidence * 100) << "%)\r\n"; LOG(INFO) << "----------------------------------------\r\n\r\n"; } } } /*! * @brief Main function */ int main(void) { const std::string labels[] = {"T-shirt/top", "Trouser","Pullover", "Dress", "Coat", "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"}; /* Init board hardware. */ BOARD_ConfigMPU(); BOARD_InitPins(); BOARD_BootClockRUN(); BOARD_InitDebugConsole(); InitTimer(); std::unique_ptr<tflite::FlatBufferModel> model; std::unique_ptr<tflite::Interpreter> interpreter; TfLiteTensor* input_tensor = 0; InferenceInit(model, interpreter, &input_tensor, false); LOG(INFO) << "Fashion MNIST object recognition example using a TensorFlow Lite model.\r\n"; LOG(INFO) << "Detection threshold: " << DETECTION_TRESHOLD << "%\r\n"; /* Run inference on static ship image. */ LOG(INFO) << "\r\nStatic data processing:\r\n"; RunInference((uint8_t*)imageData, (size_t)LENET_MNIST_INPUT_SIZE, labels, model, interpreter, input_tensor); while(1) {} } Testing result After deploying the model in the demo project, then we'll run this demo on the MIMXRT1060 (Fig 12) board for testing. Fig 12 Run the below code to covert the Fashion MNIST image to text The process_image() function can convert a Fashion MNIST image to an include file as static data, then include this file in the demo project. 
def process_image(image, output_path, num_batch=1): img_data = np.transpose(image, (2, 0, 1)) # Repeat image for batch processing (resulting tensor is NCHW or NHWC) img_data = np.reshape(img_data, (num_batch, img_data.shape[0], img_data.shape[1], img_data.shape[2])) img_data = np.repeat(img_data, num_batch, axis=0) img_data = np.reshape(img_data, (num_batch, img_data.shape[1], img_data.shape[2], img_data.shape[3])) # Serialize image batch img_data_bytes = bytearray(img_data.tobytes(order='C')) image_bytes_per_line = 20 with open(output_path, 'wt') as f: idx = 0 for byte in img_data_bytes: f.write('0X%02X, ' % byte) if idx % image_bytes_per_line == (image_bytes_per_line - 1): f.write('\n') idx = idx + 1 # Return serialized image size return len(img_data_bytes)      2. Run the demo project on board.
View full article
Setting up the RT600 development environment

RT600 getting-started training video: https://www.nxp.com/document/guide/getting-started-with-i-mx-rt600-evaluation-kit:GS-MIMXRT685-EVK?&tid=vanGS-MIMXRT685-EVK#title2.1

Download the i.MX RT600 SDK. Download link: https://mcuxpresso.nxp.com/en/select?device=EVK-MIMXRT685

Download MCUXpresso IDE. Note that MCUXpresso IDE 11.1.1 or a newer version is required. https://www.nxp.com/webapp/swlicensing/sso/downloadSoftware.sp?catid=MCUXPRESSO

Download and install LPCScrypt, which can be used to change the default on-board CMSIS-DAP debug firmware to J-LINK. With J-LINK, the HiFi4 DSP firmware can be downloaded and debugged. Download link: https://www.nxp.com/design/microcontrollers-developer-resources/lpc-microcontroller-utilities/lpcscrypt-v2-1-1:LPCSCRYPT?&tab=Design_Tools_Tab

Download and install the J-LINK driver. Download link: https://www.segger.com/downloads/jlink/

Download and install the Cadence HiFi 4 DSP IDE for MIMXRT600.
For the first download, register a user account at https://tensilicatools.com/register/. For users in mainland China: if the human-verification widget shown below does not appear on the page, the IP address is being blocked by the firewall; a proxy or other workaround is needed, otherwise the user registration cannot be submitted successfully.

Download the HiFi DSP Development Tools for i.MX RT600: https://tensilicatools.com/download/rt600-download-page/

Apply for the license for the RT600 SDK. Note that when entering the MAC address of the bound network adapter, the ':' separators must be removed, otherwise the request fails.
After the application succeeds, the license file can be downloaded.

After starting Xplorer 8.0.13, install the license file from the menu Help -- Xplorer License Keys. After successful installation it is displayed as follows:

Xplorer debugger download configuration:
Add the directory containing xt-ocd.exe to the system Path environment variable.
Enable "Use XOCD Manager" and specify the Topology File.
Set "Download binary" to Always and disable the prompt that pops up before every download, to save download time.

Download the HiFi4 DSP firmware through J-Link; the code can then be debugged step by step.
View full article
The i.MX RT600 crossover MCU combines an ultra-low power MCU with a high performance DSP to enable the next generation of ML/AI, voice and audio applications. Get started today and order your MIMXRT685-EVK.
View full article
Get 500 MHz for just $1 with NXP's new i.MX RT1010 crossover MCU.  Targeted for a variety of applications, this video highlights two very popular example use-cases for i.MX RT1010 -audio and motor control.
View full article
RT1050 SDRAM app code boot from SDcard burn with 3 tools Abstract       This document is about the RT series app running on the external SDRAM, but boot from SD card. The content contains SDRAM app code generate with the RT1050 SDK MCUXpresso IDE project, burn the code to the external SD card with flashloader MFG tool, and MCUXPresso Secure Provisioning. The MCUBootUtility method can be found from this post: https://community.nxp.com/docs/DOC-346194       Software and Hardware platform: SDK 2.7.0_EVKB-IMXRT1050 MCUXpresso IDE MXRT1050_GA MCUBootUtility MCUXPresso Secure Provisioning MIMXRT1050-EVKB 2 RT1050 SDRAM app image generation     Porting SDK_2.7.0_EVKB-IMXRT1050 iled_blinky project to the MCUXPresso IDE, to generate the code which is located in SDRAM, the configuration is modified like the following items:       2.1 Copy code to RAM 2.2  Modify memory location to SDRAM address 0X80002000 The code which boots from SD card and running in the SDRAM is the non-xip code, so the IVT offset is 0X400, in our test, we put the image from the SDRAM memory address 0x800002000, the configuration is: 2.3 Modify the symbol 2.4 Generate the .s19 file      After build has no problems, then generate the app.s19 file:   Rename the app.19 image file to evkbimxrt1050_iled_blinky_sdram_0x2000.s19, and copy it to the flashloader folder: Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win   3, Flashloader configuration and download    This chapter will use flashloader to configure the image which can download the SDRAM app code to the external SD card with MFGTool.       We need to prepare the following files: SDRAM interface configuration file CFG_DCD.bin imx-sdram-unsigned-dcd.bd program_sdcard_image.bd 3.1 SDRAM DCD file preparation      MIMXRT1050-EVKB on board SDRAM is IS42S16160J, we can use the attached dcd_model\ISSI_IS42S16160J\dcd.cfg and dcdgen.exe tool to generate the CFG_DCD.bin, the commander is: dcdgen -inputfile=dcd.cfg -bout -cout   Copy CFG_DCD.bin file to the flashloader path: Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win 3.2 imx-sdram-unsigned-dcd.bd file Prepare the imx-sdram-unsigned-dcd.bd file content as: options {     flags = 0x00;     startAddress = 0x80000000;     ivtOffset = 0x400;     initialLoadSize = 0x2000;     DCDFilePath = "CFG_DCD.bin";     # Note: This is required if the default entrypoint is not the Reset_Handler     #       Please set the entryPointAddress to Reset_Handler address     entryPointAddress = 0x800022f1; }   sources {     elfFile = extern(0); }   section (0) { }  The above entrypointAddress data is from the .s19 reset handler(0X80002000+4 address data): Copy imx-sdram-unsigned-dcd.bd file to flashloader path: Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win Open cmd, run the following command: elftosb.exe -f imx -V -c imx-sdram-unsigned-dcd.bd -o ivt_evkbimxrt1050_iled_blinky_sdram_0x2000.bin evkbimxrt1050_iled_blinky_sdram_0x2000.s19 After running the command, two app IVT files will be generated: 3.3 program_sdcard_image.bd file Prepare the program_sdcard_image.bd file content as: # The source block assign file name to identifiers sources {  myBootImageFile = extern (0); }   # The section block specifies the sequence of boot commands to be written to the SB file section (0) {       #1. Prepare SDCard option block     load 0xd0000000 > 0x100;     load 0x00000000 > 0x104;       #2. Configure SDCard     enable sdcard 0x100;       #3. Erase blocks as needed.     erase sdcard 0x400..0x14000;       #4. 
Program SDCard Image     load sdcard myBootImageFile > 0x400;         #5. Program Efuse for optimal read performance (optional)     # Note: It is just a template, please program the actual Fuse required in the application     # and remove the # to enable the command     #load fuse 0x00000000 > 0x07;   } Copy program_sdcard_image.bd to the flashloader path: Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win Open cmd, run the following command: elftosb.exe -f kinetis -V -c program_sdcard_image.bd -o boot_image.sb ivt_evkbimxrt1050_iled_blinky_sdram_0x2000_nopadding.bin Copy the generated boot_image.sb file to the following flashloader path: \Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\mfgtools-rel\Profiles\MXRT105X\OS Firmware 3.4 MFGTool burn code to SD card    Prepare one SD card, insert it to J20, let the board enter the serial download mode, SW7:1-ON 2-OFF 3-OFF 4-ON. Find two USB cable, one is connected to J28, another is connected to J9, we use the HID to download the image.    Open MFGTool.exe, and click the start button:          Modify the boot mode to internal boot, and boot from the external SD card, SW7:1-ON 2-OFF 3-ON 4-OFF.      Power off and power on the board again, you will find the onboard LED D18 is blinking, it means the external SDRAM APP code is boot from external SD card successfully. 4, MCUBootUtility configuration and code download    Please check this community document: https://community.nxp.com/docs/DOC-346194     Here just give one image readout memory map, it will be useful to understand the image location information:     After download, we can readout the SD card image, from 0X400 is the IVT, BD, DCD data, from 0X1000 is the image which is the same as the app.s19 file.     5, MCUXpresso Secure Provisioning configuration and download   This software is released in the NXP official website, it is also the GUI version, which can realize the normal code and the secure code downloading, it will be more easy to use than the flashloader tool, customer don’t need to input the command, the tool help the customer to do it, the function is similar to the MCUBootUtility, MCUBootUtility tool is the opensource tool which is shared in the github, but is not released in the NXP official website.   Now, we use the new official realized tool to download the SDRAM app code to the external SD card, the board still need to enter the serial download mode, just like the flashloader and the MCUBootUtility too, the detail operation is:  We can find this tool is also very easy to use, customer still need to provide the app.19 and the dcd.bin, then give the related boot device configuration is OK.    After the code is downloaded successfully, modify the boot mode to internal boot, and boot from the external SD card, SW7:1-ON 2-OFF 3-ON 4-OFF.     Power off and power on the board again, you will find the onboard LED D18 is blinking, it means the external SDRAM APP code is boot from external SD card successfully.   Until now, all the three methods to download the SDRAM app code to the SD card is working, flashloader is the command based tool, MCUBootUtility and MCUXPresso Secure Provisioning is the GUI tool, which is more easy to use.        
View full article
Introduction
A common need for GUI applications is to implement a clock function. Whether it be to create a clock interface for the end user's benefit, or just to time animations or other actions, implementing an accurate clock is a useful and important feature for GUI applications. The aim of this document is to help you implement clock functions in your AppWizard project.

Methods
When implementing a real-time clock, there are a couple of general methods to do so:
Use an independent timer in your MCU
Use animation objects
Each of these methods has its advantages and disadvantages. If you just need a timer that doesn't require extra code and you don't require control or assurance of precision, or maybe you can't spare another timer, using an animation object (method #2) may be a good option in that application. If your application requires an assurance of precision or requires other real-time actions to be performed that AppWizard can't control, it is best to implement an independent timer in your MCU (method #1).

Method 1: Independent MCU Timer
Implementing a timer via an independent MCU timer allows better control and guarantees the precision, because it isn't a shared clock and the developer can adjust the interrupt priorities such that the timer interrupt has the highest priority. AppWizard timing uses a common timer and then time-slices activities, similar to how an operating system works. It is for this reason that implementing an independent MCU timer is best when you need control over the precision of the timer or you need other real-time actions to be triggered by this timer. When implementing a timer using an independent MCU timer (like the RTC module), an understanding of how to interact with Text widgets is needed. Let's look at this first.

Interacting with Text Widgets
Editing Text widgets occurs through the use of the emWin library API (the emWin library is the underlying code that AppWizard builds upon). The Text widget API functions are documented in the emWin Graphic Library User Guide and Reference Manual, UM3001. Most of the Text widget API functions require a Text widget handle. Be sure not to confuse this handle with the AppWizard ID. Imagine a clock example where there are two Text widgets in the interface: one for the minutes and one for the seconds. The AppWizard IDs of these objects might be ID_TEXT_MINS and ID_TEXT_SECONDS respectively (again, these are not to be confused with the handle to the Text widget for use by emWin library functions). The first action software should take is to obtain the handles for the Text widgets. This can be done using the WM_GetDialogItem function. The code to get the active window handle and the handles for the two Text widgets is shown below:

activeWin = WM_GetActiveWindow();
textBoxMins = WM_GetDialogItem(activeWin, ID_TEXT_MINS);
textBoxSecs = WM_GetDialogItem(activeWin, ID_TEXT_SECONDS);

Note that this function requires the handle to the parent window of the Text widget. If your application has multiple windows or screens, you may need to be creative in how you acquire this handle, but for this example, the software can simply call the WM_GetActiveWindow function (since there is only one screen). When to call these functions can be a bit tricky as well. They can be called before the MainTask() function of the application is called and the application will not crash. 
However, the handles won't be correct and the Text widgets will not be updated as expected. It's recommended that these handles be initialized when the screen is initialized. An example of how this would be done is shown below:

void cbID_SCREEN_CLOCK(WM_MESSAGE * pMsg)
{
    extern WM_HWIN activeWin;
    extern WM_HWIN textBoxMins;
    extern WM_HWIN textBoxSecs;
    extern WM_HWIN textBoxDbg;

    if(pMsg->MsgId == WM_INIT_DIALOG)
    {
        activeWin   = WM_GetActiveWindow();
        textBoxMins = WM_GetDialogItem(activeWin, ID_TEXT_MINS);
        textBoxSecs = WM_GetDialogItem(activeWin, ID_TEXT_SECONDS);
        textBoxDbg  = WM_GetDialogItem(activeWin, ID_TEXT_DBG);
    }
    GUI_USE_PARA(pMsg);
}

Once the Text widget handles have been acquired, the text can be updated using the TEXT_SetText() function, or the TEXT_SetDec() function in this case, because the Text widgets are configured for decimal mode, since we want to display numbers. An example of the code to do this is shown below.

/* TEXT_SetDec(Text Widget Handle, Value as Int, Length, Shift, Sign, Leading Spaces) */
if(TEXT_SetDec(textBoxSecs, (int)gSecs, 2, 0, 0, 0))
{
    /* Perform action here if necessary */
}
if(TEXT_SetDec(textBoxMins, (int)gMins, 2, 0, 0, 0))
{
    /* Perform action here if necessary */
}

Method 2: Animation Objects
When implementing a real-time clock using animation objects, it is necessary to implement a loop. This could be done outside of the AppWizard GUI (in your code), but because the timing precision can't be guaranteed, it's just as easy to implement a loop in the AppWizard GUI if you know how (it isn't very intuitive as to how to do this). Before examining the interactions to do this, let's look at the variables and objects needed:
ID_VAR_SECS - This variable holds the current seconds value.
ID_VAR_SECS_1 - This variable holds the next second value.
ID_TEXT_SECONDS - Text box that displays the current seconds value.
ID_VAR_END_CNT - Variable that holds the value at which the seconds roll over and increment the minute count.
ID_TEXT_MINS - Text box that holds the current minute count.
ID_VAR_MIN_END_CNT - Variable that holds the value at which the minutes roll over (which would also increment the hour count if hours were implemented).
ID_BUTTON_SECS - A hidden button that initiates actions when the seconds variable has reached the end count.
Now, here are the interactions used to implement the clock feature using animation interactions. The heart of the loop is the set of interactions triggered by ID_VAR_SECS.
This is analogous to a "For" loop checking whether its control variable is still within its limits.   The interactions in group 5 are interactions that restart the loop when the seconds reach the count that we desire.  When the loop is restarted, the following actions must be taken: Set ID_VAR_SECS and ID_VAR_SECS_1 to the initial value for the next loop ('0' in this case).  Note that ID_VAR_SECS_1 MUST be set before ID_VAR_SECS.  Additionally, if the loop is to continue, ID_VAR_SECS and ID_VAR_SECS_1 must be set to the same value.   ID_TEXT_SECONDS is set to the initial value.  If this isn't done, then the text box will try to animate from the final value to the initial value and then will look "weird". ID_VAR_END_CNT is reset to its initial value (60 in this case).  ID_BUTTON_SECS is also responsible for updating the minutes values.  In this case, it's incrementing the ID_TEXT_MINS value (counting up in minutes) and decrementing the ID_VAR_MIN_END_CNT  Adjusting the time of an animation object The animation object (as well as other emWin objects) use the GUI_X_DELAY function for timing.  It is up to the host software to implement this function.  In the i.MX RT examples, the General Purpose Timer (GPT) is used for this timer.  So how the GPT is configured will affect the timing of the application and the how fast or slow the animations run. The GPT is configured in the function BOARD_InitGPT() which resides in the main source file.  The recommended way to adjust the speed of the timer is by changing the divider value to the GPT. Conclusion So we have seen two different methods of implementing a real-time clock in an AppWizard GUI application.  Those methods are: Use an independent timer in your MCU Using animation objects Using an independent timer in your MCU may be preferred as it allows for better control over the timing, can allow for real-time actions to be performed that AppWizard can't control, and provides some assurance of precision.  Using animation objects may be preferred if you just need a quick timer implementation that doesn't require you to manually add code to your project or use a second timer.  
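As a final reference for method #1's timing plumbing: emWin gets its notion of time from the porting functions GUI_X_Delay() and GUI_X_GetTime(), which in the i.MX RT examples are driven by the GPT described above. The sketch below shows the typical shape of that hookup; the GPT instance, the ISR name and the assumption of a 1 ms compare period are placeholders that must match how BOARD_InitGPT() is actually configured in your project.

#include <stdint.h>
#include "fsl_gpt.h"

static volatile uint32_t s_msTicks; /* incremented once per GPT compare event (assumed 1 ms period) */

/* Placeholder ISR name - use the vector matching the GPT instance configured in BOARD_InitGPT(). */
void GPT2_IRQHandler(void)
{
    GPT_ClearStatusFlags(GPT2, kGPT_OutputCompare1Flag);
    s_msTicks++;
}

/* emWin porting layer (GUI_X.c): the GUI's sense of time comes from these two functions,
 * so the GPT divider/compare value directly controls how fast the animations run. */
int GUI_X_GetTime(void)
{
    return (int)s_msTicks;
}

void GUI_X_Delay(int ms)
{
    uint32_t start = s_msTicks;
    while ((s_msTicks - start) < (uint32_t)ms)
    {
        /* Optionally enter a low-power wait here. */
    }
}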
View full article