i.MX RT Knowledge Base

1. Abstract

The NXP EdgeReady solution uses the RT106/RT105 S/L/A/F parts to achieve speech recognition, but the related voice support libraries are limited to those S/L/A/F parts. How can speech recognition be implemented on a normal RT chip? NXP now provides the VIT software package in the SDK, which supports SDK-based speech recognition on the RT1060, RT1160, RT1170, RT600, and RT500.

For the acquisition of weather information, a customer can usually connect to a third-party platform or a cloud weather API and access it directly as an HTTP client. Current weather API platforms let you register and then call the API directly, so the RT SDK lwIP socket client can call the corresponding weather API and obtain real-time weather forecast data for a specific geographical location.

This article uses the MIMXRT1060-EVK to implement customer-defined wake word (WW) and voice command (VC) recognition based on the SDK VIT library, plus an lwIP socket client to obtain real-time weather information for Shanghai and print it to the terminal. The article mainly uses printing to share the weather information; for sound broadcast, it also adds a simple method to play fixed MP3 audio data. Free-form sound broadcast would need a real-time TTS function, which is not added here.

The system block diagram of this document is as follows:

Fig 1 System Block diagram

The VIT custom wake word of this system is "小恩小恩"; after wake-up, one of the following voice commands can be recognized: "开灯" ("Turn on the light"), "关灯" ("Turn off the light"), "今天天气" ("Today's weather"), "明天天气" ("Tomorrow's weather"), "后天天气" ("The day after tomorrow's weather"). "开灯"/"关灯" control the external red LED on the EVK board. "今天天气" gets today's weather forecast in the following format:

    "date": "2022-05-27",
    "week": "5",
    "dayweather": "阴",
    "nightweather": "阴",
    "daytemp": "28",
    "nighttemp": "21",
    "daywind": "东南",
    "nightwind": "东南",
    "daypower": "≤3",
    "nightpower": "≤3"

"明天天气" and "后天天气" return the same format, but for 1-2 days after today's date. To get the weather data, the MIMXRT1060-EVK board needs a network connection to reach the Gaode Map (restapi.amap.com) weather API.

2. Related preparations

2.1 Weather API platform

At present, many third-party platforms on the Internet provide weather data for China, for example: Baidu Intelligent Cloud, Baidu Map API, Huawei Cloud, Juhe Weather, Gaode Map API, and so on. This article tried several platforms. The test results: Baidu Intelligent Cloud allows only a small number of free calls per day and needs real-time generation of AK/SK, which makes calling cumbersome; the Baidu Map API requires uploading ID card information; the other platforms have similar limitations. In the end, the Gaode Map API was selected for its convenient registration, large daily call quota, and relatively complete weather data.
Here we mainly talk about the Gaode Map API usage; the link is:
https://lbs.amap.com/api/webservice/guide/api/weatherinfo
Create the account and the API key, then add the relevant parameters to call the weather API. The application for the API key is as follows:

Fig 2 Gaode map API key

The following diagram shows the call volume:

Fig 3 Gaode Map API call volume

This is the API calling format:

Fig 4 Weather API calling parameters

So the full Gaode Map API link should look like this:
https://restapi.amap.com/v3/weather/weatherInfo?key=xxxxxxx&city=xxx&extensions=all&output=JSON
To test the Shanghai weather, the city code is 310000.

2.2 Postman test of the weather API

Postman is an interface testing tool. When doing interface testing, Postman acts as a client: it can simulate various HTTP requests initiated by users, send the request data to the server, obtain the corresponding response, and verify whether the result data in the response matches the expected values. Postman download link: https://www.postman.com/

After finding the proper weather API platform and the calling link, use Postman to do an HTTP GET operation and capture the weather data. Referring to Fig 4, fill the related parameters into Postman:

Fig 5 Postman call weather API

Send the GET command; the weather information can be found at position 7. The complete information is:

{
    "status": "1",
    "count": "1",
    "info": "OK",
    "infocode": "10000",
    "forecasts": [
        {
            "city": "上海市",
            "adcode": "310000",
            "province": "上海",
            "reporttime": "2022-05-27 17:34:12",
            "casts": [
                {
                    "date": "2022-05-27",
                    "week": "5",
                    "dayweather": "阴",
                    "nightweather": "阴",
                    "daytemp": "28",
                    "nighttemp": "21",
                    "daywind": "东南",
                    "nightwind": "东南",
                    "daypower": "≤3",
                    "nightpower": "≤3"
                },
                {
                    "date": "2022-05-28",
                    "week": "6",
                    "dayweather": "小雨",
                    "nightweather": "中雨",
                    "daytemp": "24",
                    "nighttemp": "20",
                    "daywind": "东南",
                    "nightwind": "东南",
                    "daypower": "≤3",
                    "nightpower": "≤3"
                },
                {
                    "date": "2022-05-29",
                    "week": "7",
                    "dayweather": "大雨",
                    "nightweather": "小雨",
                    "daytemp": "23",
                    "nighttemp": "20",
                    "daywind": "南",
                    "nightwind": "南",
                    "daypower": "≤3",
                    "nightpower": "≤3"
                },
                {
                    "date": "2022-05-30",
                    "week": "1",
                    "dayweather": "小雨",
                    "nightweather": "晴",
                    "daytemp": "27",
                    "nighttemp": "20",
                    "daywind": "北",
                    "nightwind": "北",
                    "daypower": "≤3",
                    "nightpower": "≤3"
                }
            ]
        }
    ]
}

We can see that it captures 4 consecutive days of information; with this, we can get the weather data easily.
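Before moving the request into the firmware, note that the GET line verified with Postman can also be built at run time. The following is a minimal sketch (not part of the original project); WEATHER_API_KEY and WEATHER_CITY are placeholders for your own Gaode Map key and city adcode:

#include <stdio.h>

/* Placeholders - replace with your own Gaode Map key and city adcode. */
#define WEATHER_API_KEY "your-gaode-key"
#define WEATHER_CITY    "310000" /* Shanghai */

static char s_get_request[256];

/* Builds the same request line that was verified with Postman above.
 * Returns the request length, or a negative value on a formatting error. */
static int build_weather_request(void)
{
    return snprintf(s_get_request, sizeof(s_get_request),
                    "GET /v3/weather/weatherInfo?key=%s&city=%s"
                    "&extensions=all&output=JSON HTTP/1.1\r\n"
                    "Host: restapi.amap.com\r\n"
                    "\r\n", /* the blank line terminates the HTTP header block */
                    WEATHER_API_KEY, WEATHER_CITY);
}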
From Postman, we can also see the GET code:

Fig 6 postman API HTTP code

With this API, which has already passed testing, the complete weather information can be captured; now we can consider adding the working HTTP request to the MIMXRT1060-EVK code.

2.3 VIT custom commands

From the maestro code of the RT1060 SDK, we know that the SDK already supports the VIT library. What is VIT? VIT stands for Voice Intelligent Technology; the library provides voice recognition services designed to wake up on and recognize specific commands, to control IoT devices and the smart home.

Fig 7 VIT system block diagram

In the NXP RT1060 SDK code, the generated wake word and command words are provided in the VIT_Model.h file. How can a customer project customize the wake word and command words? NXP provides a web page where customers choose their own commands and then generate the corresponding VIT_Model.h file for the code to call. The VIT command word generation web page is: https://vit.nxp.com/#/home

Log in with an NXP account, then choose the RT chip part number, the wake word, and the voice commands. Please note, the currently supported RT chips are: RT1060, RT1160, RT1170, RT600, RT500. The following is an example of generating a wake word and voice commands:

Fig 8 Custom VIT configuration
Fig 9 generated result

Download the generated model to get VIT_Model_cn.h; open it to see the command word information and the related model data stored in the const PL_MEM_ALIGN(PL_UINT8 VIT_Model_cn[], VIT_MODEL_ALIGN_BYTES) array. The command word information is as follows:

WakeWord supported: " 小恩 小恩 "
Voice Commands supported
    Cmd_Id : Cmd_Name
      0    : UNKNOWN
      1    : 开灯
      2    : 关灯
      3    : 今天 天气
      4    : 明天 天气
      5    : 后天 天气

Use the RT1060 SDK maestro_record demo to test this custom command result:

Fig 10 Custom Wakeup word and voice command test

From the test result, we can see that both the wake word and the voice commands are detected.

3 Software code

3.1 LWIP socket client code to capture the weather API

From chapter 2.2, we have obtained the weather API and verified through testing that weather acquisition works, so now we add the related commands according to the needs of our own system. The weather API is fetched with lwIP code from the RT1060 SDK, in the form of a socket client.
The relevant code is as follows:

#define PORT    80
#define IP_ADDR "59.82.9.133"

uint8_t get_weather[] =
    "GET /v3/weather/weatherInfo?key=xxx&city=310000&extensions=all&output=JSON HTTP/1.1\r\n"
    "Host: restapi.amap.com\r\n\r\n\r\n\r\n";

if (sys_thread_new("weather_main", weathermain_thread, NULL, HTTPD_STACKSIZE, HTTPD_PRIORITY) == NULL)
    LWIP_ASSERT("main(): Task creation failed.", 0);

static void weathermain_thread(void *arg)
{
    static struct netif netif;
    ip4_addr_t netif_ipaddr, netif_netmask, netif_gw;
    ethernetif_config_t enet_config = {
        .phyHandle  = &phyHandle,
        .macAddress = configMAC_ADDR,
    };
    LWIP_UNUSED_ARG(arg);
    mdioHandle.resource.csrClock_Hz = EXAMPLE_CLOCK_FREQ;
    IP4_ADDR(&netif_ipaddr, configIP_ADDR0, configIP_ADDR1, configIP_ADDR2, configIP_ADDR3);
    IP4_ADDR(&netif_netmask, configNET_MASK0, configNET_MASK1, configNET_MASK2, configNET_MASK3);
    IP4_ADDR(&netif_gw, configGW_ADDR0, configGW_ADDR1, configGW_ADDR2, configGW_ADDR3);
    tcpip_init(NULL, NULL);
    netifapi_netif_add(&netif, &netif_ipaddr, &netif_netmask, &netif_gw, &enet_config,
                       EXAMPLE_NETIF_INIT_FN, tcpip_input);
    netifapi_netif_set_default(&netif);
    netifapi_netif_set_up(&netif);
    PRINTF("\r\n************************************************\r\n");
    PRINTF(" TCP client example\r\n");
    PRINTF("************************************************\r\n");
    PRINTF(" IPv4 Address     : %u.%u.%u.%u\r\n", ((u8_t *)&netif_ipaddr)[0], ((u8_t *)&netif_ipaddr)[1],
           ((u8_t *)&netif_ipaddr)[2], ((u8_t *)&netif_ipaddr)[3]);
    PRINTF(" IPv4 Subnet mask : %u.%u.%u.%u\r\n", ((u8_t *)&netif_netmask)[0], ((u8_t *)&netif_netmask)[1],
           ((u8_t *)&netif_netmask)[2], ((u8_t *)&netif_netmask)[3]);
    PRINTF(" IPv4 Gateway     : %u.%u.%u.%u\r\n", ((u8_t *)&netif_gw)[0], ((u8_t *)&netif_gw)[1],
           ((u8_t *)&netif_gw)[2], ((u8_t *)&netif_gw)[3]);
    PRINTF("************************************************\r\n");
    sys_thread_new("weather", weather_thread, NULL, DEFAULT_THREAD_STACKSIZE, DEFAULT_THREAD_PRIO);
    vTaskDelete(NULL);
}

static void weather_thread(void *arg)
{
    int sock = -1, rece;
    struct sockaddr_in client_addr;
    char *host_ip;
    ip4_addr_t dns_ip;
    err_t err;
    uint32_t *pSDRAM = pvPortMalloc(BUF_LEN);

    host_ip = HOST_NAME;
    PRINTF("host name : %s , host_ip : %s\r\n", HOST_NAME, host_ip);
    while (1)
    {
        sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0)
        {
            PRINTF("Socket error\n");
            vTaskDelay(10);
            continue;
        }
        client_addr.sin_family      = AF_INET;
        client_addr.sin_port        = htons(PORT);
        client_addr.sin_addr.s_addr = inet_addr(host_ip);
        memset(&(client_addr.sin_zero), 0, sizeof(client_addr.sin_zero));
        if (connect(sock, (struct sockaddr *)&client_addr, sizeof(struct sockaddr)) == -1)
        {
            PRINTF("Connect failed!\n");
            closesocket(sock);
            vTaskDelay(10);
            continue;
        }
        PRINTF("Connect to server successful!\r\n");
        write(sock, get_weather, sizeof(get_weather));
        while (1)
        {
            rece = recv(sock, (uint8_t *)pSDRAM, BUF_LEN, 0); // BUF_LEN
            if (rece <= 0)
                break;
            memcpy(weather_data.weather_info, pSDRAM, 1500); // max 1457
        }
        Weather_process();
        memset(pSDRAM, 0, BUF_LEN);
        closesocket(sock);
        vTaskDelay(10000);
    }
}

3.2 VIT detect customer command code

Put the generated VIT_Model_cn.h into the maestro_record folder path: vit\RT1060_CortexM7\Lib

The specific wake word and voice command related code can be viewed in vit_pro.c; the function mainly involved is:

int VIT_Execute(void *arg, void *inputBuffer, int size)

The code is modified as follows, mainly to record the wake-up flag and the voice command number for specific function control. The commands directly handled here are the local "开灯" (turn on the light) and "关灯" (turn off the light) commands;
the weather commands need to call the socket client API, so they are invoked in the main lwIP area according to the recognized command number:

if (VIT_DetectionResults == VIT_WW_DETECTED)
{
    PRINTF(" - WakeWord detected \r\n");
    weather_data.ww_flag = 1; // kerry
}
else if (VIT_DetectionResults == VIT_VC_DETECTED)
{
    // Retrieve id of the Voice Command detected
    // String of the Command can also be retrieved (when WW and CMDs strings are integrated in Model)
    VIT_Status = VIT_GetVoiceCommandFound(VITHandle, &VoiceCommand);
    if (VIT_Status != VIT_SUCCESS)
    {
        PRINTF("VIT_GetVoiceCommandFound error: %d\r\n", VIT_Status);
        return VIT_Status; // will stop processing VIT and go directly to MEM free
    }
    else
    {
        PRINTF(" - Voice Command detected %d", VoiceCommand.Cmd_Id);
        weather_data.vc_index = VoiceCommand.Cmd_Id; // kerry 1:ledon 2:ledoff 3:today weather 4:tomorrow weather 5:aftertomorrow weather
        if (weather_data.vc_index == 1)
        {
            GPIO_PinWrite(GPIO1, 3, 1U); // pull high
            PRINTF(" led on!\r\n");
        }
        else if (weather_data.vc_index == 2)
        {
            GPIO_PinWrite(GPIO1, 3, 0U); // pull low
            PRINTF(" led off!\r\n");
        }
        // Retrieve CMD Name: OPTIONAL
        // Check first if CMD string is present
        if (VoiceCommand.pCmd_Name != PL_NULL)
        {
            PRINTF(" %s\r\n", VoiceCommand.pCmd_Name);
        }
        else
        {
            PRINTF("\r\n");
        }
    }
}

3.3 Voice recognized weather information

In the weather_thread while loop, check the wake word and voice command flags; if they meet the requirement, create the socket connection, write the API request, and capture the weather data. The related code is:

while (1)
{
    // Add the command request: only when the command is a weather command, call it.
    if (weather_data.ww_flag == 1)
    {
        if (weather_data.vc_index >= 3)
        {
            // create connection
            // write API and read API
            Weather_process();
        }
        memset(weather_data.weather_info, 0, sizeof(weather_data.weather_info));
        weather_data.ww_flag  = 0;
        weather_data.vc_index = 0;
    }
    vTaskDelay(10000);
}

void Weather_process(void)
{
    char *datap, *datap1;
    datap = strstr((char *)weather_data.weather_info, "date");
    if (datap != NULL)
    {
        memcpy(today_weather, datap, 184); // max 1457
        if (weather_data.vc_index == 3)
        {
            PRINTF("\r\n*******************today weather***********************************\n\r");
            PRINTF("%s\r\n", today_weather);
            return;
        }
    }
    else
        return;
    datap1 = strstr(datap + 4, "date");
    if (datap1 != NULL)
    {
        memcpy(tomorr_weather, datap1, 184); // max 1457
        if (weather_data.vc_index == 4)
        {
            PRINTF("\r\n*******************tomorrow weather*******************************\n\r");
            PRINTF("%s\r\n", tomorr_weather);
            return;
        }
    }
    else
        return;
    datap = strstr(datap1 + 4, "date");
    if (datap != NULL)
    {
        memcpy(aftertom_weather, datap, 184); // max 1457
        if (weather_data.vc_index == 5)
        {
            PRINTF("\r\n*******************after tomorrow weather**************************\n\r");
            PRINTF("%s\r\n", aftertom_weather);
        }
    }
    else
        return;
}

The function Weather_process uses the recognized weather command number to get the related date's weather and print it.

4 Test result

The test result video is attached. The printed log results are shown in Figure 11. After testing, the wake word and voice commands are successfully recognized; command numbers 3, 4, and 5 trigger weather acquisition, which successfully calls the lwIP socket client API, obtains the weather information, and prints it.

Fig 11 system test print result

evkmimxrt1060_maestro_weather_backup.zip is the project without sound playback; the weather information is printed to the terminal.
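One note on Weather_process above: it copies a fixed 184 bytes after each "date" tag, which works for this API's current field widths. As a hedged alternative sketch (my own, not the project code), each cast object can instead be bounded by its braces, which works here because the cast objects contain no nested braces:

#include <string.h>

/* Hypothetical variant: copy one "{ ... }" cast object instead of a
 * fixed 184 bytes. Returns the length copied, or 0 on failure. */
static size_t copy_cast_object(const char *json, char *out, size_t outLen)
{
    const char *start = strchr(json, '{');
    const char *end   = (start != NULL) ? strchr(start, '}') : NULL;
    size_t len;

    if ((start == NULL) || (end == NULL))
    {
        return 0;
    }
    len = (size_t)(end - start) + 1U;
    if (len >= outLen)
    {
        len = outLen - 1U; /* truncate instead of overflowing the buffer */
    }
    memcpy(out, start, len);
    out[len] = '\0';
    return len;
}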
5 Issues met and conclusion

5.1 LWIP failed to get the weather

When creating the code, we first used the HTTP code provided by Postman:

GET /v3/weather/weatherInfo?key=8f777fc7d867908eebbad7f96a13af10& city=310000& extensions=all& output=JSON HTTP/1.1
Host: restapi.amap.com

and added it to the socket API function:

uint8_t get_weather[] =
    "GET /v3/weather/weatherInfo?key=xxx& city=310000& extensions=all& output=JSON HTTP/1.1\r\n"
    "Host: restapi.amap.com\r\n\r\n\r\n\r\n";

The test result is:

Fig 12 socket weather API return issues

We can see the server connection is OK and HTTP returns data, but the server reports a parameter issue: the copied snippet carries stray spaces after each "&", which corrupts the query parameters. After checking, we used the Postman C code instead and put it into get_weather:

uint8_t get_weather[] =
    "GET /v3/weather/weatherInfo?key=xxx&city=310000&extensions=all&output=JSON HTTP/1.1\r\n"
    "Host: restapi.amap.com\r\n\r\n\r\n\r\n";

Then it captures the weather data, the same as the Postman test result.

5.2 VIT + LWIP merged project memory is not enough

After combining the maestro_record and lwIP socket code and compiling, a DTCM memory overflow occurs.

Fig 13 memory overflow

After optimization, DTCM still overflows, so finally the FlexRAM is reconfigured as: OCRAM 192K, DTCM 256K, ITCM 64K. Compile again, and the memory overflow disappears:

Fig 14 FlexRAM reconfiguration

5.3 Print Chinese words in Tera Term

Using Tera Term directly, when the weather API returns Chinese words, the printed output is garbled. The following configuration enables Chinese printing:

Setup -> Terminal
    Locale:   american -> chinese
    Codepage: 65001 -> 936

Fig 15 Tera Term Chinese word print

In summary, after the various data collection and problem solving, the MIMXRT1060-EVK board combined with the official SDK completes the function of customized VIT voice commands for real-time weather acquisition and local control. So even ordinary RT series parts, not only the S/L/A/F series, can use VIT to implement speech recognition functions.

6 Add the sound broadcast

This chapter mainly gives the method to add a sound broadcast with MP3 audio data stored in memory. For real-time weather data playback this is not very flexible: it needs to check the weather data and pick the correct MP3 data from an MP3 audio data library, since it is not an online TTS method. So here we just share one example of a sound broadcast:

WW: "小恩小恩"  ->  "小恩来了,请吩咐!"
VC: "今天天气"  ->  "温度32.1度"

The VC playback is fixed for now. To play real data, an MP3 voice data library needs to be generated; then, according to the returned weather information, the correct weather MP3 data array is assembled and played, as sketched below. This is a little complicated, but not difficult, so here one fixed sound is used as an example.
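As a sketch of that word-library idea (my own illustration; every array and the play_mem() helper below are hypothetical), the broadcast for "温度32度" could chain several pre-converted clips, each generated offline the same way as xiaoencoming_mp3 in the next section:

/* Hypothetical word library: each entry is one MP3 clip converted to a C array. */
typedef struct
{
    const unsigned char *data;
    unsigned int         size;
} mp3_clip_t;

extern const unsigned char wendu_mp3[]; /* "温度" */
extern const unsigned int  wendu_mp3_len;
extern const unsigned char san_mp3[];   /* "3" */
extern const unsigned int  san_mp3_len;
extern const unsigned char er_mp3[];    /* "2" */
extern const unsigned int  er_mp3_len;
extern const unsigned char du_mp3[];    /* "度" */
extern const unsigned int  du_mp3_len;

/* Hypothetical helper that plays one clip from memory; it could wrap
 * STREAMER_file_Create() with a buffer/size pair instead of a file name. */
extern void play_mem(const unsigned char *data, unsigned int size);

/* Broadcast "温度32度" by playing the pre-converted clips back to back. */
static void play_temperature_32(void)
{
    const mp3_clip_t playlist[] = {
        {wendu_mp3, wendu_mp3_len},
        {san_mp3, san_mp3_len},
        {er_mp3, er_mp3_len},
        {du_mp3, du_mp3_len},
    };
    for (unsigned int i = 0; i < sizeof(playlist) / sizeof(playlist[0]); i++)
    {
        play_mem(playlist[i].data, playlist[i].size);
    }
}

A real implementation would select the digit clips from the parsed daytemp/nighttemp fields instead of hard-coding them.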
6.1 MP3 playback audio data preparation

For audio broadcasting, the Chinese words need to be converted into MP3 files; you can use online speech synthesis software. Here the Baidu online speech synthesis function is used; you can refer to the previous article, chapter 2.2.2 "online speech synthesis":
https://community.nxp.com/t5/i-MX-RT-Knowledge-Base/RT106L-S-voice-control-system-based-on-the-Baidu-cloud/ta-p/1363295

If the MP3 file generated by Baidu online speech synthesis is converted to a C array directly, the first audio playback has issues, so here Audacity is used to re-export the MP3 file. The conversion configuration is like this:

Fig 16 Audacity convert configuration

After regenerating the MP3, you can use xxd.exe to convert the MP3 file into a C array, then put it into RT-related memory or external flash. xxd.exe can be found at the following link:
https://github.com/baldram/ESP_VS1053_Library/issues/18
The convert command is:

xxd -i your-sound.mp3 ready-to-use-header.c

Convert the xiaoencoming.mp3 and temptest.mp3 files to C arrays, then move the data into header files saved as xiaoencoming.h and temptest.h. Here, take xiaoencoming as an example:

#define XIAOEN_MP3_SIZE 6847
unsigned char xiaoencoming_mp3[XIAOEN_MP3_SIZE] = {
    0x49, 0x44, 0x33, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x21, 0x54, 0x58,
    ...
    0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55
};
unsigned int xiaoencoming1_mp3_len = XIAOEN_MP3_SIZE; // 6847

Until now, the playback audio data is finished. Copy xiaoencoming.h and temptest.h to the project path: evkmimxrt1060_maestro_weather_mp3\source

6.2 Play the MP3 data from memory

Here is the related code.

6.2.1 app_streamer.c added code

#include "xiaoencoming.h"
#include "temptest.h"

void *voice_inBuf  = NULL;
void *voice_outBuf = NULL;

status_t STREAMER_file_Create(streamer_handle_t *handle, char *filename, int eap_par)
{
    STREAMER_CREATE_PARAM params;
    OsaThreadAttr thread_attr;
    int ret;
    ELEMENT_PROPERTY_T prop;
    MEMSRC_SET_BUFFER_T inBufInfo = {0};
    SET_BUFFER_DESC_T outBufInfo  = {0};

    PRINTF("Kerry test begin!\r\n");
    /* Use strcmp: comparing string pointers with == is unreliable across files. */
    if (strcmp(filename, "temptest.mp3") == 0)
        inBufInfo = (MEMSRC_SET_BUFFER_T){.location = (int8_t *)temptest_mp3, .size = TEMPtest_MP3_SIZE};
    else if (strcmp(filename, "xiaoencoming.mp3") == 0)
        inBufInfo = (MEMSRC_SET_BUFFER_T){.location = (int8_t *)xiaoencoming_mp3, .size = XIAOEN_MP3_SIZE};

    /* Create message process thread */
    osa_thread_attr_init(&thread_attr);
    osa_thread_attr_set_name(&thread_attr, STREAMER_MESSAGE_TASK_NAME);
    osa_thread_attr_set_stack_size(&thread_attr, STREAMER_MESSAGE_TASK_STACK_SIZE);
    ret = osa_thread_create(&msg_thread, &thread_attr, STREAMER_MessageTask, (void *)handle);
    osa_thread_attr_destroy(&thread_attr);
    if (ERRCODE_NO_ERROR != ret)
    {
        return kStatus_Fail;
    }

    /* Create streamer */
    strcpy(params.out_mq_name, APP_STREAMER_MSG_QUEUE);
    params.stack_size    = STREAMER_TASK_STACK_SIZE;
    params.pipeline_type = STREAM_PIPELINE_MEM;
    params.task_name     = STREAMER_TASK_NAME;
    params.in_dev_name   = "buffer";
    params.out_dev_name  = "speaker";
    handle->streamer = streamer_create(&params);
    if (!handle->streamer)
    {
        return kStatus_Fail;
    }
    prop.prop = PROP_DECODER_DECODER_TYPE;
    prop.val  = (uintptr_t)DECODER_TYPE_MP3;
    ret = streamer_set_property(handle->streamer, prop, true);
    if (ret != STREAM_OK)
    {
        streamer_destroy(handle->streamer);
        handle->streamer = NULL;
        return kStatus_Fail;
    }
    prop.prop = PROP_MEMSRC_SET_BUFF;
    prop.val  = (uintptr_t)&inBufInfo;
    ret = streamer_set_property(handle->streamer, prop, true);
    if (ret != STREAM_OK)
    {
        streamer_destroy(handle->streamer);
        handle->streamer = NULL;
        return kStatus_Fail;
    }
    handle->audioPlaying = false;

error:
    PRINTF("End STREAMER_file_Create\r\n");
    PRINTF("Kerry test end!\r\n");
    return kStatus_Success;
}

The code builds the message thread, creates a streamer, defines it as playing from memory, sets the decoder property to MP3, and specifies the MP3 data array in memory. A different MP3 array in memory is selected based on the passed file name.

6.2.2 cmd.c added code

void play_file(char *filename, int eap_par)
{
    STREAMER_Init();
    int ret = STREAMER_file_Create(&streamerHandle, filename, eap_par);
    if (ret != kStatus_Success)
    {
        PRINTF("STREAMER_file_Create failed\r\n");
        goto file_error;
    }
    STREAMER_Start(&streamerHandle);
    PRINTF("Starting playback\r\n");
    file_playing = true;
    while (streamerHandle.audioPlaying)
    {
        osa_time_delay(100);
    }
    file_playing = false;

file_error:
    PRINTF("[play_file] Cleanup\r\n");
    STREAMER_Destroy(&streamerHandle);
    osa_time_delay(100);
}

play_file calls the STREAMER_file_Create API function, starts playback, waits for the playback to finish, then releases the streamer. The shellRecMIC API function adds the VIT recognition flags, which are used to play the feedback audio file:

static shell_status_t shellRecMIC(shell_handle_t shellHandle, int32_t argc, char **argv)
{
    ...
    // kerry
    PRINTF("Kerry MP3 stream data test!\r\n");
    PRINTF("---weather_data.ww_flag =%d--\r\n ", weather_data.ww_flag);
    PRINTF("---weather_data.vc_index =%d--\r\n ", weather_data.vc_index);
    PRINTF("---weather_data.mp3_flag =%d--\r\n ", weather_data.mp3_flag);
    if (weather_data.ww_flag == 1)
    {
        play_file("xiaoencoming.mp3", 0);
    }
    if (weather_data.vc_index == 3)
    {
        play_file("temptest.mp3", 0);
    }
    if (weather_data.mp3_flag != 0)
    {
        weather_data.ww_flag  = 0;
        weather_data.vc_index = 0;
    }
    weather_data.mp3_flag = 0;
    /* Delay for cleanup */
    osa_time_delay(100);
    return kStatus_SHELL_Success;
}

If the wake word "小恩小恩" is detected, the feedback audio "小恩来了请吩咐" is played. If the voice command "今天天气" is detected, the feedback audio "温度32.1度" is played. Please note, this playback is just an example using fixed audio. You can also create an audio word library and, according to the received weather information, combine the related word audio and play it back; this is a little complicated, but not difficult. For free-form audio, an online real-time TTS method can also be considered.
6.2.3 VIT WW and VC flags in the VIT_Execute function

int VIT_Execute(void *arg, void *inputBuffer, int size)
{
    ...
    if (VIT_DetectionResults == VIT_WW_DETECTED)
    {
        PRINTF(" - WakeWord detected \r\n");
        weather_data.ww_flag  = 1; // kerry
        weather_data.mp3_flag = 1;
    }
    else if (VIT_DetectionResults == VIT_VC_DETECTED)
    {
        // Retrieve id of the Voice Command detected
        // String of the Command can also be retrieved (when WW and CMDs strings are integrated in Model)
        VIT_Status = VIT_GetVoiceCommandFound(VITHandle, &VoiceCommand);
        if (VIT_Status != VIT_SUCCESS)
        {
            PRINTF("VIT_GetVoiceCommandFound error: %d\r\n", VIT_Status);
            return VIT_Status; // will stop processing VIT and go directly to MEM free
        }
        else
        {
            PRINTF(" - Voice Command detected %d", VoiceCommand.Cmd_Id);
            weather_data.vc_index = VoiceCommand.Cmd_Id; // kerry 1:ledon 2:ledoff 3:today weather 4:tomorrow weather 5:aftertomorrow weather
            weather_data.mp3_flag = 2;
            if (weather_data.vc_index == 1)
            {
                GPIO_PinWrite(GPIO1, 3, 1U); // pull high
                PRINTF(" led on!\r\n");
            }
            else if (weather_data.vc_index == 2)
            {
                GPIO_PinWrite(GPIO1, 3, 0U); // pull low
                PRINTF(" led off!\r\n");
            }
            // Retrieve CMD Name: OPTIONAL
            // Check first if CMD string is present
            if (VoiceCommand.pCmd_Name != PL_NULL)
            {
                PRINTF(" %s\r\n", VoiceCommand.pCmd_Name);
            }
            else
            {
                PRINTF("\r\n");
            }
        }
    }
    return VIT_Status;
}

Until now, all the code has been added.

6.2.4 Playback audio test result

This is the audio playback test result:

Fig 17 playback audio log

From the test result, we can see that the MP3 data stored in memory can be played back as audio. The code project is: evkmimxrt1060_maestro_weather_mp3.zip.
Note: for similar EVKs, see:
Using J-Link with MIMXRT1060-EVK or MIMXRT1064-EVK
Using J-Link with MIMXRT1170-EVKB
Using J-Link with MIMXRT1160-EVK or MIMXRT1170-EVK

This article provides details on using a J-Link debug probe with this EVK. There are two options: the onboard debug circuit can be updated with Segger J-Link firmware, or an external J-Link debug probe can be attached to the EVK. Using the onboard debug circuit is helpful as no other debug probe is required. However, the onboard debug circuit will no longer power the EVK when updated with the J-Link firmware. Appnote AN13206 has more details on this, including a comparison of the firmware options for the debug circuit. This article details the steps to use either J-Link option.

Using an external J-Link debug probe

Segger offers several J-Link probe options. To use one of these probes with these EVKs, configure the EVK with these settings:
Remove jumpers J9 and J10 to disconnect the SWD signals from the onboard debug circuit. These jumpers are installed by default.
Use the default power selection on J40 with pins 5-6 shorted.
Connect the J-Link probe to J2, the 20-pin dual-row 0.1" header.
Power the EVK with one of the power supply options. Typically USB connector J1 is used to power the board; it also provides a UART/USB bridge through the onboard debug circuit.

Using the onboard debug circuit with J-Link firmware

Follow Appnote AN13206 to program the J-Link firmware to the EVK.
Use jumper J12 to change the mode of the onboard debug circuit:
    Install J12 to force bootloader mode, to update the firmware image.
    Remove J12 to use the onboard debugger.
Install jumpers J9 and J10 to connect the SWD signals to the onboard debug circuit. These jumpers are installed by default.
Plug a USB cable into J1. This provides the connection for the J-Link debugger and the UART/USB bridge. However, with J-Link firmware, J1 no longer powers the EVK.
Power the EVK with another source. Here we use another USB port: move the jumper on J40 to short pins 3-4 (the default shorts pins 5-6).
Connect a second USB cable to J48 to power the EVK. The green LED next to J40 is lit when the EVK is properly powered.
RT600 MCUXpresso JLINK debug QSPI flash

1 Introduction

MIMXRT600-EVK is the NXP official board; its onboard flash is an external octal flash connected to the RT685 FlexSPI port B. In practical usage, a customer board may use another flash type, e.g. QSPI flash, connected to the FlexSPI port A. Recently, NXP published an RT600 customer flash application note:
https://www.nxp.com/docs/en/application-note/AN13386.pdf
This document mainly covers CMSIS-DAP related flash algorithm usage, modifying the option data to generate a new flash algorithm for different flash types. Some customer boards use RT600 QSPI flash + MCUXpresso + J-Link to debug the application code. Recently, one customer found on his own board that when using a J-Link debugger with MCUXpresso to download code to the RT600 QSPI flash, the download fails, but with a CMSIS-DAP debugger and the related QSPI .cfx file, the download works. So this document mainly shares the experience of using the RT600, MCUXpresso IDE, and J-Link to download and debug code located in external QSPI flash.

2 JLINK driver prepare and test

When the MCUXpresso IDE downloads through J-Link, it calls the J-Link driver's related script and flash algorithm. For the RT600, the J-Link driver defaults to the RT600-EVK FlexSPI port B octal flash, so if the customer board changes to another FlexSPI port and a QSPI flash, the related QSPI flash algorithm and script file must be provided; otherwise, even if the ARM CM33 core can be found, the download will still fail. Customers who want to use the MCUXpresso IDE with J-Link should first make sure the J-Link driver's attached tools can operate the external flash (erase, read, write) successfully. The following shows how to add the RT600 QSPI flash driver and script file to the J-Link driver.

2.1 JLINK driver install

Download the Segger J-Link driver from the following link:
https://www.segger.com/downloads/jlink/JLink_Windows_V754b_x86_64.exe
This document uses J-Link V7.54b for testing; other versions are similar. Install the driver; the default install path is: C:\Program Files\SEGGER

2.2 Universal flashloader RT-UFL

RT-UFL v1.0 is a universal flashloader that uses one .FLM file for all i.MX RT chips and different external flashes; it is mainly used with the Segger J-Link debugger. RT-UFL v1.0 download link:
https://github.com/JayHeng/RT-UFL/archive/refs/tags/v1.0.zip

Now, for the RT600 QSPI, apply the related flash algorithm file patch.

Copy the files from the path:
\RT-UFL-1.0\algo\SEGGER\JLink_Vxxx
to the J-Link install path:
\SEGGER\JLink
Then copy the content of:
RT-UFL-master\test\SEGGER\JLink_Vxxx\Devices\NXP\iMXRT6xx\archive2\evkmimxrt685.JLinkScript
to replace the content of:
C:\Program Files\SEGGER\JLink\Devices\NXP\iMXRT_UFL\iMXRT6xx_CortexM33.JLinkScript
Otherwise, the MCUXpresso IDE debug reset button will not work: the JLinkScript needs the ResetTarget code, which resets the external flash.

Pic 1

RT-UFL provides 3 types of download flash algorithms: MIMXRT600_UFL_L0, MIMXRT600_UFL_L1, MIMXRT600_UFL_L2.

Pic 2

_L0 is used for QSPI flash and octal flash (page size 256 bytes, sector size 4KB); _L1/_L2 are used for hyper flash (page size 512 bytes, sector size 4KB/64KB). The JLinkDevices.xml content also gives the detailed information.
Different device names call different .FLM files; the .FLM is the flash algorithm file, whose source code can be found in RT-UFL v1.0. It uses different option0/option1 values to configure the different external memories, provided the memory chip supports SFDP.

<Device>
  <ChipInfo Vendor="NXP" Name="MIMXRT600_UFL_L0" WorkRAMAddr="0x00000000" WorkRAMSize="0x00480000"
            Core="JLINK_CORE_CORTEX_M33"
            JLinkScriptFile="Devices/NXP/iMXRT_UFL/iMXRT6xx_CortexM33.JLinkScript"
            Aliases="MIMXRT633S; MIMXRT685S_M33"/>
  <FlashBankInfo Name="Octal Flash" BaseAddr="0x08000000" MaxSize="0x08000000"
                 Loader="Devices/NXP/iMXRT_UFL/MIMXRT_FLEXSPI_UFL_256B_4KB.FLM"
                 LoaderType="FLASH_ALGO_TYPE_OPEN" />
</Device>
<!------------------------>
<Device>
  <ChipInfo Vendor="NXP" Name="MIMXRT600_UFL_L1" WorkRAMAddr="0x00000000" WorkRAMSize="0x00480000"
            Core="JLINK_CORE_CORTEX_M33"
            JLinkScriptFile="Devices/NXP/iMXRT_UFL/iMXRT6xx_CortexM33.JLinkScript"
            Aliases="MIMXRT633S; MIMXRT685S_M33"/>
  <FlashBankInfo Name="Octal Flash" BaseAddr="0x08000000" MaxSize="0x08000000"
                 Loader="Devices/NXP/iMXRT_UFL/MIMXRT_FLEXSPI_UFL_512B_4KB.FLM"
                 LoaderType="FLASH_ALGO_TYPE_OPEN" />
</Device>
<!------------------------>
<Device>
  <ChipInfo Vendor="NXP" Name="MIMXRT600_UFL_L2" WorkRAMAddr="0x00000000" WorkRAMSize="0x00480000"
            Core="JLINK_CORE_CORTEX_M33"
            JLinkScriptFile="Devices/NXP/iMXRT_UFL/iMXRT6xx_CortexM33.JLinkScript"
            Aliases="MIMXRT633S; MIMXRT685S_M33"/>
  <FlashBankInfo Name="Octal Flash" BaseAddr="0x08000000" MaxSize="0x08000000"
                 Loader="Devices/NXP/iMXRT_UFL/MIMXRT_FLEXSPI_UFL_512B_64KB.FLM"
                 LoaderType="FLASH_ALGO_TYPE_OPEN" />
</Device>

2.3 JLINK commander test

Please note, the device needs to be selected as MIMXRT600_UFL_L0 when using the QSPI flash.

Pic 3
Pic 4
Pic 5

We can see that the J-Link Commander can read and erase the external QSPI flash.

2.4 Jflash Test

Operation steps: Target -> Connect -> Production programming

Pic 6

We can see that J-Flash can also erase and program the RT600 external QSPI flash. Please note, not all J-Link probes support J-Flash; this document uses a Segger J-Link Plus.

3 MCUXpresso configuration and test

MCUXpresso: v11.4.0
SDK_2_10_0_EVK-MIMXRT685

Import an SDK project into the MCUXpresso IDE, e.g. hello_world or led_output.

3.1 QSPI FCB configuration

The FCB is located at flash offset address 0x08000400 and is used for the FlexSPI NOR boot configuration; the detailed FCB content can be found in the RT600 user manual, Table 997 "FlexSPI flash configuration block". Different external flashes need different configurations: to use a QSPI flash, the FCB should use the QSPI related configuration and the flash's own LUT table.

Modify flash_config.c and flash_config.h in the flash_config folder of the SDK project. The LUT contains fast read, status read, write enable, sector erase, block erase, page program, and whole-chip erase. If the external QSPI flash commands differ, modify the LUT commands following the related commands in the flash datasheet.
const flexspi_nor_config_t flexspi_config = {
    .memConfig =
        {
            .tag                  = FLASH_CONFIG_BLOCK_TAG,
            .version              = FLASH_CONFIG_BLOCK_VERSION,
            .readSampleClkSrc     = kFlexSPIReadSampleClk_LoopbackInternally,
            .csHoldTime           = 3,
            .csSetupTime          = 3,
            .columnAddressWidth   = 0,
            .deviceModeCfgEnable  = 0,
            .deviceModeType       = 0,
            .waitTimeCfgCommands  = 0,
            .deviceModeSeq        = {.seqNum = 0, .seqId = 0},
            .deviceModeArg        = 0,
            .configCmdEnable      = 0,
            .configModeType       = {0},
            .configCmdSeqs        = {0},
            .configCmdArgs        = {0},
            .controllerMiscOption = (0),
            .deviceType           = 1,
            .sflashPadType        = kSerialFlash_4Pads,
            .serialClkFreq        = kFlexSpiSerialClk_133MHz,
            .lutCustomSeqEnable   = 0,
            .sflashA1Size         = BOARD_FLASH_SIZE,
            .sflashA2Size         = 0,
            .sflashB1Size         = 0,
            .sflashB2Size         = 0,
            .csPadSettingOverride   = 0,
            .sclkPadSettingOverride = 0,
            .dataPadSettingOverride = 0,
            .dqsPadSettingOverride  = 0,
            .timeoutInMs          = 0,
            .commandInterval      = 0,
            .busyOffset           = 0,
            .busyBitPolarity      = 0,
            .lookupTable =
                {
#if 0
                    [0]  = 0x08180403,
                    [1]  = 0x00002404,
                    [4]  = 0x24040405,
                    [12] = 0x00000604,
                    [20] = 0x081804D8,
                    [36] = 0x08180402,
                    [37] = 0x00002080,
                    [44] = 0x00000460,
#endif
                    // Fast Read
                    [4 * 0 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xEB, RADDR_SDR, FLEXSPI_4PAD, 0x18),
                    [4 * 0 + 1] = FLEXSPI_LUT_SEQ(MODE4_SDR, FLEXSPI_4PAD, 0x00, DUMMY_SDR, FLEXSPI_4PAD, 0x09),
                    [4 * 0 + 2] = FLEXSPI_LUT_SEQ(READ_SDR, FLEXSPI_4PAD, 0x04, STOP_EXE, FLEXSPI_1PAD, 0x00),
                    // Read status
                    [4 * 1 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x05, READ_SDR, FLEXSPI_1PAD, 0x04),
                    // Write enable
                    [4 * 3 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x06, STOP_EXE, FLEXSPI_1PAD, 0),
                    // Sector erase LUTs
                    [4 * 5 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x20, RADDR_SDR, FLEXSPI_1PAD, 0x18),
                    // Block erase 64KByte LUTs
                    [4 * 8 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xD8, RADDR_SDR, FLEXSPI_1PAD, 0x18),
                    // Page program - single mode
                    [4 * 9 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x02, RADDR_SDR, FLEXSPI_1PAD, 0x18),
                    [4 * 9 + 1] = FLEXSPI_LUT_SEQ(WRITE_SDR, FLEXSPI_1PAD, 0x04, STOP_EXE, FLEXSPI_1PAD, 0x0),
                    // Erase whole chip
                    [4 * 11 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x60, STOP_EXE, FLEXSPI_1PAD, 0),
                },
        },
    .pageSize           = 0x100,
    .sectorSize         = 0x1000,
    .ipcmdSerialClkFreq = 1,
    .isUniformBlockSize = 0,
    .blockSize          = 0x10000,
};

This code has been tested on the RT685 + QSPI flash MT25QL128ABA1ESE; booting from this code works.

3.2 Debug configuration

Configure the J-Link options in the MCUXpresso IDE with the J-Link driver JLinkGDBServerCL.exe:
Windows -> Preferences

Pic 7

Press debug to generate the .launch file.

Pic 8

Run -> Debug configurations

Pic 9

Choose the device as MIMXRT600_UFL_L0. If the SWD wires are long and the connection is not stable, you can also set the speed to a fixed low frequency.

3.3 Download and debug test

Before downloading, check the RT685 ISP mode configuration. As this document uses 4-wire QSPI connected to the FlexSPI port A, the ISP boot mode should be FlexSPI boot from port A:
ISP2 PIO1_17 low, ISP1 PIO1_16 high, ISP0 PIO1_15 high

Click the debug button; the code enters debug mode and stops at the main function. The code address is located at the FlexSPI remap address.

Pic 10

Click run; the RT685 pin P0_26 toggles, and the UART interface prints information. The application code is working.

4 External SPI flash operation checking

On a customer-designed board, normally we first use the J-Link Commander to check whether it can find the ARM core, to make sure the RT chip works, and then check the external flash operations.
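Before moving to the IDE, a quick check from the J-Link Commander console can confirm both points. The session below is a sketch of typical commands (the addresses, sizes, and file name are illustrative examples, not output captured from this board):

JLink.exe -device MIMXRT600_UFL_L0 -if SWD -speed 4000
connect
mem32 0x08000400, 4
erase 0x08000000 0x08001000
loadfile app.srec

Here, connect should identify the Cortex-M33 core; mem32 reads back the FCB area of the QSPI flash; erase and loadfile exercise the RT-UFL flash algorithm (app.srec is a placeholder image name).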
4.1 SDK IAP flash code test

We can use the related SDK example to test the external flash operation first. The SDK code path is:
SDK_2_10_0_EVK-MIMXRT685\boards\evkmimxrt685\driver_examples\iap\iap_flash
Then check the external flash and modify the code's option0 and option1 to match it. The option0 and option1 definitions can be found in the RT600 user manual, Table 1004 "Option0 definition" and Table 1005 "Option1 definition".

Pic 11
Pic 12

For the external QSPI flash connected to the FlexSPI port A, we can modify the options to the following code:

option.option0.U = 0xC0000001; // EXAMPLE_NOR_FLASH
option.option1.U = 0x00000000; // EXAMPLE_NOR_FLASH_OPTION1

Then burn the iap_flash project to the RT685 internal RAM and debug it.

Pic 13

We can see that the external QSPI flash initialization, erase, read, and write all work, and the correct data can also be found in memory.

4.2 MCUBootUtility test

Put the chip into ISP mode, then use the MCUBootUtility tool to connect the RT685 and QSPI flash and do the application code program and read test.
ISP mode: ISP2 high, ISP1 high, ISP0 low
Configure "FlexSPI NOR Device Configuration" as QSPI; the template ISSI_IS25LPxxxA_IS25WPxxxA can be used.

Pic 14

Click the "connect to ROM" button and check whether it recognizes the external flash:

Pic 15

After connection, we can download the RT685 image attached to the tool:
NXP-MCUBootUtility-3.3.1\apps\NXP_MIMXRT685-EVK_Rev.E\led_blinky_0x08001000_fdcb.srec

Pic 16

The connection, erase, program, and read all work, which also indicates the RT685 + external QSPI flash is working. Then you can go on to debug it with the IDE and debugger.
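As a closing note, here is my hedged reading of the option word used in section 4.1; the field layout follows the Option0 definition table in the RT600 user manual (Table 1004) and should be verified there before relying on it:

/* 0xC0000001 decoded (my reading of the ROM IAP option0 layout - verify
 * against Table 1004 in the RT600 user manual):
 *   [31:28] tag         = 0xC - fixed tag marking a valid option block
 *   [27:24] option_size = 0   - no extra option1 word required
 *   [23:20] device_type = 0   - QuadSPI NOR flash probed via SFDP
 *   [19:16] query_pads  = 0   - SFDP read on 1 pad
 *   [15:12] cmd_pads    = 0   - commands issued on 1 pad
 *   [3:0]   max_freq    = 1   - conservative probe frequency setting
 */
option.option0.U = 0xC0000001;
option.option1.U = 0x00000000;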
Note: for similar EVKs, see:
Using J-Link with MIMXRT1060-EVKB or MIMXRT1040-EVK
Using J-Link with MIMXRT1170-EVKB
Using J-Link with MIMXRT1160-EVK or MIMXRT1170-EVK

This article provides details on using a J-Link debug probe with either of these EVKs. There are two options: the onboard debug circuit can be updated with Segger J-Link firmware, or an external J-Link debug probe can be attached to the EVK. Using the onboard debug circuit is helpful as no other debug probe is required. However, the onboard debug circuit will no longer power the EVK when updated with the J-Link firmware. Appnote AN13206 has more details on this, including a comparison of the firmware options for the debug circuit. This article details the steps to use either J-Link option.

Using an external J-Link debug probe

Segger offers several J-Link probe options. To use one of these probes with these EVKs, configure the EVK with these settings:
Remove jumpers J47 and J48 to disconnect the SWD signals from the onboard debug circuit. These jumpers are installed by default.
Use the default power selection on J1 with pins 5-6 shorted.
Connect the J-Link probe to J21, the 20-pin dual-row 0.1" header.
Power the EVK with one of the power supply options. Typically USB connector J41 is used to power the board; it also provides a UART/USB bridge through the onboard debug circuit.

Using the onboard debug circuit with J-Link firmware

Follow Appnote AN13206 to program the J-Link firmware to the EVK.
Install jumpers J47 and J48 to connect the SWD signals to the onboard debug circuit. These jumpers are installed by default.
Plug a USB cable into J41. This provides the connection for the J-Link debugger and the UART/USB bridge. However, with J-Link firmware, J41 no longer powers the EVK.
Power the EVK with another source. Here we use another USB port: move the jumper on J1 to short pins 3-4 (the default shorts pins 5-6).
Connect a second USB cable to J9 to power the EVK. The green LED next to J1 is lit when the EVK is properly powered.
Realize a panoramic video layer with OpenGL
RT10xx SAI basic and SDCard wave file play

1. Introduction

The NXP RT10xx audio modules are SAI, SPDIF, and MQS. The SAI module is a synchronous serial interface for audio data transmission. SPDIF is a stereo transceiver that can receive and send digital audio. MQS is used to convert I2S audio data from SAI3 to PWM, which can then drive external speakers; in practical usage it still needs an amplifier drive circuit. Using the SAI module involves audio file playback and audio data acquisition. Based on the MIMXRT1060-EVK board, this article covers RT10xx SAI module basics, the PCM waveform format, audio file cutting and conversion tools, uses the MCUXpresso IDE config peripheral tool to create an SAI project and play audio data, and also provides SD card wave file reading (with the FatFs file system) and playback.

2. Basic knowledge and tools

Before entering the project details and testing, here is some SAI module knowledge, wave file format information, and audio conversion tooling.

2.1 SAI module basic

The RT10xx SAI module supports I2S, AC97, TDM, and codec/DSP interfaces. The SAI module contains a transmitter and a receiver; the related signals are:

SAI_MCLK: master clock, used to generate the bit clock; master output, slave input
SAI_TX_BCLK: transmit bit clock; master output, slave input
SAI_TX_SYNC: transmit frame sync; master output, slave input; L/R channel select
SAI_TX_DATA[4]: transmit data lines; 1-3 shared with RX_DATA[1-3]
SAI_RX_BCLK: receive bit clock
SAI_RX_SYNC: receive frame sync
SAI_RX_DATA[4]: receive data lines

SAI module clocks: audio master clock, bus clock, bit clock.

The SAI module frame sync has 3 modes:
1) Transmit and receive each use their own BCLK and SYNC.
2) Transmit async, receive sync: both use the transmit BCLK and SYNC; the transmitter is enabled first and disabled last.
3) Transmit sync, receive async: both use the receive BCLK and SYNC; the receiver is enabled first and disabled last.

A valid frame sync is also ignored (slave mode) or not generated (master mode) for the first four bit-clock cycles after enabling the transmitter or receiver.

Pic 1

SAI module clock structure:

Pic 2

The SAI module has 3 clock sources: PLL3_PFD3, PLL5, PLL4. In the above picture, SAI1_CLK_ROOT can be used as the MCLK, and the BCLK is:

BCLK = master clock / ((TCR2[DIV] + 1) * 2)
Sample rate = BCLK / (bit width * channels)

For example, a 512 kHz bit clock with 16-bit samples and 2 channels gives 512000 / (16 * 2) = 16 kHz.

2.2 Waveform audio file format

A WAVE file is used to save PCM-encoded data. WAVE uses the RIFF format; the smallest unit in a RIFF file is the chunk (CK) structure, where ckID is the data type and can be "RIFF", "LIST", "fmt ", "data", etc. A RIFF file is little-endian.

RIFF structure (illustrative; ckData[ckSize] is pseudo-C for a variable-length payload):

typedef unsigned long DWORD; // 4B
typedef unsigned char BYTE;  // 1B
typedef DWORD FOURCC;        // 4B
typedef struct
{
    FOURCC ckID;   // 4B
    DWORD  ckSize; // 4B
    union
    {
        FOURCC fccType;        // RIFF form type, 4B
        BYTE   ckData[ckSize]; // ckSize * 1B
    } ckData;
} RIFFCK;

Pic 3

Take a 16 kHz, 2-channel wave file as the example:

Pic 4

Yellow: ckID   Green: data length   Purple: data

The detailed analysis is as follows:

Pic 5

We can see that apart from the wave header, the real audio data size is 1279860 bytes.

2.3 Audio file convert

In practical usage, the audio file may not have the required channel count or sample rate configuration, or the format may not be wave, or the clip may be too long; then we can use a tool to convert it to the desired format.
We can use the ffmpeg tool: https://ffmpeg.org/
For details, check the ffmpeg documentation; we normally use these commands:

Convert an mp3 file to a 16 kHz, 16-bit, 2-channel wave file:
ffmpeg -i test.mp3 -acodec pcm_s16le -ar 16000 -ac 2 test.wav
or:
ffmpeg -i test.mp3 -aq 16 -ar 16000 -ac 2 test.wav

Cut 35 s of test.wav starting from 00:00:00 and save it to test1.wav:
ffmpeg -ss 00:00:00 -i test.wav -t 35.0 -c copy test1.wav

Pic 6
Pic 7

2.4 Obtain wave L/R channel audio data

Just like the SDK code, the L/R audio data is stored directly in an RT RAM array, so we need to obtain the audio data from the wav file. We can use Python to read out the wav header, get the audio data size, and save the audio data as an array in a .h file. The related Python code is:

import sys
import wave

def wav2hex(strWav, strHex):
    with wave.open(strWav, "rb") as fWav:
        wavChannels = fWav.getnchannels()
        wavSampleWidth = fWav.getsampwidth()
        wavFrameRate = fWav.getframerate()
        wavFrameNum = fWav.getnframes()
        wavFrames = fWav.readframes(wavFrameNum)
        wavDuration = wavFrameNum / wavFrameRate
        wafFramebytes = wavFrameNum * wavChannels * wavSampleWidth
        print("Channels: {}".format(wavChannels))
        print("Sample width: {}bits".format(wavSampleWidth * 8))
        print("Sample rate: {}kHz".format(wavFrameRate / 1000))
        print("Frames number: {}".format(wavFrameNum))
        print("Duration: {}s".format(wavDuration))
        print("Frames bytes: {}".format(wafFramebytes))

    with open(strHex, "w") as fHex:
        # Print WAV parameters
        fHex.write("/*\n")
        fHex.write("  Channels: {}\n".format(wavChannels))
        fHex.write("  Sample width: {}bits\n".format(wavSampleWidth * 8))
        fHex.write("  Sample rate: {}kHz\n".format(wavFrameRate / 1000))
        fHex.write("  Frames number: {}\n".format(wavFrameNum))
        fHex.write("  Duration: {}s\n".format(wavDuration))
        fHex.write("  Frames bytes: {}\n".format(wafFramebytes))
        fHex.write("*/\n\n")
        # Print WAV frames
        fHex.write("uint8_t music[] = {\n")
        print("Transferring...")
        i = 0
        while wafFramebytes > 0:
            if wafFramebytes < 16:
                BytesToPrint = wafFramebytes
            else:
                BytesToPrint = 16
            fHex.write("    ")
            for j in range(0, BytesToPrint):
                if j != 0:
                    fHex.write(' ')
                fHex.write("0x{:0>2x},".format(wavFrames[i]))
                i += 1
            fHex.write("\n")
            wafFramebytes -= BytesToPrint
        fHex.write("};\n")
    print("Done!")

wav2hex(sys.argv[1], sys.argv[2])

Take music1.wav as an example:

Pic 8

2.5 Audio data relationship with the audio wave

The 16-bit data range is -32768 to 32767; the related value range displayed by GoldWave is -1 to 1. Open the example music1.wav with the GoldWave tool and check the data at the 1 s position: the left channel relative value is -0.08227, and the right channel relative value is -0.2257.

Pic 9
Pic 10

Now calculate the real L/R data and find the position in music1.h.

Pic 11

From Pic 8, we know the real R/L wave data starts at line 11, and each line contains 16 bytes of data. So from the music1.wav relative values we can calculate the corresponding raw data and compare it with the real data in the array: they are exactly the same.
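The conversion in section 2.5 can be reproduced with a few lines of C; this is a small standalone sketch (the normalized value is the one read from GoldWave above):

#include <stdint.h>
#include <stdio.h>

/* GoldWave displays samples normalized to -1.0..1.0; scaling by 32768
 * recovers the raw 16-bit PCM value stored little-endian in music1.h. */
int main(void)
{
    double normalized = -0.08227; /* left channel value at the 1 s position */
    int16_t raw = (int16_t)(normalized * 32768.0);

    printf("raw = %d (0x%04X), little-endian bytes: 0x%02X 0x%02X\n",
           raw, (uint16_t)raw, (uint8_t)(raw & 0xFF), (uint8_t)((raw >> 8) & 0xFF));
    return 0;
}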
Create one bare-metal project:
Drivers check: clock, common, dmamux, edma, gpio, i2c, iomuxc, lpuart, sai, sai_edma, xip_device
Utilities check: debug_console, lpuart_adapter, serial_manager, serial_manager_uart
Board components check: xip_board
Abstraction Layer check: codec, codec_wm8960_adapter, lpi2c_adapter
Software Components check: codec_i2c, lists, wm8960

After the creation of the project, open the Clocks tool and configure the clocks. The core and FlexSPI clocks can use the defaults; we mainly configure the SAI1 related clocks:

Pic 12

Select the SAI1 clock source as PLL4, configure PLL4_MAIN_CLK as 786.48 MHz, and configure the SAI1 clock as 6.144375 MHz. After the configuration, update the code.

Open the Pins tool and configure the SAI1 related pins; as the codec also needs I2C, it contains the I2C pin configuration too.

Pic 13

Update the code. Open Peripherals and configure DMA, SAI, and NVIC.

Pic 14
Pic 15

The DMA configuration is as follows:

Pic 16

After configuration, generate the code. With the above configuration, the SAI DMA transfer is set up: SAI master mode, 16 bits, 16 kHz sample rate, 2 channels, DMA transfer; the bit clock is 512 kHz, and the master clock is 6.1443 MHz.

void callback(I2S_Type *base, sai_edma_handle_t *handle, status_t status, void *userData)
{
    if (kStatus_SAI_RxError == status)
    {
    }
    else
    {
        finishIndex++;
        emptyBlock++;
        /* Judge whether the music array is completely transferred. */
        if (MUSIC_LEN / BUFFER_SIZE == finishIndex)
        {
            isFinished  = true;
            finishIndex = 0;
            emptyBlock  = BUFFER_NUM;
            tx_index    = 0;
            cpy_index   = 0;
        }
    }
}

int main(void)
{
    sai_transfer_t xfer;

    /* Init board hardware. */
    BOARD_ConfigMPU();
    BOARD_InitBootPins();
    BOARD_InitBootClocks();
    BOARD_InitBootPeripherals();
#ifndef BOARD_INIT_DEBUG_CONSOLE_PERIPHERAL
    /* Init FSL debug console. */
    BOARD_InitDebugConsole();
#endif
    PRINTF(" SAI wav module test!\n\r");

    /* Use default setting to init codec */
    if (CODEC_Init(&codecHandle, &boardCodecConfig) != kStatus_Success)
    {
        assert(false);
    }
    /* delay for codec output stable */
    DelayMS(DEMO_CODEC_INIT_DELAY_MS);
    CODEC_SetVolume(&codecHandle, 2U, 50); // set 50% volume

    EnableIRQ(DEMO_SAI_IRQ);
    SAI_TxEnableInterrupts(DEMO_SAI, kSAI_FIFOErrorInterruptEnable);
    PRINTF(" MUSIC PLAY Start!\n\r");

    while (1)
    {
        PRINTF(" MUSIC PLAY Again\n\r");
        isFinished = false;
        while (!isFinished)
        {
            if ((emptyBlock > 0U) && (cpy_index < MUSIC_LEN / BUFFER_SIZE))
            {
                /* Fill in the buffers. */
                memcpy((uint8_t *)&buffer[BUFFER_SIZE * (cpy_index % BUFFER_NUM)],
                       (uint8_t *)&music[cpy_index * BUFFER_SIZE], sizeof(uint8_t) * BUFFER_SIZE);
                emptyBlock--;
                cpy_index++;
            }
            if (emptyBlock < BUFFER_NUM)
            {
                /* xfer structure */
                xfer.data     = (uint8_t *)&buffer[BUFFER_SIZE * (tx_index % BUFFER_NUM)];
                xfer.dataSize = BUFFER_SIZE;
                /* Wait for available queue. */
                if (kStatus_Success == SAI_TransferSendEDMA(DEMO_SAI, &SAI1_SAI_Tx_eDMA_Handle, &xfer))
                {
                    tx_index++;
                }
            }
        }
    }
}

4. SAI test result

To check the real L/R data sent out, we modify the first 16 bytes of the music array to:
0x55,0xaa,0x01,0x00,0x02,0x00,0x03,0x00,0x04,0x00,0x05,0x00,0x06,0x00,0x07,0x00
Then we probe the SAI_MCLK, SAI_TX_BCLK, SAI_TX_SYNC, and SAI_TXD pin waves and compare them with the defined data. Because the polarity is configured as active low, the output changes on the falling edge and is sampled on the rising edge. The test points on the MIMXRT1060-EVK board use the codec pin positions:

Pic 17

4.1 Logic Analyzer tool wave

Pic 18

The MCLK clock frequency is 6.144375 MHz, BCLK is 512 kHz, and SYNC is 16 kHz.
Pic 19

The first frame data is 1010101001010101 0000000000000001, i.e. 0xAA55 0x0001. It is the same as the L/R data defined in the array. SYNC low is the left-channel 16 bits; SYNC high is the right-channel 16 bits.

4.2 Oscilloscope test wave

Just like the logic analyzer, the oscilloscope wave is the same:

Pic 20

Add music.h to the project and let the main code play the music array data in a loop; we can hear the music clearly when a headphone or a speaker is connected to the on-board J12.

5. SAI SDCard wave music play

This part adds the SD card and the FatFs file system, reads out a 16-bit, 16 kHz, 2-channel wave file from the SD card, and plays it in a loop.

5.1 Driver add

The code is based on SDK_2.9.2_EVK-MIMXRT1060; on top of the previous project, add the SD card and SD FatFs drivers. The bare-metal driver situation is now:

Drivers check: cache, clock, common, dmamux, edma, gpio, i2c, iomuxc, lpuart, sai, sai_edma, sdhc, xip_device
Utilities check: debug_console, lpuart_adapter, serial_manager, serial_manager_uart
Middleware check: File System -> FAT File System -> fatfs + sd, Memories
Board components check: xip_board
Abstraction Layer check: codec, codec_wm8960_adapter, lpi2c_adapter
Software Components check: codec_i2c, lists, wm8960

5.2 WAVE header analysis code

From the previous content, we know the wav header structure. To play the wave file from the SD card, we need to analyze the wave header to get the audio format and audio data related information. The header analysis code is:

uint8_t Fun_Wave_Header_Analyzer(void)
{
    char *datap;
    uint8_t ErrFlag = 0;

    datap = strstr((char *)Wav_HDBuffer, "RIFF");
    if (datap != NULL)
    {
        wav_header.chunk_size = ((uint32_t)*(Wav_HDBuffer + 4)) + (((uint32_t)*(Wav_HDBuffer + 5)) << 8) +
                                (((uint32_t)*(Wav_HDBuffer + 6)) << 16) + (((uint32_t)*(Wav_HDBuffer + 7)) << 24);
        movecnt += 8;
    }
    else
    {
        ErrFlag = 1;
        return ErrFlag;
    }
    datap = strstr((char *)(Wav_HDBuffer + movecnt), "WAVEfmt");
    if (datap != NULL)
    {
        movecnt += 8;
        wav_header.fmtchunk_size = ((uint32_t)*(Wav_HDBuffer + movecnt + 0)) +
                                   (((uint32_t)*(Wav_HDBuffer + movecnt + 1)) << 8) +
                                   (((uint32_t)*(Wav_HDBuffer + movecnt + 2)) << 16) +
                                   (((uint32_t)*(Wav_HDBuffer + movecnt + 3)) << 24);
        wav_header.audio_format = ((uint16_t)*(Wav_HDBuffer + movecnt + 4)) +
                                  (((uint16_t)*(Wav_HDBuffer + movecnt + 5)) << 8);
        wav_header.num_channels = ((uint16_t)*(Wav_HDBuffer + movecnt + 6)) +
                                  (((uint16_t)*(Wav_HDBuffer + movecnt + 7)) << 8);
        wav_header.sample_rate = ((uint32_t)*(Wav_HDBuffer + movecnt + 8)) +
                                 (((uint32_t)*(Wav_HDBuffer + movecnt + 9)) << 8) +
                                 (((uint32_t)*(Wav_HDBuffer + movecnt + 10)) << 16) +
                                 (((uint32_t)*(Wav_HDBuffer + movecnt + 11)) << 24);
        wav_header.byte_rate = ((uint32_t)*(Wav_HDBuffer + movecnt + 12)) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 13)) << 8) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 14)) << 16) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 15)) << 24);
        wav_header.block_align = ((uint16_t)*(Wav_HDBuffer + movecnt + 16)) +
                                 (((uint16_t)*(Wav_HDBuffer + movecnt + 17)) << 8);
        wav_header.bps = ((uint16_t)*(Wav_HDBuffer + movecnt + 18)) +
                         (((uint16_t)*(Wav_HDBuffer + movecnt + 19)) << 8);
        movecnt += (4 + wav_header.fmtchunk_size);
    }
    else
    {
        ErrFlag = 1;
        return ErrFlag;
    }
    datap = strstr((char *)(Wav_HDBuffer + movecnt), "LIST");
    if (datap != NULL)
    {
        movecnt += 4;
        wav_header.list_size = ((uint32_t)*(Wav_HDBuffer + movecnt + 0)) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 1)) << 8) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 2)) << 16) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 3)) << 24);
        movecnt += (4 + wav_header.list_size);
    } // LIST is not mandatory
    datap = strstr((char *)(Wav_HDBuffer + movecnt), "data");
    if (datap != NULL)
    {
        movecnt += 4;
        wav_header.datachunk_size =
            ((uint32_t)*(Wav_HDBuffer + movecnt + 0)) + (((uint32_t)*(Wav_HDBuffer + movecnt + 1)) << 8) +
            (((uint32_t)*(Wav_HDBuffer + movecnt + 2)) << 16) + (((uint32_t)*(Wav_HDBuffer + movecnt + 3)) << 24);
        movecnt += 4;
        ErrFlag = 0;
    }
    else
    {
        ErrFlag = 1;
        return ErrFlag;
    }
    PRINTF("Wave audio format is %d\r\n", wav_header.audio_format);
    PRINTF("Wave audio channel number is %d\r\n", wav_header.num_channels);
    PRINTF("Wave audio sample rate is %d\r\n", wav_header.sample_rate);
    PRINTF("Wave audio byte rate is %d\r\n", wav_header.byte_rate);
    PRINTF("Wave audio block align is %d\r\n", wav_header.block_align);
    PRINTF("Wave audio bit per sample is %d\r\n", wav_header.bps);
    PRINTF("Wave audio data size is %d\r\n", wav_header.datachunk_size);
    return ErrFlag;
}

The RIFF content is mainly divided into 4 parts: "RIFF", "fmt", "LIST", "data". The 4 bytes following "data" give the whole audio data size, which FatFs can use to read the audio data. The above code also records the data position, so when reading the wave with FatFs we can jump to the data area directly.

5.3 SD card wave data play

Define the array audioBuff[4 * 512], used to buffer the wave file read from the SD card, and send this data to the SAI eDMA for transfer to the I2S interface until all the data has been transmitted. The callback records each 512-byte block sent out and judges whether the transmitted data size has reached the whole wave audio data size.

5.4 SD card wave play result

Prepare one wave file (16-bit, 16 kHz sample rate, 2 channels) named music.wav, put it on a FAT32-formatted SD card, insert the card into MIMXRT1060-EVK J39, and run the code. The printed information is:

Please insert a card into the board.
Card inserted.
Make file system......The time may be long if the card capacity is big.
SAI wav module test!
MUSIC PLAY Start!
Wave audio format is 1
Wave audio channel number is 2
Wave audio sample rate is 16000
Wave audio byte rate is 64000
Wave audio block align is 4
Wave audio bit per sample is 16
Wave audio data size is 2728440
Playback is begin!
Playback is finished!

At the same time, after inserting a headphone or speaker into J12, we can hear the music. The attachment contains the MCUXpresso 10.3.0 project and the wave samples.
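For reference, section 5.3 describes the read-and-queue loop without listing it in full; the following is a hedged sketch of that loop, not the attached project's exact code. The names follow the article (audioBuff, wav_header, movecnt, DEMO_SAI, SAI1_SAI_Tx_eDMA_Handle), the FatFs calls are the standard f_open/f_lseek/f_read API, and the card is assumed to be already mounted as in the demo:

/* Sketch: stream the PCM payload from the SD card in 512-byte chunks and
 * queue each chunk to the SAI eDMA transfer. */
static uint8_t audioBuff[4 * 512];

static void play_wave_file(const TCHAR *path)
{
    FIL file;
    UINT bytesRead;
    uint32_t remaining = wav_header.datachunk_size;
    uint32_t blk       = 0;
    sai_transfer_t xfer;

    if (f_open(&file, path, FA_READ) != FR_OK)
    {
        return;
    }
    f_lseek(&file, movecnt); /* skip the RIFF header, jump to the PCM data */
    while (remaining > 0U)
    {
        uint8_t *chunk = &audioBuff[512U * (blk % 4U)];
        UINT toRead    = (remaining > 512U) ? 512U : (UINT)remaining;

        if ((f_read(&file, chunk, toRead, &bytesRead) != FR_OK) || (bytesRead == 0U))
        {
            break;
        }
        xfer.data     = chunk;
        xfer.dataSize = bytesRead;
        /* The eDMA handle queues several transfers; with 4 rotating chunks,
         * a buffer is not rewritten while the DMA still owns it. Retry while
         * the queue is full. */
        while (SAI_TransferSendEDMA(DEMO_SAI, &SAI1_SAI_Tx_eDMA_Handle, &xfer) != kStatus_Success)
        {
        }
        remaining -= bytesRead;
        blk++;
    }
    f_close(&file);
}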
This document describes the different source clocks and the main modules that manage which clock source is used to derive the system clocks that exist on i.MX RT devices. It is important to know the different clock sources available on these devices: modifying the default clock configuration may serve different purposes, such as increasing processor performance, achieving specific baud rates for serial communications, saving power, or simply providing a known base reference for a clock timer. The hardware used for this document is the following: i.MX RT: EVK-MIMXRT1060. Keep in mind that the hardware and clock-management modules described in this document are a general overview across the different platforms; the device listed above is used as a reference example, and some terms and hardware-module functionality may vary between devices of the same platform. For more detailed information about the device hardware modules, please refer to your specific device Reference Manual.
RT platforms
The Clock Controller Module (CCM) facilitates clock generation on the RT platforms; many clocking variations are possible, and the maximum clock frequency for the i.MX RT1060 device is 600 MHz. The following image shows a block diagram of the CCM; the three marked sub-modules are important for understanding the whole clock path, from clock generation (oscillators or crystals) to clock management for all the peripherals of the board.
Figure 1. Clock Controller Module (CCM) Block Diagram
CCM Analog Submodule
This submodule contains all the oscillators and several PLLs that provide a clock source to the main CCM module. For example, the i.MX RT1060 device supports two internal oscillators that, combined with a suitable external quartz crystal and external load capacitors, provide an accurate clock source; another two internal oscillators are available for low-power modes and as a backup when the system detects a loss of clock. These oscillators provide a fixed frequency for the several PLLs inside this module.
Internal clock sources with external components
Crystal Oscillator @24 MHz
Many of the serial I/O modules depend on the fixed 24 MHz frequency. The reference clock generated by this crystal oscillator provides an accurate clock source for all the PLL inputs.
Crystal Oscillator @32 kHz
Generally, RTC oscillators are implemented with either 32 kHz or 32.768 kHz crystals. This oscillator should always be active when the chip is powered on.
Internal clock sources
RC Oscillator @24 MHz
A lower-power RC oscillator module is available on-chip as a possible alternative to the 24 MHz crystal oscillator after a successful power-up sequence. The 24 MHz RC oscillator is a self-tuning circuit that outputs the programmed frequency value by using the RTC clock as its reference. While the power consumption of this RC oscillator is much lower than that of the 24 MHz crystal oscillator, one limitation is that its clock frequency is not as accurate.
Oscillator @32 kHz
The internal oscillator is automatically multiplexed into the clocking system when the system detects a loss of clock. The internal oscillator provides clocks to the same on-chip modules as the external 32 kHz oscillator. It is also useful for quicker start-up times and tamper prevention.
Note: an external 32 kHz clock source must be used, since the internal oscillator is not precise enough for long-term timekeeping.
PLLs
There are 7 PLLs on the i.MX RT1060 platform, some with specific functions, for example, creating a reference clock for the Arm core, USB peripherals, etc. These PLLs are listed below.
PLL1 - ARM PLL (functional frequency @600 MHz)
PLL2 - System PLL (functional frequency @528 MHz)*
PLL3 - USB1 PLL (functional frequency @480 MHz)*
PLL4 - Audio PLL
PLL5 - Video PLL
PLL6 - ENET PLL
PLL7 - USB2 PLL (functional frequency @480 MHz)
* Two of these PLLs are each equipped with four Phase Fractional Dividers (PFDs) in order to generate additional frequencies for many clock roots.
Each PLL's configuration and control functions, such as Bypass, Output Enable/Disable, and Power Down modes, are accessible individually through its PFDs and through the global configuration and status registers found in the CCM internal memory registers.
Clock Control Module (CCM)
The Clock Control Module (CCM) generates and controls clocks to the various modules in the design and manages the low-power modes. This module uses the available clock sources (PLL reference clocks and PFDs) to generate the clock roots. There are two important sub-blocks inside the CCM, listed below.
Clock Switcher
This sub-block provides the registers that control which PLL and PFD outputs are selected as the reference clock for the Clock Root Generator.
Clock Root Generator
This sub-block provides the registers that control most of the secondary clock source programming, including both the primary clock source selection and the clock dividers. The clock roots are the individual clocks to the core, the system buses, and all other SoC peripherals; among those are serial clocks, baud clocks, and special function blocks. All of these clock references are delivered to the Low Power Clock Gating unit (LPCG).
Low Power Clock Gating unit (LPCG)
The LPCG block receives the root clocks from the CCM and splits them into clock branches for each peripheral. The clock branches are individually gated clocks.
The following image shows a detailed block diagram of the CCM with the previously described submodules and how they link together.
Figure 2. Clock Management System
Example: Configure the Arm core clock (PLL1) to a different frequency.
The Clock tools available in MCUXpresso IDE allow you to understand and configure the clock sources for the peripherals on the platform. The following diagram shows the default PLL1 mode configured @600 MHz; the yellow path shows all the internal modules involved in the clock configuration.
Figure 3. Default PLL configuration after reset.
Notice in the previous image that PLL1 is fed by the 24 MHz oscillator; PLL1 is configured with a pre-scaler of 50 to achieve a frequency of 1.2 GHz, and finally a divide-by-2 frequency divider yields the final 600 MHz frequency.
1.1 Modify the PLL1 frequency
For example, you can use the Clock tools to configure the PLL pre-scaler to 30: select the PLL1 block and then edit the pre-scaler value; the resulting final clock frequency is 360 MHz. These modifications are shown in the following figure.
Figure 4. PLL1 @720 MHz, final frequency @360 MHz
1.2 Export clock configuration to the project
After you complete the clock configuration, the Clock Tool updates the source code in clock_config.c and clock_config.h, including all the clock functional groups that we created with the tool. This includes the clock source for specific peripherals.
In the previous example, we configured PLL1 (ARM PLL) to a functional frequency of 360 MHz; this is translated into the structure "armPllConfig_BOARD_BootClockRUN" in the source code, which is used by "CLOCK_InitArmPll();" inside the "BOARD_BootClockPLL150MRUN();" function.
Figure 5. PLL1 configuration struct
Figure 6. PLL configuration function
Example: The next steps describe how to select a clock source for a specific peripheral.
1.1 Configure the clock for a specific peripheral
For example, for the GPT (General Purpose Timer) the available clock sources are the following:
Clock Source Off
Peripheral Clock
High-Frequency Reference Clock
Clock Source from an external pin
Low-Frequency Reference Clock
Crystal Oscillator
Figure 7. General Purpose Timer Clocks Diagram
Using the available SDK example project "evkmimxrt1060_gpt_timer", a configuration struct for the peripheral, "gptConfig", is called from the main initialization function inside the gpt_timer.c source file. The default configuration function, with the configuration struct as a parameter, is shown in the following figure.
Figure 8. Function that returns the GPT default configuration parameters
The function loads several parameters into the configuration struct (gptConfig); one of the fields is the clock source configuration, and modifying this field lets us select an appropriate clock source for our application. The following figure shows the default configuration parameters inside the "GPT_GetDefaultConfig();" function.
Figure 9. Configuration struct
In the default GPT configuration struct, the Peripheral Clock (kGPT_ClockSource_Periph) is selected. The SDK comes with several macros, located in the "fsl_gpt.h" header file, that help to select an appropriate clock source. The next figure shows the enumerated data type that contains the possible clock sources for the GPT.
Figure 10. Available clock sources of the GPT.
For example, to select the Low-Frequency Reference Clock, the source code looks like the following figure (a code sketch is given at the end of this example).
Figure 11. Low-Frequency Reference Clock attached to GPT
Notice that all the peripherals come with a specific configuration struct, and from that struct's fields the default clocking parameters can be modified to fit our timing requirements.
1.2 Modify the Peripheral Clock frequency from Clock Tools
One of the GPT clock sources is the "Peripheral Clock Source"; this clock line can be modified from the Clock Tools. The following figure shows the default frequency configuration in the Clock Tools view.
Figure 12. GPT Clock Root inside CCM
In the previous figure, the GPT clock line is @75 MHz; notice that this is sourced from the primary peripheral clock line @600 MHz that is attached to the Arm core clocks. For example, modify the PERCLK_PODF divider by selecting it and changing the divider value to 4; the resulting frequency is @37.5 MHz. The following figure illustrates these changes.
Figure 13. GPT & PIT clock line @37.5 MHz
1.3 Export clock configuration to the project
After you complete the clock configuration, the Clock Tool updates the source code in clock_config.c and clock_config.h, including all the clock functional groups that we created with the tool. This includes the clock source for specific peripherals. In the previous example, we configured the GPT clock root divider with a dividing factor of 4 to achieve a 37.5 MHz frequency; this is translated into the instruction "CLOCK_SetDiv(kCLOCK_PerclkDiv, 3);" inside the "BOARD_BootClockRUN();" function in the source code.
Figure 14. Frequency divider function
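For reference, the clock-source selection shown in Figure 11 can be expressed with the SDK GPT driver as the minimal sketch below. The APIs come from fsl_gpt.h; the GPT2 instance is only an illustrative assumption, not necessarily the instance used by the example project.

#include "fsl_gpt.h"

void GPT_SelectLowFreqClock(void)
{
    gpt_config_t gptConfig;

    /* Load the defaults: clockSource = kGPT_ClockSource_Periph, divider = 1 */
    GPT_GetDefaultConfig(&gptConfig);

    /* Select the 32 kHz Low-Frequency Reference Clock instead (Figure 11) */
    gptConfig.clockSource = kGPT_ClockSource_LowFreq;

    /* Initialize the timer with the modified clocking parameters */
    GPT_Init(GPT2, &gptConfig);
}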
References
i.MX RT1060 Processor Reference Manual
Also visit LPC's System Clocks and Kinetis System Clocks
The newly announced i.MX RT1170 is a dual-core Arm® Cortex®-M based crossover MCU that breaks the gigahertz (GHz) barrier and accelerates advanced Machine Learning (ML) applications at the edge. Built using advanced 28 nm FD-SOI technology for lower active and static power requirements, the i.MX RT1170 MCU family integrates a GHz Arm Cortex-M7 and a power-efficient Cortex-M4, advanced 2D vector graphics, and NXP's signature EdgeLock security solution. The i.MX RT1170 delivers a total CoreMark score of 6468 and addresses the growing performance needs of edge computing for industrial, Internet-of-Things (IoT), and automotive applications.
MIMXRT1010 EVK (Chinese Version) Design Files and Hardware User's Guide
[Chinese translation] See the attachment.
Original post: https://community.nxp.com/docs/DOC-341317
Using a different Flash with the RT1050
In the IAR/Keil environment, when you change to another flash device (not the default flash on the EVK board), please refer to the application note below to modify the code accordingly:
https://www.nxp.com/docs/en/nxp/application-notes/AN12183.pdf
In case you missed our recent webinar, you can check out the slides and comment below with any questions.
Goal
Our goal is to train a model that can take a value, x, and predict its sine, y. In a real-world application, if you needed the sine of x, you could just calculate it directly. However, by training a model to approximate the result, we can demonstrate the basics of machine learning.
TensorFlow and Keras
TensorFlow is a set of tools for building, training, evaluating, and deploying machine learning models. Originally developed at Google, TensorFlow is now an open-source project built and maintained by thousands of contributors across the world. It is the most popular and widely used framework for machine learning. Most developers interact with TensorFlow via its Python library. TensorFlow does many different things; in this post, we'll use Keras, TensorFlow's high-level API that makes it easy to build and train deep learning networks.
To enable TensorFlow on mobile and embedded devices, Google developed the TensorFlow Lite framework. It gives these computationally restricted devices the ability to run inference on pre-trained TensorFlow models that were converted to TensorFlow Lite. These converted models cannot be trained any further but can be optimized through techniques like quantization and pruning.
Building the Model
To build the model, we should follow the steps below.
Obtain a simple dataset.
Train a deep learning model.
Evaluate the model's performance.
Convert the model to run on-device.
Please navigate to the URL in your browser to open the notebook directly in Colab; this notebook is designed to demonstrate the process of creating a TensorFlow model and converting it to use with TensorFlow Lite.
Deploy the model to the RT MCU
Hardware Board: MIMXRT1050 EVK Board
Fig 1 MIMXRT1050 EVK Board
Template demo code: evkbimxrt1050_tensorflow_lite_cifar10
Code

/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.
   Copyright 2018 NXP. All Rights Reserved.
   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
==============================================================================*/
#include "board.h"
#include "pin_mux.h"
#include "clock_config.h"
#include "fsl_debug_console.h"

#include <iostream>
#include <string>
#include <vector>

#include "timer.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/optional_debug_tools.h"
#include "tensorflow/lite/string_util.h"
#include "Sine_mode.h"

int inference_count = 0;
// This is a small number so that it's easy to read the logs
const int kInferencesPerCycle = 30;
const float kXrange = 2.f * 3.14159265359f;

#define LOG(x) std::cout

void RunInference()
{
    std::unique_ptr<tflite::FlatBufferModel> model;
    std::unique_ptr<tflite::Interpreter> interpreter;

    model = tflite::FlatBufferModel::BuildFromBuffer(sine_model_quantized_tflite,
                                                     sine_model_quantized_tflite_len);
    if (!model)
    {
        LOG(FATAL) << "Failed to load model\r\n";
        exit(-1);
    }
    model->error_reporter();

    tflite::ops::builtin::BuiltinOpResolver resolver;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    if (!interpreter)
    {
        LOG(FATAL) << "Failed to construct interpreter\r\n";
        exit(-1);
    }

    int input = interpreter->inputs()[0]; // index of the model's input tensor

    if (interpreter->AllocateTensors() != kTfLiteOk)
    {
        LOG(FATAL) << "Failed to allocate tensors!\r\n";
    }

    while (true)
    {
        // Calculate an x value to feed into the model. We compare the current
        // inference_count to the number of inferences per cycle to determine
        // our position within the range of possible x values the model was
        // trained on, and use this to calculate a value.
        float position = static_cast<float>(inference_count) /
                         static_cast<float>(kInferencesPerCycle);
        float x_val = position * kXrange;

        float* input_tensor_data = interpreter->typed_tensor<float>(input);
        *input_tensor_data = x_val;

        Delay_time(1000);

        // Run inference, and report any error
        TfLiteStatus invoke_status = interpreter->Invoke();
        if (invoke_status != kTfLiteOk)
        {
            LOG(FATAL) << "Failed to invoke tflite!\r\n";
            return;
        }

        // Read the predicted y value from the model's output tensor
        float* y_val = interpreter->typed_output_tensor<float>(0);
        PRINTF("\r\n x_value: %f, y_value: %f \r\n", x_val, y_val[0]);

        // Increment the inference_counter, and reset it if we have reached
        // the total number per cycle
        inference_count += 1;
        if (inference_count >= kInferencesPerCycle)
            inference_count = 0;
    }
}

/*
 * @brief Application entry point.
 */
int main(void)
{
    /* Init board hardware */
    BOARD_ConfigMPU();
    BOARD_InitPins();
    BOARD_InitDEBUG_UARTPins();
    BOARD_BootClockRUN();
    BOARD_InitDebugConsole();
    NVIC_SetPriorityGrouping(3);
    InitTimer();

    std::cout << "The hello_world demo of TensorFlow Lite model\r\n";
    RunInference();
    std::flush(std::cout);
    for (;;)
    {
    }
}

Test result
On the MIMXRT1050 EVK Board, we log the input data, x_value, and the inferred output data, y_value, via the serial port.
Fig 2 Received data
In a while loop, the code runs inference for a progression of x values in the range 0 to 2π and then repeats. Each time it runs, a new x value is calculated, inference is run, and the data is output.
Fig 3 Test result
Furthermore, we use Excel to plot the received data against the actual values, as the figure below shows.
Fig 4 Dot Plot
You can see that, for the most part, the dots representing the predicted values form a smooth sine curve along the center of the distribution of actual values. In general, our network has learned to approximate a sine curve.
Recently, we have often encountered customers using i.MX RT for RS485 communication, and most issues concern switching between the receive and transmit directions during operation. Taking the i.MX RT1050 and the SN65HVD11QDR as examples, this document introduces an LPUART-to-RS485 circuit and the methods of transceiver direction control. The working principle is as follows:
LPUART TXD: Transmit Data
LPUART RXD: Receive Data
LPUART RTS_B: Request To Send
The main control methods are as follows:
1. Use the TXD signal line for automatic hardware transceiver control.
According to the UART protocol, TX is logic high when the line is idle. After the NOT gate, a LOW level is applied to the direction-control pin, so when the UART is not transmitting data, the RS485 transceiver is in the receive state.
2. Use GPIO control & LPUART_RTS.
For more detailed information, users can refer to this application note: https://www.nxp.com/docs/en/application-note/AN12679.pdf
Note: when using GPIO control, the software must judge the receive/transmit switching timing. If the switching is not controlled well, it is easy to lose data. To control it reliably, the software must respond to the TX FIFO "empty" interrupt, or query the transmit status register, and switch the direction at exactly the right moment to ensure error-free transmission and reception (see the code sketch at the end of this post).
Combined with the above methods, some customers are using the following control:
Best Regards
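As referenced in the note above, here is a minimal sketch of the GPIO-controlled transmit sequence. It assumes the transceiver DE/RE direction pin is wired to a GPIO; the GPIO1 pin number and the LPUART1 instance are illustrative assumptions, not a fixed board mapping.

#include "fsl_gpio.h"
#include "fsl_lpuart.h"

#define RS485_DIR_GPIO GPIO1 /* direction-control pin, assumed wiring */
#define RS485_DIR_PIN  10U

void RS485_Send(const uint8_t *data, size_t length)
{
    /* Drive the direction pin high: the transceiver enters transmit mode */
    GPIO_PinWrite(RS485_DIR_GPIO, RS485_DIR_PIN, 1U);

    LPUART_WriteBlocking(LPUART1, data, length);

    /* Wait until the last bit has left the shift register, not just the
     * TX FIFO; switching back too early cuts off the end of the frame */
    while ((LPUART_GetStatusFlags(LPUART1) & (uint32_t)kLPUART_TransmissionCompleteFlag) == 0U)
    {
    }

    /* Back to receive mode */
    GPIO_PinWrite(RS485_DIR_GPIO, RS485_DIR_PIN, 0U);
}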
1 Introduction
With the rapid development of science and technology, the Internet of Things (IoT) is widely used in various areas, such as industry, agriculture, environment, transportation, logistics, security, and other infrastructure. IoT usage makes our lives more colorful and intelligent. The explosive development of the IoT cannot be separated from cloud platforms. At present, there are many types of cloud services on the market, such as Amazon's AWS, Microsoft's Azure, Google Cloud, China's Alibaba Cloud, Baidu Cloud, OneNet, etc.
Amazon AWS Cloud is a professional cloud computing service provided by Amazon. It offers a complete set of infrastructure and cloud solutions for customers in various countries and regions around the world and is currently a cloud computing platform with a large number of users. AWS IoT is a managed cloud platform that allows connected devices to easily and securely interact with cloud applications and other devices.
NXP's crossover MCU RT products come with a series of AWS sample codes. This article mainly takes the remote_control_wifi_nxp code in the official MIMXRT1060-EVK SDK as an example to realize data interaction with the AWS IoT cloud, an Android mobile APP, and the MQTTfx client. The cloud topology of this article is as follows:
Fig. 1-1
2 AWS cloud operation
2.1 Create an AWS account
Prepare a credit card, and then go to the Amazon link below to create an AWS account:
https://console.aws.amazon.com/console/home
2.2 Create a Thing
Open the AWS IoT link: https://console.aws.amazon.com/iot
Choose the Things item under Manage. For first-time usage, you can choose "Register a thing" to create the thing; otherwise, click the "Create" button in the upper right corner. Choose "Create a single thing" to create the new thing; for more details, check the following pictures.
Fig. 2-1
Fig. 2-2
Fig. 2-3
2.3 Create certificate
Create a certificate for the newly created thing by clicking the "Create certificate" button shown in the following picture:
Fig. 2-4
After the certificate is built, the page shows the generated certificate information, which means the certificate can be used.
Fig. 2-5
Please note: download the certificate for this thing, the public key, and the private key. They will be used in the MQTTfx tool configuration.
Click "A root CA for AWS IoT Download" to download the root CA for AWS IoT; the MQTTfx tool setting will also use it. Open the root CA download link to download the CA certificate.
RSA 2048-bit key: VeriSign Class 3 Public Primary G5 root CA certificate
Fig. 2-6
In the end, we have these files:
7abfd7a350-certificate.pem.crt
7abfd7a350-private.pem.key
7abfd7a350-public.pem.key
AmazonRootCA1.pem
Save them; they will be used later. Click the "Active" button to activate the certificate, and click the "Done" button. The policy will be attached later.
2.4 Create Policies
Go back to the IoT view page: https://console.aws.amazon.com/iot/
Select Policies under the Secure item to create the new policy.
Fig. 2-7
Input the policy name; in the Action area fill in: iot:*, and in the Resource ARN area fill in: *
Check the Allow item and click the Create button to finish the new policy creation.
Fig. 2-8
2.5 Things attach relationship
After the thing, certificate, and policy creation, attach the policy to the certificate, and attach the certificate to the thing.
Fig. 2-9
Choose Certificates under the Secure item; on the related certificate, open the "…" drop-down list, click "Attach policy", and choose the newly created policy. Then click "Attach thing" and choose the newly created thing.
Fig. 2-10
Fig. 2-11
Fig. 2-12
Now, open Things under the Manage item and check the detailed thing-related information.
Fig. 2-13
Double-click the thing; in the Interact item, we can find the REST API Endpoint. The RT code and the MQTTfx tool will use this endpoint to realize the cloud connection.
Fig. 2-14
Check Security; you will find the previously created certificate, which means this thing already has the new certificate attached:
Fig. 2-15
Until now, we have finished the thing-related configuration. It will be used for the MQTTfx, Android app, and RT EVK board connections and testing; we can also check the communication information directly through the AWS shadow on the web page.
3 Android related configuration
3.1 AWS Cognito configuration
If the Android app is used to communicate with the AWS IoT cloud, the AWS side still needs to use the Cognito service to authorize access to AWS IoT and the device shadows. First, create a new identity pool from the following link: https://console.aws.amazon.com/cognito
Fig. 3-1
Click "Manage Identity Pools"; after entering, click "Create new identity pool".
Fig. 3-2
Fig. 3-3
Fig. 3-4
Here, it will generate two roles:
Cognito_PoolNameAuth_Role
Cognito_PoolNameUnauth_Role
Click Allow to finish the identity pool creation.
Fig. 3-5
Please record the related identity pool ID; it will be used in the Android app .properties configuration file.
3.2 Create policies in IAM for Cognito
Open https://console.aws.amazon.com/iam
Click the Policies item under Access management.
Fig. 3-6
Choose "Create policy" to create an IAM policy; in the policy JSON area, write the following content:
Fig. 3-7
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["iot:Connect"],
            "Resource": ["*"]
        },
        {
            "Effect": "Allow",
            "Action": ["iot:Publish"],
            "Resource": [
                "arn:aws:iot:us-east-1:965396684474:topic/$aws/things/RTAWSThing/shadow/update",
                "arn:aws:iot:us-east-1:965396684474:topic/$aws/things/RTAWSThing/shadow/get"
            ]
        },
        {
            "Effect": "Allow",
            "Action": ["iot:Subscribe", "iot:Receive"],
            "Resource": ["*"]
        }
    ]
}
Please note, in the JSON content:
"arn:aws:iot:<REGION>:<ACCOUNT ID>:topic/$aws/things/<THING NAME>/shadow/update",
"arn:aws:iot:<REGION>:<ACCOUNT ID>:topic/$aws/things/<THING NAME>/shadow/get"
REGION: the us-east-1 in Fig. 3-5.
ACCOUNT ID: it can be found under "My account" in the upper right corner.
Fig. 3-8
Fig. 3-9
After finishing the IAM policy creation, go back to the IAM Policies page, filter policies by "Customer managed", and we can find the newly created customer policy.
Fig. 3-10
3.3 Attach policy for the Cognito role in IAM
In IAM, choose the Roles item:
Fig. 3-11
Double-click the Cognito_PoolNameUnauth_Role that was generated when creating the pool in Cognito, click "Attach policies", and select the newly created policy.
Fig. 3-12
Fig. 3-13
Until now, we have finished the AWS Cognito configuration.
3.4 Android properties file configuration
Create a file with the .properties extension; the content is:
customer_specific_endpoint=<REST API ENDPOINT>
cognito_pool_id=<COGNITO POOL ID>
thing_name=<THING NAME>
region=<REGION>
Please fill in the correct content:
REST API ENDPOINT: Fig. 2-14
COGNITO POOL ID: Fig. 3-5
THING NAME: Fig. 2-14, upper left corner
REGION: Fig. 3-5, the region part of the COGNITO POOL ID
As an example, my properties file content is:
customer_specific_endpoint=a215vehc5uw107-ats.iot.us-east-1.amazonaws.com
cognito_pool_id=us-east-1:c5ca6d11-f069-416c-81f9-fc1ec8fd8de5
thing_name=RTAWSThing
region=us-east-1
In real usage, please use your own configured data; otherwise, it will connect to my cloud endpoint.
4 MQTTfx configuration and testing
MQTT.fx is an MQTT client tool based on Eclipse Paho and written in Java. It supports subscribing and publishing messages through topics: it can simulate various HTTP requests initiated by users, send the request data to the server, obtain the corresponding response results, and verify whether the result data in the response matches the expected values. You can download this tool from the following link:
http://mqttfx.jensd.de/index.php/download
The latest version is 1.7.1.
4.1 MQTT.fx configuration
Choose the connection configuration button to enter the connection configuration page:
Fig. 4-1
Profile Name: enter the configuration name
Broker Address: the REST API ENDPOINT
Broker Port: 8883
Client ID: generate it freely
CA file: the downloaded CA certificate file
Client Certificate File: the related certificate file
Client Key File: the private key file
Check "PEM formatted". Click Apply and OK to finish the configuration.
4.2 Use the AWS cloud to test the connection
In order to test whether it can connect to the AWS IoT cloud, a preliminary connection test can be performed. Open the AWS page: https://console.aws.amazon.com/iot
There is a Test button in this interface, which allows testing by other clients or by itself.
Both the AWS cloud and MQTTfx subscribe to the topic: $aws/things/RTAWSThing/shadow/update
MQTTfx publishes data to the topic: $aws/things/RTAWSThing/shadow/update
It can be found that both the cloud test port and the MQTTfx subscriber receive the data:
Fig. 4-2
Below, the publish side is tested from the cloud; then you can see that both the MQTTfx subscriber and the cloud subscriber receive the data:
Fig. 4-3
Until now, the AWS cloud can transfer the data between the AWS IoT cloud and the client.
5 RT1060 and Wi-Fi module configuration
We mainly use the RT1060 SDK 2.8.0 remote_control_wifi_nxp as the RT test code:
SDK_2.8.0_EVK-MIMXRT1060\boards\evkmimxrt1060\aws_examples\remote_control_wifi_nxp
The test platform is: MIMXRT1060-EVK + Panasonic PAN9026 SDIO ADAPTER + SD-to-uSD adapter
The project uses the Panasonic PAN9026 SDIO ADAPTER by default.
5.1 Wi-Fi and the AWS code configuration
The project needs a working Wi-Fi SSID and password, so prepare a working Wi-Fi access point. Then add the SSID and the password in aws_clientcredential.h:
#define clientcredentialWIFI_SSID       "Paste WiFi SSID here."
#define clientcredentialWIFI_PASSWORD   "Paste WiFi password here."
The connection settings for AWS are also in aws_clientcredential.h:
#define clientcredentialMQTT_BROKER_ENDPOINT "a215vehc5uw107-ats.iot.us-east-1.amazonaws.com"
#define clientcredentialIOT_THING_NAME       "RTAWSThing"
#define clientcredentialMQTT_BROKER_PORT      8883
5.2 Certificate and key configuration
Open the following file in the SDK:
SDK_2.8.0_EVK-MIMXRT1060\rtos\freertos\tools\certificate_configuration\CertificateConfigurator.html
Fig. 5-1
Generate the new aws_clientcredential_keys.h, and replace the old one.
Taking the MCUXpresso IDE project as an example, the file location is:
Fig. 5-2
Build the project and download it to the MIMXRT1060-EVK board.
6 Test result
On an Android mobile phone, download and install the APK in this folder:
SDK_2.8.0_EVK-MIMXRT1060\boards\evkmimxrt1060\aws_examples\remote_control_android\AwsRemoteControl.apk
The SDK can be downloaded from this link: Welcome | MCUXpresso SDK Builder
Then we can use the Android app to remote-control the on-board LED of the RT EVK. The test results are as follows.
6.1 APP and EVK test result
MIMXRT1060-EVK printf information:
Fig. 6-1
Turn on and turn off the LED:
Fig. 6-2                                        Fig. 6-3
6.2 MQTTfx subscribe result
Turning on the LED, we can subscribe to two messages:
Fig. 6-4
Fig. 6-5
Turning off the LED, we can also subscribe to two messages:
Fig. 6-6
Fig. 6-7
Of the two messages, the first one is used to set the LED status; the second one is the message the EVK uses to report its LED information (see the payload sketch at the end of this article).
On the MQTTfx publish page, you can also publish this data:
{"state":{"desired":{"LEDstate":1}}} or {"state":{"desired":{"LEDstate":0}}}
to the topic: $aws/things/RTAWSThing/shadow/update
This also turns the on-board LED on or off.
6.3 AWS cloud shadow display result
Turn on the LED:
Fig. 6-8
Turn off the LED:
Fig. 6-9
In conclusion, after the above configuration and testing, the Android mobile phone can remote-control the RT EVK on-board LED and get its status information. The MQTTfx client tool and the AWS shadow page can also be used to check the communication data.
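As a small illustration of the shadow traffic described in section 6.2, the JSON document the device reports can be assembled as in the sketch below; the helper name and buffer size are assumptions for illustration, not code taken from the SDK example.

#include <stdio.h>
#include <stdbool.h>

/* Build the shadow "reported" document for the on-board LED. The resulting
 * string is published to the $aws/things/RTAWSThing/shadow/update topic. */
static int BuildLedShadowPayload(char *buf, size_t bufLen, bool ledOn)
{
    return snprintf(buf, bufLen,
                    "{\"state\":{\"reported\":{\"LEDstate\":%d}}}",
                    ledOn ? 1 : 0);
}

For example, with char payload[64], calling BuildLedShadowPayload(payload, sizeof(payload), true) fills payload with {"state":{"reported":{"LEDstate":1}}}, matching the second of the two messages observed in MQTTfx.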
Slides from webinar hosted by NXP on Dec 10, 2019.
Source code: https://github.com/JayHeng/NXP-MCUBootUtility
【v2.1.0】
Features:
> 1. [RTyyyy] Support for loading bootable images into the SEMC NOR boot device
> 2. [RTyyyy] Support operation under both CM7 and CM4 of RT117x A0
> 3. [RTyyyy] Support the two FlexSPI XIP map addresses of RT117x A0
> 4. [RTyyyy] Support eFuse memory operation (read-back and program) for RT117x A0
> 5. [RTyyyy] Can import a user fuse table file to set eFuse values
> 6. [RTyyyy] Enable OTFAD encryption secure boot mode (User Key) for RT117x A0
> 7. [RTyyyy] Support RT1170/RT1010 bootable images from the SDK as source input
Improvements:
> 1. [RTyyyy] Image format auto-detection can be used for .axf files from MCUXpresso IDE or GCC
> 2. Specify a file path instead of a file to save read-back data
> 3. If read-back data is saved to a file, it is no longer displayed on the screen
Bugfixes:
> 1. 'Cmd Pads' was not set correctly for some typical octal-flash models in the FlexSPI NOR configuration
> 2. The 'Max Frequency' option was not exactly aligned with the selected MCU device in the FlexSPI NOR configuration
> 3. [RTyyyy] Could not show the total size of SD/eMMC correctly, so SD/eMMC could not be programmed
> 4. [RTyyyy] Some fields were not aligned with the selected MCU device in the Flexible User Key Setting
> 5. [RTyyyy] Could not generate a bootable image when the original image size was less than 4 KB
> 6. [RTyyyy] Sometimes the tool could not recognize the .axf format from MCUXpresso IDE or Keil MDK
> 7. [RTyyyy] A signed flashloader could not be generated if DCD was enabled
> 8. [RTyyyy] Could not mark DCD in the read-back image if it came from a source bootable image
Interests:
> 1. Add sound effect (Contra)
The RT1170 supports triggering dual ADCs in SyncMode or AsyncMode via the ADC External Trigger Control (ADC_ETC):
In SyncMode, ADC1 and ADC2 are controlled by the same trigger source.
In AsyncMode, ADC1 and ADC2 are controlled by separate trigger sources.
In AsyncMode (TRIGa_CTRL[SYNC_MODE] == 0), the maximum ADC conversion clock frequency is 88 MHz, but in SyncMode (TRIGa_CTRL[SYNC_MODE] == 1), the ADC conversion clock frequency must be constrained to a lower value due to switching noise inherent to the design architecture. Reducing the conversion clock frequency reduces the switching noise that is observed. NXP is currently conducting further characterization in order to specify the maximum conversion frequency in SyncMode across process, voltage, and temperature; however, on typical samples at room temperature, 60 MHz is the maximum frequency.
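For reference, enabling SyncMode on one trigger group with the SDK ADC_ETC driver looks roughly like the sketch below; the trigger group number and the chain settings are illustrative assumptions, not a recommended configuration.

#include "fsl_adc_etc.h"

void ADC_ETC_ConfigSyncTrigger(void)
{
    adc_etc_config_t adcEtcConfig;
    adc_etc_trigger_config_t triggerConfig;

    ADC_ETC_GetDefaultConfig(&adcEtcConfig);
    ADC_ETC_Init(ADC_ETC, &adcEtcConfig);

    /* Trigger group 0: SYNC_MODE = 1, so this one trigger drives both ADC1 and ADC2 */
    triggerConfig.enableSyncMode      = true;
    triggerConfig.enableSWTriggerMode = true;
    triggerConfig.triggerChainLength  = 0U; /* a single segment in the chain */
    triggerConfig.triggerPriority     = 0U;
    triggerConfig.sampleIntervalDelay = 0U;
    triggerConfig.initialDelay        = 0U;
    ADC_ETC_SetTriggerConfig(ADC_ETC, 0U, &triggerConfig);
}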