i.MX RT Crossover MCUs Knowledge Base

1. Abstract
When using the RT600 SDK USB composite example (mimxrt685audevk_dev_composite_hid_audio_unified_bm), a customer found that the default high-speed (HS) endpoint interval is 125 us. In the real application, a 125 us packet interval puts a heavy interrupt load on the CPU, so the customer wanted to enlarge the interval, for example to 1 ms. After changing the interval to 1 ms, the data packets still reach the RT chip, but playback on the RT side fails: the speaker produces no sound. A 500 us interval can be made to work (in non-synchronous mode), but even at 500 us the customer's CPU load is still above 80%, which leaves little headroom for the rest of the application, so a working 1 ms configuration is still needed. This article shows the UAC transfer behavior at different intervals and provides a solution for a 1 ms interval.
2. Test situation
Board: MIMXRT685-AUD-EVK
SDK: SDK_2_15_000_MIMXRT685-AUD-EVK
USB analyzer: LeCroy USB-TOS2-A01-X
2.1 Platform connection
First, the connection between the MIMXRT685-AUD-EVK, the USB analyzer and the audio source. This article tests the RT685 UAC speaker function: USB transfers audio data to the RT685, which plays it through a player (headphones) or a speaker. The J7 USB port of the EVK is connected to port A of the LeCroy analyzer, and port B of the analyzer is connected to the audio source, which can be a PC or a mobile phone. When using a PC, select "USB AUDIO+HID DEMO" as the speaker, because that is the name the board enumerates with. The other side of the analyzer is connected to a PC that records the USB bus data. The connection diagram is as follows:
Fig 1 EVK USB analyzer connection
2.2 Data packets at different audio intervals
This chapter gives the modification for each interval and the corresponding USB analyzer capture. The clock of a synchronous isochronous audio endpoint can be synchronized through SOF. A USB 1.0 sampling rate must be locked to the 1 ms SOF beat, while a USB 2.0 high-speed endpoint can be locked to the 125 us SOF beat. The interval is therefore normally a power-of-two multiple of 125 us: 125 us, 250 us, 500 us, 1 ms, etc. The modification is usually done directly in the USB stack's usb_audio_config.h:
#define HS_ISO_OUT_ENDP_INTERVAL (0x01)
The relationship between HS_ISO_OUT_ENDP_INTERVAL and the real interval is 125 us * 2^(HS_ISO_OUT_ENDP_INTERVAL - 1), so:
HS_ISO_OUT_ENDP_INTERVAL=1 : 125 us
HS_ISO_OUT_ENDP_INTERVAL=2 : 250 us
HS_ISO_OUT_ENDP_INTERVAL=3 : 500 us
HS_ISO_OUT_ENDP_INTERVAL=4 : 1000 us
The four cases are tested below with the USB analyzer.
2.2.1 125 us interval
#define HS_ISO_OUT_ENDP_INTERVAL (0x01)
Fig 2 125 us interval
With the interval configured to 125 us, the SOF beats in front of consecutive OUT packets are 125 us apart and each OUT data packet is 24 bytes of real audio data. Playback is normal in this case.
2.2.2 250 us interval
#define HS_ISO_OUT_ENDP_INTERVAL (0x02)
The capture for 250 us is shown in the figure below. The SOF beats in front of two consecutive OUT packets are 250 us apart and the OUT data packet is 48 bytes.
In other words, as the interval increases, the packet length increases proportionally: each transfer takes longer but carries more audio data, so the RT side also needs larger USB receive buffers.
Fig 3 250 us interval
In this case the speaker plays normally and audio can be heard.
2.2.3 500 us interval
#define HS_ISO_OUT_ENDP_INTERVAL (0x03)
Fig 4 500 us interval, SYNC mode
Now the data packet is 96 bytes and the content is still valid audio data, but playback fails and no sound can be heard. However, if you configure:
#define USB_DEVICE_AUDIO_USE_SYNC_MODE (0U)
that is, not SYNC mode, audio can be heard. The capture is:
Fig 5 500 us interval, no SYNC mode
The asynchronous mode has its own problem: after running for a long time it can drift out of sync and playback may stall. Therefore a 1 ms interval in synchronous mode is still the goal.
2.2.4 1 ms interval
#define HS_ISO_OUT_ENDP_INTERVAL (0x04)
Fig 6 1 ms interval
The data packet length is now 192 bytes and the SOF beat in front of each OUT packet is indeed 1 ms. The payload also looks like normal audio data, so the audio is transferred successfully from USB to the RT side. However, playback is abnormal and no sound can be heard.
3. 1 ms interval solution
Now debug the project to find the problem. Since the USB analyzer shows that the audio data is actually transmitted on the bus, first check whether the kUSB_DeviceAudioEventStreamRecvResponse event in USB_DeviceAudioCompositeCallback of audio_unified.c receives the packets, whether the packet length is correct, and whether the content looks like audio data. The debug result:
Fig 7 Received data packet
So the USB side does receive the packets and the data looks like real audio (abnormal data would be all zeros or irregular), yet playback still fails. Next check the I2S playback path, the TxCallback function in composite.h:
Fig 8 I2S play data buffer
The data transferred to the I2S buffer is always 0, even though g_composite.audioUnified.startPlayFlag is non-zero and the code uses:
s_TxTransfer.dataSize = g_composite.audioUnified.audioPlayTransferSize;
s_TxTransfer.data = audioPlayDataBuff + g_composite.audioUnified.tdReadNumberPlay;
Testing shows that audioPlayDataBuff itself is also all 0. So the problem is on the USB receive side: the data is received, but storing it into audioPlayDataBuff fails. Go back to audio_unified.c:
Fig 9 USB_DeviceAudioCompositeCallback
Fig 10 USB_AudioSpeakerPutBuffer
Testing shows that in Fig 9, audioUnified.tdWriteNumberPlay never becomes larger than g_deviceAudioComposite->audioUnified.audioPlayTransferSize * AUDIO_CLASS_2_0_HS_LOW_LATENCY_TRANSFER_COUNT = 192*6.
#define AUDIO_CLASS_2_0_HS_LOW_LATENCY_TRANSFER_COUNT \ (0x06U) /* 6 means 6 micro frames (6*125us), make sure the latency is smaller than 1ms for sync mode */
The "low latency" count is a delay buffer, so the USB buffering must be able to hold at least the delayed packets; one unit represents one interval frame. With the interval changed to 1 ms and 6 delay units, the delay is now 6 ms.
Therefore the receive buffer needs to be larger, which is related to the following definitions:
Fig 11 USB device callback
g_composite.audioUnified.audioPlayBufferSize = AUDIO_PLAY_BUFFER_SIZE_ONE_FRAME * AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT;
#define AUDIO_PLAY_BUFFER_SIZE_ONE_FRAME AUDIO_OUT_TRANSFER_LENGTH_ONE_FRAME
#define AUDIO_OUT_FORMAT_CHANNELS (0x02U)
#define AUDIO_OUT_FORMAT_SIZE (0x02)
#define AUDIO_OUT_SAMPLING_RATE_KHZ (48)
/* transfer length during 1 ms */
#define AUDIO_OUT_TRANSFER_LENGTH_ONE_FRAME \ (AUDIO_OUT_SAMPLING_RATE_KHZ * AUDIO_OUT_FORMAT_CHANNELS * AUDIO_OUT_FORMAT_SIZE)
#define AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT \ (2U) /* 2 units size buffer (1 unit means the size to play during 1ms) */
Set AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT larger so that the buffer can hold at least 6 ms of data: one unit is the play data size for 1 ms, and more than 6 ms is needed, so set it to 12 units:
#define AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT (12U)
Build the project, download it, and capture the USB bus again:
Fig 12 1 ms interval data packet, playback successful
This waveform corresponds to data that plays normally. A video of the MIMXRT685-AUD-EVK test is provided below. The attachment provides two demos, for RT685 and RT1170, both modified to a 1 ms interval; the modification method for RT1170 is exactly the same.
4. Summary
Whether for RT600, RT1170, or in fact the UAC code of the whole RT series, when the interval needs to be modified the main focus is on the following macros in usb_audio_config.h (shown here for a 1 ms interval):
#define HS_ISO_OUT_ENDP_INTERVAL (0x04) // default (0x01)
#define AUDIO_CLASS_2_0_HS_LOW_LATENCY_TRANSFER_COUNT \ (0x06U) /* 6 means 6 micro frames (6*125us), make sure the latency is smaller than 1ms for ehci high speed */
#define AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT \ (12U) // default (2U) /* 1 unit means the size to play during 1ms */
Test video:
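As a quick cross-check of the buffer sizing rule discussed in section 3, the following is a minimal C sketch (not SDK code; the helper macros below are hypothetical and only mirror the usb_audio_config.h values above) that fails to compile when the speaker play buffer cannot hold the low-latency delay:

/* Sizing sketch only: restates the relationship described above so it can be
 * checked at compile time. Helper macro names are hypothetical. */
#define EP_INTERVAL_MS        (1U)   /* 125us * 2^(HS_ISO_OUT_ENDP_INTERVAL - 1) = 1 ms        */
#define LOW_LATENCY_UNITS     (6U)   /* AUDIO_CLASS_2_0_HS_LOW_LATENCY_TRANSFER_COUNT          */
#define PLAY_BUFFER_UNITS_1MS (12U)  /* AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT (1 unit = 1 ms)  */

/* The delay introduced by the low-latency buffering is LOW_LATENCY_UNITS intervals. */
#define DELAY_MS (LOW_LATENCY_UNITS * EP_INTERVAL_MS)

/* The play buffer, counted in 1 ms units, must hold more than DELAY_MS of audio. */
_Static_assert(PLAY_BUFFER_UNITS_1MS > DELAY_MS, "speaker buffer smaller than low-latency delay");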
1 Introduction RT1060 MCUXpresso SDK provide the ota_bootloader project, download link: https://mcuxpresso.nxp.com/en/welcome ota_bootloader path: SDK_2_10_0_EVK-MIMXRT1060\boards\evkmimxrt1060\bootloader_examples\ota_bootloader  this ota_bootloader can let the customer realize the on board app update, the RT1060 OTA bootloader mainly have the two functions: ISP method update APP Provide the API to the customer, can realize the different APP swap and rollback function ISP method is from the flashloader, the flashloader put the code in the internal RAM, when use it, normally RT chip need to enter the serial download mode, and use the sdphost download the flashloader to the internal RAM, then run it to realize the ISP function. But OTA_bootloader will put the code in the flash directly, RT chip can run it in the internal boot mode directly, each time after chip reset, the code will run the ota_bootloader at first, this time, customer can use the blhost to communicate the RT chip with UART/USB HID directly to download the APP code. So, to the ISP function in the OTA_bootloader, customer also can use it as the ISP secondary bootloader to update the APP directly. Then reset after the 5 seconds timeout, if the APP area is valid, code will jump to the APP and run APP. To the ota_bootloader swap and rollback function, customer can use the provided API in the APP directly to realize the APP update, run the different APP, swap and rollback to the old APP.    This document will give the details about how to use the ota_bootloader ISP function to update the APP, how to modify the customer application to match the ota_bootloader, how to resolve the customer APP issues which need to use the SDRAM with the ota_bootloader, how to use the ota_bootloader API to realize the swap and rollback function, and the related APP prepare. 2 OTA bootloader ISP usage Some customers need the ISP secondary bootloader function, as the flashloader need to put in the internal RAM, so customer can use the ota_bootloader ISP function realize the secondary bootloader requirement, to this method, customer mainly need to note two points: 1) APP need to add the ota_header to meet the ota_bootloader demand. 2) APP use the SDRAM, ota_bootloader need to add DCD memory  2.1 application modification OTA_bootloader located in the flash from 0x60000000, APP locate from 0x60040000. We can find this from the ota_bootloader bootloader_config.h file: #define BL_APP_VECTOR_TABLE_ADDRESS (0x60040000u) From 0x60040000, the first 0X400 should put the ota_header, then the real APP code is put from 0x60040400. Now, take the SDK evkmimxrt1060_iled_blinky project as an example, to modify it to match the ota_bootloader. Application code need to add these files: ota_bootloader_hdr.c, ota_bootloader_board.h, ota_bootloader_supp.c, ota_bootloader_supp.h These files can be found from the evkmimxrt1060_lwip_httpssrv_ota project, put the above files in the led_blinky source folder. 2.1.1 memory modification    APP memory start address modify from 0x60000000 to 0x60040000, which is the ota_bootloader defined address。 Fig 1 2.1.2 ota_hdr related files add   In the led_blinky project source folder, add the mentioned ota_bootloader related files. Fig 2 2.1.3 linker file modification In the evkmimxrt1060_iled_blinky_debug.ld, add boot_hdr, length is 0X400. We can use the MCUXpresso IDE linkscripts folder, add the .ldt files to modify the linker file. Here, in the project, add one linkscripts folder, add the boot_hdr_MIMXRT1060.ldt file, build . 
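The header that this reserved 0x400-byte area receives comes from ota_bootloader_hdr.c. As a sketch of how the header object can be pinned there (the section name ".boot_hdr" is an assumption here and must match what the boot_hdr_MIMXRT1060.ldt fragment KEEPs; the initializer is consistent with the ota_header shown in section 3.4 below):

/* Sketch: pin the OTA image header to the first 0x400 bytes at 0x60040000.
 * boot_image_header_t and the field macros come from ota_bootloader_supp.h;
 * image_size and checksum are patched later by the image padding tool. */
#include "ota_bootloader_supp.h"

__attribute__((section(".boot_hdr"), used))
const boot_image_header_t ota_header = {
    .tag           = IMG_HDR_TAG,
    .load_addr     = ((uint32_t)&ota_header) + BL_IMG_HEADER_SIZE,
    .image_type    = IMG_TYPE_XIP,
    .image_size    = 0,            /* patched by the image tool */
    .algorithm     = IMG_CHK_ALG_CRC32,
    .header_size   = BL_IMG_HEADER_SIZE,
    .image_version = 0,
    .checksum      = {0xFFFFFFFF}, /* patched by the image tool */
};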
Fig 3 After the build, we can find the ld file already add the ota_header in the first 0x400 range. Fig 4 The above is the generated evkmimxrt1060_iled_blinky_ota_0x60040000.bin file, we can find already contains the boot_hdr. From offset 0x400, will put the real APP code. 2.2 ISP test related command Test board is MIMXRT1060-EVK, download the evkmimxrt1060_ota_bootloader project at first, can use the mcuxpresso IDE download it directly, then press the reset button, put the blhost and evkmimxrt1060_iled_blinky_ota_0x60040000.bin in the same folder, use the following command:   blhost.exe -t 50000 -u 0x15a2,0x0073 -j -- get-property 1 0 blhost.exe -t 2048000 -u 0x15a2,0x0073 -j -- flash-erase-region 0x60040000 0x6000 9 blhost.exe -t 5242000 -u 0x15a2,0x0073 -j -- write-memory 0x60040000 evkmimxrt1060_iled_blinky_ota_0x60040000.bin blhost.exe -t 5242000 -u 0x15a2,0x0073 -j -- read-memory 0x60040000 0x6000 flexspiNorCfg.dat 9 Fig 5 After the download is finished, press the reset, wait 5 seconds, it will jump to the app, we can find the MIMXRT1060-EVK on board led is blinking. This method realize the flash ISP bootloader downloading and the app working. The above command is using the USB HID to download, if need to use the UART, also can use command like this: blhost.exe -t 50000 -p COM45,19200 -j -- get-property 1 0 blhost.exe -t 50000 -p COM45,19200 -j -- flash-erase-region 0x60040000 0x6000 9 blhost.exe -t 50000 -p COM45,19200 -j -- write-memory 0x60040000 evkmimxrt1060_iled_blinky_ota_0x60040000.bin blhost.exe -t 50000 -p COM45,19200 -j -- read-memory 0x60040000 0x6000 flexspiNorCfg.dat 9 To the UART communication, as the ota_bootloader auto baud detect issues, it’s better to use the baudrate not larger than 19200bps, or in the ota_bootloader code, define the fixed baudrate, eg, 115200. 2.3 Bootloader DCD consideration Some customer use the ota_bootloader to associate with their own application, which is used the external SDRAM, and find although the downloading works, but after boot, the app didn’t run successfully. In the APP code, it add the DCD configuration, but as the ota_bootloader already put in the front of the QSPI, app will use the ota_hdr, which will delete the dcd part, so, to this situation, customer can add the dcd in the ota_bootloader. The SDK ota_bootloader didn’t add the dcd part in default. Now, modify the ota_bootloader at first, then prepare the sdram app and test it. 2.3.1 Ota_bootloader add dcd 2.3.1.1 add dcd files   ota_bootloader add dcd.c dcd.h, these two files can be found from the SDK hello_world project. Copy these two files to the ota_bootloader board folder. From the dcd.c code, we can see, dcd_data[] array is put in the “.boot_hdr.dcd_data” area, and it need to define the preprocessor: XIP_BOOT_HEADER_DCD_ENABLE=1 2.3.1.2 add dcd linker code   Modify the mcuxpresso IDE ld files, and add the dcd range.    .ivt : AT(ivt_begin) { . = 0x0000 ; KEEP(* (.boot_hdr.ivt)) /* ivt section */ . = 0x0020 ; KEEP(* (.boot_hdr.boot_data)) /* boot section */ . = 0x0030 ; KEEP(*(.boot_hdr.dcd_data)) __boot_hdr_end__ = ABSOLUTE(.) ; . = 0x1000 ; } > m_ivt Fig 6 0x60001000 : IVT 0x6000100c : DCD entry point 0x60001020 : boot data 0x60001030 : DCD detail data 2.3.1.3 add IVT dcd entry point   ota_bootloader->MIMXRT1062 folder->hardware_init_MIMXRT1062.c,modify the image_vector_table, fill the DCD address to the dcdc_data array address as the dcd entry point. 
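In C this change amounts to something like the following sketch (the ivt structure and the surrounding macros follow the usual SDK XIP boot-header pattern; the exact names in hardware_init_MIMXRT1062.c may differ slightly):

/* Sketch of the DCD entry-point change. dcd_data[] is the array added from
 * dcd.c and is placed in ".boot_hdr.dcd_data" by the linker fragment above. */
extern const uint8_t dcd_data[];

const ivt image_vector_table = {
    IVT_HEADER,                     /* IVT header                                       */
    IMAGE_ENTRY_ADDRESS,            /* image entry function                             */
    IVT_RSVD,                       /* reserved, must be 0                              */
    (uint32_t)dcd_data,             /* DCD entry point: was 0, now points to dcd_data   */
    (uint32_t)BOOT_DATA_ADDRESS,    /* boot data                                        */
    (uint32_t)&image_vector_table,  /* pointer to IVT self                              */
    (uint32_t)CSF_ADDRESS,          /* CSF (0 when the image is not signed)             */
    IVT_RSVD                        /* reserved, must be 0                              */
};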
Fig 7 Until now, we finish the ota_bootloader dcd add, build the project, generate the image, we can find, DCD already be added to the ota_bootloader image. Fig 8 DCD entry point in the IVT and the DCD data is correct, we can burn this modified ota_bootloader to the MIMXRT1060-EVK board. 2.3.2 SDRAM app prepare Still based on the evkmimxrt1060_iled_blinky project, just put some function in the SDRAM. From memory, we can find, the SRAM is in the RAM4: Fig 9 In the led_blinky.c, add this header: #include <cr_section_macros.h> Then, put the systick delay code to the RAM4 which is the SDRAM area. __RAMFUNC(RAM4) void SysTick_DelayTicks(uint32_t n) { g_systickCounter = n; while (g_systickCounter != 0U) { } } Now, we already finish the simple SDRAM app, we also can test it, and put the breakpoint in the SysTick_DelayTicks, we can find the address is also the SDRAM related address, generate the evkmimxrt1060_iled_blinky1_SDRAM_0x60040000.bin. If use the old ota_bootloader to download this .bin, we can find the led is not blinking. 2.3.3 Test result Command refer to chapter 2.2 ISP test related command, download the evkmimxrt1060_iled_blinky1_SDRAM_0x60040000.bin with the new modified ota_bootloader project, we can find, after reset, the led will blink. So the SDRAM app works with the modified ota_bootloader. 3 OTA bootloader swap and rollback OTA bootloader can realize the swap and rollback function, this part will test the ota_bootloader swap and rollback, prepare two apps and download to the different partition area, then use the UART input char to select the swap or rollback function, to check the which app is running. 3.1  memory map ota bootloader memory information. Fig 10 The above map is based on the external 8Mbyte QSPI flash. OTA bootloader: RT SDK ota bootloader code Boot meta 0: contains 3 partition start address, size etc. information, ISP peripheral information. Boot meta 1: contains 3 partition start address, size etc. information, ISP peripheral information. Swap meta 0: bootloader will use meta data to do the swap operation Swap meta 1: bootloader will use meta data to do the swap operation Partition 1: APP1 location Partition 2: APP2 location Scratch part: APP1 backup location, start point is before 0x60441000, which is enough to put APP1 and multiple sector size, eg, APP1 is 0X5410, sector size is 0x1000, APP1 need 6 sectors, so the scratch start address is 0x60441000-0x6000=0x43b000. User data: user used data area 3.2 swap and rollback basic Fig11  The APP1 and APP2 put in the partition1 and partition2 need to contains the ota_header which meet the bootloader demand, bootloader will check the APP CRC, if it passed, then will boot the app, otherwise, it will enter the ISP mode. Partition2 image need to has the valid header, otherwise, swap will be failed.   Swap function will erase partition2 scratch area, then put the partition1 code to the scratch, erase the partition 1 position, and write the partition 2 image to the partition1 position.   Rollback function will run the previous APP1, erase the partition 2 position, copy the parititon 1 image back to the partition 2, erase partition 1 position, copy partition 2 scratch image back to partition 1. 3.2.1 boot meta boot_meta 0: 0x0x6003c000 size: 0x20c boot_meta 1: 0x0x6003d000 size: 0x20c   OTA bootloader can read boot meta from the two different address, when the two address meta are valid(tag is 'B', 'L', 'M', 'T'), bootloader will choose the bigger version meta. 
If both meta is not valid, bootloader will copy default boot meta data to the boot meta 0 address.   Boot meta contains 3 partition start address, size information. SDK demo can call bootloader API to find the partition information, then do the image program.   Boot meta also contains the ISP peripheral information, the timeout(5s) information, the structure is: //!@brief Partition information table definitions typedef struct { uint32_t start; //!< Start address of the partition uint32_t size; //!< Size of the partition uint32_t image_state; //!< Active/ReadyForTest/UnderTest uint32_t attribute; //!< Partition Attribute - Defined for futher use uint32_t reserved[12]; //!< Reserved for future use } partition_t; //!@brief Bootloader meta data structure typedef struct { struct { uint32_t wdTimeout; uint32_t periphDetectTimeout; uint32_t enabledPeripherals; uint32_t reserved[12]; } features; partition_t partition[kPartition_Max];//16*4*7 bytes uint32_t meta_version; uint32_t patition_entries; uint32_t reserved0; uint32_t tag; } bootloader_meta_t;   3.2.2 swap meta Swap meta 0 : 0x6003e000, size 0x50 Swap meta 1 : 0x6003f000, size 0x50    OTA bootloader will read the swap meta from 2 different place, if it is not valid, bootloader will set the default data to swap meta 0(0x6003e000). If both image are valid, bootloader will choose the bigger version meta.    Bootloader will refer to the meta data to do the swap operation, sometimes, after reset, meta data will be modified automatically. It mainly relay on the swap_type: kSwapType_ReadyForTest :After reset, do swap operation. Modify meta swap_type to kSwapType_Test. Reset, as the meta data is kSwapType_Test, bootloader can do the rollback. kSwapType_Test : After reset, do rollback after operation, swap_type change to kSwapType_None.  kSwapType_Rollback bootloader will write kSwapType_Test to the meta, after reset, bootloader will refer to kSwapType_Test to do the operation.  kSwapType_Permanent After reset, modify meta data to kSwapType_Permanent, then the APP will boot with partition 1. Swap structure: //!@brief Swap progress definitions typedef struct { uint32_t swap_offset; //!< Current swap offset uint32_t scratch_size; //!< The scratch area size during current swapping uint32_t swap_status; // 1 : A -> B scratch, 2 : B -> A uint32_t remaining_size; //!< Remaining size to be swapped } swap_progress_t; typedef struct { uint32_t size; uint32_t active_flag; } image_info_t; //!@brief Swap meta information, used for the swapping operation typedef struct { image_info_t image_info[2]; //!< Image info table #if !defined(BL_FEATURE_HARDWARE_SWAP_SUPPORTED) || (BL_FEATURE_HARDWARE_SWAP_SUPPORTED == 0) swap_progress_t swap_progress; //!< Swap progress #endif uint32_t swap_type; //!< Swap type uint32_t copy_status; //!< Copy status uint32_t confirm_info; //!< Confirm Information uint32_t meta_version; //!< Meta version uint32_t reserved[7]; //!< Reserved for future use uint32_t tag; } swap_meta_t; 3.3 Common used API Bootloader provide the API for the customer to use it, the common used API are: 3.3.1 update_image_state   update swap meta data, before update, it will check the partition1 image valid or not, if image is not valid, swap meta data will not be updated, and return failure. 3.3.2 get_update_partition_info   get partition information, then define the image program address. 3.3.3 get_image_state   get the current boot image status. 
None/permanent/UnderTest 3.4  Swap rollback APP prepare Prepare two APPs: APP1 and APP2 bin file, and use the USB HID download to the partition1 and paritition 2. After reset, run APP1 in default, then use the COM input to select the swap and rollback function. Code is: int main(void) { char ch; status_t status; /* Board pin init */ BOARD_InitPins(); BOARD_InitBootClocks(); /* Update the core clock */ SystemCoreClockUpdate(); BOARD_InitDebugConsole(); PRINTF("\r\n------------------hello world + led blinky demo 2.------------------\r\n"); PRINTF("\r\nOTA bootloader test...\r\n" "1 - ReadyForTest\r\n" "3 - kSwapType_Permanent\r\n" "4 - kSwapType_Rollback\r\n" "5 - show image state\r\n" "6 - led blinking for 5times\r\n" "r - NVIC reset\r\n"); // show swap state in swap meta get_image_swap_state(); /* Set systick reload value to generate 1ms interrupt */ if (SysTick_Config(SystemCoreClock / 1000U)) { while (1) { } } while(1) { ch = GETCHAR(); switch(ch) { case '1': status = bl_update_image_state(kSwapType_ReadyForTest); PRINTF("update_image_state to kSwapType_ReadyForTest status: %i\n", status); if (status != 0) PRINTF("update_image_state(kSwapType_ReadyForTest): failed\n"); else NVIC_SystemReset(); break; case '3': status = bl_update_image_state(kSwapType_Permanent); PRINTF("update_image_state to kSwapType_Permanent status: %i\n", status); if (status != 0) PRINTF("update_image_state(kSwapType_Permanent): failed\n"); else NVIC_SystemReset(); break; case '4': status = bl_update_image_state(4); // PRINTF("update_image_state to kSwapType_Rollback status: %i\n", status); if (status != 0) PRINTF("update_image_state(kSwapType_Rollback): failed\n"); else NVIC_SystemReset(); break; case '5': // show swap state in swap meta get_image_swap_state(); break; case '6': Led_blink10times(); break; case 'r': NVIC_SystemReset(); break; } } } When download the APP, need to use the correct ota_header in the first 0x400 area, otherwise, swap will failed. const boot_image_header_t ota_header = { .tag = IMG_HDR_TAG, .load_addr = ((uint32_t)&ota_header) + BL_IMG_HEADER_SIZE, .image_type = IMG_TYPE_XIP, .image_size = 0, .algorithm = IMG_CHK_ALG_CRC32, .header_size = BL_IMG_HEADER_SIZE, .image_version = 0, .checksum = {0xFFFFFFFF}, }; This is a correct sample: Fig 12 Image size and checksum need to use the real APP image information, in the attached file, provide one image_header_padding.exe, it can input the none ota header image, then it will output the whole image which add the ota_header contains the image size and the image crc data in the first 0x400 range. 3.5 Test steps and result Prepare the none header APP1 evkmimxrt1060_APP1_0X60040400.bin, APP2 image evkmimxrt1060_APP2_0X60240400.bin, and put blhost.exe, image_header_padding.exe in the same folder. APP1 and APP2 just the printf version is different, one is version1, another is version2. Printf result: hello world + led blinky demo 1 hello world + led blinky demo 2 Attached OTAtest folder is used for test, but need to download the blhost.exe from the below link, and copy the blhost.exe to the OTAtest folder. 
https://www.nxp.com/webapp/sps/download/license.jsp?colCode=blhost_2.6.6&appType=file1&DOWNLOAD_ID=null Then run the following bat command: image_header_padding.exe evkmimxrt1060_APP1_0X60040400.bin 0x60040400 sleep 20 blhost.exe -t 50000 -u 0x15a2,0x0073 -j -- get-property 1 0 sleep 20 blhost.exe -u -t 1000000 -- flash-erase-region 0x6003c000 0x4000 sleep 50 blhost -u -t 5000 -- flash-erase-region 0x60040000 0x10000 sleep 50 blhost -u -t 5000 -- write-memory 0x60040000 boot_img_crc32.bin sleep 100 image_header_padding.exe evkmimxrt1060_APP2_0X60240400.bin 0x60040400 sleep 20 blhost -u -t 5000 -- flash-erase-region 0x60240000 0x10000 sleep 50 blhost -u -t 5000 -- flash-erase-region 0x6043b000 0x10000 sleep 50 blhost -u -t 5000 -- write-memory 0x60240000 boot_img_crc32.bin sleep 100 pause The function is to generate the APP1 with the correct ota_header, erase parititon 1,program APP1 to partition 1, generate the APP2 with the correct ota_header, erase partition 2 and scratch area, program APP2 to partition 2, Run the .bat file should in 5 seconds after reset, then use the ISP to connect it: blhost.exe -t 50000 -u 0x15a2,0x0073 -j -- get-property 1 0 ota_bootloader can use the ota_bootloader project download directly to the 0X60000000 area. After downloading, reset the chip, and wait for 5 seconds, the APP will run. This is the test result: Fig 13 From the test result, we can find, in the first time boot, APP1 running, image state: none Input 1, do the swap, will find the APP2 running, image state: undertest Input 3, select permanent, reset will find, still APP2 running, image state: permanent Input 4, choose rollback, reset will find APP1 running, image states: none Until now, finish the swap and rollback function. Input 6, will find the APP contains the SDRAM led blinky is working.          
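As a wrap-up, the sketch below shows how an application might combine the APIs from section 3.3 to stage a new image and schedule a test boot. It is only an outline: flash_program() is a placeholder for the application's own flash driver, and the exact prototypes of get_update_partition_info() and bl_update_image_state() must be taken from the ota_bootloader_supp.h header in the SDK demo.

/* Outline only: API names follow section 3.3 and the demo main() above. */
#include "ota_bootloader_supp.h"

status_t stage_new_image(const uint8_t *image, uint32_t image_len)
{
    partition_t update_partition;
    status_t status;

    /* Ask the bootloader which partition is free for the update. */
    status = get_update_partition_info(&update_partition);
    if (status != kStatus_Success)
    {
        return status;
    }

    /* Program the new image (with its 0x400-byte ota_header) into that partition. */
    status = flash_program(update_partition.start, image, image_len);
    if (status != kStatus_Success)
    {
        return status;
    }

    /* Mark it ReadyForTest; the bootloader swaps it in on the next reset. */
    status = bl_update_image_state(kSwapType_ReadyForTest);
    if (status == kStatus_Success)
    {
        NVIC_SystemReset();
    }
    return status;
}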
LittleFS is a file system used for microcontroller internal flash and external NOR flash. Since it is more suitable for small embedded systems than traditional FAT file systems, more and more people are using it in their projects. So in addition to NOR/NAND flash type storage devices, can LittleFS be used in SD cards? It seems that it is okay too. This article will use the littlefs_shell and sdcard_fatfs demo project in the i.mxRT1050 SDK to make a new littefs_shell project for reading and writing SD cards. This experiment uses MCUXpresso IDE v11.7, and the SDK uses version 2.13. The littleFS file system has only 4 files, of which the current version shown in lfs.h is littleFS 2.5. The first step, of course, is to add SD-related code to the littlefs_shell project. The easiest way is to import another sdcard_fatfs project and copy all of the sdmmc directories into our project. Then copy sdmmc_config.c and sdmmc_config.h in the /board directory, and fsl_usdhc.c and fsl_usdhc.h in the /drivers directory. The second step is to modify the program to include SD card detection and initialization, adding a bridge from LittleFS to SD drivers. Add the following code to littlefs_shell.c. extern sd_card_t m_sdCard; status_t sdcardWaitCardInsert(void) { BOARD_SD_Config(&m_sdCard, NULL, BOARD_SDMMC_SD_HOST_IRQ_PRIORITY, NULL); /* SD host init function */ if (SD_HostInit(&m_sdCard) != kStatus_Success) { PRINTF("\r\nSD host init fail\r\n"); return kStatus_Fail; } /* wait card insert */ if (SD_PollingCardInsert(&m_sdCard, kSD_Inserted) == kStatus_Success) { PRINTF("\r\nCard inserted.\r\n"); /* power off card */ SD_SetCardPower(&m_sdCard, false); /* power on the card */ SD_SetCardPower(&m_sdCard, true); // SdMmc_Init(); } else { PRINTF("\r\nCard detect fail.\r\n"); return kStatus_Fail; } return kStatus_Success; } status_t sd_disk_initialize() { static bool isCardInitialized = false; /* demostrate the normal flow of card re-initialization. If re-initialization is not neccessary, return RES_OK directly will be fine */ if(isCardInitialized) { SD_Deinit(&m_sdCard); } if (kStatus_Success != SD_Init(&m_sdCard)) { SD_Deinit(&m_sdCard); memset(&m_sdCard, 0U, sizeof(m_sdCard)); return kStatus_Fail; } isCardInitialized = true; return kStatus_Success; } In main(), add these code if (sdcardWaitCardInsert() != kStatus_Success) { return -1; } status = sd_disk_initialize(); Next, create two new c files, lfs_sdmmc.c and lfs_sdmmc_bridge.c. The call order is littlefs->lfs_sdmmc.c->lfs_sdmmc_bridge.c->fsl_sd.c. lfs_sdmmc.c and lfs_sdmmc_bridge.c acting as intermediate layers that can connect the LITTLEFS and SD upper layer drivers. One of the things that must be noted is the mapping of addresses. The address given by littleFS is the block address + offset address. See figure below. This is a read command issued by the ‘mount’ command. The block address refers to the address of the erased sector address in SD. The read and write operation uses the smallest read-write block address (BLOCK) of SD, as described below. Therefore, in lfs_sdmmc.c, the address given by littleFS is first converted to the byte address. Then change the SD card read-write address to the BLOCK address in lfs_sdmmc_bridge.c. Since most SD cards today exceed 4GB, the byte address requires a 64-bit variable. Finally, the most important step is littleFS parameter configuration. There is a structure LittlsFS_config in peripherals.c, which contains not only the operation functions of the SD card, but also the read and write sectors and cache size. 
The setup of this structure is critical. If the setting is not good, it will not only affect the performance, but also cause errors in operation. Before setting it up, let's introduce some of the general ideal of SD card and littleFS. The storage unit of the SD card is BLOCK, and both reading and writing can be carried out according to BLOCK. The size of each block can be different for different cards. For standard SD cards, the length of the block command can be set with CMD16, and the block command length is fixed at 512 bytes for SDHC cards. The SD card is erased sector by sector. The size of each sector needs to be checked in the CSD register of the SD card. If the CSD register ERASE_BLK_EN = 0, Sector is the smallest erase unit, and its unit is "block". The value of sector size is equal to the value of the SECTOR_SIZE field in the CSD register plus 1. For example, if SECTOR_SIZE is 127, then the minimum erase unit is 512*(127+1)=65536 bytes. In addition, sometimes there are doubts, many of the current SD cards actually have wear functions to reduce the loss caused by frequent erasing and writing, and extend the service life. So in fact, delete operations or read and write operations are not necessarily real physical addresses. Instead, it is mapped by the SD controller. But for the user, this mapping is transparent. So don't worry about this affecting normal operation. LittleFS is a lightweight file system that has power loss recovery and dynamic wear leveling compared to FAT systems. Once mounted, littleFS provides a complete set of POSIX-like file and directory functions, so it can be operated like a common file system. LittleFS has only 4 files in total, and it basically does not need to be modified when used. Since the NOR/NAND flash to be operated by LittleFS is essentially a block device, in order to facilitate use, LittleFS is read and written in blocks, and the underlying NOR/NAND Flash interface drivers are carried out in blocks. Let's take a look at the specific content of LittleFS configuration parameters. const struct lfs_config LittleFS_config = { .context = (void*)0, .read = lfs_sdmmc_read, .prog = lfs_sdmmc_prog, .erase = lfs_sdmmc_erase, .sync = lfs_sdmmc_sync, .read_size = 512, .prog_size = 512, .block_size = 65536, .block_count = 128, .block_cycles = 100, .cache_size = 512, .lookahead_size = LITTLEFS_LOOKAHEAD_SIZE }; Among them, the first item (.context) is not used in this project, and is used in the original project to save the offset of the file system stored in Flash. Items two (.read) through five (.sync) point to the handlers for each operation. The sixth item (.read_size) is the smallest unit of read operation. This value is roughly equal to the BLOCK size of the SD card. In the SD card driver, this size has been fixed to 512. So for convenience, it is also set to 512. The seventh item (.prog_size) is the number of bytes written each time, which is 512 bytes like .read_size. The eighth item is .block_size. This can be considered to be the smallest erase block supported by the SD card when performing an erase operation. Here the default value is not important, you need to set it in the program according to the actual value after the SD card is initialized. The card used in this experiment is 64k bytes as an erase block, so 65536 is used directly here. Item 9 (.block_count) is used to indicate how many erasable blocks there are. Multiply the .block_size to get the size of the card. 
If the card is replaceable, .block_count should be determined from the card parameters after the SD card is initialized. The tenth item (.block_cycles) is the number of erase cycles per block. Item 11 (.cache_size) sets the cache buffer size; a larger value might seem better, but in practice changing it here does not work, so it stays at 512. For item 12 (.lookahead_size), LittleFS uses a lookahead buffer to manage block allocation. The lookahead buffer is a fixed-size bitmap that records block allocation within one region; to learn the allocation of other regions, the file system has to be scanned for allocated blocks. If there are no free blocks in the lookahead buffer, the buffer is moved to find free blocks elsewhere in the file system, shifting by one lookahead_size at a time. The original value is used here.
That is all for the porting work, and the project can now be tested. It works fine: the LittleFS-SD project can read, write, create folders and erase, and it also supports appending to an existing file. However, further testing revealed a problem: if a file is repeatedly opened, appended and closed, opening the file becomes slower and slower, eventually taking several seconds. The reason is that appended data is not written into the last block of the file; a new block is allocated regardless of whether the previous block is full. See the figure below, which prints all of the read, write and erase operations used in each write command: each lfs_file_open performs one more read than the previous write operation. After dozens or hundreds of cycles a file spans many blocks, and reading them in turn is very time-consuming; tests showed that more than 100 reads took longer than a second. To speed things up, it is recommended to copy the contents of the file into another file after a few dozen appends. This consolidates the scattered content into a small number of blocks and greatly speeds up reading and writing.
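For reference, the address mapping described earlier (LittleFS block + offset -> 64-bit byte address -> SD block address) can be sketched as below. This is not the exact code from the attachment; it assumes .read_size/.prog_size/.cache_size are 512 so every access is block-aligned on the SD side, and it uses the SDK's SD_ReadBlocks() from fsl_sd.h:

/* Sketch of the lfs_sdmmc_read() glue between LittleFS and the SD driver. */
#include "lfs.h"
#include "fsl_sd.h"

#define SD_BLOCK_SIZE 512U
extern sd_card_t m_sdCard;

static int lfs_sdmmc_read(const struct lfs_config *cfg, lfs_block_t block,
                          lfs_off_t off, void *buffer, lfs_size_t size)
{
    /* LittleFS addresses data as (erase block, byte offset); convert to a 64-bit
     * byte address first, because cards larger than 4 GB overflow 32 bits. */
    uint64_t byte_addr   = (uint64_t)block * cfg->block_size + off;
    uint32_t start_block = (uint32_t)(byte_addr / SD_BLOCK_SIZE);
    uint32_t block_count = (size + SD_BLOCK_SIZE - 1U) / SD_BLOCK_SIZE;

    if (SD_ReadBlocks(&m_sdCard, (uint8_t *)buffer, start_block, block_count) != kStatus_Success)
    {
        return LFS_ERR_IO;
    }
    return LFS_ERR_OK;
}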
RT Linux SDK build based on Ubuntu
1. Abstract
The SDK for NXP i.MX RT products supports three operating systems: Windows, Linux and macOS. Most users build with an IDE on Windows, and the documentation for that flow is fairly complete. For the Linux package, however, the documents included in the SDK are the same as the Windows ones and do not describe the Linux flow, so customers who build with Ubuntu Linux have little reference material, which makes it difficult for newcomers.
This article shows how to build the RT1060 Linux SDK on Ubuntu.
2. Tool preparation
You need a computer running Ubuntu; under Windows you can install Ubuntu in a virtual machine. This article uses the Ubuntu system of a web server.
Tools required for the test:
Ubuntu system
cmake
ARMGCC: GNU Arm Embedded toolchain for the Arm Cortex-M core
SDK: SDK_2_13_1_EVK-MIMXRT1060_linux.zip
EVK: MIMXRT1060-EVK
This article takes the MIMXRT1060-EVK SDK as an example; other RT development boards with a Linux SDK work the same way.
2.1 SDK download
Download link: https://mcuxpresso.nxp.com/en/builder?hw=EVK-MIMXRT1060
Fig 1
Download the SDK, named SDK_2_13_1_EVK-MIMXRT1060_linux.zip. If you download it under Windows, you need to copy the SDK to the Ubuntu system, for example with FileZilla or MobaXterm. Because a web-server Ubuntu is used here, MobaXterm is used; it is free to use, download link: https://mobaxterm.mobatek.net/
Put the downloaded SDK into an Ubuntu folder; in MobaXterm, dragging the file transfers it from Windows to Ubuntu:
Fig 2
Unzip the SDK with:
unzip SDK_2_13_1_EVK-MIMXRT1060_linux.zip -d ./SDK_2_13_1_EVK-MIMXRT1060_linux
Fig 3
Fig 4
The SDK has been unzipped into the SDK_2_13_1_EVK-MIMXRT1060_linux folder. At this point the Linux SDK is ready to use.
2.2 ARMGCC install and configuration
Download ARMGCC. The SDK release notes list the supported toolchain: GCC Arm Embedded, version 10.3-2021.10.
Download link: https://developer.arm.com/downloads/-/gnu-rm
Download gcc-arm-none-eabi-10.3-2021.10-x86_64-linux.tar.bz2, copy it to Ubuntu, and unzip it:
tar -xjvf gcc-arm-none-eabi-10.3-2021.10-x86_64-linux.tar.bz2
Fig 5
Fig 6
ARMGCC is now unpacked. Next configure the environment variables by adding ARMGCC_DIR to /etc/profile; append the following at the end of the file, then save and exit:
export ARMGCC_DIR=/home/nxa07323/rtdoc/gcc-arm-none-eabi-10.3-2021.10/
export PATH=$PATH:/home/nxa07323/rtdoc/gcc-arm-none-eabi-10.3-2021.10/bin/
Fig 7
Reload the profile and check that ARMGCC_DIR is really set:
source /etc/profile
echo $ARMGCC_DIR
Fig 8
ARMGCC is now ready to use.
2.3 cmake download and install
The build also needs the cmake tool, so install cmake and check whether the installation succeeded:
sudo apt-get install cmake
cmake --version
Fig 9
cmake is ready as well.
3. Testing
All the tools are ready, so let's start compiling the code. Here hello_world is taken as the example to build an executable file that is downloaded to flash.
3.1 Executable file Compilation Enter the hello_world gcc path of the SDK: Fig10 It can be seen that there are many files under the armgcc folder, which are compilable files that generate different images: build_debug,build_release:the linker file is RAM linker, where text and data section is put in internal TCM. build_flexspi_nor_debug, build_flexspi_nor_release: The linker file is flexspi_nor linker, where text is put in flash and data put in TCM. build_flexspi_nor_sdram_debug, build_flexspi_nor_sdram_release: The linker file is flexspi_nor_sdram linker, where text is put in flash and data put in SDRAM. build_sdram_debug, build_sdram_release: The linker file is SDRAM linker, where text is put in internal TCM and data put in SDRAM. build_sdram_txt_debug, build_sdram_txt_release: The linker file is SDRAM_txt linker, where text is put in SDRAM and data put in OCRAM. Now, compile build_flexspi_nor_debug.sh, this script will generate flash .elf file, the command is: ./build_flexspi_nor_debug.sh Fig 11 The compiled .elf is placed in the flexspi_nor_debug folder: Fig 12 Convert the hello_world.elf file to hex and bin for the RT board burning, conversion command is: arm-none-eabi-objcopy -O ihex hello_world.elf hello_world.hex arm-none-eabi-objcopy -O binary hello_world.elf hello_world.bin Fig 13 3.2 Code Downloading Test The generated files hello_world.hex and hello_world.bin are the executable files, which can be downloaded to the EVK board through MSD, serial downloader, or debugger software. Open the bin file to view: Fig 14 As you can see, this file is an app executable file with FCB. Here use the MCUbootUtility tool to download, and the EVK board enters the serial download mode: SW7 1-OFF, 2-OFF, 3-OFF, 4-ON Fig 15 After the downloading is finished, EVK board enter the internal boot mode: SW7 1-OFF,2-OFF,3-ON,4-OFF Fig 16 We can see, the printf works, it means the Ubuntu Linux build the file works OK. 3.3 Code configuration Some customers may think that the executable files to be loaded by some of our tools do not need FCB, so how to realize to generate the app without FCB Linux, here we need to modify the flags.cmake file, the path is:     /home/nxa07323/rtdoc/SDK_2_13_1_EVK-MIMXRT1060_linux/boards/evkmimxrt1060/demo_apps/hello_world/armgcc Configure BOOT_HEADER_ENABLE=0: Default is BOOT_HEADER_ENABLE=1(Fig 17), modified to Fig 18: Fig 17                              Fig18 Build again, to generate the .bin, check the .bin file: Fig 19 We can see that this file is a pure app file that does not contain FCB+IVT. It can be used in occasions that do not require FCB. Until now, the RT1060 Linux version of the SDK can be compiled to generate an executable file under Ubuntu, and the function is normal after the function test.            
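As a note on the BOOT_HEADER_ENABLE flag used in section 3.3: in the SDK, the FCB/IVT/DCD structures are compiled in conditionally, roughly as sketched below. This is only an illustration of the pattern used by the SDK's xip sources (the header names, types and section names should be verified in the evkmimxrt1060 xip folder); with XIP_BOOT_HEADER_ENABLE=0 these objects are simply not emitted, which is why the resulting .bin starts directly with the application vector table.

/* Sketch only: shows why XIP_BOOT_HEADER_ENABLE / XIP_BOOT_HEADER_DCD_ENABLE
 * control whether FCB/IVT/DCD appear at the front of the image. Real field
 * values live in the SDK xip sources. */
#include "evkmimxrt1060_flexspi_nor_config.h" /* flexspi_nor_config_t (assumed name) */
#include "fsl_flexspi_nor_boot.h"             /* ivt type (assumed name)             */

#if defined(XIP_BOOT_HEADER_ENABLE) && (XIP_BOOT_HEADER_ENABLE == 1)
__attribute__((section(".boot_hdr.conf"), used))
const flexspi_nor_config_t qspiflash_config = { 0 /* FCB: FlexSPI NOR configuration block */ };

__attribute__((section(".boot_hdr.ivt"), used))
const ivt image_vector_table = { 0 /* IVT: entry, boot data, optional DCD pointer */ };

#if defined(XIP_BOOT_HEADER_DCD_ENABLE) && (XIP_BOOT_HEADER_DCD_ENABLE == 1)
__attribute__((section(".boot_hdr.dcd_data"), used))
const uint8_t dcd_data[] = { 0 /* SDRAM setup commands */ };
#endif
#endif /* XIP_BOOT_HEADER_ENABLE */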
RT106X secure JTAG test and IDE debug 1 Introduction     Regarding the usage of RT10XX Secure JTAG, the nxp.com has already released a very good application note AN12419 Secure JTAG for i.MXRT10xx: https://www.nxp.com/docs/en/application-note/AN12419.pdf This application note talks about the principle of Secure JTAG, how to modify the fuse to implement the Secure JTAG function, and the content of the related JLINKscript file, and then gives the use of JLINK commander to realize the identification of the ARM core. Usually, if the ARM core can be identified, it indicates that Secure JTAG connection has been passed. But in practical usage, I found many customers encounter the different issues, for example, the Secure JTAG could not find the ARM core directly, or the core identify is not stable, and some customers asked how to use common IDEs, such as MCUXPresso, IAR , MDK to add this Secure JTAG function to realize  Secure JTAG debugging.   For the test of secure JTAG, it also needs the cost, because the fuse needs to be modified. If the position of the fuse is accidentally modified, it may cause irreversible problems. Due to the different situations of customers, I also done more tests, borrowing boards with chip socket which can replace the different RT chip, I have tested RT1050, RT1060, RT1064, but in practical usage, there are still some customers mentioned that it will be reproduced on the EVK, so I also tested the secure JTAG function on the RT1060 and RT1064 EVK     This article will share all the previous relevant experience, so that latecomers can have a reference when encountering similar problems, and avoid unnecessary minefields. This document used the platform: MIMXRT1064-EVK revA: RT1060-EVK, RT1050-EVKB is similar SDK_2_13_0_EVK-MIMXRT1064 MCUXpresso IDE v11.7.1_9221 MDK V5.36: higher reversion is the same IAR 9.30.1: higher reversion is the same Segger JLINK plus JLINK driver version:V788D NXP-MCUBootUtility-5.1.0 2 RT1064 secure JTAG modification Under normal circumstances, it is not recommended for customers to burn all the related fuses directly and then test it directly. I usually proceeds step by step, hardware layout, to ensure that it can support JTAG, and then save the original read of the fuse, burn JTAG, test JTAG, and finally Burn and test other fuses for secure JTAG.    2.1 MIMXRT1064-EVK Hardware modification For RT10XX EVK, the board default situation is the same as the chip situation, which supports SWD. The JTAG pin is connected to other hardware modules from the hardware, so it will affect JTAG function. When it is determined to use JTAG function, the circuit needs to be modified, just like MIMXRT105060HDUG has said:    (1). Burn fuse DAP_SJC_SWD_SEL from ‘0’ to ‘1’ to choose JTAG. (2). DNP R323,R309,R152 to isolate JTAG multiplexed signals. (3). Keep off J47 to J50 to isolate board level debugger.     So, to the MIMXRT1064-EVK board, just need to remove R323, R309, R152, disconnect J47,J48,J49,J50, which is used to disconnect the on board debugger, then use the external Segger JLINK JTAG interface to connect the MIMXRT1064-EVK on board J21. 2.2 Original fuse map read First, the MIMXRT1064-EVK board enters the serial download mode, SW7: 1-OFF, 2-OFF, 3-OFF, 4-ON. 
Use MCUBootUtility tool to connect EVK, and read the initial fuse map, the situation is as follows:     Fig 1 2.3 JTAG Modification and test    Modify fuse to realize SWD to JTAG: 0X460[19] DAP_SJC_SWD_SEL=1   Fig 2     Use the JLINK commander, JTAG method to connect the board, to find the ARM CM7 core: Fig 3     If the ARM CM7 core can’t be identified, it means the hardware still have issues, or the fuse modified bit is not correct, just do the double check, make sure the ARM core can be found, then go to the next steps. 2.4 Secure JTAG Modification     Modify fuse bit to realize Secure JTAG:     0X460[23:22]:JTAG_SMODE =1     0X460[26]: KTE_FUSE=1     0X610,0X600 burn key: 0xedcba987654321, user also can burn with other custom keys, but need to record it, as the JLINKScript needs to use it.   Fig 4 In the above picture, the secure JTAG fuse and key fuse is finished, at last, to burn fuse 0X400[6]: SJC_RESP_LOCK=1, which is used to close the write and read to secret response key: Fig 5 Here, we can see, the 0X600,0X610 key area is shadow. Now, record the UUID0, UUID1, it will use the script to read out to check the UUID correction or not. 2.5 Secure JTAG JLINK commander test Because during the secure JTAG connection process, the JTAG_MOD pin needs to be pulled low and high, so a wire needs to be connected to pull JTAG_MOD low and high. MIMXRT1064-EVK can use J25_4, which is 3.3V, and JTAG_MOD signal point can use TP11 test point. By default, JTAG_MOD is pulled low. When it needs to be pulled high, it can be connected to J25_4.         During the test, it will need to use the JLINKScript, the content is as follows, also can check  the attached NXP_RT1064_SecureJTAG.JlinkScript file: int InitTarget(void) { int r; int v; int Key0; int Key1; JLINK_SYS_Report("***********************************************"); JLINK_SYS_Report("J-Link script: InitTarget() *"); JLINK_SYS_Report("NXP iMXRT, Enable Secure JTAG *"); JLINK_SYS_Report("***********************************************"); JLINK_SYS_MessageBox("Set pin JTAG_MOD => 1 and press any key to continue..."); // Secure response stored @ 0x600, 0x610 in eFUSE region (OTP memory) Key0 = 0x87654321; Key1 = 0xedcba9; JLINK_CORESIGHT_Configure("IRPre=0;DRPre=0;IRPost=0;DRPost=0;IRLenDevice=5"); CPU = CORTEX_M7; JLINK_SYS_Sleep(100); JLINK_JTAG_WriteIR(0xC); // Output Challenge instruction // Readback Challenge, Shift 64 dummy bits on TDI, TODO: receive Challenge bits on TDO JLINK_JTAG_StartDR(); JLINK_SYS_Report("Reading Challenge ID...."); JLINK_JTAG_WriteDRCont(0xffffffff, 32); // 32-bit dummy write on TDI / read 32 bits on TDO v = JLINK_JTAG_GetU32(0); JLINK_SYS_Report1("Challenge UUID0:", v); JLINK_JTAG_WriteDREnd(0xffffffff, 32); v = JLINK_JTAG_GetU32(0); JLINK_SYS_Report1("Challenge UUID1:", v); JLINK_JTAG_WriteIR(0xD); // Output Response instruction JLINK_JTAG_StartDR(); JLINK_JTAG_WriteDRCont(Key0, 32); JLINK_JTAG_WriteDREnd(Key1, 24); JLINK_SYS_MessageBox("Change pin JTAG_MOD => 0, press any key to continue..."); return 0; }   SecJtag.bat file content is: jlink.exe -JLinkScriptFile NXP_RT1064_SecureJTAG.JlinkScript -device MIMXRT1064XXX6A -if JTAG -speed 4000 -autoconnect 1 -JTAGConf -1,-1 This command is mainly used the JLINK commander and JLINKScript to realize the Secure JTAG connection. When test it, put the SecJtag.bat, JLink.exe, and NXP_RT1064_SecureJTAG.JlinkScript 3 files in the same folder. For testing, can change the board mode to the internal boot mode, SW7:1-OFF,2-OFF, 3-ON, 4-OFF. 
Run SecJtag.bat, the test situation is: It indicates to connect JTAG_MOD to higher level   Fig 6 Here, use the wire to connect the J25_4 and TP11, which is connect the JTAG_MOD=1, then click OK, go to the next step:   Fig 7 It can be seen here that the correct UUID has been recognized, which is consistent with the UUID read by MCUBootutility above. Many customers cannot read the correct UUID here, indicating that there is a problem with hardware modification, or fuse modification, or another. Or in the case, the JTAG pin in the app is not enabled, which will be described in detail later. Here disconnect the connection between TP11 and J25_4, the default is JTAG_MOD=0, click OK to continue Fig 8 Here, we can see, the ARM CM7 core is found, it means this hardware platform already realize the Secure JTAG connection. Now, can use the IDEs to do the debugging. 3. Secure JTAG debug function in 3 IDEs This chapter aims at how to use secure JTAG function in RT10XX three commonly used IDEs: MCUXpresso, IAR, MDK,  to implement secure JTAG code debug operation.    3.1 Software code prepare This article selects the SDK hello_world project as the test demo: SDK_2_13_0_EVK-MIMXRT1064\boards\evkmimxrt1064\demo_apps\hello_world     Two points should be noted here:  Do not use led_blinky directly, because the led control pin GPIO_AD_B0_09 used by the code is JTAG_TDI, which will cause the Secure JTAG connection to fail after downloading this code, because the pin function of JTAG has been changed. Add the pin configuration for JTAG in app code pinmux.c, otherwise there will be a phenomenon due to the lack of JTAG pin configuration, to the empty RT1064, which the chip that has not burned the code can use Secure JTAG connection, but once the code is burned, the connection will be failed. Add the following code to Pinmux.c: IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_11_JTAG_TRSTB, 0U); IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_06_JTAG_TMS, 0U); IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_07_JTAG_TCK, 0U); IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_09_JTAG_TDI, 0U); IOMUXC_SetPinMux(IOMUXC_GPIO_AD_B0_10_JTAG_TDO, 0U); 3.2 MCUXpresso Secure JTAG debug    Use MCUXpresso IDE to import the SDK hello world demo, modify the pinmux.c, which add the JTAG pin function configuration.    Configure MCUXPresso ID’s debugger JLinkGDBServerCL.exe version as your used JLINK driver version, Window->preferences Fig 9 Run->Debug configurations, configure to JTAG, choose device as MIMXRT1064xxx6A, add the JLINKscriptfile   Fig10   Fig 11 Connect JTAG_MOD=1, which is connect TP11 to J25_4, connect OK.   Fig 12 We can see, it already gets the correct UUID, it also requires connect JTAG_MOD=0, here just leave the TP11 floating, then connect OK:   Fig 13 It can be seen that at this time, it has successfully entered the debug mode and can do debugging. For details, you can check the MCUXpresso11_7_1_MIMXRT1064_SJTAG.mp4 file in the attachment. The test experience here is that MCUXpresso V11.7.1 is found to be a bit unstable and needs to be tried a few more times, but the download of the higher version V11.8.0 version is very stable. If you can get a version higher than V11.7.1, it is recommended to use a higher version of MCUXpresso IDE . 3.3 IAR Secure JTAG debug Some customers need to use the IAR IDE to debug Secure JTAG function, you can use the hello world in the SDK demo, modify pinmux.c to add the JTAG pin configuration code.     The difference is:   (1) Run JLINK driver:JLinkDLLUpdater.exe   Fig 14 Just to refresh the JLINK driver to the IAR,MDK IDE. 
(2) Modify the file name of JLINKscript to be consistent with the name of the demo, and put it under the settings folder of the project folder. For example, the routine here is hello_world_flexspi_nor_debug, and the file name of JlinkScript is required: hello_world_flexspi_nor_debug.JlinkScript, so that IAR will automatically call the corresponding JlinkScript file   Fig15 (3) Configure IAR debugger as JLINK JTAG   Fig 16                                          Fig 17 Click debug button to enter debug mode:   Fig 18 It needs to set JTAG_MOD=1, just to connect TP11 to J25_4.   Fig 19 It needs to set JTAG_MOD=0, just leave the TP11 floating, click OK to continue.   Fig 20 We can see, the IAR already can do the secure JTAG debugging. 3.4 MDK Secure JTAG debug   For the MDK secure JTAG configuration, the basic requirement is:     (1) Modify pinmux.c code to enable the JTAG pin function     (2) Run JLINK driver, JLinkDLLUpdater.exe,refresh the driver to MDK     (3) JlinkScript file name changed to JLinkSettings.JlinkScript, copy it to the folder in the mdk project, then the MDK will call the JLINKscript file automatically   Fig 21       (4) Modify debugger to JLINK, then modify the interface to JTAG   Fig 22   Fig 23 So far, the Secure JTAG related configuration of MDK has been completed. From theory, it can be directly debugged to run. But I found some problems after many tests. For the code of RAM (hello_world debug), it is no problem to be able to perform secure JTAG debug, but for the code of flash (hello_world_flexspi_nor_debug), there is no problem through secure jtag download, but the debug will run the program abnormal, check the memory data in the flash, also get the wrong data     Fig 24 We can see, UUID also correct, normally, this issue is related to the flashloader during downloading, however, the flashloader of JLINK has not been directly accessed, so I tried to use RT-UFL as the flashloader, and the debugger was successful. If customers encounter similar problems when want to use the MDK to do the secure JTAG debugging, they can use RT-UFL as the flashloader. The reference document is: https://www.cnblogs.com/henjay724/p/13951686.html https://www.cnblogs.com/henjay724/p/15465655.html To summarize it here, copy the iMXRT_UFL file to the JLINK driver folder: C:\Program Files\SEGGER\JLINK\Devices\NXP Copy JLinkDevices.xml to folder: C:\Program Files\SEGGER\JLINK The Jlinkscript file add is the same as the Figure 21. Modify the JlinkSettings.ini file, device is MIMXRT1064_UFL, override =1.   Fig 25 Delete the program algorithm, will use the RT-UFL algorithm   Fig 26 Uncheck update target before Debugging   Fig 27 Enter debug mode:   Fig 28 Configure JTAG_MOD=1, connect TP11 to J25_4, click OK to continue:   Fig 29 Leave the TP11 as floating, click OK to enter the debug mode, the result is:   Fig 30 We can see, after changing the flashloader to the RT-UFL, MDK project Secure JTAG debug also works OK, the attachment also share the RT-UFL related files.  4. Summary For Secure JTAG, you need to modify the hardware to support JTAG function, modify the fuse to support secure JTAG, and modify the code pins to enable the JTAG function. For the IDE debug, you need to configure the relevant interface as JTAG and add the correct JlinkScriptfile, so that the secure JTAG function can be successfully run , and perform IDE code debugging. 
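For reference, the fuse bits touched in chapter 2 can also be read back from application code through the OCOTP shadow registers, which is a convenient sanity check before locking anything. The sketch below assumes the SDK OCOTP driver (fsl_ocotp.h) and that fuse word 0x460 maps to shadow index 6, i.e. (0x460 - 0x400) / 0x10; please verify the index and bit positions against the security reference manual before relying on it.

/* Sanity-check sketch (not from the application note): read back fuse word 0x460
 * and report the Secure JTAG related bits discussed in chapter 2. */
#include "fsl_ocotp.h"
#include "fsl_debug_console.h"
#include "fsl_clock.h"

void check_secure_jtag_fuses(void)
{
    OCOTP_Init(OCOTP, CLOCK_GetFreq(kCLOCK_IpgClk));

    uint32_t misc = OCOTP_ReadFuseShadowRegister(OCOTP, 0x06U); /* fuse word 0x460 (assumed index) */

    PRINTF("DAP_SJC_SWD_SEL (bit 19): %d\r\n", (int)((misc >> 19) & 0x1U)); /* 1 = JTAG selected   */
    PRINTF("JTAG_SMODE (bits 23:22): %d\r\n", (int)((misc >> 22) & 0x3U));  /* 1 = secure JTAG     */
    PRINTF("KTE (bit 26): %d\r\n", (int)((misc >> 26) & 0x1U));             /* 1 = enabled, per ch.2 */
}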
Attachments:
evkmimxrt1064_hello_world_SJTAG.zip: MCUXpresso project
EVK-MIMXRT1064-hello_world_iar.7z: IAR project
EVK-MIMXRT1064-hello_world_mdk.7z: MDK project
File\NXP_RT1064_SecureJTAG.JlinkScript: J-Link script
File\SecJtag.bat: works together with JLink.exe and NXP_RT1064_SecureJTAG.JlinkScript to establish the J-Link Commander connection, which will find the Arm core.
File\RT-UFL: RT ultra flashloader algorithm, source: https://github.com/JayHeng/RT-UFL
Many thanks to our expert @juying_zhong for the patient Secure JTAG guidance during my testing.
Introduction
The i.MX RT ROM bootloader provides a wealth of options that let user programs start in various ways. In some cases, people want to copy the application image from flash or another storage device to SDRAM and run it there. In this article, I record three ways to realize this. Sections 2 and 3 show how to load the image from NOR flash. Section 4 shows how to load the image from an SD card.
Software and Tools:
MCUXpresso IDE v11.1
NXP-MCUBootUtility 2.2.0
MIMXRT1060-EVK
RT1060 SDK v2.7.0
Win10
Add DCD by MCUXpresso IDE
If customers use MCUXpresso to develop the project, they can add the DCD header with MCUXpresso. To show the workflow, we take evkmimxrt1020_iled_blinky as the example.
Step 1: Add the following to the compiler options: XIP_BOOT_HEADER_DCD_ENABLE=1 SKIP_SYSCLK_INIT
Step 2: Modify the Memory Configuration.
Step 3: Select "Link application to RAM".
Step 4: Compile the project. MCUXpresso will generate the linker script automatically.
Step 5: Since the code is linked to RAM, MCUXpresso will not prefix the IVT and DCD. We can add this link information to the linker script manually. Add the code below to the head of the .ld file.
    .boot_hdr : ALIGN(4)
    {
        FILL(0xff)
        __boot_hdr_start__ = ABSOLUTE(.) ;
        KEEP(*(.boot_hdr.conf))
        . = 0x1000 ;
        KEEP(*(.boot_hdr.ivt))
        . = 0x1020 ;
        KEEP(*(.boot_hdr.boot_data))
        . = 0x1030 ;
        KEEP(*(.boot_hdr.dcd_data))
        __boot_hdr_end__ = ABSOLUTE(.) ;
        . = 0x2000 ;
    } >BOARD_SDRAM
Then deselect "Manage linker script" in the last screenshot.
Step 6: Recompile the project; the IVT/DCD/BOOT_DATA will be added to your image. Then right-click the project's .axf file -> Binary Utilities -> Create S-record.
After all these steps, you can open MCUBootUtility and download the .s19 file to NOR flash.
Add DCD by MCUBootUtility
We can also keep the linker script managed by the IDE; MCUBootUtility can add the header too, and sometimes this is more flexible than the other methods.
Step 1: This time the BOARD_SDRAM location should be changed to 0x80002000 and the size to 0x1cff000. This is because the first 8 KB of the bootable image is reserved for the IVT and DCD.
Step 2: Compile the project and generate the .s19 file.
Step 3: Open MCUBootUtility. In MCUBootUtility, we should first set the Device Configuration Data. Here I use the MIMXRT1060-EVK, so I select the DCD bin file in NXP-MCUBootUtility-2.2.0\src\targets\MIMXRT1062. After that, select the application image file and click the All-in-One Action button. MCUBootUtility does all the work without any further manual operation.
Boot from SD card to SDRAM
In some applications, customers don't want XIP. They want to keep the application image on an SD card and run the code in RAM. But if the code size is bigger than the OCRAM size, they have to copy the image into SDRAM at startup. With MCUBootUtility's help, this work is very easy too. The user just needs to change the memory map so that the application is located at 0x80001000.
In MCUBootUtility, set the Boot Device to "uSDHC SD" and insert the SD card, then connect the target board. If the RT1060 can read the SD card, the tool displays the SD card information. Then, as in the last section, set the DCD file and the application image file. Click the All-in-One Action button and MCUBootUtility will generate the bootable image and write it to the SD card.
An SD card has a huge capacity, and it would be wasteful to store only the boot image on it. People may ask whether they can also create a FAT32 file system and store more data files on the card. Yes, but you need a tool's help.
When booting from an SD card, the ROM code reads the IVT and DCD from SD card address 0x400. For FAT32, the first 512 bytes of the SD card hold the MBR (Master Boot Record). The data at offset 0x1C6 in the MBR records the partition start address. If the space between the MBR and the partition start address is big enough to store the boot image, then the FAT32 file system and the boot image can coexist.
Conclusions:
To help the boot ROM initialize SDRAM, the DCD must be placed at the right location and indexed correctly by the IVT. When our code does not seem to work, we should first check the IVT and DCD.
The IVT offset from the base address for each boot device type is defined in the table below. The location of the IVT is the only fixed requirement of the ROM. The remainder of the image memory map is flexible and is determined by the contents of the IVT.
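To make the relationship between the .boot_hdr.* linker sections above and the actual boot header objects more concrete, here is a minimal C sketch of how the IVT, boot data, and DCD can be pinned into those sections with GCC section attributes. The structure layouts, the addresses, and the placeholder DCD below are simplified examples written for this note, not the exact SDK definitions (the real ones live in the SDK xip files, e.g. fsl_flexspi_nor_boot.h and the board's dcd.c); only the section names match the linker script shown above.

#include <stdint.h>

/* Simplified stand-ins for the SDK boot structures (illustrative only). */
typedef struct {
    uint32_t hdr;        /* IVT header word: tag 0xD1, length, version */
    uint32_t entry;      /* application entry point */
    uint32_t reserved1;
    uint32_t dcd;        /* address of the DCD table */
    uint32_t boot_data;  /* address of the boot data structure */
    uint32_t self;       /* address of this IVT */
    uint32_t csf;        /* CSF address for secure boot, 0 if unused */
    uint32_t reserved2;
} ivt_t;

typedef struct {
    uint32_t start;      /* image load address (start of BOARD_SDRAM) */
    uint32_t size;       /* total image size */
    uint32_t plugin;     /* 0 = normal image */
} boot_data_t;

/* Example addresses following the linker script: BOARD_SDRAM at 0x80000000,
 * IVT at +0x1000, boot data at +0x1020, DCD at +0x1030, code after the 8 KB header. */
#define IMAGE_BASE        0x80000000u
#define IVT_ADDRESS       (IMAGE_BASE + 0x1000u)
#define BOOTDATA_ADDRESS  (IMAGE_BASE + 0x1020u)
#define DCD_ADDRESS       (IMAGE_BASE + 0x1030u)
#define IMAGE_ENTRY       (IMAGE_BASE + 0x2000u)

/* Placeholder DCD: only a header (tag 0xD2, big-endian length, version 0x41).
 * A real DCD carries the SDRAM controller initialization commands. */
__attribute__((section(".boot_hdr.dcd_data"), used))
const uint8_t dcd_data[4] = { 0xD2, 0x00, 0x04, 0x41 };

__attribute__((section(".boot_hdr.boot_data"), used))
const boot_data_t boot_data = {
    .start  = IMAGE_BASE,
    .size   = 0x00200000u,  /* example image size */
    .plugin = 0u,
};

/* The KEEP(*(.boot_hdr.ivt)) etc. statements in the .ld file pin these objects
 * to the fixed offsets where the boot ROM expects to find them. */
__attribute__((section(".boot_hdr.ivt"), used))
const ivt_t image_vector_table = {
    .hdr       = 0x412000D1u,   /* tag/length/version as laid out in memory */
    .entry     = IMAGE_ENTRY,
    .dcd       = DCD_ADDRESS,
    .boot_data = BOOTDATA_ADDRESS,
    .self      = IVT_ADDRESS,
};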
Abstract
Today I'd like to discuss a scenario that you may encounter in practical applications. In the design phase, serial NOR flash (such as QSPI or HyperFlash) is usually used as the code storage device for RT series MCUs, as these devices all support the XIP feature. The flash is usually not only occupied by the code but is also used to store data, such as device parameters and log information, or even used as a file system, so we need to evaluate the flash size. Is there any constraint on manipulating the data space in secure boot mode? And how can the data be kept confidential? We'll talk about this below, so let's get started.
Secure boot mode
HAB boot
The bootable image is plaintext, and it may be changed by write operations through the FlexSPI module. If the digest algorithm then obtains a different value, the verification process fails, as the write operation destroys the integrity of the data. Regarding confidentiality, data is stored in plaintext in the serial NOR flash under HAB boot.
Fig 1 Signature and verification process
Encrypted XIP boot mode
After enabling the encrypted XIP boot mode, what is the impact on FlexSPI read and write operations? Read data is treated as encrypted data and is decrypted by the BEE or OTFAD module. However, if a write operation is performed, the BEE or OTFAD module is bypassed; in other words, the data is written directly to the serial NOR flash. In short, writes are not affected by the encrypted XIP boot mode.
1) Read operation
As shown in Fig 2, the encrypted code and data stored in the serial NOR flash need to be decrypted before they are sent to the CPU for execution. This is the implementation mechanism of the encrypted XIP boot mode. Enabling encrypted XIP boot requires burning the keys to eFuse, and once programmed, eFuse cannot be restored, so the test cost is a bit high. Therefore I recommend referring to the source code of the application note "How to Enable the On-the-fly Decryption" to dynamically configure the key of the BEE module, read the encrypted array from flash via DCP, and then compare it to the plaintext array to verify that the BEE module participates in the decryption flow.
Fig 2
2) Write operation
Modify the source code of the above application note: define s_nor_program_buffer[256], set its values with the following code, and program it to the 20th sector (offset address 0x14000).
for (i = 0; i < 0xFFU; i++)
{
    s_nor_program_buffer[i] = i;
}
status = flexspi_nor_flash_page_program(EXAMPLE_FLEXSPI, EXAMPLE_SECTOR * SECTOR_SIZE, (void *)s_nor_program_buffer);
if (status != kStatus_Success)
{
    PRINTF("Page program failure !\r\n");
    return -1;
}
DCACHE_InvalidateByRange(EXAMPLE_FLEXSPI_AMBA_BASE + EXAMPLE_SECTOR * SECTOR_SIZE, FLASH_PAGE_SIZE);
memcpy(s_nor_read_buffer, (void *)(EXAMPLE_FLEXSPI_AMBA_BASE + EXAMPLE_SECTOR * SECTOR_SIZE), sizeof(s_nor_read_buffer));
for (uint32_t i = 0; i < 256; i++)
{
    PRINTF("The %d data in the second region is 0x%x\r\n", i, s_nor_read_buffer[i]);
}
After the programming finishes, connect J-Link and use J-Flash to check whether the programmed array is correct. The results prove that the write operation bypasses the BEE or OTFAD module and writes the data directly to the serial NOR flash, which is consistent with Fig 2.
Fig 3
Sensitive data preservation
As mentioned at the beginning, in a real project we may need to use flash to store data such as device parameters and log information, or even as a file system, and the saved data is usually somewhat sensitive and should not be easily obtained by others. For example, in SLN-VIZNAS-IoT there is a dedicated area for storing facial feature data.
Fig 4
The pure facial feature data is only meaningful for a specific recognition algorithm; in other words, even if a third-party application obtains the raw data, it is useless to hackers. However, in real face recognition projects, other data items associated with the facial feature data are usually declared as well, such as name, age, and preferences, as shown below:
typedef union
{
    struct
    {
        /* put char/unsigned char together to avoid padding */
        unsigned char magic;
        char name[FEATUREDATA_NAME_MAX_LEN];
        int index; // this id identifies a feature uniquely; we should use it as a handle for feature add/del/update/rename
        uint16_t id;
        uint16_t pad;
        // Add a new component
        uint16_t coffee_taste;
        /* put feature at the end so we can treat it as dynamic, size limitation:
         * (FEATUREDATA_FLASH_PAGE_SIZE * 2 - 1 - FEATUREDATA_NAME_MAX_LEN - 4 - 4 - 2)/4 */
        float feature[0];
    };
    unsigned char raw[FEATUREDATA_FLASH_PAGE_SIZE * 2];
} FeatureItem; // 1kB
After enabling the encrypted XIP boot mode, the write operation test above has proved that a FlexSPI write programs the data into the serial NOR flash directly, while during reading the data is decrypted by the BEE or OTFAD. So we had better use the DCP or another module to encrypt the data before writing it; otherwise, the read operation returns the stored plaintext run through the decryption calculation.
The risk of leakage
Assume encrypted XIP boot is enabled: is it okay to send the encrypted bootable image to the OEM for mass production? Moreover, can customers be allowed to access the encrypted bootable image without worrying about leakage of the application image? To verify these assumptions, I did the following test on the MIMXRT1060-EVK.
Select the XIP encrypted mode in the MCUXpresso Secure Provisioning tool to generate and burn the bootable image of the blinky LED demo;
Fig 5
Observe the burned image through NXP-MCUBootUtility: the ciphertext image is very messy compared with the plaintext image on the right side, so it seems that NXP-MCUBootUtility cannot obtain the plaintext image;
Fig 6
Let's observe the ciphertext image in another way and read it back through the pyocd command, as shown below;
Fig 7
Open the 9_21_readback.Bin file and compare it with the plaintext image on the right side: they are actually the same. In other words, the plaintext image was leaked.
Fig 8
Explanation
As Fig 2 above shows, the encrypted code and data stored in the serial NOR flash need to be decrypted before they are sent to the CPU for execution. When J-Link connects to the target MCU, it loads the corresponding flash driver algorithm to run in FlexRAM. The flash driver algorithm detects the boot type of the MCU, just like the following code, and then configures the BEE or OTFAD module according to the detection result; after that, when the ciphertext in the NOR flash is read, the data is automatically decrypted.
status = SLN_AUTH_check_context(SLN_CRYPTO_CTX_1); configPRINTF(("Context check status %d\r\n", status)); // DEBUG_LOG_DELAY_MS(1000); // Optional delay, enable for debugging to ensure log is printed before a crash if (SLN_AUTH_NO_CONTEXT == status) { configPRINTF(("Ensuring context...\r\n")); // DEBUG_LOG_DELAY_MS(1000); // Optional delay, enable for debugging to ensure log is printed before a crash // Load crypto contexts and make sure they are valid (our own context should be good to get to this point!) status = bl_nor_encrypt_ensure_context(); if (kStatus_Fail == status) { configPRINTF(("Failed to load crypto context...\r\n")); // DEBUG_LOG_DELAY_MS(1000); // Optional delay, enable for debugging to ensure log is printed before a crash // Double check if encrypted XIP is enabled if (!bl_nor_encrypt_is_enabled()) { configPRINTF(("Not running in encrypted XIP mode, ignore error.\r\n")); // DEBUG_LOG_DELAY_MS(1000); // Optional delay, enable for debugging to ensure log is printed before a // crash // No encrypted XIP enabled, we can ignore the bad status status = kStatus_Success; } } else if (kStatus_ReadOnly == status) // Using this status from standard status to indicate that we need to split PRDB { volatile uint32_t delay = 1000000; // Set up context as needed for this application status = bl_nor_encrypt_split_prdb(); configPRINTF(("Restarting BOOTLOADER...\r\n")); while (delay--) ; // Restart DbgConsole_Deinit(); NVIC_DisableIRQ(LPUART6_IRQn); NVIC_SystemReset(); } } else if (SLN_AUTH_OK == status) { configPRINTF(("Ensuring context...\r\n")); // DEBUG_LOG_DELAY_MS(1000); // Optional delay, enable for debugging to ensure log is printed before a crash // We will check to see if we need to update the backup to the reduced scope PRDB0 for bootloader space status = bl_nor_encrypt_ensure_context(); if (kStatus_Fail == status) { configPRINTF(("Failed to load crypto context...\r\n")); // DEBUG_LOG_DELAY_MS(1000); // Optional delay, enable for debugging to ensure log is printed before a crash // Double check if encrypted XIP is enabled if (!bl_nor_encrypt_is_enabled()) { configPRINTF(("Not running in encrypted XIP mode, ignore error.\r\n")); // No encrypted XIP enabled, we can ignore the bad status status = kStatus_Success; } } else if (kStatus_Success == status) // We have good PRDBs so we can update the backup { bool isMatch = false; bool isOriginal = false; configPRINTF(("Checking backup context...\r\n")); // DEBUG_LOG_DELAY_MS(1000); // Optional delay, enable for debugging to ensure log is printed before a crash // Check if we have identical KIBs and initial CTR status = bl_nor_crypto_ctx_compare_backup(&isMatch, &isOriginal, SLN_CRYPTO_CTX_0); if (kStatus_Success == status) { if (isMatch && isOriginal) { configPRINTF(("Updating backup context with valid address space...\r\n")); // DEBUG_LOG_DELAY_MS(1000); // Optional delay, enable for debugging to ensure log is printed before // a crash // Update backup PRDB0 status = SLN_AUTH_backup_context(SLN_CRYPTO_CTX_0); } } } } How to handle Now we already understand the cause of the leak, we must prohibit external tools from loading flashloader or flash driver algorithms into the FlexRAM to run, so in addition to disabling the Debug port, we also need to disable the Serial download method to prevent the hackers take advantage of the Serial Downloader method to make the ROM code load a special flashloader to run in RAM, then configure the BEE or OTFAD module prior to reading the image. 
However, compared to simply disabling the debug port, I'd highly recommend selecting the Secure Debug method, because the debug feature is important for return/field testing. Secure Debug is just like adding a sturdy lock to the debug port: only an authorized party can open this lock and successfully enter debug mode.
Reference
AN12852: How to Enable the On-the-fly Decryption
"The trust chain of HAB boot"
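As recommended in the sensitive-data section above, application data should be encrypted before it is written to flash, since writes bypass the BEE/OTFAD while reads do not. Below is a minimal sketch of encrypting a 256-byte parameter block with the RT1060 DCP before programming it. It assumes the MCUXpresso SDK DCP driver (fsl_dcp.h); the flash_write_page() helper and the software key are placeholders introduced only for illustration (a real design would prefer a chained AES mode and an OTP key slot rather than a key embedded in code).

#include "fsl_dcp.h"   /* MCUXpresso SDK DCP driver */

/* Hypothetical helper provided elsewhere by the application, e.g. wrapping
 * flexspi_nor_flash_page_program(); it is not part of the SDK. */
extern status_t flash_write_page(uint32_t offset, const uint8_t *data, size_t size);

#define PARAM_FLASH_OFFSET 0x14000u /* example data sector, as used earlier in this article */

status_t write_encrypted_params(const uint8_t params[256])
{
    dcp_config_t dcpConfig;
    dcp_handle_t handle;
    uint8_t cipher[256];

    /* Illustrative software key; a production design should use an OTP key slot. */
    static const uint8_t aesKey[16] = {
        0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
        0x08, 0x09, 0x0A, 0x0B, 0x0C, 0x0D, 0x0E, 0x0F
    };

    DCP_GetDefaultConfig(&dcpConfig);
    DCP_Init(DCP, &dcpConfig);

    handle.channel    = kDCP_Channel0;
    handle.keySlot    = kDCP_KeySlot0;
    handle.swapConfig = kDCP_NoSwap;

    if (DCP_AES_SetKey(DCP, &handle, aesKey, sizeof(aesKey)) != kStatus_Success)
    {
        return kStatus_Fail;
    }

    /* Encrypt the whole 256-byte block (a multiple of the 16-byte AES block size). */
    if (DCP_AES_EncryptEcb(DCP, &handle, params, cipher, sizeof(cipher)) != kStatus_Success)
    {
        return kStatus_Fail;
    }

    /* The ciphertext is what actually lands in the serial NOR flash. */
    return flash_write_page(PARAM_FLASH_OFFSET, cipher, sizeof(cipher));
}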
Introduction
The NXP i.MX RT1xxx series provides the High Assurance Boot (HAB) feature, which gives the hardware a mechanism to ensure that the software can be trusted: HAB enables the ROM to authenticate the program image using digital signatures, which assures the application image's integrity, authenticity, and non-repudiation. An OEM can therefore use it to make their product reject any system image that is not authorized to run. But what is the trust chain HAB relies on to achieve this?
How the keys and certificates are generated
In the MCUXpresso Secure Provisioning installation directory ~\nxp\MCUX_Provi_v3.1\bin\tools_scripts\keys, there are scripts for generating keys: hab4_pki_tree.sh and hab4_pki_tree.bat (for Linux and Windows respectively). Running either script will generate 13 pairs of public and private keys in sequence through OpenSSL, which form the tree structure below.
Fig 1 Key Tree structure
The public and private keys generated by OpenSSL are paired one to one; keeping the private key secret and publishing the corresponding public key easily enables asymmetric-cryptography applications. But how do we ensure that an obtained public key is correct and has not been tampered with? This is where an authoritative party is required. Just as everyone can print a resume and claim who they are, only the household registration book with the seal of the Public Security Bureau can prove that you are you. What the authority issues is called a certificate.
What is in a certificate? It must contain the public key, which is the most important part; it also names the owner of the certificate, just as the household registration book carries your name and ID number to indicate that the book is yours; in addition, it carries the issuer of the certificate and a validity period, a bit like the issuing institution and expiry date on an ID card. If someone forges a certificate issued by an authority, it is like having a fake ID card or a fake household registration book.
To obtain a certificate, you first create a certificate request and then send it to an authority for certification, called a CA (Certificate Authority). After receiving the request, the authority signs the certificate. Another question then arises: how can we guarantee that the signature really comes from the genuine authority? It can only be produced with something that is solely in the hands of the authority, namely the CA's private key.
The signature algorithm roughly works like this: a hash is computed over the target information. This process is irreversible, that is, the original information cannot be recovered from the hash value. When the information is sent out, the hash value is encrypted and sent together with the information as the signature. The process is as follows.
Fig 2 Signature and verification process
Looking at the content of a certificate (as shown below), we find an Issuer, i.e. who issued the certificate; a Subject, i.e. to whom the certificate is issued; a Validity period; the Public-key content; and the related signature algorithm. You will notice that in order to verify the certificate, the public key of the CA is required. Then a new question arises.
How can we be sure that the public key of the CA is correct? This requires a superior CA to sign the CA's public key, forming the CA's certificate. If you want to know whether a CA's certificate is reliable, you need to check whether the public key of the CA's superior certificate can unlock the CA's signature. It is just like this: if you don't trust the District Public Security Bureau, you can call the Municipal Public Security Bureau and ask it to confirm the legitimacy of the District Public Security Bureau. This goes up layer by layer until the root CA makes the final endorsement. Through this layer-by-layer endorsement of trust, the normal operation of the asymmetric encryption scheme is guaranteed.
How does the root CA prove itself? The root CA issues another kind of certificate (as shown below), called a self-signed certificate, which means it signs itself with its own private key, giving the impression of "I am me, whether you believe it or not". Therefore its format differs slightly from the CA certificate above: its Issuer and Subject are the same, and its own public key can be used for authentication, so the certificate authentication process ends here. In this way, in addition to generating the public and private keys, running the script also makes OpenSSL generate the certificate chain shown below.
Fig 3 certificates
Boot flow of the HAB mode
Figure 4 shows the boot flow of the HAB mode; steps 1, 2, and 3 are essentially the signature verification process.
Fig 4 Boot flow of the HAB mode
The verification process (as shown in Figure 2) can detect data integrity, authenticate identity, and provide non-repudiation as long as the public key is trusted, and the hab4_pki_tree.sh and hab4_pki_tree.bat scripts ensure that the generated key pairs and certificates are trusted, so the loop is perfectly closed. However, the application image in Figure 4 is plaintext and data confidentiality is not provided, so encrypted boot is always combined with HAB boot; encrypted boot is an advanced usage of authenticated boot.
Reference
AN4581: i.MX Secure Boot on HABv4 Supported Devices
AN12681: How to use HAB secure boot in i.MX RT10xx
1. Abstract
The NXP EdgeReady solution can use the RT106/5 S/L/A/F parts to achieve speech recognition, but within the RT 4-digit series the relevant software support libraries are limited to those S/L/A/F variants. If you want to use a normal RT chip, how can speech recognition be achieved? NXP has released the VIT software package in the SDK, which supports RT1060, RT1160, RT1170, RT600, and RT500 for SDK-based speech recognition.
For acquiring weather information, a customer usually connects to a third-party platform or a cloud weather API and accesses it directly with an HTTP client. Current weather API platforms generally only require registration, after which the API can be called directly, so the RT SDK lwIP socket client can be used to call the corresponding weather API and obtain real-time weather forecast data for a specific location.
This article uses the MIMXRT1060-EVK to implement customer-defined wake word (WW) and voice command (VC) recognition based on the SDK VIT library, and an lwIP socket client to obtain real-time weather information for Shanghai and print it to the terminal. This article mainly uses printing to present the weather information; for sound playback it also adds a simple method to play back fixed MP3 audio data, but freely generated playback of arbitrary weather text would require a real-time TTS function, which is not added here.
The system block diagram of this article is as follows:
Fig 1 System Block diagram
The VIT custom wake word of this system is "小恩小恩", and after wake-up one of the following command words can be recognized: "开灯" ("Turn on the lights"), "关灯" ("Turn off the lights"), "今天天气" ("Today's weather"), "明天天气" ("Tomorrow's weather"), "后天天气" ("The day after tomorrow's weather"). "Turn on the lights" and "Turn off the lights" control an external red LED on the EVK board. "今天天气" gets today's weather forecast in the following format:
"date": "2022-05-27",
"week": "5",
"dayweather": "阴",
"nightweather": "阴",
"daytemp": "28",
"nighttemp": "21",
"daywind": "东南",
"nightwind": "东南",
"daypower": "≤3",
"nightpower": "≤3"
"明天天气" and "后天天气" use the same format, but for 1 or 2 days after today's date. To get the weather data, the MIMXRT1060-EVK board needs a network connection in order to fetch the Gaode Map (restapi.amap.com) weather API data.
2. Related preparations
2.1 Weather API platform
At present, there are many third-party platforms on the Internet that provide weather data for China, such as Baidu Intelligent Cloud, Baidu Map API, Huawei Cloud, Juhe Weather, and the Gaode Map API. This article tried several of these platforms. The test results showed that Baidu Intelligent Cloud allows only a small number of free calls per day and requires the AK/SK to be synthesized in real time, which makes calling cumbersome; the Baidu Map API requires uploading ID card information; and several others have similar issues. In the end, the Gaode Map API was selected for its convenient registration, generous daily call quota, and relatively complete weather data.
Here, we mainly talk about the Gaode Map API usage, the link is: https://lbs.amap.com/api/webservice/guide/api/weatherinfo Create the account and the API key, then add the relevant parameters to implement the call of the weather API, the application for API Key is as follows: Fig 2 Gaode map API key The following diagram shows the call volume:   Fig 3 Gaode Map API call volume This is the API calling format:   Fig 4 Weather API calling parameters So, the full Gaode Map API link should like this: https://restapi.amap.com/v3/weather/weatherInfo?key=xxxxxxx&city=xxx&extensions=all&output=JSON If need to test the Shanghai weather, city code is 310000. 2.2 Postman test weather API     Postman is an interface testing tool, when doing interface testing, Postman is equivalent to a client, it can simulate various HTTP requests initiated by users, send the request data to the server, obtain the corresponding response results, and verify whether the result data in the response matches the expected value. Postman download link: https://www.postman.com/   After finding the proper weather API platform and the calling link, use the postman do the http GET operation to capture the weather data, refer to the Fig 4, fill the related parameters to the postman: Fig 5 Postman call weather API Send Get command, we can find the weather information in the position 7, the complete all information is: {     "status": "1",     "count": "1",     "info": "OK",     "infocode": "10000",     "forecasts": [         {             "city": "上海市",             "adcode": "310000",             "province": "上海",             "reporttime": "2022-05-27 17:34:12",             "casts": [                 {                     "date": "2022-05-27",                     "week": "5",                     "dayweather": "阴",                     "nightweather": "阴",                     "daytemp": "28",                     "nighttemp": "21",                     "daywind": "东南",                     "nightwind": "东南",                     "daypower": "≤3",                     "nightpower": "≤3"                 },                 {                     "date": "2022-05-28",                     "week": "6",                     "dayweather": "小雨",                     "nightweather": "中雨",                     "daytemp": "24",                     "nighttemp": "20",                     "daywind": "东南",                     "nightwind": "东南",                     "daypower": "≤3",                     "nightpower": "≤3"                 },                 {                     "date": "2022-05-29",                     "week": "7",                     "dayweather": "大雨",                     "nightweather": "小雨",                     "daytemp": "23",                     "nighttemp": "20",                     "daywind": "南",                     "nightwind": "南",                     "daypower": "≤3",                     "nightpower": "≤3"                 },                 {                     "date": "2022-05-30",                     "week": "1",                     "dayweather": "小雨",                     "nightweather": "晴",                     "daytemp": "27",                     "nighttemp": "20",                     "daywind": "北",                     "nightwind": "北",                     "daypower": "≤3",                     "nightpower": "≤3"                 }             ]         }     ] }   We can see, it can capture the continuous 4 days information, with this information, we can get the weather information easily. 
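Before wiring this request into the lwIP client code in section 3.1, it can help to see how the GET request line is assembled from the parameters above. The following is a minimal sketch of building the request string with snprintf; the key value, the city code macro, and the build_weather_request() helper are placeholders written for this article, not part of the SDK or the attached project.

#include <stdio.h>

/* Placeholder values: substitute your own Gaode Map API key and city adcode. */
#define AMAP_KEY       "your-gaode-api-key"
#define AMAP_CITY_CODE "310000"            /* Shanghai */

/* Builds the HTTP GET request for restapi.amap.com as captured with Postman.
 * Returns the number of characters written, or a negative value on error. */
static int build_weather_request(char *buf, size_t len)
{
    return snprintf(buf, len,
                    "GET /v3/weather/weatherInfo?key=%s&city=%s&extensions=all&output=JSON HTTP/1.1\r\n"
                    "Host: restapi.amap.com\r\n"
                    "\r\n",
                    AMAP_KEY, AMAP_CITY_CODE);
}

int main(void)
{
    char request[256];

    if (build_weather_request(request, sizeof(request)) > 0)
    {
        printf("%s", request); /* on the EVK this buffer would be passed to write()/send() */
    }
    return 0;
}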
From Postman we can also see the generated HTTP code for the GET request, like this:
Fig 6 postman API HTTP code
With this API, which has already passed testing and can capture the complete weather information, we can now consider adding the working HTTP request to the MIMXRT1060-EVK code.
2.3 VIT custom commands
From the maestro code in the RT1060 SDK, we know that the SDK already supports the VIT library. What is VIT? VIT stands for Voice Intelligent Technology; the library provides voice recognition services designed for wake-up and recognition of specific commands, controlling IoT devices and the smart home.
Fig 7 VIT system block diagram
In the NXP RT1060 SDK code, a pre-generated wake word and command words are provided in the VIT_Model.h file. But how can the wake word and command words be customized in a customer's project? NXP provides a web page where customers can choose their own commands and then generate the corresponding VIT_Model.h file for the code to call. The VIT command generation web page is: https://vit.nxp.com/#/home
Log in with an NXP account, then choose the RT chip part number, the wake word, and the voice commands. Please note that the currently supported RT chips are: RT1060, RT1160, RT1170, RT600, RT500.
The following is an example of generating a wake word and voice commands:
Fig 8 Custom VIT configuration
Fig 9 generated result
Download the generated model to get VIT_Model_cn.h; open it to see the command word information and the related model data stored in the const PL_MEM_ALIGN(PL_UINT8 VIT_Model_cn[], VIT_MODEL_ALIGN_BYTES) array. The command word information is as follows:
WakeWord supported : " 小恩 小恩 "
Voice Commands supported
    Cmd_Id : Cmd_Name
      0    : UNKNOWN
      1    : 开灯
      2    : 关灯
      3    : 今天 天气
      4    : 明天 天气
      5    : 后天 天气
Use the RT1060 SDK maestro_record demo to test this custom command set:
Fig 10 Custom Wakeup word and voice command test
From the test result, we can see that both the wake word and the voice commands are detected.
3 Software code
3.1 LWIP socket client code to capture the weather API
From chapter 2.2 we have obtained the weather API and verified through testing that weather acquisition works, so now we need to add the relevant requests according to the needs of our own system. For acquiring the weather API data, the lwIP code based on the RT1060 SDK uses the socket client approach.
The relevant code is as follows: #define PORT 80 #define IP_ADDR "59.82.9.133" uint8_t get_weather[]= "GET /v3/weather/weatherInfo?key=xxx&city=310000&extensions=all&output=JSON HTTP/1.1\r\nHost: restapi.amap.com\r\n\r\n\r\n\r\n"; if (sys_thread_new("weather_main", weathermain_thread, NULL, HTTPD_STACKSIZE, HTTPD_PRIORITY) == NULL) LWIP_ASSERT("main(): Task creation failed.", 0); static void weathermain_thread(void *arg) { static struct netif netif; ip4_addr_t netif_ipaddr, netif_netmask, netif_gw; ethernetif_config_t enet_config = { .phyHandle = &phyHandle, .macAddress = configMAC_ADDR, }; LWIP_UNUSED_ARG(arg); mdioHandle.resource.csrClock_Hz = EXAMPLE_CLOCK_FREQ; IP4_ADDR(&netif_ipaddr, configIP_ADDR0, configIP_ADDR1, configIP_ADDR2, configIP_ADDR3); IP4_ADDR(&netif_netmask, configNET_MASK0, configNET_MASK1, configNET_MASK2, configNET_MASK3); IP4_ADDR(&netif_gw, configGW_ADDR0, configGW_ADDR1, configGW_ADDR2, configGW_ADDR3); tcpip_init(NULL, NULL); netifapi_netif_add(&netif, &netif_ipaddr, &netif_netmask, &netif_gw, &enet_config, EXAMPLE_NETIF_INIT_FN, tcpip_input); netifapi_netif_set_default(&netif); netifapi_netif_set_up(&netif); PRINTF("\r\n************************************************\r\n"); PRINTF(" TCP client example\r\n"); PRINTF("************************************************\r\n"); PRINTF(" IPv4 Address : %u.%u.%u.%u\r\n", ((u8_t *)&netif_ipaddr)[0], ((u8_t *)&netif_ipaddr)[1], ((u8_t *)&netif_ipaddr)[2], ((u8_t *)&netif_ipaddr)[3]); PRINTF(" IPv4 Subnet mask : %u.%u.%u.%u\r\n", ((u8_t *)&netif_netmask)[0], ((u8_t *)&netif_netmask)[1], ((u8_t *)&netif_netmask)[2], ((u8_t *)&netif_netmask)[3]); PRINTF(" IPv4 Gateway : %u.%u.%u.%u\r\n", ((u8_t *)&netif_gw)[0], ((u8_t *)&netif_gw)[1], ((u8_t *)&netif_gw)[2], ((u8_t *)&netif_gw)[3]); PRINTF("************************************************\r\n"); sys_thread_new("weather", weather_thread, NULL, DEFAULT_THREAD_STACKSIZE, DEFAULT_THREAD_PRIO); vTaskDelete(NULL); } static void weather_thread(void *arg) { int sock = -1,rece; struct sockaddr_in client_addr; char* host_ip; ip4_addr_t dns_ip; err_t err; uint32_t *pSDRAM= pvPortMalloc(BUF_LEN);// host_ip = HOST_NAME; PRINTF("host name : %s , host_ip : %s\r\n",HOST_NAME,host_ip); while(1) { sock = socket(AF_INET, SOCK_STREAM, 0); if (sock < 0) { PRINTF("Socket error\n"); vTaskDelay(10); continue; } client_addr.sin_family = AF_INET; client_addr.sin_port = htons(PORT); client_addr.sin_addr.s_addr = inet_addr(host_ip); memset(&(client_addr.sin_zero), 0, sizeof(client_addr.sin_zero)); if (connect(sock, (struct sockaddr *)&client_addr, sizeof(struct sockaddr)) == -1) { PRINTF("Connect failed!\n"); closesocket(sock); vTaskDelay(10); continue; } PRINTF("Connect to server successful!\r\n"); write(sock,get_weather,sizeof(get_weather)); while (1) { rece = recv(sock, (uint8_t*)pSDRAM, BUF_LEN, 0);//BUF_LEN if (rece <= 0) break; memcpy(weather_data.weather_info, pSDRAM,1500);//max 1457 } Weather_process(); memset(pSDRAM,0,BUF_LEN); closesocket(sock); vTaskDelay(10000); } }  3.2 VIT detect customer command code    Put the generated VIT_Model_cn.h to the maestro_record folder path:   vit\RT1060_CortexM7\Lib    The specific wake word and voice command related code can be viewed from the code vit_pro.c, mainly involving function is: int VIT_Execute(void *arg, void *inputBuffer, int size) The code is modified as follows, mainly to record the wake and wake word number, for specific function control, the command directly controlled here is the local "开灯:turn on the light", "关灯:turn off the light" command, 
as for the weather command needs to call the socket client API, so in the main lwip call area combined with the command word recognition number to call: if (VIT_DetectionResults == VIT_WW_DETECTED) { PRINTF(" - WakeWord detected \r\n"); weather_data.ww_flag = 1; //kerry } else if (VIT_DetectionResults == VIT_VC_DETECTED) { // Retrieve id of the Voice Command detected // String of the Command can also be retrieved (when WW and CMDs strings are integrated in Model) VIT_Status = VIT_GetVoiceCommandFound(VITHandle, &VoiceCommand); if (VIT_Status != VIT_SUCCESS) { PRINTF("VIT_GetVoiceCommandFound error: %d\r\n", VIT_Status); return VIT_Status; // will stop processing VIT and go directly to MEM free } else { PRINTF(" - Voice Command detected %d", VoiceCommand.Cmd_Id); weather_data.vc_index = VoiceCommand.Cmd_Id;//kerry 1:ledon 2:ledoff 3:today weather 4:tomorrow weather 5:aftertomorrow weather if(weather_data.vc_index == 1)//1 { GPIO_PinWrite(GPIO1, 3, 1U); //pull high PRINTF(" led on!\r\n"); } else if(weather_data.vc_index == 2)//2 { GPIO_PinWrite(GPIO1, 3, 0U); //pull low PRINTF(" led off!\r\n"); } // Retrieve CMD Name: OPTIONAL // Check first if CMD string is present if (VoiceCommand.pCmd_Name != PL_NULL) { PRINTF(" %s\r\n", VoiceCommand.pCmd_Name); } else { PRINTF("\r\n"); } } }  3.3 Voice recognize weather information    In the weather_thread while, check the wakeup word and voice command, if meet the requirement, then create the socket connection, write the API and capture the weather data.   The related code is: while(1) { //add the command request, only cmd == weather flag, then call it. if((weather_data.ww_flag == 1)) { if(weather_data.vc_index >= 3) { // create connection //write API and read API Weather_process(); } memset(weather_data.weather_info, 0, sizeof(weather_data.weather_info)); weather_data.ww_flag = 0; weather_data.vc_index = 0; } vTaskDelay(10000); } void Weather_process(void) { char * datap, *datap1; datap = strstr((char*)weather_data.weather_info,"date"); if(datap != NULL) { memcpy(today_weather, datap,184);//max 1457 if(weather_data.vc_index == 3) { PRINTF("\r\n*******************today weather***********************************\n\r"); PRINTF("%s\r\n",today_weather); return; } } else return; datap1 = strstr(datap+4,"date"); if(datap1 != NULL) { memcpy(tomorr_weather, datap1,184);//max 1457 if(weather_data.vc_index == 4) { PRINTF("\r\n*******************tomorrow weather*******************************\n\r"); PRINTF("%s\r\n",tomorr_weather); return; } } else return; datap = strstr(datap1+4,"date"); if(datap != NULL) { memcpy(aftertom_weather, datap,184);//max 1457 if(weather_data.vc_index == 5) { PRINTF("\r\n*******************after tomorrow weather**************************\n\r"); PRINTF("%s\r\n",aftertom_weather); } } else return; }   Function Weather_process is used to refer to the voice recognized weather number to get the related date’s weather, and printf it. 4 Test result  the test result video: Print the log results as shown in Figure 11, after testing, you can see that the wakeup word and voice command can be successfully recognized, in the recognition of word sequence numbers 3, 4, 5 is the weather acquisition, you can successfully call the lwip socket client API, successfully obtain weather information and printf it.   Fig 11 system test print result  evkmimxrt1060_maestro_weather_backup.zip is the project without sound playback, weather information will print to the terminal! 
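As a supplement to the Weather_process() function in section 3.3, which locates each day's record with strstr("date") and copies a fixed-length block, the sketch below shows one way to pull an individual field such as "daytemp" out of the response text. It is a plain text scan written for this article (get_json_string() is not an SDK or lwIP API), and it assumes the compact, non-pretty-printed form of the JSON returned on the wire.

#include <stdio.h>
#include <string.h>

/* Extract the string value of one field (e.g. "dayweather") from the raw JSON
 * response. This is a simple text scan, not a full JSON parser, and it assumes
 * the compact response format ("field":"value") without extra whitespace. */
static int get_json_string(const char *json, const char *field, char *out, size_t out_len)
{
    char pattern[32];
    const char *p;
    size_t i = 0;

    snprintf(pattern, sizeof(pattern), "\"%s\":\"", field); /* look for "field":" */
    p = strstr(json, pattern);
    if (p == NULL)
    {
        return -1; /* field not found */
    }
    p += strlen(pattern);

    while (p[i] != '\"' && p[i] != '\0' && i < out_len - 1U)
    {
        out[i] = p[i];
        i++;
    }
    out[i] = '\0';
    return 0;
}

int main(void)
{
    /* Fragment of the forecast record captured with Postman in section 2.2. */
    const char *sample = "{\"date\":\"2022-05-27\",\"week\":\"5\",\"dayweather\":\"cloudy\",\"daytemp\":\"28\"}";
    char value[32];

    if (get_json_string(sample, "daytemp", value, sizeof(value)) == 0)
    {
        printf("daytemp = %s\r\n", value); /* prints: daytemp = 28 */
    }
    return 0;
}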
5 Issues met and conclusions
5.1 LWIP failed to get weather
When first creating the code, the HTTP request provided by Postman was used:
GET /v3/weather/weatherInfo?key=8f777fc7d867908eebbad7f96a13af10&amp; city=310000&amp; extensions=all&amp; output=JSON HTTP/1.1
Host: restapi.amap.com
It was added to the socket API function as:
uint8_t get_weather[]= "GET /v3/weather/weatherInfo?key=xxx&amp;city=310000&amp;extensions=all&amp;output=JSON HTTP/1.1\r\nHost: restapi.amap.com\r\n\r\n\r\n\r\n";
The test result is:
Fig 12 socket weather API return issues
We can see that the server connection is OK and HTTP data is returned, but the server reports a parameter issue, because the copied request string contains the HTML-escaped "&amp;" and extra spaces instead of plain "&" separators. After checking, the C code generated by Postman was used for get_weather instead:
uint8_t get_weather[]= "GET /v3/weather/weatherInfo?key=xxx&city=310000&extensions=all&output=JSON HTTP/1.1\r\nHost: restapi.amap.com\r\n\r\n\r\n\r\n";
Then the weather data can be captured, the same as the Postman test result.
5.2 Merged VIT and LWIP project runs out of memory
After combining the maestro_record and lwIP socket code and compiling, a DTCM memory overflow occurs.
Fig 13 memory overflow
After optimization the DTCM still overflows, so in the end the FlexRAM was reconfigured as: OCRAM 192K, DTCM 256K, ITCM 64K.
Compile again, and the memory overflow disappears:
Fig 14 FlexRAM reconfiguration
5.3 Printing Chinese characters in Tera Term
Using Tera Term directly, the printed output is garbled when the weather API returns Chinese characters. Chinese printing works after the following configuration:
Setup -> Terminal
Locale: american -> chinese
Codepage: 65001 -> 936
Fig 15 Tera Term Chinese word print
In summary, after gathering the data and solving the problems above, the MIMXRT1060-EVK board combined with the official SDK completes the function of customizing VIT voice commands to obtain real-time weather and perform local control. So even with an ordinary RT part that is not an S/L/A/F variant, you can use VIT to implement speech recognition functions.
6 Add the sound broadcast
This chapter mainly describes how to add sound broadcasting using MP3 audio data stored in memory. For real-time weather playback this is not very flexible: it would require checking the weather data and assembling the correct MP3 data from an audio data library, since this is not an online TTS method. So here only one example of sound broadcasting is shared, e.g.:
WW: "小恩小恩" -> "小恩来了,请吩咐!"
VC: "今天天气" -> "温度32.1度"
The VC playback is fixed for now. To play real data, an MP3 voice data library would need to be generated, and then the correct weather MP3 data array would be assembled and played according to the returned weather information. This is a little complicated but not difficult, so here just one fixed sound is used as an example.
6.1 MP3 playback audio data preparation     For audio broadcasting which need to convert the Chinese word into MP3 files, you can use some online speech synthesis software, here use Baidu online speech synthesis function, you can view the previous article, chapter 2.2.2 online speech synthesis: https://community.nxp.com/t5/i-MX-RT-Knowledge-Base/RT106L-S-voice-control-system-based-on-the-Baidu-cloud/ta-p/1363295     If use the Baidu online speech synthesis generated mp3 file to convert to the c array directly, it will meet the first audio play issues, so, here we use the Audacity to convert the mp3 file, the convert configuration is like this:  Fig 16 Audacity convert configuration     After the regeneration of mp3, you can use xxd .exe to convert the mp3 file to an array of C files, and then put it into RT-related memory or external flash , xxd .exe can be found at the following link: https://github.com/baldram/ESP_VS1053_Library/issues/18 The convert command like this: xxd -i your-sound.mp3 ready-to-use-header.c Convert the xiaoencoming.mp3 and temptest.mp3 file to the C array, then modify the data to the C file, save file as: xiaoencoming.h and temptest.h. Here, take xiaoencoming.c as an example: #define XIAOEN_MP3_SIZE  6847 unsigned char xiaoencoming_mp3[XIAOEN_MP3_SIZE] = {   0x49, 0x44, 0x33, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x21, 0x54, 0x58, …   0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55 }; unsigned int xiaoencoming1_mp3_len = XIAOEN_MP3_SIZE;//6847; Until now, the playback audio data is finished.     Copy xiaoencoming.h and temptest.h to project path: evkmimxrt1060_maestro_weather_mp3\source 6.2 Play the MP3 data from memory    Here, share the related code. 6.2.1 app_streamer.c added code    #include "xiaoencoming.h" #include "temptest.h" void *voice_inBuf = NULL; void *voice_outBuf = NULL; status_t STREAMER_file_Create(streamer_handle_t *handle, char *filename, int eap_par) { STREAMER_CREATE_PARAM params; OsaThreadAttr thread_attr; int ret; ELEMENT_PROPERTY_T prop; MEMSRC_SET_BUFFER_T inBufInfo = {0}; SET_BUFFER_DESC_T outBufInfo = {0}; PRINTF("Kerry test begin!\r\n"); if(filename == "temptest.mp3") inBufInfo = (MEMSRC_SET_BUFFER_T){.location = (int8_t *)temptest_mp3, .size = TEMPtest_MP3_SIZE}; else if(filename == "xiaoencoming.mp3") inBufInfo = (MEMSRC_SET_BUFFER_T){.location = (int8_t *)xiaoencoming_mp3, .size = XIAOEN_MP3_SIZE}; /* Create message process thread */ osa_thread_attr_init(&thread_attr); osa_thread_attr_set_name(&thread_attr, STREAMER_MESSAGE_TASK_NAME); osa_thread_attr_set_stack_size(&thread_attr, STREAMER_MESSAGE_TASK_STACK_SIZE); ret = osa_thread_create(&msg_thread, &thread_attr, STREAMER_MessageTask, (void *)handle); osa_thread_attr_destroy(&thread_attr); if (ERRCODE_NO_ERROR != ret) { return kStatus_Fail; } /* Create streamer */ strcpy(params.out_mq_name, APP_STREAMER_MSG_QUEUE); params.stack_size = STREAMER_TASK_STACK_SIZE; params.pipeline_type = STREAM_PIPELINE_MEM; params.task_name = STREAMER_TASK_NAME; params.in_dev_name = "buffer"; params.out_dev_name = "speaker"; handle->streamer = streamer_create(&params); if (!handle->streamer) { return kStatus_Fail; } prop.prop = PROP_DECODER_DECODER_TYPE; prop.val = (uintptr_t)DECODER_TYPE_MP3; ret = streamer_set_property(handle->streamer, prop, true); if (ret != STREAM_OK) { streamer_destroy(handle->streamer); handle->streamer = NULL; return kStatus_Fail; } prop.prop = PROP_MEMSRC_SET_BUFF; prop.val = (uintptr_t)&inBufInfo; ret = streamer_set_property(handle->streamer, prop, true); if (ret != STREAM_OK) { 
streamer_destroy(handle->streamer); handle->streamer = NULL; return kStatus_Fail; } handle->audioPlaying = false; error: PRINTF("End STREAMER_file_Create\r\n"); PRINTF("Kerry test end!\r\n"); return kStatus_Success; }   The code implements the thread build, creates a streamer, defines it as playing from memory, decodes the properties for MP3, and specifies an array of MP3 files in memory. Specify a different array of mp3 files in memory based on the calling file name. 6.2.2 cmd.c added code void play_file(char *filename, int eap_par) { STREAMER_Init(); int ret = STREAMER_file_Create(&streamerHandle, filename, eap_par); if (ret != kStatus_Success) { PRINTF("STREAMER_file_Create failed\r\n"); goto file_error; } STREAMER_Start(&streamerHandle); PRINTF("Starting playback\r\n"); file_playing = true; while (streamerHandle.audioPlaying) { osa_time_delay(100); } file_playing = false; file_error: PRINTF("[play_file] Cleanup\r\n"); STREAMER_Destroy(&streamerHandle); osa_time_delay(100); }   Play file, it calls the STREAMER_file_Create API function, start play, and wait the play finished, then release the STREAMER. shellRecMIC API function add the VIT recorded flag, which is used to play feedback audio file. static shell_status_t shellRecMIC(shell_handle_t shellHandle, int32_t argc, char **argv) { … //kerry PRINTF("Kerry MP3 stream data test!\r\n"); PRINTF("---weather_data.ww_flag =%d--\r\n ", weather_data.ww_flag); PRINTF("---weather_data.vc_inde =%d--\r\n ", weather_data.vc_index); PRINTF("---weather_data.mp3_flag =%d--\r\n ", weather_data.mp3_flag); if(weather_data.ww_flag == 1) { play_file("xiaoencoming.mp3", 0); } if(weather_data.vc_index == 3) { play_file("temptest.mp3", 0); } if(weather_data.mp3_flag != 0) { weather_data.ww_flag = 0; weather_data.vc_index = 0; } weather_data.mp3_flag = 0; /* Delay for cleanup */ osa_time_delay(100); return kStatus_SHELL_Success; } If detect the Wakeup Word: “小恩小恩”, play feedback audio: “小恩来了请吩咐”. If detect the voice command: “今天天气”, play feedback audio: “温度32.1度”, please note, this playback just an example, it is the fixed audio, you also can create audio word lib, then according to the received weather information, combine the related word audio together, then playback it. This is a little complicated, but not difficult. So, if need to play the free audio, also can consider the online TTS method in real time. 
6.2.3 VIT WW and VC flag VIT_Execute function int VIT_Execute(void *arg, void *inputBuffer, int size) { … if (VIT_DetectionResults == VIT_WW_DETECTED) { PRINTF(" - WakeWord detected \r\n"); weather_data.ww_flag = 1; //kerry weather_data.mp3_flag = 1; } else if (VIT_DetectionResults == VIT_VC_DETECTED) { // Retrieve id of the Voice Command detected // String of the Command can also be retrieved (when WW and CMDs strings are integrated in Model) VIT_Status = VIT_GetVoiceCommandFound(VITHandle, &VoiceCommand); if (VIT_Status != VIT_SUCCESS) { PRINTF("VIT_GetVoiceCommandFound error: %d\r\n", VIT_Status); return VIT_Status; // will stop processing VIT and go directly to MEM free } else { PRINTF(" - Voice Command detected %d", VoiceCommand.Cmd_Id); weather_data.vc_index = VoiceCommand.Cmd_Id;//kerry 1:ledon 2:ledoff 3:today weather 4:tomorrow weather 5:aftertomorrow weather weather_data.mp3_flag = 2; if(weather_data.vc_index == 1)//1 { GPIO_PinWrite(GPIO1, 3, 1U); //pull high PRINTF(" led on!\r\n"); } else if(weather_data.vc_index == 2)//2 { GPIO_PinWrite(GPIO1, 3, 0U); //pull low PRINTF(" led off!\r\n"); } // Retrieve CMD Name: OPTIONAL // Check first if CMD string is present if (VoiceCommand.pCmd_Name != PL_NULL) { PRINTF(" %s\r\n", VoiceCommand.pCmd_Name); } else { PRINTF("\r\n"); } } } return VIT_Status; }   Until now, all the code is added. 6.2.4  playback audio test result     This is the audio playback test result:   Fig 17 playback audio log   From the test result, we can see, we also can use the mp3 data which is stored in the memory and play it as audio playback.   The code project is: evkmimxrt1060_maestro_weather_mp3.zip.  
RT10xx image reserve the APP FCB methods
1. Abstract
RT10XX programming is mainly divided into two categories:
1) Serial download mode with blhost programming
For this method, we can use the MCUBootUtility tool, the blhost+elftosb+sdphost command-line method, or the NXP SPT (MCUXpresso Secure Provisioning Tool). This programming method needs the chip to enter serial download mode, and then uses the UART or USB HID interface supported by the flashloader.
2) Programmer or debugger with flash driver programming
This method usually goes through the SWD/JTAG download interface combined with a debugger and an IDE, or programs directly from software; the chip can be in internal boot or serial download mode, and the flashloader is used to generate the flash programming algorithm file.
With method 2, programming through the debugger usually guarantees that the programmed code is identical to the original APP. With method 1, blhost normally regenerates an FCB with a full-featured LUT and burns it to the external flash, then burns the app code with IVT, that is, without the FCB header of the original APP; the blhost-generated FCB header is assembled and burned separately. However, for customers who need to read back the flash image after programming and compare it with the original APP image, the commonly used blhost method produces a mismatch in the FCB area. If the customer needs to use the blhost programming method in serial download mode, how can the programmed flash image be kept identical to the original file? This article takes the MIMXRT1060-EVK board as an example and gives specific methods for the command-line mode and the SPT tool.
2 Blhost programming reserving the APP FCB
From the old RT1060 SDK FCB file (before SDK 2.12.0), evkmimxrt1060_flexspi_nor_config.c, we can see:
const flexspi_nor_config_t qspiflash_config = {
    .memConfig =
        {
            .tag = FLEXSPI_CFG_BLK_TAG,
            .version = FLEXSPI_CFG_BLK_VERSION,
            .readSampleClksrc=kFlexSPIReadSampleClk_LoopbackFromDqsPad,
            .csHoldTime = 3u,
            .csSetupTime = 3u,
            .sflashPadType = kSerialFlash_4Pads,
            .serialClkFreq = kFlexSpiSerialClk_100MHz,
            .sflashA1Size = 8u * 1024u * 1024u,
            .lookupTable =
                {
                    // Read LUTs
                    FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xEB, RADDR_SDR, FLEXSPI_4PAD, 0x18),
                    FLEXSPI_LUT_SEQ(DUMMY_SDR, FLEXSPI_4PAD, 0x06, READ_SDR, FLEXSPI_4PAD, 0x04),
                },
        },
    .pageSize = 256u,
    .sectorSize = 4u * 1024u,
    .blockSize = 64u * 1024u,
    .isUniformBlockSize = false,
};
This FCB LUT contains only the basic read command. Normally, for app booting, the FCB only needs to provide the read command to the ROM, and then the app can boot.
But what actually gets programmed when downloading with blhost? Based on the MIMXRT1060-EVK board, the following shows how to burn the SDK led_blinky app with the blhost command-line method and then read back the programmed flash content for analysis.
2.1 Normal blhost download command line
This command line is the same as the MCUBootUtility download log; the source is attached as rt1060 cmd.bat.
elftosb.exe -f imx -V -c imx_application_gen.bd -o ivt_evkmimxrt1060_iled_blinky_FCB.bin evkmimxrt1060_iled_blinky.s19
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- write-file 0x20208200 ivt_flashloader.bin
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- jump-address 0x20208200
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 1 0
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 24 0
blhost.exe -t 5242000 -u 0x15A2,0x0073 -j -- fill-memory 0x20202000 4 0xc0000007 word  //option 0
blhost.exe -t 5242000 -u 0x15A2,0x0073 -j -- fill-memory 0x20202004 4 0 word  //option 1
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- configure-memory 9 0x20202000
blhost -t 2048000 -u 0x15A2,0x0073 -j -- flash-erase-region 0x60000000 0x8000 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- fill-memory 0x20203000 4 0XF000000F word
blhost -t 50000 -u 0x15A2,0x0073 -j -- configure-memory 9 0x20203000
blhost -t 5242000 -u 0x15A2,0x0073 -j -- write-memory 0x60001000 ivt_evkmimxrt1060_iled_blinky_FCB_nopadding.bin 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 0x8000 flexspiNorCfg.dat 9
Normal blhost programming uses the command-line method and provides an app without the FCB header (even for an app that has an FCB, the FCB header is stripped first). elftosb.exe generates the app with the IVT, e.g. ivt_evkmimxrt1060_iled_blinky_FCB_nopadding.bin. The flashloader file ivt_flashloader is downloaded to internal RAM and jumped to; fill-memory then writes option0 and option1 to select the proper external flash, and configure-memory configures the FlexSPI module using the SFDP table obtained from the flash, which fills the FlexSPI LUT internal buffer. Next, fill-memory 0x20203000 4 0XF000000F together with configure-memory generates the full FCB header and programs it at flash address 0x60000000. Finally, the app containing the IVT is programmed from flash address 0x60001000, which completes the whole app image programming.
Pic 1 shows the comparison between the data read back after programming and the original app data. It can be seen that the LUT of the FCB actually programmed (on the left) contains not only the read command but also read status, write enable, program, and erase commands. On the right is the original app with its FCB, whose LUT contains only the read command used for boot.
So, if you want to keep the FCB header of the original APP instead of the header generated and programmed via option0/option1 and configure-memory, how can it be done? The method is to still use option0 and option1 to generate and fill the FlexSPI LUT for communication, but not to burn the generated FCB; instead, burn only the FCB that comes with the original APP.
pic1
2.2 Reuse option0 and option1 to program the original APP LUT
The following commands reuse option0 and option1 to generate the LUT and fill it into the FlexSPI LUT for the connection with the external flash, but do not call fill-memory 0x20203000 4 0XF000000F and configure-memory 9 0x20203000, so the generated FCB is not programmed to the external memory.
The source file is attached as rt1060 cmd_option01.bat.
elftosb.exe -f imx -V -c imx_application_gen.bd -o ivt_evkmimxrt1060_iled_blinky_FCB.bin evkmimxrt1060_iled_blinky.s19
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- write-file 0x20208200 ivt_flashloader.bin
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- jump-address 0x20208200
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 1 0
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 24 0
blhost.exe -t 5242000 -u 0x15A2,0x0073 -j -- fill-memory 0x20202000 4 0xc0000007 word
blhost.exe -t 5242000 -u 0x15A2,0x0073 -j -- fill-memory 0x20202004 4 0 word
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- configure-memory 9 0x20202000
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 1024 flexspiNorCfg.dat 9
blhost -t 2048000 -u 0x15A2,0x0073 -j -- flash-erase-region 0x60000000 0x8000 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 1024 flexspiNorCfg.dat 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- write-memory 0x60000000 evkmimxrt1060_iled_blinky_FCB.bin 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 0x8000 flexspiNorCfg.dat 9
Pic 2 is the comparison between the data read back after programming and the original programming data. It can be seen that the FCB programmed this time is exactly the same as the FCB of the original code.
Pic 2
2.3 Use a 1-bit FCB file to configure the LUT
The file used here, cfg_fdcb_RTxxx_1bit_sdr_flashA.bin, is copied from MCUBootUtility: \NXP-MCUBootUtility-3.4.0\src\targets\fdcb_model.
The option0/option1 configuration is intended for chips that support the SFDP table, but some flash chips do not support it. In that case, you need to fill the FlexSPI LUT with a full LUT manually. A "full LUT" contains not only read commands but also erase, program, and so on; with it, the FlexSPI interface can successfully connect to the external flash and perform the read, erase, and write functions. Therefore, the method in this chapter uses single-line (1-bit) SPI commands, which are supported by flash chips in general, to enable the corresponding FlexSPI functions so that the subsequent APP code programming can complete.
Pic 3
We can see: 03H is read, 05H is read status register, 06H is write enable, D8H is 64 KB block erase, 02H is page program, and 60H is chip erase. This is the 1-bit SPI full-function LUT command set, which enables chip read, write, and erase.
The command line is as follows; the source file is attached as rt1060 cmd_fdcb_1bit_sdr_flashA.bat:
elftosb.exe -f imx -V -c imx_application_gen.bd -o ivt_evkmimxrt1060_iled_blinky_FCB.bin evkmimxrt1060_iled_blinky.s19
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- write-file 0x20208200 ivt_flashloader.bin
sdphost.exe -t 50000 -u 0x1FC9,0x0135 -j -- jump-address 0x20208200
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 1 0
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- get-property 24 0
blhost -t 5242000 -u 0x15A2,0x0073 -j -- write-memory 0x20202000 cfg_fdcb_RTxxx_1bit_sdr_flashA.bin
blhost.exe -t 50000 -u 0x15A2,0x0073 -j -- configure-memory 9 0x20202000
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 1024 flexspiNorCfg.dat 9
blhost -t 2048000 -u 0x15A2,0x0073 -j -- flash-erase-region 0x60000000 0x8000 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 1024 flexspiNorCfg.dat 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- write-memory 0x60000000 evkmimxrt1060_iled_blinky_FCB.bin 9
blhost -t 5242000 -u 0x15A2,0x0073 -j -- read-memory 0x60000000 0x8000 flexspiNorCfg.dat 9
In this command line, at the point where option0/option1 were previously filled in, the 512-byte bin file containing the complete FCB LUT is written instead, and the configure-memory command then configures the flashloader's FlexSPI LUT from that FCB file, so that read, write, and erase commands are all supported. The comparison between the flash data read back after programming and the original APP data is shown in Pic 4: the data read out from the flash is exactly the same as the original APP FCB.
Pic 4
3. SPT programming reserving the APP FCB
The officially released NXP MCUXpresso Secure Provisioning Tool supports retaining the customer's FCB, but the SPT tool currently uses the APP FCB to fill the flashloader's FlexSPI LUT. Therefore, if the customer directly uses an old SDK demo whose LUT contains only the read command to generate an APP with FCB, then burns the flash with the SPT tool and chooses to keep the customer FCB, an erase failure will occur. Analyzing the reason, the FCB on the customer APP side needs to contain the full FCB LUT commands, that is, read, write, erase, and so on.
The following shows what happens when the old, original SDK led_blinky generates an image with an FCB header and it is programmed with the SPT tool. As you can see in Pic 5, the tool notes that if you use the APP FCB, you need to ensure the FCB LUT contains the read, erase, and program commands. Pic 6 shows programming with an APP FCB LUT that includes only the read command: it fails at the erase step, because there are no erase, program, or other commands in the FlexSPI LUT, so the corresponding erase or program operation fails.
Pic 5
Pic 6
Pic 7
If you look at the specific commands, as shown in Pic 7, you can find that the SPT tool directly uses the FCB header extracted from the APP image to fill the flashloader's FlexSPI LUT, so there are no erase or write commands and the erase fails.
The following shows how to fill in the LUT in the SDK's FCB: open evkmimxrt1060_flexspi_nor_config.c and modify the FCB as follows:
const flexspi_nor_config_t qspiflash_config = {
    .memConfig =
        {
            .tag              = FLEXSPI_CFG_BLK_TAG,
            .version          = FLEXSPI_CFG_BLK_VERSION,
            .readSampleClkSrc = kFlexSPIReadSampleClk_LoopbackFromDqsPad,
            .csHoldTime       = 3u,
            .csSetupTime      = 3u,
            .sflashPadType    = kSerialFlash_4Pads,
            .serialClkFreq    = kFlexSpiSerialClk_100MHz,
            .sflashA1Size     = 8u * 1024u * 1024u,
            .lookupTable =
                {
                    // Read LUTs
                    FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xEB, RADDR_SDR, FLEXSPI_4PAD, 0x18),
                    FLEXSPI_LUT_SEQ(DUMMY_SDR, FLEXSPI_4PAD, 0x06, READ_SDR, FLEXSPI_4PAD, 0x04),
                    // Read status
                    [4 * 1] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x05, READ_SDR, FLEXSPI_1PAD, 0x04),
                    // Write enable
                    [4 * 3] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x06, STOP, FLEXSPI_1PAD, 0),
                    // Sector erase (4 KB) LUTs
                    [4 * 5] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x20, RADDR_SDR, FLEXSPI_1PAD, 0x18),
                    // Block erase (64 KB) LUTs
                    [4 * 8] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xD8, RADDR_SDR, FLEXSPI_1PAD, 0x18),
                    // Page program - single mode
                    [4 * 9]     = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x02, RADDR_SDR, FLEXSPI_1PAD, 0x18),
                    [4 * 9 + 1] = FLEXSPI_LUT_SEQ(WRITE_SDR, FLEXSPI_1PAD, 0x04, STOP, FLEXSPI_1PAD, 0x0),
                    // Erase whole chip
                    [4 * 11] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x60, STOP, FLEXSPI_1PAD, 0),
                },
        },
    .pageSize           = 256u,
    .sectorSize         = 4u * 1024u,
    .blockSize          = 64u * 1024u,
    .isUniformBlockSize = false,
};
Please note that, after the internal SDK team's modification, evkmimxrt1060_flexspi_nor_config.c already adds the LUT commands for the full FCB LUT starting from SDK_2_12_0_EVK-MIMXRT1060. Generate the APP with the above FCB, then use the SPT tool to burn the APP while keeping the customer FCB again; the programming now works.
Pic 8
In summary, if you need to reserve the customer FCB, you can use the methods above; but if you use the SPT tool, you need to add the read, write, and erase commands to the LUT of the code's FCB to ensure that FlexSPI can successfully operate the external flash.
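If you want to double-check at run time which sequences actually landed in flash, a small helper like the one below can be used. This is a hedged sketch only: the 0x60000000 base address and the LUT index convention are taken from this article, while the helper itself and its name are not part of the SDK or the SPT tool.

#include <stdbool.h>
#include "evkmimxrt1060_flexspi_nor_config.h"

/* Hedged helper: checks that the FCB currently residing at the start of the
 * FlexSPI AMBA space carries non-empty write-enable (sequence 3) and
 * page-program (sequence 9) entries. */
static bool FCB_HasProgramSequences(void)
{
    const flexspi_nor_config_t *fcb = (const flexspi_nor_config_t *)0x60000000U;

    return (fcb->memConfig.tag == FLEXSPI_CFG_BLK_TAG) &&
           (fcb->memConfig.lookupTable[4 * 3] != 0U) &&  /* write enable, 0x06 */
           (fcb->memConfig.lookupTable[4 * 9] != 0U);    /* page program, 0x02 */
}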
Face recognition
Face recognition technology is used in many scenes of our daily life. For instance, when taking pictures with a mobile phone, the camera software automatically recognizes the faces in the lens and focuses on them; faces are scanned for real-name verification when registering an app, for payment, and so on. The basic steps of face recognition are shown in the figure below. First, the camera captures image data; then, after preprocessing such as noise elimination and image format conversion, the image data is passed to the processor for face detection and recognition calculations. After the face is recognized successfully, the follow-up operations continue.
Fig1 The basic steps of face recognition
i.MX RT106F MCU based solution for face recognition
The figure below is the block diagram of the i.MX RT106F MCU-based face recognition solution provided by NXP. Compared with a general processor (CPU) solution, it has advantages in cost and power consumption. Furthermore, the PCB size is smaller, and the MCU can usually boot up within a few hundred milliseconds even with an RTOS; compared with the roughly 10-second boot time of a processor (CPU) running a Linux system, this gives customers a better user experience.
Fig2 i.MX RT106F MCU based solution for face recognition
Of course, the i.MX RT106F MCU-based face recognition solution is not intended to replace processor (CPU)-based solutions. As mentioned above, face recognition technology has many application cases and will certainly be used in more fields in the future, so the MCU-based face recognition solution provides customers and the market with another choice.
i.MX RT106F MCU
The i.MX RT106F face recognition crossover processor is an EdgeReady™ solution-specific variant of the i.MX RT1060 family of crossover processors, targeting face recognition applications. It features NXP's advanced implementation of the Arm Cortex®-M7 core, which operates at speeds up to 600 MHz to provide high CPU performance and the best real-time response. i.MX RT106F based solutions enable system designers to easily and inexpensively add face recognition capabilities to a wide variety of smart appliances, smart homes, smart retail, and smart industrial devices. The i.MX RT106F is licensed to run the OASIS Lite library for face recognition (shown in the figure below), which includes:
Face detection
Anti-spoofing
Face tracking
Face alignment
Glass detection
Face recognition
Confidence measure
Face recognition quantified results, etc.
Fig3 OASIS Recognition Software Pipeline
sln_viznas_iot_elock_oobe
The sln_viznas_iot_elock_oobe project is the application running on the SLN-VIZNAS-IOT kit (as the figure below shows; the Bootstrap and Bootloader in the software flowchart will be introduced in the future). The following development work is based on the sln_viznas_iot_elock_oobe project; however, I need to sketch its basic workflow before starting the real development work.
Fig4 SLN-VIZNAS-IOT software flowchart sln_viznas_iot_elock_oobe's workflow flow In the Camera_Start() function, the task (Camera_Init_Task) completes the initialization of the RGB and IR cameras, then creates a task (Camera_Task); In the Display_Start() function, after the task (Display_Init_Task) completes the initialization of the display medium (USB or LCD), it immediately creates the task (Display_Task) and sends the message queue s_DisplayReqMsg.id = QMSG_DISPLAY_FRAME_REQ to the task (Camera_Task), then the pDispData will point to the s_BufferLcd[0] array for storing the image data to be displayed; In the Oasis_Start() function, firstly, OASISLT_init() completes the initialization of the OAISIT library, then creates a task (Oasis_Task) to send the message queues gFaceDetReqMsg.id = QMSG_FACEREC_FRAME_REQ and gFaceInfoMsg.id = QMSG_FACEREC_INFO_UPDATE to the task (Camera_Task) to make the pDetIR and pDetRGB point to the face block diagram captured by the RGB and IR cameras, and update the content pointed by infoMsgIn. After the camera is initialized, the RGB camera works at first. After the image data is captured, an interrupt is triggered and the callback function Camera_Callback() sends the message queue DQMsg.id = QMSG_CAMERA_DQ to the task (Camera_Task), and DQIndex++; CAMERA_RECEIVER_GetFullBuffer() extracts the image data captured by the RGB camera, and sends the message queue DPxpMsg.id = QMSG_PXP_DISPLAY to the task (PXP_Task) created in the APP_PXP_Start() function and EQIndex++, meanwhile switch the camera from RGB to IR. After the APP_PXPStartCamera2Display() function in the task (PXP_Task) completes processing, it sends the message queue s_DResMsg.id = QMSG_PXP_DISPLAY to the task (Camera_Task), and the task (Camera_Task) sends the message queue DresMsg.id = QMSG_DISPLAY_FRAME_RES to the task (Display_Task) after receiving the above message queue. The task (Display_Task) completes display, then it sends the message queue s_DisplayReqMsg.id = QMSG_DISPLAY_FRAME_REQ to the task (Camera_Task) to make pDispData point to the s_BufferLcd[1] array; After the IR camera completes capturing work, CAMERA_RECEIVER_GetFullBuffer() extracts the image data and sends the message queue DPxpMsg.id = QMSG_PXP_DISPLAY to the (PXP_Task) task created in the APP_PXP_Start() function, continue to execute EQIndex++ and switch to RGB camera again, and repeat the steps 5. Finally, send the message queue FPxpMsg.id = QMSG_PXP_FACEREC to the task (PXP_Task) and set irReady = true. After the task (PXP_Task) receives the above message queue, it calls APP_PXPStartCamera2DetBuf() and after completes the processing, sends the message queue s_FResMsg.id = QMSG_PXP_FACEREC to the task (Camera_Task); CAMERA_RECEIVER_GetFullBuffer() extracts the image data collected by the RGB camera, repeat step 5, when (pDetRGB && irReady) condition is met, send the message queue FPxpMsg.id = QMSG_PXP_FACEREC to the task (PXP_Task) and set irReady = false, pDetRGB = NULL, pDetIR = NULL. After the task (PXP_Task) receives the above message queue, it calls APP_PXPStartCamera2DetBuf() and after completes the processing, sends the message queue s_FResMsg.id = QMSG_PXP_FACEREC to the task (Camera_Task). 
At this time, the (!pDetIR && !pDetRGB) condition is met and the Queue message FResMsg.id = QMSG_FACEREC_FRAME_RES is sent to the task (Oasis_Task), run OASISLT_run_extend to perform face recognition calculation, and send the message queue gFaceDetReqMsg.id = QMSG_FACEREC_FRAME_REQ to the task (Camera_Task) to make the pDetIR and pDetRGB point to the face block diagram captured by the RGB and IR cameras again. keep repeat steps 6 and 7; Fig5 sln_viznas_iot_elock_oobe's workflow flow Smart Coffee machine Fig 6 is the workflow of the smart coffee machine that I want to develop for, as there is no LCD board on hand, in the below development process, I will select Win10's camera (as the below figure shows) to output the captured image, further, take advantage of the Shell command to simulate the LCD's touch feature to interact with the board.   Fig6 workflow of the smart coffee machine Fig7 Camera Code modification In the commondef.h, add a new member variable 'uint16_t coffee_taste' in Union FeatureItem to stand for the favorite coffee taste; typedef union { struct { /*put char/unsigned char together to avoid padding*/ unsigned char magic; char name[FEATUREDATA_NAME_MAX_LEN]; int index; // this id identify a feature uniquely,we should use it as a handler for feature add/del/update/rename uint16_t id; uint16_t pad; // Add a new component uint16_t coffee_taste; /*put feature in the last so, we can take it as dynamic, size limitation: * (FEATUREDATA_FLASH_PAGE_SIZE * 2 - 1 - FEATUREDATA_NAME_MAX_LEN - 4 - 4 -2)/4*/ float feature[0]; }; unsigned char raw[FEATUREDATA_FLASH_PAGE_SIZE * 2]; } FeatureItem; // 1kB   In featuredb.h, add two member functions into class FeatureDB:  set_taste()  and  get_taste() , and add the definition of the above two member functions in featuredb.cpp; class FeatureDB { public: FeatureDB(); ~FeatureDB(); int add_feature(uint16_t id, const std::string name, float *feature); int update_feature(uint16_t id, const std::string name, float *feature); int del_feature(uint16_t id, std::string name); int del_feature(const std::string name); int del_feature_all(); std::vector<std::string> get_names(); int get_name(uint16_t id, std::string &name); std::vector<uint16_t> get_ids(); int ren_name(const std::string oldname, const std::string newname); int feature_count(); int get_free(int &index); int database_save(int count); int get_feature(uint16_t id, float *feature); void set_autosave(bool auto_save); bool get_autosave(); //Add two customize member functions int set_taste(const std::string username, uint16_t taste_number); int get_taste(const std::string username); private: bool auto_save; int load_feature(); int erase_feature(int index); int save_feature(int index = 0); int reassign_feature(); int get_free_mapmagic(); int get_remain_map(); }; int FeatureDB::set_taste(const std::string username, uint16_t taste_number) { int index = FEATUREDATA_MAX_COUNT; for (int i = 0; i < FEATUREDATA_MAX_COUNT; i++) { if (s_FeatureData.item[i].magic == FEATUREDATA_MAGIC_VALID) { if (!strcmp(username.c_str(), s_FeatureData.item[i].name)) { index = i; } } } if (index != FEATUREDATA_MAX_COUNT) { s_FeatureData.item[index].coffee_taste = taste_number; return 0; } else { return -1; } } int FeatureDB::get_taste(const std::string username) { int index = FEATUREDATA_MAX_COUNT; int taste_number; for (int i = 0; i < FEATUREDATA_MAX_COUNT; i++) { if (s_FeatureData.item[i].magic == FEATUREDATA_MAGIC_VALID) { if (!strcmp(username.c_str(), s_FeatureData.item[i].name)) { index = i; } } } if (index != 
FEATUREDATA_MAX_COUNT) { taste_number = s_FeatureData.item[index].coffee_taste; return taste_number; } else { return -1; } }   In database.h, add the declarations of  DB_Set_Taste()  and  DB_Get_Taste()  functions, and in database.cpp, add the related codes of the above two functions. These two functions are equivalent to encapsulating the newly added member functions set_taste() and get_taste() of the FeatureDB class; int DB_Del(uint16_t id, std::string name); int DB_Del(string name); int DB_DelAll(); int DB_Ren(const std::string oldname, const std::string newname); int DB_GetFree(int &index); int DB_GetNames(std::vector<std::string> *names); int DB_Count(int *count); int DB_Save(int count); int DB_GetFeature(uint16_t id, float *feature); int DB_Add(uint16_t id, float *feature); int DB_Add(uint16_t id, std::string name, float *feature); int DB_Update(uint16_t id, float *feature); int DB_GetIDs(std::vector<uint16_t> &ids); int DB_GetName(uint16_t id, std::string &names); int DB_GenID(uint16_t *id); int DB_SetAutoSave(bool auto_save); // Add two customize functions int DB_Set_Taste(const std::string username, const uint16_t taste); int DB_Get_Taste(const std::string username); int DB_Set_Taste(const std::string username, const uint16_t taste) { int ret = DB_MGMT_FAILED; ret = DB_Lock(); if (DB_MGMT_OK == ret) { ret = s_DB->set_taste(username, taste); DB_UnLock(); } return ret; } int DB_Get_Taste(const std::string username) { int ret = DB_MGMT_FAILED; ret = DB_Lock(); if (DB_MGMT_OK == ret) { ret = s_DB->get_taste(username); DB_UnLock(); } return ret; } In sln_api.h, add the declarations of the functions  VIZN_SetTaste() ,  VIZN_GetTaste()  and  VIZN_Is_Rec_User() , and add the codes of the above three functions in sln_api.cpp. The VIZN_SetTaste() and VIZN_GetTaste() functions are equivalent to the encapsulation of the DB_Set_Taste() and DB_Get_Taste() functions. Why is it so complicated? To follow the code layering mechanism of the elock_oobe project and reduce the difficulty of code implementation through code layered encapsulation. /** * @brief Set user's favorite coffee taste. * * @Param clientHandle The client handler which required this action * @Param userName Pointer to a buffer which contains the name of the new user. * @Param taste Coffee taste */ vizn_api_status_t VIZN_SetTaste(VIZN_api_client_t *clientHandle, char *UserName, cfg_Coffee_taste taste); /** * @brief Set user's favorite coffee taste. * * @Param clientHandle The client handler which required this action * @Param userName Pointer to a buffer which contains the name of the new user. 
* @Param taste Pointer to the Coffee taste */ vizn_api_status_t VIZN_GetTaste(VIZN_api_client_t *clientHandle, char *UserName, int *taste); vizn_api_status_t VIZN_Is_Rec_User(VIZN_api_client_t *clientHandle, char *UserName); ~~~~~~~~~ vizn_api_status_t VIZN_SetTaste(VIZN_api_client_t *clientHandle, char *UserName, cfg_Coffee_taste taste) { int32_t status; if (!IsValidUserName(UserName)) { return kStatus_API_Layer_RenameUser_InvalidUserName; } status = DB_Set_Taste(std::string(UserName), (uint16_t)taste); if (status == 0) { return kStatus_API_Layer_Success; } else if (status == -1) { return kStatus_API_Layer_SetTaste_Failed; } } vizn_api_status_t VIZN_GetTaste(VIZN_api_client_t *clientHandle, char *UserName, int *taste) { int32_t status; if (!IsValidUserName(UserName)) { return kStatus_API_Layer_RenameUser_InvalidUserName; } *taste = DB_Get_Taste(std::string(UserName)); if (*taste != -1) { return kStatus_API_Layer_Success; } else { return kStatus_API_Layer_GetTaste_Failed; } } vizn_api_status_t VIZN_Is_Rec_User(VIZN_api_client_t *clientHandle, char *UserName) { if (!IsValidUserName(UserName)) { return kStatus_API_Layer_RenameUser_InvalidUserName; } return kStatus_API_Layer_Success; } In sln_api_init.cpp, declare the variable:  std::string Current_User = "" ; which is used to store the name corresponding to the face after recognition, and add the processing function  Coffee_Rec()  after successful face recognition in the structure variable ops2; std::string Current_User = " "; //Add customize function int Coffee_Rec(VIZN_api_client_t *pClient, face_info_t face_info); client_operations_t ops2 = { .detect = NULL, .recognize = Coffee_Rec,//NULL, .enrolment = NULL, }; //Add customize function int Coffee_Rec(VIZN_api_client_t *pClient, face_info_t face_info) { Current_User = face_info.name; return 1; } In sln_timers.h, increase MS_SYSTEM_LOCKED to extend the locked status time to 25 seconds; ~~~~~~~~ #define MS_SYSTEM_LOCKED 25000 //2000 // MS in which the board is in a locked state after a reg/rec. 
~~~~~~~~ In sln_cli.cpp, add three Shell commands: order, set_taste, get_taste to stand for the operations of brewing coffee, setting coffee taste, and checking coffee taste; SHELL_COMMAND_DEFINE(set_taste, (char *)"\r\n\"set_taste username <0|1|2|3|~>\": set user's favorite taste\r\n" "0 - Cappuccino\r\n" "1 - Black Coffee\r\n" "2 - Coffee latte\r\n" "3 - Flat White\r\n" "4 - Cortado\r\n" "5 - Mocha\r\n" "6 - Con Panna\r\n" "7 - Lungo\r\n" "8 - Ristretto\r\n" "9 - Others \r\n", FFI_CLI_SetTasteCommand, SHELL_IGNORE_PARAMETER_COUNT); SHELL_COMMAND_DEFINE(get_taste, (char *)"\r\n\"get_taste username\": return user's favorite taste \r\n", FFI_CLI_GetTasteCommand, SHELL_IGNORE_PARAMETER_COUNT); SHELL_COMMAND_DEFINE(order, (char *)"\r\n\"order <0|1|2|3|~>\": order a favorite taste \r\n", FFI_CLI_OrderCommand, SHELL_IGNORE_PARAMETER_COUNT); ~~~~~~ static shell_status_t FFI_CLI_SetTasteCommand(shell_handle_t shellContextHandle, int32_t argc, char **argv) { if (argc != 3) { SHELL_Printf(shellContextHandle, "Wrong parameters\r\n"); return kStatus_SHELL_Error; } return UsbShell_QueueSendFromISR(shellContextHandle, argc, argv, SHELL_EV_FFI_CLI_SET_TASTE); } static shell_status_t FFI_CLI_GetTasteCommand(shell_handle_t shellContextHandle, int32_t argc, char **argv) { if (argc != 2) { SHELL_Printf(shellContextHandle, "Wrong parameters\r\n"); return kStatus_SHELL_Error; } return UsbShell_QueueSendFromISR(shellContextHandle, argc, argv, SHELL_EV_FFI_CLI_GET_TASTE); } shell_status_t FFI_CLI_OrderCommand(shell_handle_t shellContextHandle, int32_t argc, char **argv) { if (argc > 2) { SHELL_Printf(shellContextHandle, "Wrong parameters\r\n"); return kStatus_SHELL_Error; } return UsbShell_QueueSendFromISR(shellContextHandle, argc, argv, SHELL_EV_FFI_CLI_ORDER); } ~~~~~~ shell_status_t RegisterFFICmds(shell_handle_t shellContextHandle) { SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(list)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(add)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(del)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(rename)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(verbose)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(camera)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(version)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(save)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(updateotw)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(reset)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(emotion)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(liveness)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(detection)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(display)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(wifi)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(app_type)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(low_power)); // Add three Shell commands SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(order)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(set_taste)); SHELL_RegisterCommand(shellContextHandle, SHELL_COMMAND(get_taste)); return kStatus_SHELL_Success; } In sln_cli.cpp, it needs to add corresponding codes for handle order, set_taste, get_taste instructions in task UsbShell_CmdProcess_Task else if (queueMsg.shellCommand == SHELL_EV_FFI_CLI_SET_TASTE) { int coffee_taste = atoi(queueMsg.argv[2]); if (coffee_taste >= Cappuccino && coffee_taste <= Others) { status = 
VIZN_SetTaste(&VIZN_API_CLIENT(Shell),(char *)queueMsg.argv[1], (cfg_Coffee_taste)coffee_taste); if (status == kStatus_API_Layer_Success) { SHELL_Printf(shellContextHandle, "User: %s like coffee taste: %s \r\n", queueMsg.argv[1], Coffee_type[coffee_taste]); } else { SHELL_Printf(shellContextHandle, "Cannot set coffee taste\r\n"); } } else { SHELL_Printf(shellContextHandle, "Unsupported coffee taste\r\n"); } } else if (queueMsg.shellCommand == SHELL_EV_FFI_CLI_GET_TASTE) { int get_taste_num = 0; status = VIZN_GetTaste(&VIZN_API_CLIENT(Shell),(char *)queueMsg.argv[1], &get_taste_num); if (status == kStatus_API_Layer_Success) { SHELL_Printf(shellContextHandle, "User: %s like coffee taste: %s \r\n", queueMsg.argv[1], Coffee_type[(cfg_Coffee_taste)(get_taste_num)]); } else { SHELL_Printf(shellContextHandle, "Cannot get coffee taste\r\n"); } } else if (queueMsg.shellCommand == SHELL_EV_FFI_CLI_ORDER) { status = VIZN_Is_Rec_User(&VIZN_API_CLIENT(Shell),(char *)Current_User.c_str()); if (status == kStatus_API_Layer_Success) { if (queueMsg.argc == 1) { int get_taste_num = 0; status = VIZN_GetTaste(&VIZN_API_CLIENT(Shell),(char*)Current_User.c_str(), &get_taste_num); if (status == kStatus_API_Layer_Success) { SHELL_Printf(shellContextHandle, "User: %s order the a cup of %s \r\n", Current_User.c_str(), Coffee_type[(cfg_Coffee_taste)(get_taste_num)]); } else { SHELL_Printf(shellContextHandle, "Sorry, please order again, Current user is %s\r\n",Current_User.c_str()); } } else if(queueMsg.argc == 2) { int coffee_taste = atoi(queueMsg.argv[1]); if (coffee_taste >= Cappuccino && coffee_taste <= Others) { status = VIZN_SetTaste(&VIZN_API_CLIENT(Shell),(char*)Current_User.c_str(), (cfg_Coffee_taste)coffee_taste); if (status == kStatus_API_Layer_Success) { SHELL_Printf(shellContextHandle, "User: %s order a cup of %s \r\n", Current_User.c_str(), Coffee_type[coffee_taste]); } else { SHELL_Printf(shellContextHandle, "Cannot set coffee taste, Current user is %s\r\n",Current_User.c_str()); } } else { SHELL_Printf(shellContextHandle, "Unsupported coffee taste\r\n"); } } } } Use the cafe logo of《Friends》to replace the original Welcome_home picture, use the BmpCvt tool to convert the picture into the corresponding array, and add it to welcomehome_320x122.h. 
static const unsigned short Coffee_shop_320_122[] = { 0x59E6, 0x6227, 0x6247, 0x59C5, 0x59C5, 0x59A5, 0x4103, 0x6A67, 0x6A47, 0x6227, 0x6A47, 0x6A68, 0x7268, 0x6A67, 0x6A67, 0x6A47, 0x72A9, 0x6A68, 0x7268, 0x6A48, 0x5A06, 0x6A88, 0x6A68, 0x6247, 0x6A47, 0x7289, 0x7289, 0x6A47, 0x6A47, 0x6A47, 0x6227, 0x6A68, 0x6206, 0x6A47, 0x5A26, 0x6247, 0x6227, 0x6A27, 0x4924, 0x836D, 0x5207, 0x7BAC, 0x5247, 0x83ED, 0x4A47, 0x2923, 0x7B8C, 0x49E5, 0x49E5, 0x4A05, 0x28C1, 0x5226, 0x6267, 0x6A87, 0x72E9, 0x6267, 0x6AA9, 0x5A27, 0x6AA9, 0x6AA9, 0x5A47, 0x6A88, 0x5A06, 0x5A47, 0x6AA9, 0x5A47, 0x62A9, 0x5206, 0x6288, 0x6268, 0x5A47, 0x5A27, 0x5A47, 0x5A27, 0x49E6, 0x4A07, 0x4A07, 0x5A89, 0x49C6, 0x5A48, 0x5A28, 0x5A47, 0x5226, 0x49E6, 0x49C6, 0x41A6, 0x5208, 0x2082, 0x52A8, 0x6B6B, 0x39A5, 0x39A5, 0x3964, 0x49E7, 0x3104, 0x49C7, 0x3945, 0x41A6, 0x28A2, 0x2061, 0x3965, 0x28E3, 0x1881, 0x3944, 0x3103, 0x3103, 0x3903, 0x4145, 0x51A6, 0x51C6, 0x4985, 0x51E6, 0x51E6, 0x61E7, 0x6A48, 0x6A28, 0x6A28, 0x6A27, 0x61E6, 0x6207, 0x6A68, 0x59E7, 0x4185, 0x51E6, 0x51A6, 0x6228, 0x5A07, 0x6228, 0x5A08, 0x4184, 0x41A5, 0x4164, 0x3944, 0x3944, 0x736B, 0x83ED, 0x41A5, 0x83ED, 0x6288, 0x8BAB, 0x836A, 0x6287, 0x6B2A, 0x5267, 0x83CD, 0x5A68, 0x5228, 0x3986, 0x3985, 0x7B0A, 0x6A67, 0x7267, 0x832B, 0x49A5, 0x6206, 0x8AC9, 0x72A8, 0x82C9, 0x82E9, 0x8309, 0x6A46, 0x8B2B, 0x3860, 0x8329, 0x6A67, 0x7288, 0x7268, 0x61E6, 0x7267, 0x6A67, 0x59C5, 0x51A4, 0x6A46, 0x7AA8, 0x6A26, 0x7287, 0x7AA8, 0x72A8, 0x72A9, 0x51C5, 0x5A27, 0x5A27, 0x3923, 0x ~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~ 0x7B8C, 0x734B, 0x6B0A, 0x83CD, 0x83ED, 0x8C0E, 0x7B8C, 0x7B6C, 0x20C2, 0x5227, 0x83ED, 0x6AE9, 0x734B, 0x62A9, 0x7B6B, 0x7B8C, 0x62E9, 0x7BAC, 0x7B6B, 0x732A, 0x940D, 0x83AC, 0x732A, 0x7309, 0x8BCC, 0x7309, 0x8BCD, 0x83AC, 0x7B6B, 0x940D, 0x3943, 0x942E, 0x7B6B, 0x734A, 0x7B8B, 0x62C8, 0x7B8B, 0x7B6A, 0x7BAB, 0x732A, 0x7B6B, 0x7B6B, 0x83CC, 0x6B09, 0x6AA9, 0x6AE9, 0x7B6B, 0x7B8B, 0x83AC, 0x734B, 0x6AC9, 0x6B0A, 0x734B, 0x734A, 0x62A8, 0x732A, 0x8C0E, 0x8BCD, 0x944F, 0x734B, 0x7B8B, 0x732A, 0x942E, 0x8BCD, 0x83AD, 0x732B, 0x6B0A, 0x6AEA, 0x62C9, 0x9C90, 0x28C2, 0x8BEE, 0x93EE, 0x8BCD, 0x4183, 0x838B, 0x7B6A, 0x6287, 0x8BCB }; Programming the new project After saving the modified code and recompile the sln_viznas_iot_elock_oobe project (as shown in the figure below), then connect the MCU-LINK to J6 on the SLN-VIZNAS-IOT, just like Fig9 shows. Fig8 Recompile code Fig9 MCU-LINK (Note: it needs to reselect the Flash driver, as the below figure shows.) Fig10 Flash driver After that, it's able to program the code project to the on-board Hyperflash. Test & Summary When the new code project boot-up, please refer to Get Started with the SLN-VIZNAS-IOT to use the serial terminal to test the newly added three Shell commands: orders, set_taste, and get_taste. Once a face is successfully recognized, the cafe logo will appear up (as shown in Fig11). Fig11 Cafe logo Definitely, this smart coffee machine seems like a 'toy' demo, and there is a lot of work to improve it. Below is the list of my future work plans, Use the LCD panel instead of USB to display; Connect an external amplifier to enable voice prompt feature; Enable the Wifi feature to connect to the App; Use the GUI library to enhance UI experience; Add a voice recognition feature to control; And I'll be glad to hear any comments from you.    
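As a quick reference for the code layering described earlier, here is a hedged usage sketch of the new taste API. The calling function and the user name "john" are made up for illustration; only VIZN_SetTaste/VIZN_GetTaste, VIZN_API_CLIENT(Shell), and the enum values shown above come from the modified project.

/* Hedged usage sketch, not part of the sln_viznas_iot_elock_oobe project. */
static void Taste_API_SmokeTest(void)
{
    int taste = -1;

    /* "john" must already be enrolled through the normal face-recognition flow. */
    if (VIZN_SetTaste(&VIZN_API_CLIENT(Shell), (char *)"john", Cappuccino) == kStatus_API_Layer_Success)
    {
        if (VIZN_GetTaste(&VIZN_API_CLIENT(Shell), (char *)"john", &taste) == kStatus_API_Layer_Success)
        {
            /* taste now holds the stored cfg_Coffee_taste value
             * (0 == Cappuccino in the shell command numbering). */
        }
    }
}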
Issue: IEEE 802.11 station Power Save mode does not work as expected with SDK 2.11.1, which supports the NXP wireless solutions 88W8987/88W8977/IW416.   Solution: Modify the structure in the file middleware/wifi/wifidriver/incl/mlan_fw.h, replacing "ENH_PS_MODES action" with "uint16_t action".    Note: This fix will officially be part of SDK 2.12.0.
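For clarity, here is a minimal sketch of the kind of edit described above. The structure name and the surrounding member are placeholders (the real structure in mlan_fw.h is larger); only the change of the action field's type is taken from the fix.

#include <stdint.h>

/* Placeholder structure; not the actual mlan_fw.h layout. */
typedef struct _ps_mode_enh_cmd_sketch
{
    /* Was: ENH_PS_MODES action;  -- an enum, whose storage size is
     * implementation-defined and typically wider than 16 bits. */
    uint16_t action;   /* Fix: pin the field to 16 bits. */
    /* ... remaining members unchanged ... */
    uint16_t params;   /* placeholder member */
} ps_mode_enh_cmd_sketch_t;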
Introduction
This document is an extension of section 3.1.3, "Software implementation", of application note AN12077, Using the i.MX RT FlexRAM. Before continuing with this document, it is important that you read that application note carefully.
Link to the application note.
Section 3.1.3 of the application note explains how to reallocate the FlexRAM through software within the startup code of your application. This document goes into further detail on the implications of making these modifications and on the best way to do it.
Prerequisites
RT10xx-EVK
The latest SDK, which you can download from the following link: Welcome | MCUXpresso SDK Builder
MCUXpresso IDE
Internal SRAM
The amount of internal SRAM varies depending on the RT device. In some cases, not all of the internal SRAM can be reallocated through the FlexRAM.
RT      | Internal SRAM | FlexRAM
RT1010  | Up to 128 KB  | Up to 128 KB
RT1015  | Up to 128 KB  | Up to 128 KB
RT1020  | Up to 256 KB  | Up to 256 KB
RT1050  | Up to 512 KB  | Up to 512 KB
RT1060  | Up to 1 MB    | Up to 512 KB
RT1064  | Up to 1 MB    | Up to 512 KB
In the case of the RT106x, only 512 KB out of the 1 MB of internal SRAM can be reallocated through the FlexRAM as DTCM, ITCM, and OCRAM. The remaining 512 KB are OCRAM and cannot be reallocated. For all the other RT10xx devices you can reallocate the whole internal SRAM as DTCM, ITCM, or OCRAM. Section 3.1.3.1 of the application note explains the size limitations when reallocating the FlexRAM. It is important to mention that the ROM bootloader in all RT10xx parts uses the OCRAM, hence you should keep some OCRAM when reallocating the FlexRAM; this does not apply to the RT106x, since you will always have the 512 KB of OCRAM that cannot be reallocated. To learn how much OCRAM each RT family needs, please refer to section 2.1.1.1 of the application note.
Implementation in MCUXpresso IDE
First, you need to import any of the SDK examples into your MCUXpresso IDE workspace. In my case, I imported the igpio_led_output example for the RT1050-EVKB. If you compile this project, you will see that the default FlexRAM configuration on the RT1050-EVKB is the following:
SRAM_DTC 128 KB
SRAM_ITC 128 KB
SRAM_OC 256 KB
Now we need to go to the Reset handler located in the file startup_mimxrt1052.c. Reallocating the FlexRAM has to be done before the FlexRAM is used, which is why it is done inside the Reset handler.
The registers that we need to modify to reallocate the FlexRAM are IOMUXC_GPR_GPR16 and IOMUXC_GPR_GPR17, so first we need their addresses at hand.
Register          | Address
IOMUXC_GPR_GPR16  | 0x400AC040
IOMUXC_GPR_GPR17  | 0x400AC044
Now we need to determine how we want to reallocate the FlexRAM, to derive the value that we need to load into register IOMUXC_GPR_GPR17. In my case, I want the following configuration:
SRAM_DTC 256 KB
SRAM_ITC 128 KB
SRAM_OC 128 KB
When choosing the new FlexRAM sizes, be sure to choose a configuration that you can also apply through the FlexRAM fuses; I will explain the reason for this later. The configurations achievable through the fuses are shown in the Fusemap chapter of the reference manual, in the table named "Fusemap Descriptions"; the fuse name is "Default_FlexRAM_Part".
Based on the following explanation of the IOMUXC_GPR_GPR17 register:
The value that I need to load to the register is 0xAAAAFF55.
Where the first  4 banks correspond to the 128KB of SRAM_OC, the next 4 banks correspond to the 128KB of SRAM_ITC and the last 8 banks are the 256KB of SRAM_DTC.  Now, that we have all the addresses and the values that we need we can start writing the code in the Reset handler. The first thing to do is load the new value into the register IOMUXC_GPR_GPR17. After, we need to configure register IOMUXC_GPR_GPR16 to specify that the FlexRAM bank configuration should be taken from register IOMUXC_GPR_GPR17 instead of the fuses. Then if in your new configuration of the FlexRAM either the SRAM_DTC or SRAM_ITC are of size 0, you need to disable these memories in the register IOMUXC_GPR_GPR16. At the end your code should look like the following:    void ResetISR(void) { // Disable interrupts __asm volatile ("cpsid i"); /* Reallocating the FlexRAM */ __asm (".syntax unified\n" "LDR R0, =0x400ac044\n"//Address of register IOMUXC_GPR_GPR17 "LDR R1, =0xaaaaff55\n"//FlexRAM configuration DTC = 265KB, ITC = 128KB, OC = 128KB "STR R1,[R0]\n" "LDR R0,=0x400ac040\n"//Address of register IOMUXC_GPR_GPR16 "LDR R1,[R0]\n" "ORR R1,R1,#4\n"//The 4 corresponds to setting the FLEXRAM_BANK_CFG_SEL bit in register IOMUXC_GPR_GPR16 "STR R1,[R0]\n" #ifdef FLEXRAM_ITCM_ZERO_SIZE "LDR R0,=0x400ac040\n"//Address of register IOMUXC_GPR_GPR16 "LDR R1,[R0]\n" "AND R1,R1,#0xfffffffe\n"//Disabling SRAM_ITC in register IOMUXC_GPR_GPR16 "STR R1,[R0]\n" #endif #ifdef FLEXRAM_DTCM_ZERO_SIZE "LDR R0,=0x400ac040\n"//Address of register IOMUXC_GPR_GPR16 "LDR R1,[R0]\n" "AND R1,R1,#0xfffffffd\n"//Disabling SRAM_DTC in register IOMUXC_GPR_GPR16 "STR R1,[R0]\n" #endif ".syntax divided\n"); #if defined (__USE_CMSIS) // If __USE_CMSIS defined, then call CMSIS SystemInit code SystemInit(); ...‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍   If you compile your project you will see the memory distribution that appears on the console is still the default configuration.  This is because we did modify the Reset handler to reallocate the FlexRAM but we haven't modified the linker file to match these new sizes. To do this you need to go to the properties of your project. Once in the properties, you need to go to C/C++ Build -> MCU settings. Once you are in the MCU settings you need to modify the sizes of the SRAM memories to match the new configuration.  When you make these changes click Apply and Close. After making these changes if you compile the project you will see the memory distribution that appears in the console is now matching the new sizes.  Now we need to modify the Memory Protection Unit (MPU) to match these new sizes of the memories. To do this you need to go to the function BOARD_ConfigMPU inside the file board.c. Inside this function, you need to locate regions 5, 6, and 7 which correspond to SRAM_ITC, SRAM_DTC, and SRAM_OC respectively. 
Same as for register IOMUXC_GPR_GPR14, if the new size of your memory is not 32, 64, 128, 256, or 512 you need to choose the next greater number. Your configuration should look like the following:    /* Region 5 setting: Memory with Normal type, not shareable, outer/inner write back */ MPU->RBAR = ARM_MPU_RBAR(5, 0x00000000U); MPU->RASR = ARM_MPU_RASR(0, ARM_MPU_AP_FULL, 0, 0, 1, 1, 0, ARM_MPU_REGION_SIZE_128KB); /* Region 6 setting: Memory with Normal type, not shareable, outer/inner write back */ MPU->RBAR = ARM_MPU_RBAR(6, 0x20000000U); MPU->RASR = ARM_MPU_RASR(0, ARM_MPU_AP_FULL, 0, 0, 1, 1, 0, ARM_MPU_REGION_SIZE_256KB); /* Region 7 setting: Memory with Normal type, not shareable, outer/inner write back */ MPU->RBAR = ARM_MPU_RBAR(7, 0x20200000U); MPU->RASR = ARM_MPU_RASR(0, ARM_MPU_AP_FULL, 0, 0, 1, 1, 0, ARM_MPU_REGION_SIZE_128KB);‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍‍   We need to change the image entry address to the Reset handler. To do this, you need to go to the file fsl_flexspi_nor_boot.c inside the xip folder. You need to declare the ResetISR and change the entry address in the image vector table.  Finally, we need to place the stack at the start of the DTCM memory. To do this, we need to go to the properties of your project. From there, we have to C/C++ Build and Manage Linker Script.  From there, we will need to add two more assembly instructions in our ResetISR function. We have to add these two instructions at the beginning of our assembly code:  In the attached c file, you'll find all the assembly instructions mentioned above.  That's it, these are all the changes that you need to make to reallocate the FlexRAM during the startup.  Debug Session  To verify that all the modifications that we just did were correct we will launch the debug session. As soon as we reach the main, before running the application, we will go to the peripheral view to see registers IOMUXC_GPR_GPR16, and IOMUXC_GPR_GPR17 and verify that the values are the correct ones. In register IOMUXC_GPR_GPR16 as shown in the image below we configure the FLEXRAM_BANK_CFG_SEL as 1 to use the use register IOMUXC_GPR_GPR17 to configure the FlexRAM.  Finally, in register IOMUXC_GPR_GPR17 we can see the value 0xAAAAFF55 that corresponds to the new configuration.  Reallocating the FlexRAM through the Fuses  We just saw how to reallocate the FlexRAM through software by writing some code in the Reset Handler. This procedure works fine, however, it's recommended that you use this approach to test the different sizes that you can configure but once you find the correct configuration for your application we highly recommend that you configure these new sizes through the fuses instead of using the register IOMUXC_GPR_GPR17. There are lots of dangerous areas in reconfiguring the FlexRAM in code. It pretty much all boils down to the fact that any code/data/stack information written to the RAM can end up changing location during the reallocation.  This is the reason why once you find the correct configuration, you should apply it through the fuses. If you use the fuses to configure the FlexRAM, then you don't have the same concerns about moving around code and data, as the fuse settings are applied as a hardware default.  Keep in mind that once you burn the fuses there's no way back! This is why it's important that you first try the configuration through the software method. 
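To make that trial-and-error step easier, below is a small hedged helper (my own naming, not part of the SDK or the application note) that builds the IOMUXC_GPR_GPR17 bank value from a desired split of 32 KB banks, using the same bank ordering and 2-bit encoding as the 0xAAAAFF55 example above (01 = OCRAM, 10 = DTCM, 11 = ITCM).

#include <stdint.h>

/* Hedged helper: banks are assigned OCRAM first, then ITCM, then DTCM,
 * mirroring the layout used in this document. */
static uint32_t flexram_bank_cfg(uint32_t ocramBanks, uint32_t itcmBanks, uint32_t dtcmBanks)
{
    uint32_t cfg  = 0U;
    uint32_t bank = 0U;

    for (uint32_t i = 0U; i < ocramBanks; i++, bank++)
        cfg |= 0x1U << (2U * bank); /* 01 = OCRAM */
    for (uint32_t i = 0U; i < itcmBanks; i++, bank++)
        cfg |= 0x3U << (2U * bank); /* 11 = ITCM */
    for (uint32_t i = 0U; i < dtcmBanks; i++, bank++)
        cfg |= 0x2U << (2U * bank); /* 10 = DTCM */

    return cfg; /* flexram_bank_cfg(4, 4, 8) == 0xAAAAFF55, the split used above */
}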
Once you burn the fuses you won't need to modify the Reset handler, you only need to modify the MPU to change the size of regions as we saw before and the MCU settings of your project to match the new memory sizes that you configured through the fuses.  The fuse in charge of the FlexRAM configuration is Default_FlexRAM_Part, the address of this fuse is 0x6D0[15:13]. You can find more information about this fuse and the different configurations in the Fusemap chapter of the reference manual.  To burn the fuses I recommend using either the blhost or the MCUBootUtility.  Link to download the blhost.  Link to the MCUBootUtility webpage.    I hope you find this document helpful!  Víctor Jiménez 
RT106L_S voice control system based on the Baidu cloud 1 Introduction     The NXP RT106L and RT106S are voice recognition chip which is used for offline local voice control, SLN-LOCAL-IOT is based on RT106L, SLN-LOCAL2-IOT is a new local speech recognition board based on RT106S. The board includes the murata 1DX wifi/BLE module, the AFE voice analog front end, the ASR recognition system, the external flash, 2 microphones, and the analog voice amplifier and speakers. The voice recognition process for SLN-LOCAL-IOT and SLN-LOCAL2-IOT is different and the new SLN-LOCAL2-IOT is recommended.     This article is based on the voice control board SLN-LOCAL/2-IOT to implement the following block diagram functions: Pic 1 Use the PC-side speed model tool (Cyberon DSMT) to generate WW(wake word) and VC(voice command) Command related voice engine binary files , which will be used by the demo code. This system is mainly used for the Chinese word recognition, when the user says Chinese word: "小恩小恩", it wakes up SLN-LOCAL/2-IOT, and the board gives feedback "小恩来了,请吩咐". Then system enter the voice recognition stage, the user can say the voice recognition command: “开红灯”,“关红灯”,“开绿灯”,“关绿灯”,“灯闪烁”,“开远程灯”,“关远程灯”, after recognition, the board gives feedback "好的". Among them, “开红灯”,“关红灯”,“开绿灯”,“关绿灯”,“灯闪烁”,the five commands are used for the local light switch, while the 开远程灯”,“关远程灯“two commands can through network communication Baidu cloud control the additional MIMXRT1060-EVK development board light switch. SLN-LOCAL/2-IOT through the WIFI module access to the Internet with MQTT protocol to achieve communication with Baidu cloud, when dectect the remote control command, publish the json packets to Baidu cloud, while MIMRT1060-EVK subscribe Baidu cloud data, will receive data from the IOT board and analyze the EVK board led control. PC side can use MQTT.fx software to subscribe the Baidu cloud data, it also can send data to the device to achieve remote control function directly.  Now, will give the detail content about how to use the SLN-LOCAL/2-IOT SDK demo realize the customized Chinese wake command and voice command, and remote control the MIMXRT1060-EVK through the Baidu Cloud.     2 Platform establish 2.1 Used platform SLN-LOCAL-IOT/SLN-LOCAL2-IOT MIMXRT1060-EVK MQTT.fx SDK_2_8_0_SLN-LOCAL2-IOT MCUXPresso IDE Segger JLINK Baidu Smart Cloud: Baidu cloud control+ TTS Audacity:audio file format convert tool WAVToCode:wav convert to the c array code, which used for the demo tilte play MCUBootUtility: used to burn the feedback audio file to the filesystem Cyberon DSMT: wake word and voice detect command generation tool DSMT is the very important tool to realize the wake word and voice dection, the apply follow is: Pic 2 2.2 Baidu Smart cloud 2.2.1 Baidu cloud IOT control system Enter the IoT Hub: https://cloud.baidu.com/product/iot.html     Click used now. 2.2.1.1 Create device project Create a project, select the device type, and enter the project name. Device types can use shadows as images of devices in the cloud to see directly how data is changing. Once created, an endpoint is generated, along with the corresponding address: Pic 3 2.2.1.2 Create Thing model The Thing model is mainly to establish various properties needed in the shadow, such as temperature, humidity, other variables, and the type of value given, in fact, it is also the json item in the actual MQTT communication.    
Click the newly created device-type project where you can create a new thing model or shadow: Pic 4    Here create 3 attributes:LEDstatus,humid,temp It is used to represent the led status, humidity, temperature and so on, which is convenient for communication and control between the cloud and RT board. Once created, you get the following picture:   Pic 5   2.2.1.3 Create Thing shadow In the device-type project, you can select the shadow, build your own shadow platform, enter the name, and select the object model as the newly created Thing model containing three properties, after the create, we can get the details of the shadow:   Pic 6 At the same time will also generate the shadow-related address, names and keys, my test platform situation is as follows: TCP Address: tcp://rndrjc9.mqtt.iot.gz.baidubce.com:1883 SSL Address: ssl://rndrjc9.mqtt.iot.gz.baidubce.com:1884 WSS Address: wss://rndrjc9.mqtt.iot.gz.baidubce.com:443 name: rndrjc9/RT1060BTCDShadow key: y92ewvgjz23nzhgn Port 1883, does not support transmission data encryption Port 1884, supports SSL/TLS encrypted transmission Port 8884, which supports wesockets-style connections, also contains SSL encryption. This article uses a 1883 port with no transmission data encryption for easy testing. So far, Baidu cloud device-type cloud shadow has been completed, the following can use MQTTfx tools to connect and test. In practice, it is recommended that customers build their own Baidu cloud connection, the above user key is for reference only.   2.2.2 Online TTS    SLN-LOCAL/2-IOT board recognizes wake-up words, recognition words, or when powering on, you need to add corresponding demo audio, such as: "百度云端语音测试demo ", "小恩来啦!请吩咐“,"好的". These words need to do a text-to-wav audio file synthesis, here is Baidu Smart Cloud's online TTS function, the specific operation can refer to the following documents: https://ai.baidu.com/ai-doc/SPEECH/jk38y8gno   Once the base audio library is opened, use the main.py provided in the link above and modify it to add the Chinese field you want to convert to the file "TEXT" and add the audio file to be converted in "save_file" such as xxx .wav, using the command: python main.py to complete the conversion, and generate the audio format corresponding to the text, such as .mp3, .wav. Pic 7   After getting the wav file, it can’t be used directly, we need to note that for SLN-LOCAL/2-IOT board, you need to identify the audio source of the 48K sample rate with 16bit, so we need to use the Audioacity Audio tool to convert the audio file format to 48K16bit wav. Import 16K16bit wav files generated by Baidu TTS into the Audioacity tool, select project rate of 48Khz, file->export->export as WAV, select encoding as signed 16bit PCM, and regenerate 48Khz16bit wav for use. Pic 8 “百度云端语音测试demo“:Used for power-on broadcasting, demo name broadcasting, it is stored in RT demo code, so you need to convert it to a 16bit C code array and add it to the project. "小恩来啦!请吩咐",“好的“:voice detect feedback, it is saved in the filesystem ZH01,ZH02 area. 2.3 playback audio data prepare and burn   There are two playback audio file, it is "小恩来啦!请吩咐",“好的“,it is saved in the filesystem ZH01,ZH02 area. 
Filesystem memory map like this: Pic 9 So, we need to convert the 48K16bit wav file to the filesystem needed format, we need to use the official tool::Ivaldi_sln_local2_iot Reference document:SLN-LOCAL2-IOT-DG chapter 10.1 Generating filesystem-compatible files Use bash input the commands like the following picture: Pic10 Use the convert command to get the playback bin file: python file_format.py -if xiaoencoming_48k16bit.wav -of xiaoencoming_48k16bit.bin -ft H At last, it will generate the file: "小恩来啦!请吩咐"->xiaoencoming_48k16bit.bin,burn to flash address 0x6184_0000 “好的”->OK_48k16bit.bin, burn to flash address 0x6180_0000 Then, use MCUBootUtility tool burn the above two file to the related images. Here, take OK_48k16bit.bin as an example, demo enter the serial download mode(J27-0), power off and power on. Flash chip need to select hyper flash IS26KSXXS, use the boot device memory windows, write button to burn the .bin file to the related address, length is 0X40000 Pic11 Pic12 xiaoencoming_48k16bit.bin can use the same method to download to 0x6184_0000,Length is 0X40000.   2.4 Demo audio prepare and add The prepared baiduclouddemo_48K16bit.wav(“百度云端语音测试demo “) need to convert to the 16bit C array code, and put to the project code, calls by the code, this is used for the demo mode play. The convert need to use the WAVToCode, the operation like this: Pic 13 The generated baiducloulddemo_48K16bit.c,add it to the demo project C files: sln_local_iot_local_demo->audio->demos->smart_home.c。 2.5 WW and VC prepare Wake-up word are generated through the cyberon DSMT tool, which supports a wide range of language, customers can request the tool through Figure 2. The Chinese wake-up words and voice command words in this article are also generated through DSMT. DSMT can have multiple groups, group1 as a wake-up word configuration, CmdMapID s 1. Other groups act as voice command words, such as CMD-IOT in this article, cmdMapID=2. Pic 14   Pic 15 Wake word continuously detects the input audio stream, uses group1, and if successfully wakes up, will do the voice command detection uses group2, or other identifying groups as well as custom groups. The wake-up words using the DSMT tool, the configuration are as follows: Pic 16 The WW can support more words, customer can add the needed one in the group 1. Use the DSMT configure VC like this: Pic 17 Then, save the file, code used file are: _witMapID.bin, CMD_IOT.xml,WW.xml. In the generated files, CYBase.mod is the base model, WW.mod is the WW model, CMD_IOT.mod is the VC model. After Pic 16,17, it finishes the WW and VC command prepare, we can put the DSMT project to the RT106S demo project folder: sln_local2_iot_local_demo\local_voice\oob_demo_zh 3 Code prepare Based on the official SLN-LOCAL2-IOT SDK local_demo, the code in this article modifies the Chinese wake-up words and recognition words (or you can build a new customer custom group directly), add local voice detect the led status operations, Then feedback Chinese audio, demo Chinese audio, Wifi network communication MQTT protocol code, and Baidu cloud shadow connection publish. 
Source reference code SDK path: SDK_2_8_0_SLN-LOCAL2-IOT\boards\sln_local2_iot\sln_voice_examples\local_demo   SDK_2_8_0_SLN-LOCAL2-IOT\boards\sln_local2_iot\sln_boot_apps SLN-LOCAL2-IOT and SLN-LOCAL-IOT code are nearly the same, the only difference is that the ASR library file is different, for RT106S (SLN-LOCAL2-IOT) using SDK it’s own libsln_asr.a library, for RT106L (SLN-LOCAL-IOT) need to use the corresponding libsln_asr_eval.a library.    Importing code requires three projects: local_demo, bootloader, bootstrap. The three projects store in different spaces. See SLN-LOCAL2-IOT-DG .pdf, chapter 3.3 Device memory map    This is the 3 chip project boot process: Pic 18 This document is for demo testing and requires debug, so this article turns off the encryption mechanism, configures bootloader, bootstrap engineering macro definition: DISABLE_IMAGE_VERIFICATION = 1, and uses JLINK to connect SLN-LOCAL/2-IOT's SWD interface to burn code. The following is to add modification code for app local_demo projects. 3.1 sln-local/2-iot code Sln-local-iot, sln-local2-iot platform, the following modification are the same for the two platform. 3.1.1 Voice recognition related code 1)Demo audio play Play content:“百度云端语音测试demo“ sln_local2_iot_local_demo_xe_ledwifi\audio\demos\ smart_home.c content is replaced by the previously generated baiducloulddemo_48K16bit.C. audio_samples.h,modify: #define SMART_HOME_DEMO_CLIP_SIZE 110733 This code is used for the main.c announce_demo API play:         case ASR_CMD_IOT:             ret = demo_play_clip((uint8_t *)smart_home_demo_clip, sizeof(smart_home_demo_clip));   2)command print information #define NUMBER_OF_IOT_CMDS      7 IndexCommands.h static char *cmd_iot_en[] = {"Red led on", "Red led off", "Green led on", "Green led off",                              "cycle led",        "remote led on",         "remote led off"}; static char *cmd_iot_zh[] = {"开红灯", "关红灯", "开绿灯", "关绿灯", "灯闪烁", "开远程灯", "关远程灯"}; Here is the source code modification using IOT, you can actually add your own speech recognition group directly, and add the relevant command identification.   3)sln_local_voice.c Line757 , add led-related notification information in ASR_CMD_IOT mode. oob_demo_control.ledCmd = g_asrControl.result.keywordID[1];     The code is used to obtain the recognized VC command data, and the value of keywordID[1] represents the number. This number can let the code know which detail voice is detected. so that you can do specific things in the app based on the value of ledcmd. The value of keywordID[1] corresponds to Command List in Figure 17. For example, “开远程灯“, if woke up, and recognized "开远程灯", then keywordID[1] is 5, and will transfer to oob_demo_control.ledCmd, which will be used in the appTask API to realize the detail control. 4) main.c void appTask(void *arg) Under case kCommandGeneric: if the language is Chinese, then add the recognition related control code, at first, it will play the feedback as “好的”. Then, it will check the voice detect value, give the related local led control. 
else if (oob_demo_control.language == ASR_CHINESE) { // play audio "OK" in Chinese #if defined(SLN_LOCAL2_RD) ret = audio_play_clip((uint8_t *)AUDIO_ZH_01_FILE_ADDR, AUDIO_ZH_01_FILE_SIZE); #elif defined(SLN_LOCAL2_IOT) ret = audio_play_clip(AUDIO_ZH_01_FILE); #endif //kerry add operation code==================================================begin RGB_LED_SetColor(LED_COLOR_OFF); if (oob_demo_control.ledCmd == LED_RED_ON) { RGB_LED_SetColor(LED_COLOR_RED); vTaskDelay(5000); } else if (oob_demo_control.ledCmd == LED_RED_OFF) { RGB_LED_SetColor(LED_COLOR_OFF); vTaskDelay(5000); } else if (oob_demo_control.ledCmd == LED_BLUE_ON) { RGB_LED_SetColor(LED_COLOR_BLUE); vTaskDelay(5000); } else if (oob_demo_control.ledCmd == LED_BLUE_OFF) { RGB_LED_SetColor(LED_COLOR_OFF); vTaskDelay(5000); } else if (oob_demo_control.ledCmd == CYCLE_SLOW) { for (int i = 0; i < 3; i++) { RGB_LED_SetColor(LED_COLOR_RED); vTaskDelay(400); RGB_LED_SetColor(LED_COLOR_OFF); RGB_LED_SetColor(LED_COLOR_GREEN); vTaskDelay(400); RGB_LED_SetColor(LED_COLOR_OFF); RGB_LED_SetColor(LED_COLOR_BLUE); vTaskDelay(400); } } … } In addition to local voice recognition control, this article also add remote control functions, mainly through wifi connection, use the mqtt protocol to connect Baidu cloud server, when local speech recognition get the remote control command, it publish the corresponding control message to Baidu cloud, and then the cloud send the message to the client which subscribe this message,  after the client get the message, it will refer to the message content do the related control.   3.1.3 Network connection code 1)sln_local2_iot_local_demo_xe_ledwifi\lwip\src\apps\mqtt     Add mqtt.c 2)sln_local2_iot_local_demo_xe_ledwifi\lwip\src\include\lwip\apps Add mqtt.h, mqtt_opts.h,mqtt_prv.h The related mqtt driver is from the RT1060 sdk, which already added in the attachment project. 3)sln_tcp_server.c   Add MQTT application layer API function code, client ID, server host, MQTT server port number, user name, password, subscription topic, publishing topic and data, etc., more details, check the attachment code.    The MQTT application code is ported from the mqtt project of the RT1060 SDK and added to the sln_tcp_server.c. TCP_OTA_Server function is used to initialize the wifi network, realize wifi connection, connect to the network, resolve Baidu cloud server URL to get IP, and then connect Baidu cloud server through mqtt, after the successful connection, publish the message at first, so that after power-up through mqttfx to see whether the power on network publishing message is successful. 
TCP_OTA_Server function code is as follows: static void TCP_OTA_Server(void *param) //kerry consider add mqtt related code { err_t err = ERR_OK; uint8_t status = kCommon_Failed; #if USE_WIFI_CONNECTION /* Start the WiFi and connect to the network */ APP_NETWORK_Init(); while (status != kCommon_Success) { status_t statusConnect; statusConnect = APP_NETWORK_Wifi_Connect(true, true); if (WIFI_CONNECT_SUCCESS == statusConnect) { status = kCommon_Success; } else if (WIFI_CONNECT_NO_CRED == statusConnect) { APP_NETWORK_Uninit(); /* If there are no credential in flash delete the TPC server task */ vTaskDelete(NULL); } else { status = kCommon_Failed; } } #endif #if USE_ETHERNET_CONNECTION APP_NETWORK_Init(true); #endif /* Wait for wifi/eth to connect */ while (0 == get_connect_state()) { /* Give time to the network task to connect */ vTaskDelay(1000); } configPRINTF(("TCP server start\r\n")); configPRINTF(("MQTT connection start\r\n")); mqtt_client = mqtt_client_new(); if (mqtt_client == NULL) { configPRINTF(("mqtt_client_new() failed.\r\n");) while (1) { } } if (ipaddr_aton(EXAMPLE_MQTT_SERVER_HOST, &mqtt_addr) && IP_IS_V4(&mqtt_addr)) { /* Already an IP address */ err = ERR_OK; } else { /* Resolve MQTT broker's host name to an IP address */ configPRINTF(("Resolving \"%s\"...\r\n", EXAMPLE_MQTT_SERVER_HOST)); err = netconn_gethostbyname(EXAMPLE_MQTT_SERVER_HOST, &mqtt_addr); configPRINTF(("Resolving status: %d.\r\n", err)); } if (err == ERR_OK) { configPRINTF(("connect to mqtt\r\n")); /* Start connecting to MQTT broker from tcpip_thread */ err = tcpip_callback(connect_to_mqtt, NULL); configPRINTF(("connect status: %d.\r\n", err)); if (err != ERR_OK) { configPRINTF(("Failed to invoke broker connection on the tcpip_thread: %d.\r\n", err)); } } else { configPRINTF(("Failed to obtain IP address: %d.\r\n", err)); } int i=0; /* Publish some messages */ for (i = 0; i < 5;) { configPRINTF(("connect status enter: %d.\r\n", connected)); if (connected) { err = tcpip_callback(publish_message_start, NULL); if (err != ERR_OK) { configPRINTF(("Failed to invoke publishing of a message on the tcpip_thread: %d.\r\n", err)); } i++; } sys_msleep(1000U); } vTaskDelete(NULL); } Please note the following published json data, it can’t be publish directly in the code. {   "reported": {     "LEDstatus": false,     "humid": 88,     "temp": 22   } } Which need to use this web https://www.bejson.com/ realize the json data compression and convert: {\"reported\" : {     \"LEDstatus\" : true,     \"humid\" : 88,     \"temp\" : 11    } }   4)main appTask Under case kCommandGeneric: , if the language is Chinese, then add the corresponding voice recognition control code. "开远程灯": turn on the local yellow light, publish the “remote led on” mqtt message to Baidu cloud, control remote 1060EVK board lights on. "关远程灯": turn on the local white light, publish the “remote led off” mqtt message to Baidu cloud, control the remote 1060EVK board light off. 
Related operation code: else if (oob_demo_control.ledCmd == LED_REMOTE_ON) { RGB_LED_SetColor(LED_COLOR_YELLOW); vTaskDelay(5000); err_t err = ERR_OK; err = tcpip_callback(publish_message_on, NULL); if (err != ERR_OK) { configPRINTF(("Failed to invoke publishing of a message on the tcpip_thread: %d.\r\n", err)); } } else if (oob_demo_control.ledCmd == LED_REMOTE_OFF) { RGB_LED_SetColor(LED_COLOR_WHITE); vTaskDelay(5000); err_t err = ERR_OK; err = tcpip_callback(publish_message_off, NULL); if (err != ERR_OK) { configPRINTF(("Failed to invoke publishing of a message on the tcpip_thread: %d.\r\n", err)); } } 3.2 MIMXRT1060-EVK code The main function of the MIMXRT1060-EVK code is to configure another client in the cloud, subscribe to the message published by SLN-LOCAL/2-IOT which detect the remote command, and then the LED on the control board is used to test the voice recognition remote control function, this code is based on Ethernet, through the Ethernet port on the board, to achieve network communication, and then use mqtt to connect baidu cloud, and subscribe the message from local2, This enables the reception and execution of the Local2 command. the network code part is similar to SLN-LOCAL2-IOT board network code, the servers, cloud account passwords, etc. are all the same, the main function is to subscribe messages. See the code from attachment RT1060, lwip_mqtt_freertos.c file. When receives data published by the server, it needs to do a data analysis to get the status of the led light and then control it. Normal data from Baidu cloud shadow sent as follows Received 253 bytes from the topic "$baidu/iot/shadow/RT1060BTCDShadow/update/accepted": "{"requestId":"2fc0ca29-63c0-4200-843f-e279e0f019d3","reported":{"LEDstatus":false,"humid":44,"temp":33},"desired":{},"lastUpdatedTime":{"reported":{"LEDstatus":1635240225296,"humid":1635240225296,"temp":1635240225296},"desired":{}},"profileVersion":159}" Then you need to parse the data of LEDstatus from the received data, whether it is false or true. Because the amount of data is small, there is no json-driven parsing here, just pure data parsing, adding the following parsing code to the mqtt_incoming_data_cb function: mqtt_rec_data.mqttindex = mqtt_rec_data.mqttindex + len; if(mqtt_rec_data.mqttindex >= 250) { PRINTF("kerry test \r\n"); PRINTF("idex= %d", mqtt_rec_data.mqttindex); datap = strstr((char*)mqtt_rec_data.mqttrecdata,"LEDstatus"); if(datap != NULL) { if(!strncmp(datap+11,strtrue,4))//char strtrue[]="true"; { GPIO_PinWrite(GPIO1, 3, 1U); //pull high PRINTF("\r\ntrue"); } else if(!strncmp(datap+11,strfalse,5))//char strfalse[]="false"; { GPIO_PinWrite(GPIO1, 3, 0U); //pull low PRINTF("\r\nfalse"); } } mqtt_rec_data.mqttindex =0; It use the strstr search the “LEDstatus“ in the received data, and get the pointer position, then add the fixed length to get the LED status is true or flash. If it is true, turn on the led, if it is false, turn off the led. 4 Test Result    This section gives the test results and video of the system. Before testing the voice function, first use MQTTfx to test baidu cloud connection, release, subscription is no problem, and then test sln-local2-iot combined with mimxrt1060-evk voice wake-up recognition and remote control functions.    
To join the SLN-LOCAL2-IOT to the WiFi hotspot, enter the following command in the print terminal: setup AWS kerry123456

4.1 MQTT.fx test of the Baidu cloud connection
MQTT.fx is an Eclipse Paho-based MQTT client tool written in Java that supports subscribing and publishing messages through topics.
4.1.1 MQTT.fx configuration
Download and install the tool, then open it. First, the connection needs to be configured: click edit connection:
Pic 19
Profile name: connection name
Profile type: MQTT broker
Broker address: the broker address generated by Baidu cloud, using port 1883 without encryption.
Broker port: 1883, no encryption
Client ID: RT1060BTCDShadow. Note that this name should be the same as the cloud shadow name, otherwise the connection will not be detected on the Baidu web page. If the Client ID is the same as the shadow name, then when MQTT.fx connects, the online status on the cloud side also shows that the connection is OK.
User credentials: add the thing's user name and password from the Baidu cloud.
After the configuration, click connect and refresh the website.
Before connection:
Pic 20
After connection:
Pic 21
4.1.2 MQTT.fx subscribe
When it comes to subscribing and publishing, what are the corresponding topics? Open your thing shadow, select the interaction tab, and the page shows the related topics:
Pic 22
Subscribe topic: $baidu/iot/shadow/RT1060BTCDShadow/update/accepted
Publish topic: $baidu/iot/shadow/RT1060BTCDShadow/update
Pic 23
Click subscribe; we can see it is now ready to receive data.
4.1.3 MQTT.fx publish
Publishing needs the topic: $baidu/iot/shadow/RT1060BTCDShadow/update
It also needs the content, which uses the JSON data shown below.
Pic 24
Here, we can use this JSON data:
{
  "reported" : {
    "LEDstatus" : true,
    "humid" : 88,
    "temp" : 11
   }
}
The JSON data can also be checked with this website: https://www.bejson.com/jsonviewernew/
Pic 25
Input the publish data and click the publish button:
Pic 26
4.1.4 Publish data test result
Before publishing, clear the thing data on the website:
Pic 27
Publish the data with MQTT.fx, then check the subscribed data and the website:
Pic 28
We can see that the published data shows up both on the website and in the MQTT.fx subscribe area. So far, the connection and data transfer tests are OK.
4.2 Voice recognition and remote control test
This is the device connection picture:
Pic 29
4.2.1 Voice recognition local control
Pic 30
This is the SLN-LOCAL2-IOT print information after recognizing the voice wake word (WW) and voice command (VC).
Red LED on:
LED cycle:
4.2.2 Voice recognition remote control
The following tests cover wakeup + remote on and wakeup + remote off, and also give the print result and the video.
Pic 31
Remote control:
View full article
Introduction
The NXP i.MX RT106x has two USB 2.0 OTG instances, and the RT1060 EVK brings both USB interfaces out on the board. However, the RT1060 SDK only has single-host USB examples. Although the RT1060 USB host stack supports multiple devices, a USB hub is still needed when the user wants to connect two devices to one port. This article shows how to use both USB instances as hosts. The RT1060 SDK has single-host examples that support multiple devices, such as host_hid_mouse_keyboard_bm, but this application does not reuse those examples. Instead, MCUXpresso Config Tools is used to build the demo from scratch. The Config Tools are very powerful and can configure clocks, pins, and peripherals, especially USB. In this application demo they save about 95% of the coding work.
Hardware and software tools
RT1060 EVK
MCUXpresso 11.4.0
MIMXRT1060 SDK 2.9.1
Step 1
This project supports a USB HID mouse and USB CDC. First, create an empty project named MIMXRT1062_usb_host_dual_port. When selecting SDK components, select "USB host CDC" and "USB host HID" in the Middleware tab. The IDE selects the other necessary components automatically.
After creating the empty project, the clocks should be configured first. Both USB PHYs need a 480 MHz clock.
Step 2
The next step is to configure the USB host in the Peripherals Config Tool. Due to a limitation of the Config Tool, only one host instance of the USB component is allowed, so in this project CDC VCOM is added first.
Step 3
After these settings, click "Update Code" in the control bar. This turns all the configurations into code and merges them into the project. Then click the "copy to clipboard" button. This copies the host task call; paste it into the forever while loop in the project's main(). Besides that, the BOARD_InitBootPeripherals() call also needs to be added in main(). At this point, USB VCOM is ready. The tool not only copies the files and configures USB, it also creates a basic implementation framework. If you compile and download the project to the RT1060 EVK, it can enumerate a USB CDC VCOM device on USB1. If characters are sent from the CDC device, the project forwards them to the DAPLink UART port so that you can see the characters in a terminal on the computer.
Step 4
To get the USB HID mouse code, another USB HID project needs to be created. The workflow is similar to the first project. Here is a screenshot of the USB HID configuration.
Click "Update code", and the HID mouse code will be generated. The Config Tool generates two files, usb_host_interface_0_hid_mouse.c and usb_host_interface_0_hid_mouse.h. Copy them to the "source" folder of the dual-host project.
Step 5
The next step is to modify some USB macro definitions in <usb_host_config.h>:
#define USB_HOST_CONFIG_EHCI 2     /* there are two EHCI host instances */
#define USB_HOST_CONFIG_MAX_HOST 2 /* the USB driver can support two EHCI controllers */
#define USB_HOST_CONFIG_HID (1U)   /* for the mouse */
The next step is to merge usb_host_app.c. The project initializes the USB hardware and software in USB_HostApplicationInit().
usb_status_t USB_HostApplicationInit(void) { usb_status_t status; USB_HostClockInit(kUSB_ControllerEhci0); USB_HostClockInit(kUSB_ControllerEhci1); #if ((defined FSL_FEATURE_SOC_SYSMPU_COUNT) && (FSL_FEATURE_SOC_SYSMPU_COUNT)) SYSMPU_Enable(SYSMPU, 0); #endif /* FSL_FEATURE_SOC_SYSMPU_COUNT */ status = USB_HostInit(kUSB_ControllerEhci0, &g_HostHandle[0], USB_HostEvent); status = USB_HostInit(kUSB_ControllerEhci1, &g_HostHandle[1], USB_HostEvent); /*each usb instance have a g_HostHandle*/ if (status != kStatus_USB_Success) { return status; } else { USB_HostInterface0CicVcomInit(); USB_HostInterface0HidMouseInit(); } USB_HostIsrEnable(); return status; } In USB_HostIsrEnable(), add code to enable USB2 interrupt.   irqNumber = usbHOSTEhciIrq[1]; NVIC_SetPriority((IRQn_Type)irqNumber, USB_HOST_INTERRUPT_PRIORITY); EnableIRQ((IRQn_Type)irqNumber); Then add and modify USB interrupt handler. void USB_OTG1_IRQHandler(void) { USB_HostEhciIsrFunction(g_HostHandle[0]); } void USB_OTG2_IRQHandler(void) { USB_HostEhciIsrFunction(g_HostHandle[1]); } Since both USB instance share the USB stack, When USB event come, all the event will call USB_HostEvent() in usb_host_app.c. HID code should also be merged into this function. static usb_status_t USB_HostEvent(usb_device_handle deviceHandle, usb_host_configuration_handle configurationHandle, uint32_t eventCode) { usb_status_t status1; usb_status_t status2; usb_status_t status = kStatus_USB_Success; /* Used to prevent from multiple processing of one interface; * e.g. when class/subclass/protocol is the same then one interface on a device is processed only by one interface on host */ uint8_t processedInterfaces[USB_HOST_CONFIG_CONFIGURATION_MAX_INTERFACE] = {0}; switch (eventCode & 0x0000FFFFU) { case kUSB_HostEventAttach: status1 = USB_HostInterface0CicVcomEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces); status2 = USB_HostInterface0HidMouseEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces); if ((status1 == kStatus_USB_NotSupported) && (status2 == kStatus_USB_NotSupported)) { status = kStatus_USB_NotSupported; } break; case kUSB_HostEventNotSupported: usb_echo("Device not supported.\r\n"); break; case kUSB_HostEventEnumerationDone: status1 = USB_HostInterface0CicVcomEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces); status2 = USB_HostInterface0HidMouseEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces); if ((status1 != kStatus_USB_Success) && (status2 != kStatus_USB_Success)) { status = kStatus_USB_Error; } break; case kUSB_HostEventDetach: status1 = USB_HostInterface0CicVcomEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces); status2 = USB_HostInterface0HidMouseEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces); if ((status1 != kStatus_USB_Success) && (status2 != kStatus_USB_Success)) { status = kStatus_USB_Error; } break; case kUSB_HostEventEnumerationFail: usb_echo("Enumeration failed\r\n"); break; default: break; } return status; } USB_HostTasks() is used to deal with all the USB messages in the main loop. At last, HID work should also be added in this function. void USB_HostTasks(void) { USB_HostTaskFn(g_HostHandle[0]); USB_HostTaskFn(g_HostHandle[1]); USB_HostInterface0CicVcomTask(); USB_HostInterface0HidMouseTask(); }   After all these steps, the dual USB function is ready. User can insert USB mouse and USB CDC device into any of the two USB port simultaneously. 
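One more note: since both EHCI controllers are initialized, the host handle storage used above must hold two entries. A minimal sketch, assuming the SDK's usb_host_handle type and the variable name from the code above:

usb_host_handle g_HostHandle[2]; /* index 0: USB1 (EHCI0), index 1: USB2 (EHCI1) */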
Conclusion
All RT/LPC/Kinetis devices with two OTG or host controllers can support dual USB host. With the help of MCUXpresso Config Tools, it is easy to implement this function.
View full article
The i.MX RT1060 provides tightly coupled GPIOs that can be accessed at high frequency. The RT1060 provides two sets of GPIO registers to control the pad output: GPIO1 to GPIO3 are general GPIOs, and GPIO6 to GPIO8 are the tightly coupled (fast) GPIOs. They share the same pads, which means a given pin can be routed to either GPIO1/2/3 or GPIO6/7/8. The registers IOMUXC_GPR_GPR26, IOMUXC_GPR_GPR27, and IOMUXC_GPR_GPR28 select between them. To choose between GPIO1/2/3 and GPIO6/7/8 for a pin, you can use MCUXpresso Config Tools. For example, if you select pin G10, you can select either GPIO1_IO11 for the normal GPIO or GPIO6_IO11 for the fast GPIO.
I made an example based on SDK v2.7.0 to compare the speed of the normal GPIO and the fast GPIO. For this, I used pin G10 (GPIO1_IO11 and GPIO6_IO11). First, I used the normal GPIO pin (GPIO1_IO11) and toggled it by writing directly to the GPIO_DR register. Notice that you can access this pin through J22 pin 3 on the evaluation board, so you can measure the performance of the pin. Here are the results:
With the normal GPIO pin, we reach a period of 160 ns when writing directly to the GPIO_DR register. Now, if we change to the fast GPIO and use the same instructions, we have the following results.
As you can see, when using the fast GPIO pin the period of the signal is almost one-third of the period with the normal GPIO.
In addition, the A1 silicon of the RT1060 has a new GPIO toggle feature. If we toggle the pin with the new DR_TOGGLE register instead of GPIO_DR, we get better performance with both pins, normal GPIO and fast. Here are the results of the normal GPIO with the DR_TOGGLE register.
As you can see, when using the DR_TOGGLE register with the normal GPIO pin we get a period of around 53 ns, while writing to the GPIO_DR register gave 160 ns. Using the DR_TOGGLE register with the fast GPIO gives the best performance of the pin. Results are shown below. A short register-level sketch of the two toggle methods is given at the end of this article.
Many thanks to @jorge_a_vazquez for his valuable help with this document.
Hope this helps!
Best regards, Victor.
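For reference, here is a minimal sketch of the two toggle methods compared above, assuming pin 11 of GPIO1/GPIO6 (the G10 pad); this is not the exact benchmark loop used for the measurements:

/* Normal GPIO: read-modify-write of the data register */
GPIO1->DR ^= (1U << 11);        /* GPIO1_IO11 */
/* A1 silicon toggle feature: single write to DR_TOGGLE, no read-back needed */
GPIO6->DR_TOGGLE = (1U << 11);  /* GPIO6_IO11 */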
View full article
As we know, the RT series MCUs support XIP (execute in place) mode, and because it saves pins, serial NOR flash is most commonly used: the FlexSPI module can efficiently fetch code and data from the serial NOR flash for the Cortex-M7 to execute. The fetch is implemented with the Quad IO Fast Read command, and the serial NOR flash works in SDR (Single Data transfer Rate) mode: it receives data on the SCLK rising edge and transmits data on the SCLK falling edge. Compared to SDR mode, DDR (Dual Data transfer Rate) mode has a higher throughput capacity. Can it provide better XIP performance, and what do we have to do if we want the serial NOR flash to work in DDR mode?
SDR & DDR mode
SDR mode: In SDR (Single Data transfer Rate) mode, data is only clocked on one edge of the clock (either the rising or the falling edge). This means that for SDR to transmit data at X Mbps, the clock bit rate needs to be 2X Mbps.
DDR mode: In DDR (Dual Data transfer Rate) mode, also known as DTR (Dual Transfer Rate) mode, data is transferred on both the rising and the falling edge of the clock. This means transmitting data at X Mbps only requires a clock bit rate of X Mbps, hence doubling the bandwidth (as Fig 1 shows).
Fig 1
Enable DDR mode
The steps below illustrate how to make the i.MX RT1060 boot from the QSPI flash working in DDR mode.
Note: the board is MIMXRT1060, the IDE is MCUXpresso IDE.
Open a hello_world project as the template.
Modify the FDCB (Flash Device Configuration Block):
a) Set the controllerMiscOption parameter to support the DDR read command.
b) Set the serial flash frequency to 60 MHz.
c) Parse the DDR read command into a command sequence. The following table shows a template command sequence for the DDR Quad IO FAST READ instruction; it almost matches the FRQDTR (Fast Read Quad IO DTR) sequence of the IS25WP064 (as Fig 2 shows).
Fig 2 FRQDTR Sequence
d) Adjust the dummy cycles. The dummy cycles should match the specific serial clock frequency, and the default dummy cycle count of the FRQDTR sequence command is 6 (as the table below shows).
However, when the serial clock frequency is 60 MHz, the dummy cycle count should change to 4 (as the table below shows).
So the [P6:P3] bits of the Read Register (as the table below shows) need to be configured by manually adding the SET READ PARAMETERS command sequence (as Fig 3 shows) in the FDCB.
Fig 3 SET READ PARAMETERS command sequence
Furthermore, in DDR mode the SCLK cycle is double the serial root clock cycle. The operand value should be set as 2*N, 2*N-1, or 2*N+1, depending on how the dummy cycles are defined in the device datasheet. In the end, we get an adjusted FDCB like the one below.
// Set Dummy Cycles #define FLASH_DUMMY_CYCLES 8 // Set Read register command sequence's Index in LUT table #define CMD_LUT_SEQ_IDX_SET_READ_PARAM 7 // Read,Read Status,Write Enable command sequences' Index in LUT table #define CMD_LUT_SEQ_IDX_READ 0 #define CMD_LUT_SEQ_IDX_READSTATUS 1 #define CMD_LUT_SEQ_IDX_WRITEENABLE 3 const flexspi_nor_config_t qspiflash_config = { .memConfig = { .tag = FLEXSPI_CFG_BLK_TAG, .version = FLEXSPI_CFG_BLK_VERSION, .readSampleClksrc=kFlexSPIReadSampleClk_LoopbackFromDqsPad, .csHoldTime = 3u, .csSetupTime = 3u, // Enable DDR mode .controllerMiscOption = kFlexSpiMiscOffset_DdrModeEnable | kFlexSpiMiscOffset_SafeConfigFreqEnable, .sflashPadType = kSerialFlash_4Pads, //.serialClkFreq = kFlexSpiSerialClk_100MHz, .serialClkFreq = kFlexSpiSerialClk_60MHz, .sflashA1Size = 8u * 1024u * 1024u, // Enable Flash register configuration .configCmdEnable = 1u, .configModeType[0] = kDeviceConfigCmdType_Generic, .configCmdSeqs[0] = { .seqNum = 1, .seqId = CMD_LUT_SEQ_IDX_SET_READ_PARAM, .reserved = 0, }, .lookupTable = { // Read LUTs [4*CMD_LUT_SEQ_IDX_READ] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xED, RADDR_DDR, FLEXSPI_4PAD, 0x18), // The MODE8_DDR subsequence costs 2 cycles that is part of the whole dummy cycles [4*CMD_LUT_SEQ_IDX_READ + 1] = FLEXSPI_LUT_SEQ(MODE8_DDR, FLEXSPI_4PAD, 0x00, DUMMY_DDR, FLEXSPI_4PAD, FLASH_DUMMY_CYCLES-2), [4*CMD_LUT_SEQ_IDX_READ + 2] = FLEXSPI_LUT_SEQ(READ_DDR, FLEXSPI_4PAD, 0x04, STOP, FLEXSPI_1PAD, 0x00), // READ STATUS REGISTER [4*CMD_LUT_SEQ_IDX_READSTATUS] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x05, READ_SDR, FLEXSPI_1PAD, 0x01), [4*CMD_LUT_SEQ_IDX_READSTATUS + 1] = FLEXSPI_LUT_SEQ(STOP, FLEXSPI_1PAD, 0x00, 0, 0, 0), // WRTIE ENABLE [4*CMD_LUT_SEQ_IDX_WRITEENABLE] = FLEXSPI_LUT_SEQ(CMD_SDR,FLEXSPI_1PAD, 0x06, STOP, FLEXSPI_1PAD, 0x00), // Set Read register [4*CMD_LUT_SEQ_IDX_SET_READ_PARAM] = FLEXSPI_LUT_SEQ(CMD_SDR,FLEXSPI_1PAD, 0x63, WRITE_SDR, FLEXSPI_1PAD, 0x01), [4*CMD_LUT_SEQ_IDX_SET_READ_PARAM + 1] = FLEXSPI_LUT_SEQ(STOP,FLEXSPI_1PAD, 0x00, 0, 0, 0), }, }, .pageSize = 256u, .sectorSize = 4u * 1024u, .blockSize = 64u * 1024u, .isUniformBlockSize = false, }; Is DDR mode real better? According to the RT1060's datasheet, the below table illustrates the maximum frequency of FlexSPI operation, as the MIMXRT1060's onboard QSPI flash is IS25WP064AJBLE, it doesn't contain the MQS pin, it means set MCR0.RXCLKsrc=1 (Internal dummy read strobe and loopbacked from DQS) is the most optimized option. operation mode RXCLKsrc=0 RXCLKsrc=1 RXCLKsrc=3 SDR 60 MHz 133 MHz 166 MHz DDR 30 MHz 66 MHz 166 MHz In another word, QSPI can run up to 133 MHz in SDR mode versus 66 MHz in DDR mode. From the perspective of throughput capacity, they're almost the same. It seems like DDR mode is not a better option for IS25WP064AJBLE and the following experiment will validate the assumption. Experiment mbedtls_benchmark I use the mbedtls_benchmark as the first testing demo and I run the demo under the below conditions: 100MH, SDR mode; 133MHz, SDR mode; 66MHz, DDR mode; According to the corresponding printout information (as below shows), I make a table for comparison and I mark the worst performance of implementation items among the above three conditions, just as Fig 4 shows. SDR Mode run at 100 MHz. 
FlexSPI clock source is 3, FlexSPI Div is 6, PllPfd2Clk is 720000000 mbedTLS version 2.16.6 fsys=600000000 Using following implementations: SHA: DCP HW accelerated AES: DCP HW accelerated AES GCM: Software implementation DES: Software implementation Asymmetric cryptography: Software implementation MD5 : 18139.63 KB/s, 27.10 cycles/byte SHA-1 : 44495.64 KB/s, 12.52 cycles/byte SHA-256 : 47766.54 KB/s, 11.61 cycles/byte SHA-512 : 2190.11 KB/s, 267.88 cycles/byte 3DES : 1263.01 KB/s, 462.49 cycles/byte DES : 2962.18 KB/s, 196.33 cycles/byte AES-CBC-128 : 52883.94 KB/s, 10.45 cycles/byte AES-GCM-128 : 1755.38 KB/s, 329.33 cycles/byte AES-CCM-128 : 2081.99 KB/s, 279.72 cycles/byte CTR_DRBG (NOPR) : 5897.16 KB/s, 98.15 cycles/byte CTR_DRBG (PR) : 4489.58 KB/s, 129.72 cycles/byte HMAC_DRBG SHA-1 (NOPR) : 1297.53 KB/s, 448.03 cycles/byte HMAC_DRBG SHA-1 (PR) : 1205.51 KB/s, 486.04 cycles/byte HMAC_DRBG SHA-256 (NOPR) : 1786.18 KB/s, 327.70 cycles/byte HMAC_DRBG SHA-256 (PR) : 1779.52 KB/s, 328.93 cycles/byte RSA-1024 : 202.33 public/s RSA-1024 : 7.00 private/s DHE-2048 : 0.40 handshake/s DH-2048 : 0.40 handshake/s ECDSA-secp256r1 : 9.00 sign/s ECDSA-secp256r1 : 4.67 verify/s ECDHE-secp256r1 : 5.00 handshake/s ECDH-secp256r1 : 9.33 handshake/s   DDR Mode run at 66 MHz. FlexSPI clock source is 2, FlexSPI Div is 5, PllPfd2Clk is 396000000 mbedTLS version 2.16.6 fsys=600000000 Using following implementations: SHA: DCP HW accelerated AES: DCP HW accelerated AES GCM: Software implementation DES: Software implementation Asymmetric cryptography: Software implementation MD5 : 16047.13 KB/s, 27.12 cycles/byte SHA-1 : 44504.08 KB/s, 12.54 cycles/byte SHA-256 : 47742.88 KB/s, 11.62 cycles/byte SHA-512 : 2187.57 KB/s, 267.18 cycles/byte 3DES : 1262.66 KB/s, 462.59 cycles/byte DES : 2786.81 KB/s, 196.44 cycles/byte AES-CBC-128 : 52807.92 KB/s, 10.47 cycles/byte AES-GCM-128 : 1311.15 KB/s, 446.53 cycles/byte AES-CCM-128 : 2088.84 KB/s, 281.08 cycles/byte CTR_DRBG (NOPR) : 5966.92 KB/s, 97.55 cycles/byte CTR_DRBG (PR) : 4413.15 KB/s, 130.42 cycles/byte HMAC_DRBG SHA-1 (NOPR) : 1291.64 KB/s, 449.47 cycles/byte HMAC_DRBG SHA-1 (PR) : 1202.41 KB/s, 487.05 cycles/byte HMAC_DRBG SHA-256 (NOPR) : 1748.38 KB/s, 328.16 cycles/byte HMAC_DRBG SHA-256 (PR) : 1691.74 KB/s, 329.78 cycles/byte RSA-1024 : 201.67 public/s RSA-1024 : 7.00 private/s DHE-2048 : 0.40 handshake/s DH-2048 : 0.40 handshake/s ECDSA-secp256r1 : 8.67 sign/s ECDSA-secp256r1 : 4.67 verify/s ECDHE-secp256r1 : 4.67 handshake/s ECDH-secp256r1 : 9.00 handshake/s   Fig 4 Performance comparison We can find that most of the implementation items are achieve the worst performance when QSPI works in DDR mode with 66 MHz. Coremark demo The second demo is running the Coremark demo under the above three conditions and the result is illustrated below. SDR Mode run at 100 MHz. FlexSPI clock source is 3, FlexSPI Div is 6, PLL3 PFD0 is 720000000 2K performance run parameters for coremark. CoreMark Size : 666 Total ticks : 391889200 Total time (secs): 16.328717 Iterations/Sec : 2449.671999 Iterations : 40000 Compiler version : MCUXpresso IDE v11.3.1 Compiler flags : Optimization most (-O3) Memory location : STACK seedcrc : 0xe9f5 [0]crclist : 0xe714 [0]crcmatrix : 0x1fd7 [0]crcstate : 0x8e3a [0]crcfinal : 0x25b5 Correct operation validated. See readme.txt for run and reporting rules. CoreMark 1.0 : 2449.671999 / MCUXpresso IDE v11.3.1 Optimization most (-O3) / STACK   SDR Mode run at 133 MHz. 
FlexSPI clock source is 3, FlexSPI Div is 4, PLL3 PFD0 is 664615368 2K performance run parameters for coremark. CoreMark Size : 666 Total ticks : 391888682 Total time (secs): 16.328695 Iterations/Sec : 2449.675237 Iterations : 40000 Compiler version : MCUXpresso IDE v11.3.1 Compiler flags : Optimization most (-O3) Memory location : STACK seedcrc : 0xe9f5 [0]crclist : 0xe714 [0]crcmatrix : 0x1fd7 [0]crcstate : 0x8e3a [0]crcfinal : 0x25b5 Correct operation validated. See readme.txt for run and reporting rules. CoreMark 1.0 : 2449.675237 / MCUXpresso IDE v11.3.1 Optimization most (-O3) / STACK   DDR Mode run at 66 MHz. FlexSPI clock source is 2, FlexSPI Div is 5, PLL3 PFD0 is 396000000 2K performance run parameters for coremark. CoreMark Size : 666 Total ticks : 391890772 Total time (secs): 16.328782 Iterations/Sec : 2449.662173 Iterations : 40000 Compiler version : MCUXpresso IDE v11.3.1 Compiler flags : Optimization most (-O3) Memory location : STACK seedcrc : 0xe9f5 [0]crclist : 0xe714 [0]crcmatrix : 0x1fd7 [0]crcstate : 0x8e3a [0]crcfinal : 0x25b5 Correct operation validated. See readme.txt for run and reporting rules. CoreMark 1.0 : 2449.662173 / MCUXpresso IDE v11.3.1 Optimization most (-O3) / STACK   After comparing the CoreMark scores, it gets the lowest CoreMark score when QSPI works in DDR mode with 66 MHz. However, they're actually pretty close. Through the above two testings, we can get the DDR mode maybe not a better option, at least for the i.MX RT10xx series MCU.
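A rough bandwidth estimate (my own back-of-the-envelope numbers, not taken from the datasheet) shows why the two configurations end up so close on a quad-pad flash:

SDR at 133 MHz: 133 MHz x 1 bit per clock x 4 pads ≈ 532 Mbit/s
DDR at 66 MHz:  66 MHz x 2 bits per clock x 4 pads ≈ 528 Mbit/s

So with this flash and RXCLKsrc=1, DDR mode does not add raw throughput, which is consistent with the nearly identical benchmark and CoreMark results above.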
View full article
RT10xx SAI basic and SDCard wave file play 1. Introduction NXP RT10xx's audio modules are SAI, SPDIF, and MQS. The SAI module is a synchronous serial interface for audio data transmission. SPDIF is a stereo transceiver that can receive and send digital audio, MQS is used to convert I2S audio data from SAI3 to PWM, and then can drive external speakers, but in practical usage, it still need to add the amplifier drive circuit. When we use the SAI module, it will be related to the audio file play and the data obtained. This article will be based on the MIMXRT1060-EVK board, give the RT10xx SAI module basic knowledge, PCM waveform format, the audio file cut, and conversion tool, use the MCUXpresso IDE CFG peripheral tool to create the SAI project, play the audio data, it will also provide the SDcard with fatfs system to read the wave file and play it. 2. Basic Knowledge and the tools Before entering the project details and testing, just provide some SAI module knowledge, wave file format information, audio convert tools. 2.1 SAI module basic RT10xx SAI module can support I2S, AC97, TDM, and codec/DSP interface. SAI module contains Transmitter and Receiver, the related signals:     SAI_MCLK: master clock, used to generate the bit clock, master output, slave input.     SAI_TX_BCLK: Transmit bit clock, master output, slave input     SAI_TX_SYNC: Transmit Frame sync, master output, slave input, L/R channel select     SAI_TX_DATA[4]:Transmit data line, 1-3 share with RX_DATA[1-3]     SAI_RX_BCLK: receiver bit clock     SAI_RX_SYNC: receiver frame sync     SAI_RX_DATA[4]: receiver data line SAI module clocks: audio master clock, bus clock, bit clock SAI module Frame sync has 3 modes:      1)Transmit and receive using its own BCLK and SYNC      2)Transmit async, receive sync: use transmit BCLK and SYNC, transmit enable at first, disable at last.      3)Transmit sync, receive async: use receive BCLK and SYNC, receiver enable at first, disable at last. Valid frame sync is also ignored (slave mode) or not generated (master mode) for the first four-bit clock cycles after enabling the transmitter or receiver. Pic 1 SAI module clock structure: Pic 2 SAI module 3 clock sources:  PLL3_PFD3, PLL5, PLL4 In the above picture, SAI1_CLK_ROOT, which can be used as the MCLK, the BCLK is: BCLK= master clock/(TCR2[DIV]+1)*2 Sample rate = Bitclockfreq /(bitwidth*channel) 2.2 waveform audio file format WAVE file is used to save the PCM encode data, WAVE is using the RIFF format, the smallest unit in the RIFF file is the CK struct, CKID is the data type, the value can be: “RIFF”,“LIST”,“fmt”, “data” etc. RIFF file is little-endian. RIFF structure: typedef unsigned long DWORD;//4B typedef unsigned char BYTE;//1B typedef DWORD FOURCC; // 4B typedef struct { FOURCC ckID; //4B DWORD ckSize; //4B union { FOURCC fccType; // RIFF form type 4B BYTE ckData[ckSize]; //ckSize*1B } ckData; } RIFFCK; Pic 3 Take a 16Khz 2 channel wave file as the example: Pic 4 Yellow: CKID  Green: data length   Purple: data The detailed analysis as follows: Pic 5 We can find, the real audio data, except the wave header, the data size is 1279860bytes. 2.3 Audio file convert In practical usage, the audio file may not the required channel and the sample rate configuration, or the format is not the wave, or the time is too long, then we can use some tool to convert it to your desired format. 
We can use the ffmpeg tool: https://ffmpeg.org/ About the details, check the ffmpeg document, normally we use these command: mp3 file converts to 16k, 16bit, 2 channel wave file: ffmpeg -i test.mp3 -acodec pcm_s16le -ar 16000 -ac 2 test.wav or: ffmpeg -i test.mp3 -aq 16 -ar 16000 -ac 2 test.wav test.wav, cut 35s from 00:00:00, and can convert save to test1.wav: ffmpeg -ss 00:00:00 -i test.wav -t 35.0 -c copy test1.wav Pic 6 Pic 7 2.4 Obtain wave L/R channel audio data Just like the SDK code, save the L/R audio data directly in the RT RAM array, so here, we need to obtain the audio data from the wav file. We can use the python readout the wav header, then get the audio data size, and save the audio data to one array in the .h files. The related Python code can be: import sys import wave def wav2hex(strWav, strHex): with wave.open(strWav, "rb") as fWav: wavChannels = fWav.getnchannels() wavSampleWidth = fWav.getsampwidth() wavFrameRate = fWav.getframerate() wavFrameNum = fWav.getnframes() wavFrames = fWav.readframes(wavFrameNum) wavDuration = wavFrameNum / wavFrameRate wafFramebytes = wavFrameNum * wavChannels * wavSampleWidth print("Channels: {}".format(wavChannels)) print("Sample width: {}bits".format(wavSampleWidth * 8)) print("Sample rate: {}kHz".format(wavFrameRate/1000)) print("Frames number: {}".format(wavFrameNum)) print("Duration: {}s".format(wavDuration)) print("Frames bytes: {}".format(wafFramebytes)) fWav.close() pass with open(strHex, "w") as fHex: # Print WAV parameters fHex.write("/*\n"); fHex.write(" Channels: {}\n".format(wavChannels)) fHex.write(" Sample width: {}bits\n".format(wavSampleWidth * 8)) fHex.write(" Sample rate: {}kHz\n".format(wavFrameRate/1000)) fHex.write(" Frames number: {}\n".format(wavFrameNum)) fHex.write(" Duration: {}s\n".format(wavDuration)) fHex.write(" Frames bytes: {}\n".format(wafFramebytes)) fHex.write("*/\n\n") # Print WAV frames fHex.write("uint8_t music[] = {\n") print("Transferring...") i = 0 while wafFramebytes > 0: if(wafFramebytes < 16): BytesToPrint = wafFramebytes else: BytesToPrint = 16 fHex.write(" ") for j in range(0, BytesToPrint): if j != 0: fHex.write(' ') fHex.write("0x{:0>2x},".format(wavFrames[i])) i+=1 j+=1 fHex.write("\n") wafFramebytes -= BytesToPrint fHex.write("};\n") fHex.close() print("Done!") wav2hex(sys.argv[1], sys.argv[2]) Take the music1.wave as an example: Pic 8 2.4 Audio data relationship with audio wave 16bit data range is: -32768 to 32767, the goldwave related value range is(-1~1).Use goldwave tool to open the example music1.wav, check the data in 1s position, the left channel relative data is -0.08227, right channel relative data is -0.2257. Pic 9                                                                          pic 10 Now, calculate the L/R real data, and find the position in the music1.h. Pic 11 From pic 8, we can know, the real wave R/L data from line 11, each line contains 16 bytes of data. So, from music1.wav related value, we can calculate the related data, and compare it with the real data in the array, we can find, it is totally the same. 3. SAI MCUXpresso project creation Based on SDK_2.9.2_EVK-MIMXRT1060, create one SAI DMA audio play project. The audio data can use the above music1.h. 
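If you generate the header yourself with the Python script shown earlier, a typical invocation would look like the line below (the script file name wav2hex.py is just an assumed name here):

python wav2hex.py music1.wav music1.h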
Create a bare-metal project with the following components:
Drivers: clock, common, dmamux, edma, gpio, i2c, iomuxc, lpuart, sai, sai_edma, xip_device
Utilities: debug_console, lpuart_adapter, serial_manager, serial_manager_uart
Board components: xip_board
Abstraction Layer: codec, codec_wm8960_adapter, lpi2c_adapter
Software Components: codec_i2c, lists, wm8960
After creating the project, open the Clocks tool and configure the clocks. The core and FlexSPI clocks can keep the default settings; we mainly configure the SAI1-related clocks:
Pic 12
Select PLL4 as the SAI1 clock source, configure PLL4_MAIN_CLK as 786.48 MHz, and configure the SAI1 clock as 6.144375 MHz. After the configuration, update the code.
Open the Pins tool and configure the SAI1-related pins; since the codec also needs I2C, the I2C pin configuration is included as well.
Pic 13
Update the code. Open the Peripherals tool and configure DMA, SAI, and NVIC.
Pic 14
Pic 15
The DMA configuration is as follows:
Pic 16
After the configuration, generate the code. With the above configuration, the SAI DMA transfer is set up: SAI master mode, 16 bits, 16 kHz sample rate, 2 channels, DMA transfer, bit clock 512 kHz, master clock 6.144375 MHz.

void callback(I2S_Type *base, sai_edma_handle_t *handle, status_t status, void *userData)
{
    if (kStatus_SAI_RxError == status)
    {
    }
    else
    {
        finishIndex++;
        emptyBlock++;
        /* Judge whether the music array is completely transferred. */
        if (MUSIC_LEN / BUFFER_SIZE == finishIndex)
        {
            isFinished = true;
            finishIndex = 0;
            emptyBlock = BUFFER_NUM;
            tx_index = 0;
            cpy_index = 0;
        }
    }
}

int main(void)
{
    sai_transfer_t xfer;
    /* Init board hardware. */
    BOARD_ConfigMPU();
    BOARD_InitBootPins();
    BOARD_InitBootClocks();
    BOARD_InitBootPeripherals();
#ifndef BOARD_INIT_DEBUG_CONSOLE_PERIPHERAL
    /* Init FSL debug console. */
    BOARD_InitDebugConsole();
#endif
    PRINTF(" SAI wav module test!\n\r");
    /* Use default setting to init codec */
    if (CODEC_Init(&codecHandle, &boardCodecConfig) != kStatus_Success)
    {
        assert(false);
    }
    /* delay for codec output stable */
    DelayMS(DEMO_CODEC_INIT_DELAY_MS);
    CODEC_SetVolume(&codecHandle,2U,50); // set 50% volume
    EnableIRQ(DEMO_SAI_IRQ);
    SAI_TxEnableInterrupts(DEMO_SAI, kSAI_FIFOErrorInterruptEnable);
    PRINTF(" MUSIC PLAY Start!\n\r");
    while (1)
    {
        PRINTF(" MUSIC PLAY Again\n\r");
        isFinished = false;
        while (!isFinished)
        {
            if ((emptyBlock > 0U) && (cpy_index < MUSIC_LEN / BUFFER_SIZE))
            {
                /* Fill in the buffers. */
                memcpy((uint8_t *)&buffer[BUFFER_SIZE * (cpy_index % BUFFER_NUM)],
                       (uint8_t *)&music[cpy_index * BUFFER_SIZE],
                       sizeof(uint8_t) * BUFFER_SIZE);
                emptyBlock--;
                cpy_index++;
            }
            if (emptyBlock < BUFFER_NUM)
            {
                /* xfer structure */
                xfer.data = (uint8_t *)&buffer[BUFFER_SIZE * (tx_index % BUFFER_NUM)];
                xfer.dataSize = BUFFER_SIZE;
                /* Wait for available queue. */
                if (kStatus_Success == SAI_TransferSendEDMA(DEMO_SAI, &SAI1_SAI_Tx_eDMA_Handle, &xfer))
                {
                    tx_index++;
                }
            }
        }
    }
}

4. SAI test result
To check the real L/R data sent out, we modify the first 16 bytes of the music array to:
0x55,0xaa,0x01,0x00,0x02,0x00,0x03,0x00,0x04,0x00,0x05,0x00,0x06,0x00,0x07,0x00
Then we capture the SAI_MCLK, SAI_TX_BCLK, SAI_TX_SYNC, and SAI_TXD pin waveforms and compare them with the defined data. Because the polarity is configured as active low, data is output on the falling edge and sampled on the rising edge. The test points on the MIMXRT1060-EVK board are at the codec pins:
Pic 17
4.1 Logic Analyzer tool wave
Pic 18
The MCLK clock frequency is 6.144375 MHz, BCLK is 512 kHz, and SYNC is 16 kHz.
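These measured values match the formulas given in section 2.1. A quick check, using the configured clock values (the bit-clock divider is inferred here, not read back from TCR2):

Sample rate x bit width x channels = 16000 x 16 x 2 = 512 kHz = BCLK
MCLK / BCLK = 6.144375 MHz / 512 kHz ≈ 12, so BCLK = MCLK / ((DIV + 1) * 2) with TCR2[DIV] = 5
SYNC (frame) rate = BCLK / (16 bits x 2 channels) = 512 kHz / 32 = 16 kHz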
Pic 19 The first frame data is:1010101001010101 0000000000000001 0XAA55  0X0001 It is the same as the array defined L/R data. SYNC low is Left 16 bit, High is right 16 bit. 4.2 Oscilloscope test wave Just like the logic analyzer, the oscilloscope wave is the same: Pic 20 Add the music.h to the project, and let the main code play the music array data in loop, we will hear the music clear when insert the headphone to on board J12 or add a speaker. 5. SAI SDcard wave music play This part will add the sd card, fatfs system, to read out the 16bit 16K 2ch wave file in the sd card, and play it in loop. 5.1 driver add     Code is based on SDK_2.9.2_EVK-MIMXRT1060, just on the previous project, add the sdcard, sd fatfs driver, now the bare-metal driver situation is: Drivers check: cache, clock, common, dmamux, edma,gpio,i2c,iomuxc,lpuart,sai,sai_edma,sdhc, xip_device Utilities check:       Debug_console,lpuart_adapter,serial_manager,serial_manager_uart Middleware check:       File System->FAT File System->fatfs+sd, Memories Board components check:       Xip_board Abstraction Layer check:       Codec, codec_wm8960_adapter,lpi2c_adapter Software Components check:       Codec_i2c,lists,wm8960 5.2 WAVE header analyzer with code    From previous content, we can know the wav header structure, we need to play the wave file from the sd card, then we need to analyze the wave header to get the audio format, audio data-related information. The header analysis code is: uint8_t Fun_Wave_Header_Analyzer(void) { char * datap; uint8_t ErrFlag = 0; datap = strstr((char*)Wav_HDBuffer,"RIFF"); if(datap != NULL) { wav_header.chunk_size = ((uint32_t)*(Wav_HDBuffer+4)) + (((uint32_t)*(Wav_HDBuffer + 5)) << + (((uint32_t)*(Wav_HDBuffer + 6)) << 16) +(((uint32_t)*(Wav_HDBuffer + 7)) << 24); movecnt += 8; } else { ErrFlag = 1; return ErrFlag; } datap = strstr((char*)(Wav_HDBuffer+movecnt),"WAVEfmt"); if(datap != NULL) { movecnt += 8; wav_header.fmtchunk_size = ((uint32_t)*(Wav_HDBuffer+movecnt+0)) + (((uint32_t)*(Wav_HDBuffer +movecnt+ 1)) << + (((uint32_t)*(Wav_HDBuffer +movecnt+ 2)) << 16) +(((uint32_t)*(Wav_HDBuffer +movecnt+ 3)) << 24); wav_header.audio_format = ((uint16_t)*(Wav_HDBuffer+movecnt+4) + (uint16_t)*(Wav_HDBuffer+movecnt+5)); wav_header.num_channels = ((uint16_t)*(Wav_HDBuffer+movecnt+6) + (uint16_t)*(Wav_HDBuffer+movecnt+7)); wav_header.sample_rate = ((uint32_t)*(Wav_HDBuffer+movecnt+8)) + (((uint32_t)*(Wav_HDBuffer +movecnt+ 9)) << + (((uint32_t)*(Wav_HDBuffer +movecnt+ 10)) << 16) +(((uint32_t)*(Wav_HDBuffer +movecnt+ 11)) << 24); wav_header.byte_rate = ((uint32_t)*(Wav_HDBuffer+movecnt+12)) + (((uint32_t)*(Wav_HDBuffer +movecnt+ 13)) << + (((uint32_t)*(Wav_HDBuffer +movecnt+ 14)) << 16) +(((uint32_t)*(Wav_HDBuffer +movecnt+ 15)) << 24); wav_header.block_align = ((uint16_t)*(Wav_HDBuffer+movecnt+16) + (uint16_t)*(Wav_HDBuffer+movecnt+17)); wav_header.bps = ((uint16_t)*(Wav_HDBuffer+movecnt+18) + (uint16_t)*(Wav_HDBuffer+movecnt+19)); movecnt +=(4+wav_header.fmtchunk_size); } else { ErrFlag = 1; return ErrFlag; } datap = strstr((char*)(Wav_HDBuffer+movecnt),"LIST"); if(datap != NULL) { movecnt += 4; wav_header.list_size = ((uint32_t)*(Wav_HDBuffer+movecnt+0)) + (((uint32_t)*(Wav_HDBuffer +movecnt+ 1)) << + (((uint32_t)*(Wav_HDBuffer +movecnt+ 2)) << 16) +(((uint32_t)*(Wav_HDBuffer +movecnt+ 3)) << 24); movecnt +=(4+wav_header.list_size); } //LIST not Must datap = strstr((char*)(Wav_HDBuffer+movecnt),"data"); if(datap != NULL) { movecnt += 4; wav_header.datachunk_size = 
((uint32_t)*(Wav_HDBuffer+movecnt+0)) + (((uint32_t)*(Wav_HDBuffer +movecnt+ 1)) << + (((uint32_t)*(Wav_HDBuffer +movecnt+ 2)) << 16) +(((uint32_t)*(Wav_HDBuffer +movecnt+ 3)) << 24); movecnt += 4; ErrFlag = 0; } else { ErrFlag = 1; return ErrFlag; } PRINTF("Wave audio format is %d\r\n",wav_header.audio_format); PRINTF("Wave audio channel number is %d\r\n",wav_header.num_channels); PRINTF("Wave audio sample rate is %d\r\n",wav_header.sample_rate); PRINTF("Wave audio byte rate is %d\r\n",wav_header.byte_rate); PRINTF("Wave audio block align is %d\r\n",wav_header.block_align); PRINTF("Wave audio bit per sample is %d\r\n",wav_header.bps); PRINTF("Wave audio data size is %d\r\n",wav_header.datachunk_size); return ErrFlag; } Mainly divide RIFF to 4 parts: “RIFF”,“fmt”,“LIST”,“data”. The 4 bytes data follows the “data” is the whole audio data size, it can be used to the fatfs to read the audio data. The above code also recodes the data position, then when using the fatfs read the wave, we can jump to the data area directly. 5.3 SD card wave data play     Define the array audioBuff[4* 512], used to read out the sd card wave file, and use these data send to the SAI EDMA and transfer it to the I2S interface until all the data is transmitted to the I2S interface.     Callback record each 512 bytes data send out finished, and judge the transmit data size is reached the whole wave audio data size. 5.4 sd card wave play result    Prepare one wave file, 16bit 16k sample rate, 2 channel file, named as music.wav, put in the sd card which already does the fat32 format, insert it to the MIMXRT1060-EVK J39, run the code, will get the printf information: Please insert a card into the board. Card inserted. Make file system......The time may be long if the card capacity is big. SAI wav module test! MUSIC PLAY Start! Wave audio format is 1 Wave audio channel number is 2 Wave audio sample rate is 16000 Wave audio byte rate is 64000 Wave audio block align is 4 Wave audio bit per sample is 16 Wave audio data size is 2728440 Playback is begin! Playback is finished! At the same time, after inserting the headphone or the speaker into the J12, we can hear the music. Attachment is the mcuxpresso10.3.0 and the wave samples.  
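For reference, a structure matching the fields used by Fun_Wave_Header_Analyzer above could look like the sketch below. The field names are taken from the code; the struct type name and field ordering are assumptions, and the exact definition is in the attached project.

typedef struct
{
    uint32_t chunk_size;     /* RIFF chunk size */
    uint32_t fmtchunk_size;  /* "fmt " sub-chunk size */
    uint16_t audio_format;   /* 1 = PCM */
    uint16_t num_channels;   /* 2 for the test file */
    uint32_t sample_rate;    /* 16000 for the test file */
    uint32_t byte_rate;      /* sample_rate * block_align */
    uint16_t block_align;    /* channels * bytes per sample */
    uint16_t bps;            /* bits per sample */
    uint32_t list_size;      /* optional LIST chunk size */
    uint32_t datachunk_size; /* audio data size in bytes */
} wav_header_t;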
View full article
[Chinese translation] See the attachment.
Original article link: https://community.nxp.com/t5/i-MX-RT-Knowledge-Base/Design-an-IoT-edge-node-for-CV-application-base-on-the-i/ta-p/1127423
View full article
[Chinese translation] See the attachment.
Original article link: https://community.nxp.com/t5/i-MX-Community-Articles/Effortless-GUI-Development-with-NXP-Microcontrollers/ba-p/1131179
View full article