i.MX RT Crossover MCUs Knowledge Base

RT1015 APP BEE encryption operation method

1 Introduction

NXP RT product BEE encryption can use either the master key (the fixed OTPMK SNVS key) or a user key. The master key is fixed and cannot be modified, but in practice many customers need to define their own key; in that situation they can use the user key method. This document takes the NXP RT1015 as an example and uses the flexible user key method to realize BEE encryption without HAB certification.

The BEE encryption test runs on the MIMXRT1015-EVK board, with three ways to realize it: the MCUBootUtility tool, the command-line method with MFGTool, and the MCUXpresso Secure Provisioning tool, each downloading the BEE-encrypted code.

2 Preparation
2.1 Tool preparation

MCUBootUtility download link:
https://github.com/JayHeng/NXP-MCUBootUtility/archive/v2.3.0.zip
image_enc2.zip download link: https://www.cnblogs.com/henjay724/p/10189602.html
After unzipping image_enc2.zip, you get image_enc.exe; put it under the MCUBootUtility tool folder: NXP-MCUBootUtility-2.3.0\tools\image_enc2\win
RT1015 SDK download link: https://mcuxpresso.nxp.com/

2.2 App file preparation

This document uses the iled_blinky MCUXpresso IDE project in SDK_2.8.0_EVK-MIMXRT1015 as an example to generate an app without the XIP boot header. The generated evkmimxrt1015_igpio_led_output.s19 will be used later.
Fig 1

3 MCUBootUtility BEE encryption with user key

This chapter uses the MCUBootUtility tool to realize app BEE encryption with a user key, without HAB certification.

3.1 MIMXRT1015-EVK original fuse map

Before doing the BEE encryption, read out the original fuse map; it will be compared with the fuse map after the BEE encryption operation. The MCUBootUtility eFuse operation utility page can read out the whole fuse map.
Fig 2

3.2 MCUBootUtility BEE encryption configuration
Fig 3

This document only uses BEE encryption, without the HAB certificate, so in "Enable Certificate for HAB (BEE/OTFAD) encryption", select: No.

Check Fig 4. Select the "Key storage region" as flexible user keys; protect region 0 starts from 0x60001000 with length 0x2000. We deliberately do not encrypt the whole app region, so the original app can be compared with the BEE-encrypted app: from 0x60003000 the code remains plaintext, while 0x60001000 to 0x60002FFF is BEE-encrypted code.

After the configuration, click the "All-In-One Action" button to burn the code to the external QSPI flash.
Fig 4
Fig 5
The SW_GP2 region in the fuse map can be burned separately; just click the "Burn DEK data" button.
Fig 6
Then read out the whole fuse map again. In cfg1, BEE_KEY0_SEL is SW-GP2, which means the BEE key uses the flexible user key method, not the fixed master key.
Fig 7
Then read back the BEE-encrypted code from the flash together with a normally burned (plaintext) image and compare them. The details are:
Fig 8
Fig 9
Fig 10
Fig 11
Fig 12

We can see that after the BEE encryption, 0x60001000 to 0x60002FFF is encrypted code, the 0x60000400 area holds the EKIB0 data, and the 0x60000480 area holds the EPRDB0 data. Because only BEE engine 0 is selected (no BEE engine 1), the EKIB1 and EPRDB1 data at 0x60000800 are all zeros, i.e. not valid data. From 0x60003000 the app data is plaintext, matching the configured BEE app encryption range.

Until now, we have realized BEE encryption with the MCUBootUtility tool. Exit the serial download mode, configure the MIMXRT1015-EVK on-board SW8 as 1-ON, 2-OFF, 3-ON, 4-OFF, and reset the board; the on-board user LED blinks, so the BEE-encrypted code is working.
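For quick reference, the flash layout observed in the comparison above can be captured in a few C constants. This is only a summary sketch of the offsets reported in this article for this particular configuration (engine 0 only, protect region 0x60001000 with length 0x2000), not a general BEE layout definition:

```c
/* Summary sketch of the BEE flash layout observed above (this configuration only). */
#define FLASH_BASE        0x60000000u
#define EKIB0_OFFSET      0x0400u /* Encrypted Key Info Block, engine 0 */
#define EPRDB0_OFFSET     0x0480u /* Encrypted Protection Region Descriptor Block, engine 0 */
#define EKIB1_OFFSET      0x0800u /* engine 1 unused here, so EKIB1/EPRDB1 read as zeros */
#define ENC_REGION_START  0x1000u /* start of BEE-encrypted range (IVT offset) */
#define ENC_REGION_LENGTH 0x2000u /* protect region 0 length: 0x60001000-0x60002FFF */
#define PLAINTEXT_RESUME  0x3000u /* app data is plaintext again from this offset */
```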
4 BEE encryption with the command-line mode

In practical usage, many customers also need the command-line mode for the BEE encryption operation and choose the MFGTool download method. So this document also shows how to use SDK_2.8.0_EVK-MIMXRT1015\middleware\mcu-boot\bin\Tools and the image_enc tool to realize the command-line BEE encryption operation, then use MFGTool to download the BEE-encrypted code to the RT1015 external QSPI flash.

Because from SDK 2.8.0 on, blhost, elftosb and the related tools are no longer packed in the SDK middleware directly, the customer needs to download them from this link: www.nxp.com/mcuboot

4.1 Command-line file preparation

Prepare one folder and put elftosb.exe, image_enc.exe, the app file evkmimxrt1015_iled_blinky_0x60002000.s19, and RemoveBinaryBytes.exe in it. RemoveBinaryBytes.exe is used to modify the bin file; it can be downloaded from this link:
https://community.nxp.com/pwmxy87654/attachments/pwmxy87654/imxrt/8733/2/Test.zip
(https://community.nxp.com/t5/i-MX-RT/RT1015-BEE-XIP-Step-Confirm/m-p/1070076/page/2)

Then prepare the following files:
imx-flexspinor-normal-unsigned.bd
imxrt1015_app_flash_sb_gen.bd
burn_fuse.bd

4.1.1 imx-flexspinor-normal-unsigned.bd

The imx-flexspinor-normal-unsigned.bd file is used to generate the boot .bin files for the app file evkmimxrt1015_iled_blinky_0x60002000.s19, which include the IVT header code:
ivt_evkmimxrt1015_iled_blinky_0x60002000.bin
ivt_evkmimxrt1015_iled_blinky_0x60002000_nopadding.bin
The bd file content is:

/*********************file start****************************/
options {
    flags = 0x00;
    startAddress = 0x60000000;
    ivtOffset = 0x1000;
    initialLoadSize = 0x2000;
    //DCDFilePath = "dcd.bin";
    # Note: This is required if the default entrypoint is not the Reset_Handler
    #       Please set the entryPointAddress to Reset_Handler address
    // entryPointAddress = 0x60002000;
}

sources {
    elfFile = extern(0);
}

section (0) {
}
/*********************file end****************************/

4.1.2 imxrt1015_app_flash_sb_gen.bd

This file is used to configure the external QSPI flash and realize the program function. Normally this .bd file is used to generate the .sb file; MFGTool then selects this .sb file and downloads the code to the external flash.
/*********************file start****************************/
sources {
    myBinFile = extern (0);
}

section (0) {
    load 0xc0000007 > 0x20202000;
    load 0x0 > 0x20202004;
    enable flexspinor 0x20202000;
    erase 0x60000000..0x60005000;
    load 0xf000000f > 0x20203000;
    enable flexspinor 0x20203000;
    load myBinFile > 0x60000400;
}
/*********************file end****************************/

4.1.3 burn_fuse.bd

The BEE encryption operation needs to burn the fuse map, but fuse bits are one-time programmable (0 to 1 only). So the burn-fuse operation is separated out: do it only the first time, while the RT chip's fuse map has not yet been modified. In later operations, just update the app code; there is no need to burn the fuses again. burn_fuse.bd is mainly used to describe the fuse data that needs to be burned into the related fuse map; it generates a .sb file, which MFGTool burns together with the app.

/*********************file start****************************/
# The source block assigns file names to identifiers
sources {
}

constants {
}

#                !!!!!!!!!!!! WARNING !!!!!!!!!!!!
# The section block specifies the sequence of boot commands to be written to the SB file
# Note: this is just a template, please update it to actual values in users' project
section (0) {
    # program SW_GP2
    load fuse 0x76543210 > 0x29;
    load fuse 0xfedcba98 > 0x2a;
    load fuse 0x89abcdef > 0x2b;
    load fuse 0x01234567 > 0x2c;

    # Program BEE_KEY0_SEL
    load fuse 0x00003000 > 0x6;
}
/*********************file end****************************/

4.2 BEE command-line operation steps

Create the rt1015_bee_userkey_gp2.bat file; the content is:

elftosb.exe -f imx -V -c imx-flexspinor-normal-unsigned.bd -o ivt_evkmimxrt1015_iled_blinky_0x60002000.bin evkmimxrt1015_iled_blinky_0x60002000.s19
image_enc.exe hw_eng=bee ifile=ivt_evkmimxrt1015_iled_blinky_0x60002000.bin ofile=evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted.bin base_addr=0x60000000 region0_key=0123456789abcdeffedcba9876543210 region0_arg=1,[0x60001000,0x2000,0] region0_lock=0 use_zero_key=1 is_boot_image=1
RemoveBinaryBytes.exe evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted.bin evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted_remove1K.bin 1024
elftosb.exe -f kinetis -V -c program_imxrt1015_qspi_encrypt_sw_gp2.bd -o boot_image_encrypt.sb evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted_remove1K.bin
elftosb.exe -f kinetis -V -c burn_fuse.bd -o burn_fuse.sb
pause

Fig 13
Fig 14

It mainly has 5 steps:

4.2.1 elftosb generates the app file with IVT header

elftosb.exe -f imx -V -c imx-flexspinor-normal-unsigned.bd -o ivt_evkmimxrt1015_iled_blinky_0x60002000.bin evkmimxrt1015_iled_blinky_0x60002000.s19

This command generates two files with the IVT header: ivt_evkmimxrt1015_iled_blinky_0x60002000.bin and ivt_evkmimxrt1015_iled_blinky_0x60002000_nopadding.bin. Here we use ivt_evkmimxrt1015_iled_blinky_0x60002000.bin.

4.2.2 image_enc generates the BEE-encrypted app code

image_enc.exe hw_eng=bee ifile=ivt_evkmimxrt1015_iled_blinky_0x60002000.bin ofile=evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted.bin base_addr=0x60000000 region0_key=0123456789abcdeffedcba9876543210 region0_arg=1,[0x60001000,0x2000,0] region0_lock=0 use_zero_key=1 is_boot_image=1
For the meaning of the keywords in image_enc, run image_enc without arguments to list them.
Fig 15
The result of this command is the same as the MCUBootUtility configuration: the encrypted area starts from 0x60001000 with length 0x2000 (for details, refer to Fig 4). After the operation we get this file: evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted.bin

4.2.3 RemoveBinaryBytes removes the first 1024 bytes of the BEE-encrypted file

RemoveBinaryBytes.exe evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted.bin evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted_remove1K.bin 1024

This command removes the first 0x400 bytes of the BEE-encrypted file; after the modification, the encrypted file starts from EKIB0 directly. After running it we get this file: evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted_remove1K.bin

4.2.4 elftosb generates the BEE-encrypted app .sb file

elftosb.exe -f kinetis -V -c program_imxrt1015_qspi_encrypt_sw_gp2.bd -o boot_image_encrypt.sb evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted_remove1K.bin

This command uses evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted_remove1K.bin and program_imxrt1015_qspi_encrypt_sw_gp2.bd to generate the .sb file, which MFGTool can download to the external flash. After running it we get this file: boot_image_encrypt.sb

4.2.5 elftosb generates the burn-fuse .sb file

elftosb.exe -f kinetis -V -c burn_fuse.bd -o burn_fuse.sb

This command generates the .sb file for the BEE-related fuse bits; this .sb file is burned together with boot_image_encrypt.sb in MFGTool. After the fuses are burned, later app updates do not need the burn-fuse operation and can download the app directly. After running it we get this file: burn_fuse.sb

4.3 MFGTool downloading

Put the MIMXRT1015-EVK board into serial downloader mode, and connect two USB cables, from J41 and J9, to the PC. MFGTool can be found in folder: SDK_2.8.0_EVK-MIMXRT1015\middleware\mcu-boot\bin\Tools\mfgtools-rel

If burn_fuse.sb needs to be burned, modify ucl2.xml in folder:
\SDK_2.8.0_EVK-MIMXRT1015\middleware\mcu-boot\bin\Tools\mfgtools-rel\Profiles\MXRT1015\OS Firmware

Add the following list to realize it:

<LIST name="MXRT1015-beefuse_DevBoot" desc="Boot Flashloader">
<!-- Stage 1, load and execute Flashloader -->
    <CMD state="BootStrap" type="boot" body="BootStrap" file="ivt_flashloader.bin" > Loading Flashloader. </CMD>
    <CMD state="BootStrap" type="jump"  onError = "ignore"> Jumping to Flashloader. </CMD>
<!-- Stage 2, burn BEE related fuse using Flashloader -->
    <CMD state="Blhost" type="blhost" body="get-property 1" > Get Property 1. </CMD> <!--Used to test if flashloader runs successfully-->
    <CMD state="Blhost" type="blhost" body="receive-sb-file \"Profiles\\MXRT1015\\OS Firmware\\burn_fuse.sb\"" > Program Boot Image. </CMD>
    <CMD state="Blhost" type="blhost" body="reset" > Reset. </CMD> <!--Reset device-->
<!-- Stage 3, Program boot image into external memory using Flashloader -->
    <CMD state="Blhost" type="blhost" body="get-property 1" > Get Property 1. </CMD> <!--Used to test if flashloader runs successfully-->
    <CMD state="Blhost" type="blhost" timeout="15000" body="receive-sb-file \"Profiles\\MXRT1015\\OS Firmware\\boot_image_encrypt.sb\"" > Program Boot Image. </CMD>
    <CMD state="Blhost" type="blhost" body="Update Completed!">Done</CMD>
</LIST>

If the fuse bits have already been burned and only the app needs updating, use MXRT1015-DevBoot:

<LIST name="MXRT1015-DevBoot" desc="Boot Flashloader">
<!-- Stage 1, load and execute Flashloader -->
    <CMD state="BootStrap" type="boot" body="BootStrap" file="ivt_flashloader.bin" > Loading Flashloader. </CMD>
    <CMD state="BootStrap" type="jump"  onError = "ignore"> Jumping to Flashloader. </CMD>
<!-- Stage 2, Program boot image into external memory using Flashloader -->
    <CMD state="Blhost" type="blhost" body="get-property 1" > Get Property 1. </CMD> <!--Used to test if flashloader runs successfully-->
    <CMD state="Blhost" type="blhost" timeout="15000" body="receive-sb-file \"Profiles\\MXRT1015\\OS Firmware\\boot_image.sb\"" > Program Boot Image. </CMD>
    <CMD state="Blhost" type="blhost" body="Update Completed!">Done</CMD>
</LIST>

Which list is selected is determined by the name item in cfg.ini:

[profiles]
chip = MXRT1015
[platform]
board =
[LIST]
name = MXRT1015-DevBoot

Because my board already did the MCUBootUtility operation first, the fuses are burned, so on the command line I just use MXRT1015-DevBoot to download the app .sb file.
Fig 16
We can see it is burned successfully. Click the Stop button, configure the MIMXRT1015-EVK on-board SW8 as 1-ON, 2-OFF, 3-ON, 4-OFF, and reset the board; the on-board LED blinks, which means the command-line flow can also finish the BEE encryption successfully.

5 MCUXpresso Secure Provisioning BEE unsigned operation

This part uses the MCUXpresso Secure Provisioning tool to finish the BEE unsigned image downloading. A BEE unsigned image uses only BEE, without certification.

5.1 Tool downloading

The MCUXpresso Secure Provisioning download link is:
https://www.nxp.com/design/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-secure-provisioning-tool:MCUXPRESSO-SECURE-PROVISIONING
Download and install it; it is better to read the tool document first:
C:\nxp\MCUX_Provi_v2.1\MCUXpresso Secure Provisioning Tool.pdf

5.2 Operation steps

Step 1: Create the new tool workspace
File->New Workspace, select the workspace path.
Fig 17
Step 2: Chip boot related configuration
Fig 18
Here, please note that the boot type needs to be selected as XIP Encrypted (BEE User Keys) unsigned, which does not add the HAB certification function.
Step 3: USB connection
Select USB; the tool uses USB HID to connect the board in serial download mode, so the MIMXRT1015-EVK board needs the USB cable in J9 and must enter serial download mode: SW8: 1-ON, 2-OFF, 3-OFF, 4-ON.
Press the Test Connection button; the connection result is:
Fig 19
We can see the connection is OK. Because this board did the BEE operation previously, the related BEE fuses are burned, so the BEE key and the key source SW-GP2 fuse already contain data.
Step 4: Image selection
Just like the previous content, prepare one app image.
Step 5: XIP Encryption (BEE user keys) configuration
Fig 20
Here we need to select the engine: we select Engine0, the BEE engine key uses the zero key, and the key source uses SW-GP2; the user key data 0123456789abcdeffedcba9876543210 will then be written to the SW-GP2 fuse area. Because my board already did that fuse operation, it won't burn the fuses again here.
Step 6: Build image
Fig 21
After this operation, the tool generates 5 files:
1) evkmimxrt1015_iled_blinky_0x60002000.bin
2) evkmimxrt1015_iled_blinky_0x60002000_bootable.bin
3) evkmimxrt1015_iled_blinky_0x60002000_bootable_nopadding.bin
4) evkmimxrt1015_iled_blinky_0x60002000_nopadding.bin
5) evkmimxrt1015_iled_blinky_0x60002000_nopadding_ehdr0.bin
1), 2) and 3) are plaintext files. 1) and 2) are identical: they map the data from base 0; from 0x1000 it is IVT+BD+DCD, and from 0x2000 it is the app. So these files are the whole image except the FlexSPI configuration block data, which would sit at base address 0. 3) is the 2) image with the first 0x1000 bytes deleted, i.e. just IVT+BD+DCD+app.
4) and 5) are the BEE-encrypted images. 4) corresponds to 3), just BEE-encrypted. 5) is the EKIB0 and EPRDB0 data, which is placed at real address 0x60000400; it is the BEE Encrypted Key Info Block 0 and Encrypted Protection Region Descriptor Block 0 data. As we only use engine 0, there is only engine 0 data.
In fact, the whole BEE image contains: FlexSPI configuration block data + IVT + BD + DCD + APP. The FlexSPI configuration block data is plaintext, while 0x60001000 to 0x60002FFF is the encrypted image.
Step 7: Burn the encrypted image
Fig 22
Click the Write Image button to finish the BEE image programming. If you open bee_user_key0.bin, you will find it is just the user key data defined in Fig 20, which is also what gets written to the SW-GP2 fuses. Checking the log, the main process is:
Erase the image area from 0x60000000, length 0x5000.
Generate the FlexSPI configuration block data and download it to 0x60000000.
Burn evkmimxrt1015_iled_blinky_0x60002000_nopadding_ehdr0.bin to 0x60000400.
Burn evkmimxrt1015_iled_blinky_0x60002000_nopadding.bin to 0x60001000.
Set the MIMXRT1015-EVK SW8 to 1-ON, 2-OFF, 3-ON, 4-OFF, and reset or repower the board; the on-board LED blinks, which means the BEE-encrypted image already runs OK.
Please note: SW8_1 is the Encrypted XIP pin; it must be enabled. Otherwise, even if the BEE-encrypted image is downloaded to the external flash, the boot will fail, because the ROM will use a normal boot instead of the BEE encrypted boot. So SW8_1 should be ON.
The following pictures compare the read-back BEE-encrypted image with the tool-generated files:
Fig 23
Fig 24
Fig 25
Fig 26
Fig 27
If MCUBootUtility lacks the BEE tool image_enc.exe, we can also use the MCUXpresso Secure Provisioning's image_enc.exe. Copy:
C:\nxp\MCUX_Provi_v2.1\bin\tools\image_enc\win\image_enc.exe
to the MCUBootUtility folder:
NXP-MCUBootUtility-3.2.0\tools\image_enc2\win
The attachment also contains a video about this tool usage operation.
RT1050 Boundary Scan test based on Lauterbach

1. Abstract

Boundary scan is a method of testing interconnections on circuit boards or internal sub-blocks of circuits. You can also debug and observe the pin status of an integrated circuit, measure voltages or analyze sub-modules inside the IC, all based on the JTAG interface. NXP provides two good application notes: AN13507 (LPC) and AN12919 (RT). Following the application note test method, this article provides the boundary scan test results for the NXP MIMXRT1050-EVK rev A1. Lauterbach can connect to the chip and perform a boundary scan to control the external pins. A script file is also provided that realizes a one-click boundary scan connection and level control of the external pins.

2. RT1050 test details

2.1 Hardware platform

Lauterbach: LA3050
MIMXRT1050-EVK rev A1, hardware modification points as follows:
(1) Modify fuse bit 0x460[19], DAP_SJC_SWD_SEL, from 0 (SWD) to 1 (JTAG). To modify the fuse, enter serial download mode and use MCUBootUtility to connect and modify it.
Fig 1
(2) DNP R38, R323, R309, R152, R303
(3) Connect JTAG_MODE to 3.3 V: on-board TP11 connects to J24_8
(4) Fit a 100 K resistor at R35
(5) Pull the ONOFF pin up to 3.3 V with an external 100 K resistor; on the board, connect a 100 K resistor from SW2 pin 3 or pin 4 up to J24_8.
(6) Disconnect J32 and J33, which disconnects the on-board debugger, because this test needs the external Lauterbach.
(7) Connect the external Lauterbach to the JTAG interface J21; the connection picture is:
Fig 2

2.2 Software operation

Download Lauterbach's supporting software and install it. After installation, open TRACE32 ICD Arm USB. If the Lauterbach device is connected, the interface opens successfully.
Fig 3
Now you can enter the relevant commands in the yellow box in the picture above. Here you need the .bsdl file of the chip, which is usually placed on the chip's page at nxp.com. For example, the link to the RT1050 bsdl file is https://www.nxp.com/downloads/en/bsdl/RT1050.bsdl
Copy the RT1050.bsdl file to the Lauterbach installation path: C:\T32
Next, enter the following commands in the window to open the boundary scan window:
SYStem.Mode Down
BSDL.RESet
BSDL.ParkState Select-DR-Scan
BSDL.state
This opens the window:
Fig 4
Click the FILE item and input the downloaded RT1050.bsdl, then enter the command:
BSDL.SOFTRESET
Fig 5
Click check->BYPASSall, IDCODEall, SAMPLEall, and make sure all 3 methods pass. It was found that clicking IDCODEall raises the following problem:
Fig 6
It prompts that the IDCODE read is 188c301d, but the expected IDCODE is 088c301d. So which IDCODE is correct? Check the RT1050 reference manual:
Fig 7
The currently read 188c301d matches the RM and is correct. Therefore, the version field in the bsdl downloaded from the official website needs to be modified. Open the RT1050.bsdl file:
Fig 8
Modify the version on line 408 of the file from 0000 to 0001; Fig 8 is the modified result. Save, run the above commands again, and the BYPASSall, IDCODEall, SAMPLEall connection results become:
Fig 9
Fig 10
Fig 11
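The bsdl fix becomes clearer when the IDCODE is split into its standard JTAG fields (version[31:28], part number[27:12], manufacturer ID[11:1]); the read value only differs from the stock bsdl expectation in the version nibble. A small sketch to decode it:

```c
/* Decode the standard JTAG IDCODE fields of the value read from the RT1050. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t idcode = 0x188C301Du; /* value reported by IDCODEall */
    printf("version      = 0x%X\n",   (unsigned)((idcode >> 28) & 0xFu));    /* 1, stock bsdl expects 0 */
    printf("part number  = 0x%04X\n", (unsigned)((idcode >> 12) & 0xFFFFu));
    printf("manufacturer = 0x%03X\n", (unsigned)((idcode >> 1) & 0x7FFu));
    return 0;
}
```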
To test the output control you need to do:
BSDL.SET 1.: instructions->EXTEST, DR mode->Set Write, Filter data->uncheck intern
BSDL.state->Run: check SetAndRun, TwoStepDR, click the RUN button.
In the BSDL.SET 1. window you can then control the pin output status, e.g. control GPIO_AD_B1_06, which is J22_2: output level 1 is high, 0 is low.
Fig 12

2.3 Automation control command script

As can be seen from section 2.2, single-step operation requires manual typing of commands; in actual testing the efficiency is very low, so a script can implement automated command control. Below, we take the RT1050 as an example to control the level of the on-board GPIO_AD_B1_06 (J22_2) pin and use a multimeter to test the high and low levels. This way, when the TRACE32 software is opened, you only need to open the script, enter debug mode, run it to the end with one click, and watch the pin control status on the board.
The script language uses the suffix .cmm. Steps: File->New Script, enter the following script commands:

;system setup
SYStem.Mode Down
SYStem.CPU CortexM7
SYSTEM.CONFIG.DEBUGPORTTYPE JTAG
SYStem.JtagClock 1MHz
;BSDL Settings
BSDL.RESet
BSDL.ParkState Select-DR-Scan
BSDL.state
;configure boundary scan chain
BSDL.FILE RT1050.bsdl
;Check boundary scan chain
BSDL.SOFTRESET
BSDL.BYPASSall
BSDL.IDCODEall
BSDL.SAMPLEall
;Perform Sample test
BSDL.RUN
BSDL.SetAndRun ON
BSDL.TwoStepDR ON
BSDL.SET 1.
BSDL.SET 1. IR EXTEST
BSDL.RUN
BSDL.SET 1. PORT GPIO_AD_B1_06 0
BSDL.SET 1. PORT GPIO_AD_B1_06 1
BSDL.SET 1. PORT GPIO_AD_B1_06 0
WAIT 6.s
BSDL.SET 1. PORT GPIO_AD_B1_06 1
WAIT 6.s
BSDL.SET 1. PORT GPIO_AD_B1_06 0
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 1
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 0
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 1
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 0
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 1
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 0
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 1
WAIT 2.s

Function: pull the GPIO_AD_B1_06 pin high and low repeatedly, first with no delay, then with 6 s delays, then with 2 s delays. After the script is written, save it and debug it.
Fig 13
This is the video for the testing: it can be seen that automatic control of the on-board GPIO_AD_B1_06 (J22_2) pin is achieved, and there is no disconnection issue when the test delay is greater than 5 s, indicating that the BSDL automatic test is complete. If you encounter problems, be sure to check whether all the hardware modification points on the board have been completed.

At last, thanks so much to my colleagues @leilei_du and @albert_li for their endless help!
INTRODUCTION
REQUIREMENTS
INTEGRATION

1. INTRODUCTION

This document provides a step-by-step guide to migrate the webcam application explained in AN12103 "Developing a simple UVC device based on i.MX RT1050" to the EVKB-MIMXRT1050. The goal is to get the application working on rev. B silicon, using the current SDK components (v2.4.2) and MCUXpresso IDE (v10.2.1), because the original implementation from the application note uses rev. A silicon and is developed in IAR IDE.

2. REQUIREMENTS

A) Download and install MCUXpresso IDE v10.2.1.
B) Build an MCUXpresso SDK v2.4.2 for EVKB-MIMXRT1050 from the "SDK Builder web page", ensuring that CSI and USB components are included and MCUXpresso IDE is selected, and install it. For the A) and B) steps, you could refer to the following Community document: https://community.nxp.com/docs/DOC-341985
C) Download the source code related to AN12103.
D) Have the EVKB-MIMXRT1050 board, with the MT9M114 camera module.

3. INTEGRATION

a) Open MCUXpresso IDE, click the "Import SDK example" shortcut, select the "evkbimxrt1050" board and click the "Next" button.
b) Select the "driver_examples->csi->csi_rgb565" and "usb_examples->dev_video_virtual_camera_bm" examples, and click the "Finish" button.
c) Copy the "fsl_csi.h", "fsl_csi.c", "fsl_lpi2c.h" and "fsl_lpi2c.c" files from the "drivers" folder of the CSI project to the "drivers" folder of the Virtual_Camera project.
d) Copy the "pin_mux.h" and "pin_mux.c" files from the "board" folder of the CSI project to the "board->src" folder of the Virtual_Camera project, replacing the already included files.
e) Copy the "camera" folder from the AN12103 software package path below to the Virtual_Camera project: <AN12103SW\boards\evkmimxrt1050\user_apps\uvc_demo\src\camera>
Also copy the "main.c" file from the AN12103 software package to the "sources" folder of the Virtual_Camera project. Ensure the option "Copy files and folders" is selected when copying folders/files.
f) Right click the recently added "camera" folder and select "Properties". Then, in the "C/C++ Build" menu, clear the checkbox "Exclude resource from build" and click the "Apply and Close" button.
g) Right click the Virtual_Camera project and select "Properties". Then select the "C/C++ Build -> Settings -> MCU C Compiler -> Preprocessor" menu, click the "+" button to add the following value: "SDK_I2C_BASED_COMPONENT_USED=1", and click the "OK" button.
h) Now move to the "Includes" menu of the same window and click the "+" button to add the following value: "../camera". Repeat the same procedure in the "MCU Assembler -> General" menu, then click the "Apply and Close" button.
i) Refer to the "usb" folder from the AN12103 software package path below, and copy the "video_camera.h", "video_camera.c", "usb_device_descriptor.h" and "usb_device_descriptor.c" files to the "sources" folder of the Virtual_Camera project, ensuring the option "Copy files and folders" is selected and overwriting the already included files: <AN12103SW\boards\evkmimxrt1050\user_apps\uvc_demo\src\usb>
j) Select the "video_data.h", "video_data.c", "virtual_camera.h" and "virtual_camera.c" files and the "doc" folder, then right click and select "Delete". Click the "OK" button in the confirmation window to remove these resources from the Virtual_Camera project.
k) Refer to the "fsl_mt9m114.c" file in the "camera" folder of the Virtual_Camera project, and delete the "static" qualifier from the functions "MT9M114_Init", "MT9M114_Deinit", "MT9M114_Start", "MT9M114_Stop", "MT9M114_Control" and "MT9M114_InitExt".
l) Refer to the "main.c" file in the "sources" folder of the Virtual_Camera project and comment out the call to the function "BOARD_InitLPI2C1Pins". Also, refer to the "board.c" file in the "board->src" folder of the Virtual_Camera project and comment out the call to the function "SCB_EnableDCache".
m) Refer to the "camera_device.c" file in the "camera" folder of the Virtual_Camera project, comment out the line "AT_NONCACHEABLE_SECTION_ALIGN(static uint16_t s_cameraFrameBuffer[CAMERA_FRAME_BUFFER_COUNT][CAMERA_VERTICAL_POINTS * CAMERA_HORIZONTAL_POINTS + 32u], FRAME_BUFFER_ALIGN);" and add the following line:
static uint16_t __attribute__((section (".noinit.$BOARD_SDRAM"))) s_cameraFrameBuffer[CAMERA_FRAME_BUFFER_COUNT][CAMERA_VERTICAL_POINTS * CAMERA_HORIZONTAL_POINTS + 32u] __attribute__ ((aligned (FRAME_BUFFER_ALIGN)));
n) Compile and download the application into the EVKB-MIMXRT1050 board. The memory usage is shown below:
o) When running the application, if you also have the serial terminal connected, you should see the print message. Additionally, if connected to a Windows OS, you can find it as "CSI Camera Device" under the "Imaging devices" category.
p) Optionally, you could rename the Virtual_Camera project to any other desired name, with a right click on the project, selecting the "Rename" option, and finally clicking the "OK" button.
The migrated MCUXpresso IDE project including all the steps mentioned in this document is also attached.
Hope this will be useful for you. Best regards!
Some additional references:
https://community.nxp.com/thread/321587
Defining Variables at Absolute Addresses with gcc | MCU on Eclipse
[Chinese translated version] See the attachment. Original article link: https://community.nxp.com/t5/i-MX-Community-Articles/Effortless-GUI-Development-with-NXP-Microcontrollers/ba-p/1131179
The path of the SDRAM clock in the clock tree

According to the CCM clock tree in the i.MX RT1050 reference manual, we can abstract the part relevant to the SDRAM clock and draw the diagram below.

Descriptions for Diagram 1

(1) PLL2 PFD2

① Registers related to PLL2 PFD2
--- CCM_ANALOG_PLL_SYSn (page 767 in the reference manual)
Address: 0x400D_8030h
Important bits:
bit[15:14] --- select clock source.
bit[13] --- enable PLL output.
bit[0] --- controls the PLL loop divider. 0: Fout = Fref*20; 1: Fout = Fref*22.
--- CCM_ANALOG_PLL_SYS_NUM (page 768 in the reference manual)
Address: 0x400D_8050h
Important bits: bit[29:0] --- 30-bit numerator (A) of the fractional loop divider (signed integer).
--- CCM_ANALOG_PLL_SYS_DENOM (page 769 in the reference manual)
Address: 0x400D_8060h
Important bits: bit[29:0] --- 30-bit denominator (B) of the fractional loop divider (unsigned integer).
--- CCM_ANALOG_PFD_528n (page 769 in the reference manual)
Address: 0x400D_8100h
Important bits: bit[21:16] --- controls the fractional divide value. The resulting frequency is 528*18/PFD2_FRAC, where PFD2_FRAC is in the range 12-35.

② Computational formula
PLL2_PFD2_OUT = (external 24 MHz) * (loop divider + A/B) * 18 / PFD2_FRAC,
where the loop divider is 20 or 22 as selected by bit[0] above.

③ Example of the PLL2_PFD2_OUT computation
CCM_ANALOG_PLL_SYSn[0] = 1             // Fout = Fref*22
CCM_ANALOG_PLL_SYS_NUM[29:0] = 56      // A = 56
CCM_ANALOG_PLL_SYS_DENOM[29:0] = 256   // B = 256
CCM_ANALOG_PFD_528n[21:16] = 29        // PFD2_FRAC = 29
PLL2_PFD2_OUT = 24 * (22 + 56/256) * 18 / 29 = 331 MHz (330.98 MHz)

(2) Clock select
Register: CCM_CBCDR
Address: 0x400F_C014h
Important bits: SEMC_ALT_CLK_SEL, SEMC_CLK_SEL and SEMC_PODF
bit[7] --- SEMC_ALT_CLK_SEL
0: PLL2 PFD2 will be selected as the alternative clock for the SEMC root clock
1: PLL3 PFD1 will be selected as the alternative clock for the SEMC root clock
bit[6] --- SEMC_CLK_SEL
0: Periph_clk output will be used as the SEMC clock root
1: The SEMC alternative clock will be used as the SEMC clock root
bit[18:16] --- SEMC_PODF, post divider for the SEMC clock.
NOTE: Any change of this divider might involve a handshake with EMI. See the CDHIPR register for the handshake busy bits.
000 divide by 1
001 divide by 2
010 divide by 3
011 divide by 4
100 divide by 5
101 divide by 6
110 divide by 7
111 divide by 8

Example configuration of the SDRAM clock

Example: 166 MHz SDRAM clock
---- 0x400D8030 = 0x00002001 // write 0x00002001 to CCM_ANALOG_PLL_SYSn
---- 0x400D8050 = 0x00000038 // write 0x00000038 to CCM_ANALOG_PLL_SYS_NUM
---- 0x400D8060 = 0x00000100 // write 0x00000100 to CCM_ANALOG_PLL_SYS_DENOM
---- 0x400D8100 = 0x001d0000 // write 0x001d0000 to CCM_ANALOG_PFD_528n
---- 0x400FC014 = 0x00010D40 // write 0x00010D40 to CCM_CBCDR, SEMC root from PLL2 PFD2, divided by 2
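For convenience, the same 166 MHz example can be written as a short bare-metal C sequence. This is a minimal sketch using the raw addresses and values from the example above; a real project would use the SDK clock driver and wait for PLL lock and the EMI handshake, as the NOTE above requires.

```c
#include <stdint.h>

#define CCM_ANALOG_PLL_SYS        (*(volatile uint32_t *)0x400D8030u)
#define CCM_ANALOG_PLL_SYS_NUM    (*(volatile uint32_t *)0x400D8050u)
#define CCM_ANALOG_PLL_SYS_DENOM  (*(volatile uint32_t *)0x400D8060u)
#define CCM_ANALOG_PFD_528        (*(volatile uint32_t *)0x400D8100u)
#define CCM_CBCDR                 (*(volatile uint32_t *)0x400FC014u)

void sdram_clock_166mhz(void)
{
    CCM_ANALOG_PLL_SYS       = 0x00002001u; /* enable PLL2, Fout = Fref * 22 */
    CCM_ANALOG_PLL_SYS_NUM   = 0x00000038u; /* numerator A = 56 */
    CCM_ANALOG_PLL_SYS_DENOM = 0x00000100u; /* denominator B = 256 */
    CCM_ANALOG_PFD_528       = 0x001D0000u; /* PFD2_FRAC = 29 -> PLL2 PFD2 ~= 331 MHz */
    CCM_CBCDR                = 0x00010D40u; /* SEMC root = alt clock (PLL2 PFD2), divide by 2 -> ~166 MHz */
}
```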
NXP TIC team
Weidong Sun
2018-06-01
RT10xx SAI basics and SD card wave file play

1. Introduction

The audio modules on the NXP RT10xx are SAI, SPDIF, and MQS. The SAI module is a synchronous serial interface for audio data transmission. SPDIF is a stereo transceiver that can receive and send digital audio. MQS is used to convert I2S audio data from SAI3 to PWM, which can then drive external speakers; in practical usage it still needs an external amplifier circuit. Using the SAI module involves audio file playback and audio data preparation. This article is based on the MIMXRT1060-EVK board and covers the RT10xx SAI module basics, the PCM waveform format, audio file cutting and conversion tools, and creating an SAI project with the MCUXpresso IDE configuration tools to play audio data; it also provides an SD card FatFs example that reads a wave file and plays it.

2. Basic knowledge and tools

Before going into the project details and testing, here is some background on the SAI module, the wave file format, and audio conversion tools.

2.1 SAI module basics

The RT10xx SAI module supports I2S, AC97, TDM, and codec/DSP interfaces. The SAI module contains a transmitter and a receiver; the related signals are:
SAI_MCLK: master clock, used to generate the bit clock; master output, slave input.
SAI_TX_BCLK: transmit bit clock; master output, slave input.
SAI_TX_SYNC: transmit frame sync; master output, slave input; L/R channel select.
SAI_TX_DATA[4]: transmit data lines; lines 1-3 are shared with RX_DATA[1-3].
SAI_RX_BCLK: receive bit clock.
SAI_RX_SYNC: receive frame sync.
SAI_RX_DATA[4]: receive data lines.
SAI module clocks: audio master clock, bus clock, bit clock.
The SAI module frame sync has 3 modes:
1) Transmit and receive use their own BCLK and SYNC.
2) Transmit async, receive sync: both use the transmit BCLK and SYNC; enable the transmitter first and disable it last.
3) Transmit sync, receive async: both use the receive BCLK and SYNC; enable the receiver first and disable it last.
A valid frame sync is also ignored (slave mode) or not generated (master mode) for the first four bit-clock cycles after enabling the transmitter or receiver.
Pic 1
SAI module clock structure:
Pic 2
The SAI module has 3 clock sources: PLL3_PFD3, PLL5, PLL4.
In the picture above, SAI1_CLK_ROOT can be used as the MCLK, and the BCLK is:
BCLK = master clock / ((TCR2[DIV] + 1) * 2)
Sample rate = BCLK frequency / (bit width * channels)

2.2 Waveform audio file format

A WAVE file stores PCM-encoded data and uses the RIFF format. The smallest unit in a RIFF file is the CK struct; CKID is the data type and can be "RIFF", "LIST", "fmt", "data", etc. RIFF files are little-endian.
RIFF structure:

typedef unsigned long DWORD; //4B
typedef unsigned char BYTE;  //1B
typedef DWORD FOURCC;        //4B
typedef struct {
    FOURCC ckID;  //4B
    DWORD ckSize; //4B
    union {
        FOURCC fccType;      // RIFF form type 4B
        BYTE ckData[ckSize]; //ckSize*1B
    } ckData;
} RIFFCK;

Pic 3
Take a 16 kHz, 2-channel wave file as an example:
Pic 4
Yellow: CKID   Green: data length   Purple: data
The detailed analysis is as follows:
Pic 5
We can see that, excluding the wave header, the real audio data size is 1279860 bytes.
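Based on the layout analyzed above, a packed C view of the canonical 16-bit PCM wave header makes the offsets easier to follow. This is a sketch of the most common chunk order only; as this article notes, optional chunks such as LIST can sit between "fmt " and "data", so robust code should walk the chunks instead of trusting a fixed struct.

```c
#include <stdint.h>

/* Canonical 16-bit PCM WAVE header (common chunk order only, little-endian). */
typedef struct __attribute__((packed)) {
    char     riff_id[4];      /* "RIFF" */
    uint32_t riff_size;       /* file size - 8 */
    char     wave_id[4];      /* "WAVE" */
    char     fmt_id[4];       /* "fmt " */
    uint32_t fmt_size;        /* 16 for plain PCM */
    uint16_t audio_format;    /* 1 = PCM */
    uint16_t num_channels;    /* 2 in the example file */
    uint32_t sample_rate;     /* 16000 in the example file */
    uint32_t byte_rate;       /* sample_rate * num_channels * bits/8 */
    uint16_t block_align;     /* num_channels * bits/8 */
    uint16_t bits_per_sample; /* 16 */
    char     data_id[4];      /* "data" (only if no LIST chunk precedes it) */
    uint32_t data_size;       /* audio payload size, 1279860 in the example */
} wave_header_t;
```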
2.3 Audio file conversion

In practical usage, the audio file may not have the required channel count or sample rate configuration, or its format may not be wave, or it may be too long; we can use a tool to convert it to the desired format. We can use the ffmpeg tool: https://ffmpeg.org/
For details, check the ffmpeg documentation. Normally we use these commands:
Convert an mp3 file to a 16 kHz, 16-bit, 2-channel wave file:
ffmpeg -i test.mp3 -acodec pcm_s16le -ar 16000 -ac 2 test.wav
or:
ffmpeg -i test.mp3 -aq 16 -ar 16000 -ac 2 test.wav
Cut 35 s of test.wav starting from 00:00:00 and save it as test1.wav:
ffmpeg -ss 00:00:00 -i test.wav -t 35.0 -c copy test1.wav
Pic 6
Pic 7

2.4 Obtaining the wave L/R channel audio data

Just like the SDK demo code, the L/R audio data is stored directly in an RT RAM array, so we need to obtain the audio data from the wav file. We can use Python to read out the wav header, get the audio data size, and save the audio data to an array in a .h file. The related Python code is:

import sys
import wave

def wav2hex(strWav, strHex):
    with wave.open(strWav, "rb") as fWav:
        wavChannels = fWav.getnchannels()
        wavSampleWidth = fWav.getsampwidth()
        wavFrameRate = fWav.getframerate()
        wavFrameNum = fWav.getnframes()
        wavFrames = fWav.readframes(wavFrameNum)
        wavDuration = wavFrameNum / wavFrameRate
        wafFramebytes = wavFrameNum * wavChannels * wavSampleWidth
        print("Channels: {}".format(wavChannels))
        print("Sample width: {}bits".format(wavSampleWidth * 8))
        print("Sample rate: {}kHz".format(wavFrameRate/1000))
        print("Frames number: {}".format(wavFrameNum))
        print("Duration: {}s".format(wavDuration))
        print("Frames bytes: {}".format(wafFramebytes))
    with open(strHex, "w") as fHex:
        # Print WAV parameters
        fHex.write("/*\n")
        fHex.write("  Channels: {}\n".format(wavChannels))
        fHex.write("  Sample width: {}bits\n".format(wavSampleWidth * 8))
        fHex.write("  Sample rate: {}kHz\n".format(wavFrameRate/1000))
        fHex.write("  Frames number: {}\n".format(wavFrameNum))
        fHex.write("  Duration: {}s\n".format(wavDuration))
        fHex.write("  Frames bytes: {}\n".format(wafFramebytes))
        fHex.write("*/\n\n")
        # Print WAV frames
        fHex.write("uint8_t music[] = {\n")
        print("Transferring...")
        i = 0
        while wafFramebytes > 0:
            if wafFramebytes < 16:
                BytesToPrint = wafFramebytes
            else:
                BytesToPrint = 16
            fHex.write("    ")
            for j in range(0, BytesToPrint):
                if j != 0:
                    fHex.write(' ')
                fHex.write("0x{:0>2x},".format(wavFrames[i]))
                i += 1
            fHex.write("\n")
            wafFramebytes -= BytesToPrint
        fHex.write("};\n")
    print("Done!")

wav2hex(sys.argv[1], sys.argv[2])

Take music1.wav as an example:
Pic 8

2.5 Audio data relationship with the audio wave

The 16-bit data range is -32768 to 32767; the corresponding GoldWave value range is (-1~1). Use the GoldWave tool to open the example music1.wav and check the data at the 1 s position: the left channel relative value is -0.08227 and the right channel relative value is -0.2257.
Pic 9                Pic 10
Now calculate the real L/R data and find the position in music1.h.
Pic 11
From Pic 8 we know the real wave R/L data starts at line 11 of music1.h, and each line contains 16 bytes of data. So from the music1.wav relative values we can calculate the corresponding samples and compare them with the real data in the array; they are exactly the same.
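The relationship between the GoldWave normalized values and the bytes in music1.h can be checked with a couple of lines of C. This sketch converts the left-channel value at the 1 s position and prints the little-endian byte order used in the array:

```c
#include <stdint.h>
#include <stdio.h>

/* Map a GoldWave-style normalized value (-1..1) to a signed 16-bit PCM sample. */
static int16_t norm_to_pcm16(float v) { return (int16_t)(v * 32768.0f); }

int main(void)
{
    int16_t left = norm_to_pcm16(-0.08227f); /* left channel at 1 s in the example */
    printf("L sample = %d (bytes in array: 0x%02X 0x%02X)\n",
           left, (unsigned)(left & 0xFF), (unsigned)((left >> 8) & 0xFF));
    return 0;
}
```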
3. SAI MCUXpresso project creation

Based on SDK_2.9.2_EVK-MIMXRT1060, create an SAI DMA audio play project. The audio data can use the music1.h generated above.
Create a bare-metal project with:
Drivers checked: clock, common, dmamux, edma, gpio, i2c, iomuxc, lpuart, sai, sai_edma, xip_device
Utilities checked: debug_console, lpuart_adapter, serial_manager, serial_manager_uart
Board components checked: xip_board
Abstraction Layer checked: codec, codec_wm8960_adapter, lpi2c_adapter
Software Components checked: codec_i2c, lists, wm8960
After creating the project, open the Clocks tool and configure the clocks. The core and FlexSPI clocks can use the defaults; we mainly configure the SAI1 related clocks:
Pic 12
Select the SAI1 clock source as PLL4, with PLL4_MAIN_CLK configured as 786.48 MHz and the SAI1 clock as 6.144375 MHz. After the configuration, update the code.
Open the Pins tool and configure the SAI1 related pins; as the codec also needs I2C, this includes the I2C pin configuration:
Pic 13
Update the code. Open Peripherals and configure DMA, SAI, and NVIC:
Pic 14
Pic 15
The DMA configuration is as follows:
Pic 16
After configuration, generate the code. With the above configuration we have finished the SAI DMA transfer setup: SAI master mode, 16 bits, 16 kHz sample rate, 2 channels, DMA transfer; the bit clock is 512 kHz and the master clock is 6.144375 MHz.

void callback(I2S_Type *base, sai_edma_handle_t *handle, status_t status, void *userData)
{
    if (kStatus_SAI_RxError == status)
    {
    }
    else
    {
        finishIndex++;
        emptyBlock++;
        /* Judge whether the music array is completely transferred. */
        if (MUSIC_LEN / BUFFER_SIZE == finishIndex)
        {
            isFinished = true;
            finishIndex = 0;
            emptyBlock = BUFFER_NUM;
            tx_index = 0;
            cpy_index = 0;
        }
    }
}

int main(void)
{
    sai_transfer_t xfer;
    /* Init board hardware. */
    BOARD_ConfigMPU();
    BOARD_InitBootPins();
    BOARD_InitBootClocks();
    BOARD_InitBootPeripherals();
#ifndef BOARD_INIT_DEBUG_CONSOLE_PERIPHERAL
    /* Init FSL debug console. */
    BOARD_InitDebugConsole();
#endif
    PRINTF(" SAI wav module test!\n\r");
    /* Use default setting to init codec */
    if (CODEC_Init(&codecHandle, &boardCodecConfig) != kStatus_Success)
    {
        assert(false);
    }
    /* delay for codec output stable */
    DelayMS(DEMO_CODEC_INIT_DELAY_MS);
    CODEC_SetVolume(&codecHandle, 2U, 50); // set 50% volume
    EnableIRQ(DEMO_SAI_IRQ);
    SAI_TxEnableInterrupts(DEMO_SAI, kSAI_FIFOErrorInterruptEnable);
    PRINTF(" MUSIC PLAY Start!\n\r");
    while (1)
    {
        PRINTF(" MUSIC PLAY Again\n\r");
        isFinished = false;
        while (!isFinished)
        {
            if ((emptyBlock > 0U) && (cpy_index < MUSIC_LEN / BUFFER_SIZE))
            {
                /* Fill in the buffers. */
                memcpy((uint8_t *)&buffer[BUFFER_SIZE * (cpy_index % BUFFER_NUM)],
                       (uint8_t *)&music[cpy_index * BUFFER_SIZE],
                       sizeof(uint8_t) * BUFFER_SIZE);
                emptyBlock--;
                cpy_index++;
            }
            if (emptyBlock < BUFFER_NUM)
            {
                /* xfer structure */
                xfer.data     = (uint8_t *)&buffer[BUFFER_SIZE * (tx_index % BUFFER_NUM)];
                xfer.dataSize = BUFFER_SIZE;
                /* Wait for available queue. */
                if (kStatus_Success == SAI_TransferSendEDMA(DEMO_SAI, &SAI1_SAI_Tx_eDMA_Handle, &xfer))
                {
                    tx_index++;
                }
            }
        }
    }
}

4. SAI test result

To check the real L/R data output, modify the first 16 bytes of the music array to:
0x55,0xaa,0x01,0x00,0x02,0x00,0x03,0x00,0x04,0x00,0x05,0x00,0x06,0x00,0x07,0x00
Then capture the SAI_MCLK, SAI_TX_BCLK, SAI_TX_SYNC, and SAI_TXD pin waveforms and compare them with the defined data. Because the polarity is configured as active low, data is driven on the falling edge and sampled on the rising edge. The test points on the MIMXRT1060-EVK board use the codec pin positions:
Pic 17

4.1 Logic analyzer waveforms
Pic 18
The MCLK clock frequency is 6.144375 MHz, BCLK is 512 kHz, and SYNC is 16 kHz.
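These measured frequencies match the formulas from section 2.1. A quick arithmetic check (a standalone sketch, not project code):

```c
#include <stdio.h>

int main(void)
{
    const double sample_rate = 16000.0; /* Hz */
    const int    bit_width = 16, channels = 2;
    const double mclk = 6144375.0;      /* SAI1 clock root from the Clocks tool */
    const double bclk = sample_rate * bit_width * channels; /* 512000 Hz */
    /* BCLK = MCLK / ((TCR2[DIV] + 1) * 2)  =>  DIV = MCLK / (2 * BCLK) - 1 */
    printf("BCLK = %.0f Hz, SYNC = %.0f Hz, TCR2[DIV] ~= %.2f\n",
           bclk, sample_rate, mclk / (2.0 * bclk) - 1.0);
    return 0;
}
```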
Pic 19
The first frame data is 1010101001010101 0000000000000001, i.e. 0xAA55 0x0001. It is the same as the L/R data defined in the array. SYNC low is the left 16 bits, SYNC high is the right 16 bits.

4.2 Oscilloscope test waveforms

Just like the logic analyzer, the oscilloscope waveforms show the same result:
Pic 20
Add music1.h to the project and let the main code play the music array data in a loop; we can hear the music clearly after inserting a headphone into the on-board J12 or adding a speaker.

5 SAI SD card wave music play

This part adds the SD card and FatFs system to read a 16-bit, 16 kHz, 2-channel wave file from the SD card and play it in a loop.

5.1 Driver additions

The code is based on SDK_2.9.2_EVK-MIMXRT1060; starting from the previous project, add the SD card and SD FatFs drivers. The bare-metal driver situation is now:
Drivers checked: cache, clock, common, dmamux, edma, gpio, i2c, iomuxc, lpuart, sai, sai_edma, sdhc, xip_device
Utilities checked: debug_console, lpuart_adapter, serial_manager, serial_manager_uart
Middleware checked: File System->FAT File System->fatfs+sd, Memories
Board components checked: xip_board
Abstraction Layer checked: codec, codec_wm8960_adapter, lpi2c_adapter
Software Components checked: codec_i2c, lists, wm8960

5.2 WAVE header analyzer code

From the previous content we know the wav header structure. To play the wave file from the SD card, we need to analyze the wave header to get the audio format and audio data information. The header analysis code is:
uint8_t Fun_Wave_Header_Analyzer(void)
{
    char *datap;
    uint8_t ErrFlag = 0;

    datap = strstr((char *)Wav_HDBuffer, "RIFF");
    if (datap != NULL)
    {
        wav_header.chunk_size = ((uint32_t)*(Wav_HDBuffer + 4)) +
                                (((uint32_t)*(Wav_HDBuffer + 5)) << 8) +
                                (((uint32_t)*(Wav_HDBuffer + 6)) << 16) +
                                (((uint32_t)*(Wav_HDBuffer + 7)) << 24);
        movecnt += 8;
    }
    else
    {
        ErrFlag = 1;
        return ErrFlag;
    }

    datap = strstr((char *)(Wav_HDBuffer + movecnt), "WAVEfmt");
    if (datap != NULL)
    {
        movecnt += 8;
        wav_header.fmtchunk_size = ((uint32_t)*(Wav_HDBuffer + movecnt + 0)) +
                                   (((uint32_t)*(Wav_HDBuffer + movecnt + 1)) << 8) +
                                   (((uint32_t)*(Wav_HDBuffer + movecnt + 2)) << 16) +
                                   (((uint32_t)*(Wav_HDBuffer + movecnt + 3)) << 24);
        wav_header.audio_format = (uint16_t)(*(Wav_HDBuffer + movecnt + 4) +
                                             (*(Wav_HDBuffer + movecnt + 5) << 8));
        wav_header.num_channels = (uint16_t)(*(Wav_HDBuffer + movecnt + 6) +
                                             (*(Wav_HDBuffer + movecnt + 7) << 8));
        wav_header.sample_rate = ((uint32_t)*(Wav_HDBuffer + movecnt + 8)) +
                                 (((uint32_t)*(Wav_HDBuffer + movecnt + 9)) << 8) +
                                 (((uint32_t)*(Wav_HDBuffer + movecnt + 10)) << 16) +
                                 (((uint32_t)*(Wav_HDBuffer + movecnt + 11)) << 24);
        wav_header.byte_rate = ((uint32_t)*(Wav_HDBuffer + movecnt + 12)) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 13)) << 8) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 14)) << 16) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 15)) << 24);
        wav_header.block_align = (uint16_t)(*(Wav_HDBuffer + movecnt + 16) +
                                            (*(Wav_HDBuffer + movecnt + 17) << 8));
        wav_header.bps = (uint16_t)(*(Wav_HDBuffer + movecnt + 18) +
                                    (*(Wav_HDBuffer + movecnt + 19) << 8));
        movecnt += (4 + wav_header.fmtchunk_size);
    }
    else
    {
        ErrFlag = 1;
        return ErrFlag;
    }

    datap = strstr((char *)(Wav_HDBuffer + movecnt), "LIST");
    if (datap != NULL)
    {
        movecnt += 4;
        wav_header.list_size = ((uint32_t)*(Wav_HDBuffer + movecnt + 0)) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 1)) << 8) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 2)) << 16) +
                               (((uint32_t)*(Wav_HDBuffer + movecnt + 3)) << 24);
        movecnt += (4 + wav_header.list_size);
    } /* The LIST chunk is optional */

    datap = strstr((char *)(Wav_HDBuffer + movecnt), "data");
    if (datap != NULL)
    {
        movecnt += 4;
        wav_header.datachunk_size = ((uint32_t)*(Wav_HDBuffer + movecnt + 0)) +
                                    (((uint32_t)*(Wav_HDBuffer + movecnt + 1)) << 8) +
                                    (((uint32_t)*(Wav_HDBuffer + movecnt + 2)) << 16) +
                                    (((uint32_t)*(Wav_HDBuffer + movecnt + 3)) << 24);
        movecnt += 4;
        ErrFlag = 0;
    }
    else
    {
        ErrFlag = 1;
        return ErrFlag;
    }

    PRINTF("Wave audio format is %d\r\n", wav_header.audio_format);
    PRINTF("Wave audio channel number is %d\r\n", wav_header.num_channels);
    PRINTF("Wave audio sample rate is %d\r\n", wav_header.sample_rate);
    PRINTF("Wave audio byte rate is %d\r\n", wav_header.byte_rate);
    PRINTF("Wave audio block align is %d\r\n", wav_header.block_align);
    PRINTF("Wave audio bit per sample is %d\r\n", wav_header.bps);
    PRINTF("Wave audio data size is %d\r\n", wav_header.datachunk_size);
    return ErrFlag;
}

The RIFF data is mainly divided into 4 chunks: "RIFF", "fmt", "LIST" and "data". The 4 bytes following "data" give the total audio data size, which FatFs uses when reading the audio data. The code above also records the data position, so FatFs can jump to the data area directly when reading the wave file.

5.3 SD card wave data play

Define an array audioBuff[4*512], used to read out the SD card wave file and send the data to the SAI eDMA for transfer to the I2S interface, until all the data has been transmitted. The callback records each completed 512-byte block and checks whether the transmitted size has reached the whole wave audio data size (see the recap sketch at the end of this article).

5.4 SD card wave play result

Prepare a 16-bit, 16 kHz sample rate, 2-channel wave file named music.wav, put it on a FAT32-formatted SD card, insert the card into the MIMXRT1060-EVK J39, and run the code. The printed output is:

Please insert a card into the board.
Card inserted.
Make file system......The time may be long if the card capacity is big.
SAI wav module test!
MUSIC PLAY Start!
Wave audio format is 1
Wave audio channel number is 2
Wave audio sample rate is 16000
Wave audio byte rate is 64000
Wave audio block align is 4
Wave audio bit per sample is 16
Wave audio data size is 2728440
Playback is begin!
Playback is finished!

At the same time, after inserting a headphone or speaker into J12, we can hear the music.
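As a recap of the double-buffered loop described in section 5.3, the sketch below shows the read-and-queue flow. It assumes the FatFs file handle and the DEMO_SAI / SAI1_SAI_Tx_eDMA_Handle names from the generated project shown earlier; the block counting done in the eDMA callback is omitted for brevity.

```c
#include "ff.h"
#include "fsl_sai_edma.h"

#define BLOCK_SIZE 512u
static uint8_t audioBuff[4u * BLOCK_SIZE];

void play_wave_data(FIL *file, uint32_t data_size)
{
    sai_transfer_t xfer;
    UINT br;
    uint32_t sent = 0u, idx = 0u;

    while (sent < data_size)
    {
        /* Read the next 512-byte block into one of the 4 rotating buffers. */
        if ((f_read(file, &audioBuff[(idx % 4u) * BLOCK_SIZE], BLOCK_SIZE, &br) != FR_OK) || (br == 0u))
        {
            break;
        }
        xfer.data     = &audioBuff[(idx % 4u) * BLOCK_SIZE];
        xfer.dataSize = br;
        /* Queue the block; retry while the eDMA transfer queue is full. */
        while (SAI_TransferSendEDMA(DEMO_SAI, &SAI1_SAI_Tx_eDMA_Handle, &xfer) == kStatus_SAI_QueueFull)
        {
        }
        sent += br;
        idx++;
    }
}
```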
The attachment contains the MCUXpresso 10.3.0 project and the wave samples.
Source code: https://github.com/JayHeng/NXP-MCUBootUtility 【v1.3.0】 Features: > 1. Can generate .sb file by actions in efuse operation utility window >    支持生成仅含自定义efuse烧写操作(在efuse operation windows里指定)的.sb格式文件 Improvements: > 1. HAB signed mode should not appliable for FlexSPI/SEMC NOR device Non-XIP boot with RT1020/1015 ROM >    HAB签名模式在i.MXRT1020/1015下应不支持从FlexSPI NOR/SEMC NOR启动设备中Non-XIP启动 > 2. HAB encrypted mode should not appliable for FlexSPI/SEMC NOR device boot with RT1020/1015 ROM >    HAB加密模式在i.MXRT1020/1015下应不支持从FlexSPI NOR/SEMC NOR启动设备中启动 > 3. Multiple .sb files(all, flash, efuse) should be generated if there is efuse operation in all-in-one action >    当All-In-One操作中包含efuse烧写操作时,会生成3个.sb文件(全部操作、仅flash操作、仅efuse操作) > 4. Can generate .sb file without board connection when boot device type is NOR >    当启动设备是NOR型Flash时,可以不用连接板子直接生成.sb文件 > 5. Automatic image readback can be disabled to save operation time >    一键操作下的自动程序回读可以被禁掉,用以节省操作时间 > 6. The text of language option in menu bar should be static and easy understanding >    菜单栏里的语言选项标签应该是静态且易于理解的(中英双语同时显示) Bugfixes: > 1. Cannot generate bootable image when original image (hex/bin) size is larger than 64KB >    当输入的源image文件格式为hex或者bin且其大小超过64KB时,生成可启动程序会失败 > 2. Cannot download large image file (eg 6.8MB) in some case >    当输入的源image文件非常大时(比如6.8MB),下载可能会超时失败 > 3. There is language switch issue with some dynamic labels >    当切换显示语言时,有一些控件标签(如Connect按钮)不能实时更新 > 4. Some led demos of RT1050 EVKB board are invalid >    /apps目录下RT1050 EVKB板子的一些LED demo是无效的 【v1.4.0】 Features: > 1. Support for loading bootable image into uSDHC SD/eMMC boot device >    支持下载Bootable image进主动启动设备 - uSDHC接口SD/eMMC卡 > 2. Provide friendly way to view and set mixed eFuse fields >    支持更直观友好的方式去查看/设置某些混合功能的eFuse区域 Improvements: > 1. Set default FlexSPI NOR device to align with NXP EVK boards >    默认FlexSPI NOR device应与恩智浦官方EVK板卡相匹配 > 2. Enable real-time gauge for Flash Programmer actions >    为通用Flash编程器里的操作添加实时进度条显示
Overview
========
The LPUART example for FreeRTOS demonstrates how to use the LPUART driver in the RTOS with hardware flow control. The example uses two instances of the LPUART IP and sends data between them. The UART signals must be jumpered together on the board.

Toolchain supported
===================
- MCUXpresso 11.0.0

Hardware requirements
=====================
- Mini/micro USB cable
- MIMXRT1050-EVKB board
- Personal Computer

Board settings
==============
R278 and R279 must be populated, or have their pads shorted. These resistors are under the display, on the opposite side of the board from the uSD connector.
The following pins need to be jumpered together:
---------------------------------------------------------------------------------
|   |            UART3 (UARTA)            |            UART8 (UARTB)            |
|---|-------------------------------------|-------------------------------------|
| # | Signal        | Function | Jumper   | Jumper   | Function | Signal        |
|---|---------------|----------|----------|----------|----------|---------------|
| 1 | GPIO_AD_B1_07 | RX       | J22-pin1 | J23-pin1 | TX       | GPIO_AD_B1_10 |
| 2 | GPIO_AD_B1_06 | TX       | J22-pin2 | J23-pin2 | RX       | GPIO_AD_B1_11 |
| 3 | GPIO_AD_B1_04 | CTS      | J23-pin3 | J24-pin5 | RTS      | GPIO_SD_B0_03 |
| 4 | GPIO_AD_B1_05 | RTS      | J23-pin4 | J24-pin4 | CTS      | GPIO_SD_B0_02 |
---------------------------------------------------------------------------------

Prepare the Demo
================
1. Connect a USB cable between the host PC and the OpenSDA USB port on the target board.
2. Open a serial terminal with the following settings:
   - 115200 baud rate
   - 8 data bits
   - No parity
   - One stop bit
   - No flow control
3. Download the program to the target board.
4. Either press the reset button on your board or launch the debugger in your IDE to begin running the demo.

Running the demo
================
You will see the status of the example printed to the console.
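For orientation, the sketch below shows roughly how one of the two LPUART instances is brought up with the SDK's FreeRTOS wrapper. The clock value and the flow-control field names (enableRxRTS/enableTxCTS) are assumptions that vary between SDK versions, so treat this as a sketch rather than the demo's exact code.

```c
#include <stdbool.h>
#include "fsl_lpuart_freertos.h"
#include "fsl_lpuart.h"

static lpuart_rtos_handle_t handle;
static lpuart_handle_t t_handle;
static uint8_t background_buffer[32];

int uarta_init(void)
{
    lpuart_rtos_config_t config = {
        .base        = LPUART3,    /* UARTA in the table above */
        .srcclk      = 80000000u,  /* assumed UART clock root frequency */
        .baudrate    = 115200u,
        .parity      = kLPUART_ParityDisabled,
        .stopbits    = kLPUART_OneStopBit,
        .buffer      = background_buffer,
        .buffer_size = sizeof(background_buffer),
        .enableRxRTS = true,       /* hardware flow control; field names per recent SDKs */
        .enableTxCTS = true,
    };
    /* Returns kStatus_Success when the background receive is armed. */
    return LPUART_RTOS_Init(&handle, &t_handle, &config);
}
```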
Introduction

The NXP i.MX RT1xxx series provides the High Assurance Boot (HAB) feature, which gives the hardware a mechanism to ensure that the software can be trusted: HAB enables the ROM to authenticate the program image using digital signatures, assuring the application image's integrity, authenticity and non-repudiation. An OEM can use it to make their product reject any system image that is not authorized to run. But what is the trust chain HAB uses to achieve this, and how are the keys and certificates generated?

How the key and certificate are generated

In the installation directory of MCUXpresso Secure Provisioning, ~\nxp\MCUX_Provi_v3.1\bin\tools_scripts\keys, there are scripts for generating keys: hab4_pki_tree.sh and hab4_pki_tree.bat (for Linux and Windows respectively). Running either script generates, via OpenSSL, 13 public/private key pairs in sequence, which form the tree structure below.

Fig 1 Key tree structure

The public and private keys generated by OpenSSL come in pairs; keeping the private key secret and publishing the corresponding public key easily enables asymmetric cryptography. But how do we ensure that an obtained public key is correct and has not been tampered with? This requires the intervention of an authority. Just as everyone can print a resume claiming who they are, only a household register stamped by the Public Security Bureau proves that you are you. What the authority issues is called a certificate.

What's in the certificate? It contains the public key, which is the most important part; also the owner of the certificate, just like a household register with your name and ID number, indicating that the register is yours; in addition, there is the issuer of the certificate and the validity period, a bit like the issuing institution and validity years on an ID card. Someone faking a certificate issued by an authority is like having fake ID cards and fake household registers.

To generate a certificate, you initiate a certificate request and send it to an authority for certification; this authority is called a CA (Certificate Authority). After receiving the request, the authority signs the certificate. Another question arises: how can the signature be guaranteed to come from the genuine authority? It can only be made with something that exists solely in the hands of the authority: the CA's private key.

The signature algorithm works roughly like this: a hash is computed over the target information, and this process is irreversible, i.e. the original information cannot be recovered from the hash value. When the information is sent out, the hash value is encrypted with the private key and sent together with the information as the signature. The process is as follows.

Fig 2 Signature and verification process
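The flow in Fig 2 can be condensed into a few lines of C. In this sketch a simple FNV-style checksum stands in for the real hash (HAB uses SHA-256) and XOR with a key stands in for RSA, so it only illustrates the structure of sign-then-verify, not real cryptography:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint32_t toy_hash(const uint8_t *data, size_t len)
{
    uint32_t h = 0x811C9DC5u; /* FNV-1a style mixing, stands in for SHA-256 */
    for (size_t i = 0; i < len; i++) { h ^= data[i]; h *= 16777619u; }
    return h;
}

/* "Sign": encrypt the hash of the message with the private key (toy: XOR). */
static uint32_t sign(const uint8_t *msg, size_t len, uint32_t priv_key)
{
    return toy_hash(msg, len) ^ priv_key;
}

/* "Verify": decrypt the signature with the public key and compare hashes. */
static int verify(const uint8_t *msg, size_t len, uint32_t sig, uint32_t pub_key)
{
    return (sig ^ pub_key) == toy_hash(msg, len);
}

int main(void)
{
    const uint8_t image[] = "application image";
    uint32_t key = 0xA5A5A5A5u; /* toy key: with XOR, public and private are identical */
    uint32_t sig = sign(image, sizeof image, key);
    printf("verify: %s\n", verify(image, sizeof image, sig, key) ? "pass" : "fail");
    return 0;
}
```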
How can we be sure that the public key of the CA is correct? This requires a superior CA to sign the CA's public key, forming the CA's certificate. To know whether a CA's certificate is reliable, you check whether the public key in the superior CA's certificate can unlock the CA's signature. It is just like not trusting the District Public Security Bureau: you can call the Municipal Public Security Bureau and ask it to confirm the legitimacy of the District Public Security Bureau. This goes up layer by layer until the root CA makes the final endorsement. Through this layer-by-layer credit endorsement, the normal operation of the asymmetric encryption scheme is guaranteed. How does the Root CA prove itself? Here the Root CA issues another certificate (as shown below), called a Self-Signed Certificate: it signs itself with its own private key, with an attitude of "I am me, whether you believe it or not". Its format therefore differs slightly from the CA certificates above: its Issuer and Subject are the same, and its own public key is used for authentication. The certificate authentication process ends here. So, in addition to generating the public and private keys, running the script also makes OpenSSL generate the certificate chain shown below.
Fig3 certificates
Boot flow of the HAB mode
Figure 4 shows the boot flow of the HAB mode; steps 1, 2, and 3 are essentially the signature verification process.
Fig4 Boot flow of the HAB mode
The verification process (shown in Figure 2) provides data integrity, identity authentication, and non-repudiation as long as the public key is trusted, and the hab4_pki_tree.sh and hab4_pki_tree.bat scripts ensure that the generated key pairs and certificates are trusted: this is the "perfectly closed loop". However, the application image in Figure 4 is plaintext, so data confidentiality is not provided; this is why encrypted boot is always combined with HAB boot. Encrypted boot is an advanced usage of authenticated boot.
Reference
AN4581: i.MX Secure Boot on HABv4 Supported Devices
AN12681: How to use HAB secure boot in i.MX RT10xx
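To make the sign-and-verify flow of Fig2 concrete, here is a deliberately simplified, self-contained sketch. Real HAB signatures use SHA-256 and RSA/ECDSA; the toy hash and XOR "encryption" below are stand-ins chosen only so the example stays runnable, and must not be read as the actual algorithms.

#include <stdint.h>
#include <stdio.h>

/* Toy stand-ins for the real primitives: HAB uses SHA-256 and RSA/ECDSA. */
static uint32_t toy_hash(const uint8_t *msg, size_t len)
{
    uint32_t h = 0x811c9dc5u;              /* FNV-1a, illustration only */
    for (size_t i = 0; i < len; i++) { h ^= msg[i]; h *= 16777619u; }
    return h;
}
/* In RSA the private and public operations differ; XOR is symmetric and is
 * used here only to keep the example self-contained. */
static uint32_t toy_private_encrypt(uint32_t digest, uint32_t priv) { return digest ^ priv; }
static uint32_t toy_public_decrypt(uint32_t sig, uint32_t pub)      { return sig ^ pub; }

int main(void)
{
    const uint8_t image[] = "application image";
    uint32_t priv = 0x5a5a5a5au, pub = 0x5a5a5a5au; /* toy key pair */

    /* Signer side: hash the image, then "encrypt" the digest -> signature. */
    uint32_t sig = toy_private_encrypt(toy_hash(image, sizeof(image)), priv);

    /* Verifier side: recompute the hash and compare with the decrypted signature. */
    int ok = (toy_public_decrypt(sig, pub) == toy_hash(image, sizeof(image)));
    printf("image %s\n", ok ? "authenticated" : "rejected");
    return 0;
}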
One-stop secure boot tool: NXP-MCUBootUtility v1.0.0 is released
Source code: https://github.com/JayHeng/NXP-MCUBootUtility
【v1.1.0】
Feature:
  1. Support i.MXRT1015
  2. Add Language option in Menu/View and support Chinese
Improvement:
  1. USB device auto-detection can be disabled
  2. Original image can be a bootable image (with IVT & BootData / DCD)
  3. Show boot sequence page dynamically according to action
Interest:
  1. Add sound effect (Mario)
【v1.2.0】
Feature:
  1. Can generate .sb file for MfgTool and RT-Flash
  2. Can show elapsed time along with the gauge
Improvement:
  1. Non-XIP image can also be supported for the BEE encryption case
  2. Display gauge in real time
Bug:
  1. Region count cannot be set to more than 1 for the fixed OTPMK key case
  2. Option1 field is not implemented for FlexSPI NOR configuration
RT106L_S voice control system based on the Baidu cloud
1 Introduction
The NXP RT106L and RT106S are voice-recognition chips for offline local voice control. SLN-LOCAL-IOT is based on the RT106L; SLN-LOCAL2-IOT is a newer local speech recognition board based on the RT106S. The board includes the Murata 1DX Wi-Fi/BLE module, the AFE voice analog front end, the ASR recognition system, external flash, two microphones, and an analog audio amplifier with speakers. The voice recognition flow differs between SLN-LOCAL-IOT and SLN-LOCAL2-IOT, and the newer SLN-LOCAL2-IOT is recommended.
This article uses the voice control board SLN-LOCAL/2-IOT to implement the functions in the following block diagram:
Pic 1
The PC-side speech model tool (Cyberon DSMT) generates the WW (wake word) and VC (voice command) voice-engine binary files used by the demo code. The system targets Chinese word recognition: when the user says the Chinese wake word "小恩小恩" ("Xiao En, Xiao En"), it wakes up the SLN-LOCAL/2-IOT board, and the board answers "小恩来了,请吩咐" ("Xiao En is here, at your command"). The system then enters the voice recognition stage, where the user can say the commands "开红灯" (red LED on), "关红灯" (red LED off), "开绿灯" (green LED on), "关绿灯" (green LED off), "灯闪烁" (LED blink), "开远程灯" (remote LED on), "关远程灯" (remote LED off); after recognition the board answers "好的" ("OK"). The first five commands drive the local LED, while "开远程灯" and "关远程灯" control, through network communication and the Baidu cloud, the LED on an additional MIMXRT1060-EVK development board. SLN-LOCAL/2-IOT accesses the Internet through the Wi-Fi module and communicates with the Baidu cloud over the MQTT protocol: when a remote-control command is detected, it publishes a JSON packet to the Baidu cloud, while the MIMXRT1060-EVK subscribes to the Baidu cloud data, receives the message from the IoT board, parses it, and controls the EVK board LED. On the PC side, the MQTT.fx client can subscribe to the Baidu cloud data and can also send data directly to the devices for remote control.
The following sections describe in detail how to use the SLN-LOCAL/2-IOT SDK demo to implement customized Chinese wake words and voice commands, and to remote-control the MIMXRT1060-EVK through the Baidu cloud.
2 Platform establishment
2.1 Platforms and tools used
- SLN-LOCAL-IOT / SLN-LOCAL2-IOT
- MIMXRT1060-EVK
- MQTT.fx
- SDK_2_8_0_SLN-LOCAL2-IOT
- MCUXpresso IDE
- Segger J-Link
- Baidu Smart Cloud: Baidu cloud control + TTS
- Audacity: audio file format conversion tool
- WAVToCode: converts .wav files to C arrays, used for demo title playback
- MCUBootUtility: used to burn the feedback audio files to the filesystem
- Cyberon DSMT: wake word and voice command generation tool
DSMT is the key tool for wake word and voice command detection; the application flow is:
Pic 2
2.2 Baidu Smart cloud
2.2.1 Baidu cloud IoT control system
Enter the IoT Hub: https://cloud.baidu.com/product/iot.html and click "use now".
2.2.1.1 Create a device project
Create a project, select the device type, and enter the project name. Device-type projects can use shadows as cloud-side images of the devices, so you can see directly how the data changes. Once created, an endpoint is generated, along with the corresponding address:
Pic 3
2.2.1.2 Create a Thing model
The Thing model establishes the properties needed in the shadow, such as temperature, humidity, and other variables, together with their value types; these are in fact the JSON items used in the actual MQTT communication.
Click the newly created device-type project, where you can create a new Thing model or shadow:
Pic 4
Here we create 3 attributes: LEDstatus, humid, temp. They represent the LED status, humidity, and temperature, and are used for communication and control between the cloud and the RT board. Once created, you get the following picture:
Pic 5
2.2.1.3 Create a Thing shadow
In the device-type project, select the shadow, build your own shadow platform, enter the name, and select the newly created Thing model containing the three properties as the object model. After creation, we can see the details of the shadow:
Pic 6
The shadow-related address, name, and key are also generated; my test platform is as follows:
TCP Address: tcp://rndrjc9.mqtt.iot.gz.baidubce.com:1883
SSL Address: ssl://rndrjc9.mqtt.iot.gz.baidubce.com:1884
WSS Address: wss://rndrjc9.mqtt.iot.gz.baidubce.com:443
name: rndrjc9/RT1060BTCDShadow
key: y92ewvgjz23nzhgn
Port 1883 does not support transmission encryption; port 1884 supports SSL/TLS encrypted transmission; port 8884 supports websocket-style connections and also includes SSL encryption. This article uses port 1883 with no transmission encryption, for easy testing.
So far, the Baidu cloud device-type shadow has been completed, and MQTT.fx can now be used to connect and test it. In practice, customers are advised to build their own Baidu cloud connection; the above user key is for reference only.
2.2.2 Online TTS
When the SLN-LOCAL/2-IOT board recognizes wake words or command words, or powers on, it needs corresponding prompt audio, such as "百度云端语音测试demo" ("Baidu cloud voice test demo"), "小恩来啦!请吩咐" ("Xiao En is here! At your command"), and "好的" ("OK"). These phrases need text-to-wav audio synthesis; here Baidu Smart Cloud's online TTS function is used. For the specific operation, refer to the following document: https://ai.baidu.com/ai-doc/SPEECH/jk38y8gno
Once the basic audio library is enabled, use the main.py provided in the link above: add the Chinese text you want to convert to "TEXT", set the output audio file name in "save_file" (e.g. xxx.wav), and run the command: python main.py to complete the conversion and generate the audio file corresponding to the text (e.g. .mp3, .wav).
Pic 7
The resulting wav file cannot be used directly: the SLN-LOCAL/2-IOT board expects an audio source with a 48 kHz sample rate and 16-bit samples, so we use the Audacity audio tool to convert the file format to 48 kHz 16-bit wav. Import the 16 kHz 16-bit wav files generated by Baidu TTS into Audacity, select a project rate of 48 kHz, choose File->Export->Export as WAV, select "signed 16-bit PCM" encoding, and regenerate a 48 kHz 16-bit wav for use.
Pic 8
"百度云端语音测试demo": used for the power-on broadcast and demo-name broadcast; it is stored in the RT demo code, so it needs to be converted to a 16-bit C array and added to the project.
"小恩来啦!请吩咐", "好的": voice-detection feedback, saved in the filesystem ZH01 and ZH02 areas.
2.3 Playback audio data preparation and burning
There are two playback audio files, "小恩来啦!请吩咐" and "好的", saved in the filesystem ZH01 and ZH02 areas.
The filesystem memory map looks like this:
Pic 9
So we need to convert the 48 kHz 16-bit wav files to the format the filesystem requires, using the official tool Ivaldi_sln_local2_iot. Reference document: SLN-LOCAL2-IOT-DG, chapter 10.1 "Generating filesystem-compatible files".
Use bash to input the commands as in the following picture:
Pic 10
Use the convert command to get the playback bin file:
python file_format.py -if xiaoencoming_48k16bit.wav -of xiaoencoming_48k16bit.bin -ft H
It generates the files:
"小恩来啦!请吩咐" -> xiaoencoming_48k16bit.bin, burned to flash address 0x6184_0000
"好的" -> OK_48k16bit.bin, burned to flash address 0x6180_0000
Then use the MCUBootUtility tool to burn the above two files to the related addresses. Taking OK_48k16bit.bin as an example: let the board enter serial download mode (J27-0), then power off and power on. The flash chip needs to be selected as hyper flash IS26KSXXS; use the Write button in the boot device memory window to burn the .bin file to the related address, with length 0x40000.
Pic 11
Pic 12
xiaoencoming_48k16bit.bin can be downloaded to 0x6184_0000 with the same method, length 0x40000.
2.4 Demo audio preparation and adding
The prepared baiduclouddemo_48K16bit.wav ("百度云端语音测试demo") needs to be converted to a 16-bit C array and put into the project code, to be called by the code for demo-mode playback. The conversion uses WAVToCode, operated like this:
Pic 13
Add the generated baiducloulddemo_48K16bit.c to the demo project C files: sln_local_iot_local_demo->audio->demos->smart_home.c.
2.5 WW and VC preparation
Wake words are generated with the Cyberon DSMT tool, which supports a wide range of languages; customers can request the tool through the flow in Pic 2. The Chinese wake words and voice commands in this article are also generated with DSMT. A DSMT project can have multiple groups: group1 holds the wake-word configuration, with CmdMapID = 1; the other groups hold voice commands, such as CMD_IOT in this article, with CmdMapID = 2.
Pic 14
Pic 15
The wake word engine continuously scans the input audio stream using group1; on a successful wake-up, voice command detection runs using group2 (or other recognition groups, including custom groups). The wake-word configuration in DSMT is as follows:
Pic 16
More wake words can be supported; customers can add the needed ones to group 1. Use DSMT to configure the VC like this:
Pic 17
Then save the project; the files used by the code are _witMapID.bin, CMD_IOT.xml, and WW.xml. Among the generated files, CYBase.mod is the base model, WW.mod is the WW model, and CMD_IOT.mod is the VC model. After the steps of Pic 16 and 17, the WW and VC preparation is finished; put the DSMT project into the RT106S demo project folder: sln_local2_iot_local_demo\local_voice\oob_demo_zh
3 Code preparation
Based on the official SLN-LOCAL2-IOT SDK local_demo, the code in this article modifies the Chinese wake words and recognition words (you can also build a new customer-defined group directly), adds LED operations triggered by local voice detection with Chinese audio feedback, adds the demo Chinese audio, and adds the Wi-Fi MQTT communication code with the Baidu cloud shadow connection and publishing.
Source reference code SDK paths:
SDK_2_8_0_SLN-LOCAL2-IOT\boards\sln_local2_iot\sln_voice_examples\local_demo
SDK_2_8_0_SLN-LOCAL2-IOT\boards\sln_local2_iot\sln_boot_apps
The SLN-LOCAL2-IOT and SLN-LOCAL-IOT code are nearly the same; the only difference is the ASR library: the RT106S (SLN-LOCAL2-IOT) SDK uses its own libsln_asr.a library, while the RT106L (SLN-LOCAL-IOT) needs the corresponding libsln_asr_eval.a library.
Importing the code requires three projects: local_demo, bootloader, and bootstrap. The three projects are stored in different flash spaces; see SLN-LOCAL2-IOT-DG.pdf, chapter 3.3 "Device memory map".
This is the three-project boot process:
Pic 18
This document is for demo testing and requires debugging, so image verification is turned off: configure the bootloader and bootstrap projects with the macro definition DISABLE_IMAGE_VERIFICATION = 1, and use J-Link connected to SLN-LOCAL/2-IOT's SWD interface to burn the code. The following modifications are made to the app local_demo project.
3.1 sln-local/2-iot code
The following modifications are the same for both the sln-local-iot and sln-local2-iot platforms.
3.1.1 Voice recognition related code
1) Demo audio play
Play content: "百度云端语音测试demo"
The content of sln_local2_iot_local_demo_xe_ledwifi\audio\demos\smart_home.c is replaced by the previously generated baiducloulddemo_48K16bit.c. In audio_samples.h, modify:
#define SMART_HOME_DEMO_CLIP_SIZE 110733
This is used by the announce_demo API in main.c to play the clip:
        case ASR_CMD_IOT:
            ret = demo_play_clip((uint8_t *)smart_home_demo_clip, sizeof(smart_home_demo_clip));
2) Command print information
In IndexCommands.h:
#define NUMBER_OF_IOT_CMDS 7
static char *cmd_iot_en[] = {"Red led on", "Red led off", "Green led on", "Green led off",
                             "cycle led", "remote led on", "remote led off"};
static char *cmd_iot_zh[] = {"开红灯", "关红灯", "开绿灯", "关绿灯", "灯闪烁", "开远程灯", "关远程灯"};
This modification reuses the IOT group of the source code; you can also add your own speech recognition group directly and add the relevant command identification.
3) sln_local_voice.c
At line 757, add the LED-related notification information for ASR_CMD_IOT mode:
oob_demo_control.ledCmd = g_asrControl.result.keywordID[1];
This code obtains the recognized VC command data; the value of keywordID[1] is an index that tells the code exactly which command was detected, so the app can act on the value of ledCmd. The value of keywordID[1] corresponds to the Command List in Pic 17. For example, after wake-up, if "开远程灯" is recognized, keywordID[1] is 5; the value is transferred to oob_demo_control.ledCmd and used in the appTask API to perform the actual control (see the enum sketch below).
4) main.c
void appTask(void *arg)
Under case kCommandGeneric: if the language is Chinese, add the recognition-related control code. It first plays the feedback "好的", then checks the detected command value and performs the related local LED control.
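A hypothetical enum (not taken from the SDK; the SDK defines these values elsewhere) showing how the keywordID[1] indices line up with the command list above. Note that the demo code in the next section reuses the BLUE identifiers for the green-LED commands.

/* Hypothetical mapping of keywordID[1] to commands, following the order of
 * the Command List in Pic 17; the actual SDK definitions may differ. */
typedef enum {
    LED_RED_ON     = 0, /* 开红灯   */
    LED_RED_OFF    = 1, /* 关红灯   */
    LED_BLUE_ON    = 2, /* 开绿灯   */
    LED_BLUE_OFF   = 3, /* 关绿灯   */
    CYCLE_SLOW     = 4, /* 灯闪烁   */
    LED_REMOTE_ON  = 5, /* 开远程灯 */
    LED_REMOTE_OFF = 6, /* 关远程灯 */
} led_cmd_t;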
else if (oob_demo_control.language == ASR_CHINESE)
{
    // play audio "OK" in Chinese
#if defined(SLN_LOCAL2_RD)
    ret = audio_play_clip((uint8_t *)AUDIO_ZH_01_FILE_ADDR, AUDIO_ZH_01_FILE_SIZE);
#elif defined(SLN_LOCAL2_IOT)
    ret = audio_play_clip(AUDIO_ZH_01_FILE);
#endif
    // kerry: add operation code ===================================== begin
    RGB_LED_SetColor(LED_COLOR_OFF);
    if (oob_demo_control.ledCmd == LED_RED_ON)
    {
        RGB_LED_SetColor(LED_COLOR_RED);
        vTaskDelay(5000);
    }
    else if (oob_demo_control.ledCmd == LED_RED_OFF)
    {
        RGB_LED_SetColor(LED_COLOR_OFF);
        vTaskDelay(5000);
    }
    else if (oob_demo_control.ledCmd == LED_BLUE_ON)
    {
        RGB_LED_SetColor(LED_COLOR_BLUE);
        vTaskDelay(5000);
    }
    else if (oob_demo_control.ledCmd == LED_BLUE_OFF)
    {
        RGB_LED_SetColor(LED_COLOR_OFF);
        vTaskDelay(5000);
    }
    else if (oob_demo_control.ledCmd == CYCLE_SLOW)
    {
        for (int i = 0; i < 3; i++)
        {
            RGB_LED_SetColor(LED_COLOR_RED);
            vTaskDelay(400);
            RGB_LED_SetColor(LED_COLOR_OFF);
            RGB_LED_SetColor(LED_COLOR_GREEN);
            vTaskDelay(400);
            RGB_LED_SetColor(LED_COLOR_OFF);
            RGB_LED_SetColor(LED_COLOR_BLUE);
            vTaskDelay(400);
        }
    }
    …
}
In addition to local voice recognition control, this article also adds remote control, mainly through the Wi-Fi connection: the board uses the MQTT protocol to connect to the Baidu cloud server, and when local speech recognition detects a remote-control command, it publishes the corresponding control message to the Baidu cloud. The cloud then sends the message to the client that subscribes to it, and that client performs the related control according to the message content.
3.1.3 Network connection code
1) sln_local2_iot_local_demo_xe_ledwifi\lwip\src\apps\mqtt
Add mqtt.c
2) sln_local2_iot_local_demo_xe_ledwifi\lwip\src\include\lwip\apps
Add mqtt.h, mqtt_opts.h, mqtt_prv.h
The MQTT driver comes from the RT1060 SDK and is already added in the attached project.
3) sln_tcp_server.c
Add the MQTT application-layer API code: client ID, server host, MQTT server port number, user name, password, subscribe topic, publish topic and data, etc.; for more details, check the attached code.
The MQTT application code is ported from the mqtt project of the RT1060 SDK and added to sln_tcp_server.c. The TCP_OTA_Server function initializes the Wi-Fi network, establishes the Wi-Fi connection, resolves the Baidu cloud server URL to an IP address, and then connects to the Baidu cloud server through MQTT; after a successful connection it publishes a first message, so that you can check with MQTT.fx whether the power-on publish succeeded.
The TCP_OTA_Server function code is as follows:
static void TCP_OTA_Server(void *param) // kerry: add mqtt related code
{
    err_t err = ERR_OK;
    uint8_t status = kCommon_Failed;

#if USE_WIFI_CONNECTION
    /* Start the WiFi and connect to the network */
    APP_NETWORK_Init();
    while (status != kCommon_Success)
    {
        status_t statusConnect;
        statusConnect = APP_NETWORK_Wifi_Connect(true, true);
        if (WIFI_CONNECT_SUCCESS == statusConnect)
        {
            status = kCommon_Success;
        }
        else if (WIFI_CONNECT_NO_CRED == statusConnect)
        {
            APP_NETWORK_Uninit();
            /* If there are no credentials in flash, delete the TCP server task */
            vTaskDelete(NULL);
        }
        else
        {
            status = kCommon_Failed;
        }
    }
#endif
#if USE_ETHERNET_CONNECTION
    APP_NETWORK_Init(true);
#endif

    /* Wait for wifi/eth to connect */
    while (0 == get_connect_state())
    {
        /* Give time to the network task to connect */
        vTaskDelay(1000);
    }
    configPRINTF(("TCP server start\r\n"));
    configPRINTF(("MQTT connection start\r\n"));

    mqtt_client = mqtt_client_new();
    if (mqtt_client == NULL)
    {
        configPRINTF(("mqtt_client_new() failed.\r\n"));
        while (1)
        {
        }
    }
    if (ipaddr_aton(EXAMPLE_MQTT_SERVER_HOST, &mqtt_addr) && IP_IS_V4(&mqtt_addr))
    {
        /* Already an IP address */
        err = ERR_OK;
    }
    else
    {
        /* Resolve MQTT broker's host name to an IP address */
        configPRINTF(("Resolving \"%s\"...\r\n", EXAMPLE_MQTT_SERVER_HOST));
        err = netconn_gethostbyname(EXAMPLE_MQTT_SERVER_HOST, &mqtt_addr);
        configPRINTF(("Resolving status: %d.\r\n", err));
    }
    if (err == ERR_OK)
    {
        configPRINTF(("connect to mqtt\r\n"));
        /* Start connecting to MQTT broker from tcpip_thread */
        err = tcpip_callback(connect_to_mqtt, NULL);
        configPRINTF(("connect status: %d.\r\n", err));
        if (err != ERR_OK)
        {
            configPRINTF(("Failed to invoke broker connection on the tcpip_thread: %d.\r\n", err));
        }
    }
    else
    {
        configPRINTF(("Failed to obtain IP address: %d.\r\n", err));
    }

    int i = 0;
    /* Publish some messages */
    for (i = 0; i < 5;)
    {
        configPRINTF(("connect status enter: %d.\r\n", connected));
        if (connected)
        {
            err = tcpip_callback(publish_message_start, NULL);
            if (err != ERR_OK)
            {
                configPRINTF(("Failed to invoke publishing of a message on the tcpip_thread: %d.\r\n", err));
            }
            i++;
        }
        sys_msleep(1000U);
    }
    vTaskDelete(NULL);
}
Please note that the following published JSON data cannot be used in the code as-is:
{
  "reported": {
    "LEDstatus": false,
    "humid": 88,
    "temp": 22
  }
}
A site such as https://www.bejson.com/ can be used to compress the JSON and escape it:
{\"reported\" : { \"LEDstatus\" : true, \"humid\" : 88, \"temp\" : 11 } }
4) main appTask
Under case kCommandGeneric:, if the language is Chinese, add the corresponding voice recognition control code.
"开远程灯": turn on the local yellow light and publish the "remote led on" MQTT message to the Baidu cloud, switching on the remote 1060-EVK board LED.
"关远程灯": turn on the local white light and publish the "remote led off" MQTT message to the Baidu cloud, switching off the remote 1060-EVK board LED.
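The escaped form is exactly what ends up in a C string literal. For example (a hypothetical constant, not taken from the attached code):

/* Hypothetical payload constant: the quotes inside the JSON document must be
 * escaped so the compressed document fits into a single C string literal. */
static const char publish_payload[] =
    "{\"reported\":{\"LEDstatus\":true,\"humid\":88,\"temp\":11}}";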
The related operation code:
else if (oob_demo_control.ledCmd == LED_REMOTE_ON)
{
    RGB_LED_SetColor(LED_COLOR_YELLOW);
    vTaskDelay(5000);
    err_t err = ERR_OK;
    err = tcpip_callback(publish_message_on, NULL);
    if (err != ERR_OK)
    {
        configPRINTF(("Failed to invoke publishing of a message on the tcpip_thread: %d.\r\n", err));
    }
}
else if (oob_demo_control.ledCmd == LED_REMOTE_OFF)
{
    RGB_LED_SetColor(LED_COLOR_WHITE);
    vTaskDelay(5000);
    err_t err = ERR_OK;
    err = tcpip_callback(publish_message_off, NULL);
    if (err != ERR_OK)
    {
        configPRINTF(("Failed to invoke publishing of a message on the tcpip_thread: %d.\r\n", err));
    }
}
3.2 MIMXRT1060-EVK code
The main function of the MIMXRT1060-EVK code is to act as another client of the cloud: it subscribes to the message published by SLN-LOCAL/2-IOT when a remote command is detected, and drives the LED on the board, testing the voice-recognition remote-control function. This code is based on Ethernet, using the Ethernet port on the board for network communication; it connects to the Baidu cloud over MQTT and subscribes to the messages from the LOCAL2 board, which enables reception and execution of the LOCAL2 commands. The network code is similar to the SLN-LOCAL2-IOT board network code; the servers, cloud account, passwords, etc. are all the same; the main difference is that this side subscribes to messages. See the lwip_mqtt_freertos.c file in the attached RT1060 code.
When data published by the server is received, it needs to be parsed to get the LED status before the LED can be controlled. Normal data from the Baidu cloud shadow arrives as follows:
Received 253 bytes from the topic "$baidu/iot/shadow/RT1060BTCDShadow/update/accepted": "{"requestId":"2fc0ca29-63c0-4200-843f-e279e0f019d3","reported":{"LEDstatus":false,"humid":44,"temp":33},"desired":{},"lastUpdatedTime":{"reported":{"LEDstatus":1635240225296,"humid":1635240225296,"temp":1635240225296},"desired":{}},"profileVersion":159}"
The LEDstatus value (false or true) then needs to be parsed out of the received data. Because the amount of data is small, no JSON library is used here, just plain string parsing; add the following parsing code to the mqtt_incoming_data_cb function:
mqtt_rec_data.mqttindex = mqtt_rec_data.mqttindex + len;
if (mqtt_rec_data.mqttindex >= 250)
{
    PRINTF("kerry test \r\n");
    PRINTF("idex= %d", mqtt_rec_data.mqttindex);
    datap = strstr((char *)mqtt_rec_data.mqttrecdata, "LEDstatus");
    if (datap != NULL)
    {
        if (!strncmp(datap + 11, strtrue, 4)) // char strtrue[] = "true";
        {
            GPIO_PinWrite(GPIO1, 3, 1U); // pull high
            PRINTF("\r\ntrue");
        }
        else if (!strncmp(datap + 11, strfalse, 5)) // char strfalse[] = "false";
        {
            GPIO_PinWrite(GPIO1, 3, 0U); // pull low
            PRINTF("\r\nfalse");
        }
    }
    mqtt_rec_data.mqttindex = 0;
}
The code uses strstr to search for "LEDstatus" in the received data to get the pointer position, then adds a fixed offset to check whether the LED status is true or false: if true, the LED is turned on; if false, it is turned off.
4 Test Result
This section gives the test results and video of the system. Before testing the voice function, MQTT.fx is used first to verify that the Baidu cloud connection, publish, and subscribe all work; then SLN-LOCAL2-IOT combined with the MIMXRT1060-EVK is tested for voice wake-up, recognition, and remote control.
For the SLN-LOCAL2-IOT Wi-Fi hotspot join, enter the command in the print terminal:
setup AWS kerry123456
4.1 MQTT.fx tests the Baidu cloud connection
MQTT.fx is an Eclipse Paho-based MQTT client tool written in Java that supports subscribing to and publishing messages through topics.
4.1.1 MQTT.fx configuration
Download and install the tool, then open it. First do the configuration: click edit connection:
Pic 19
Profile Name: connection name
Profile type: MQTT broker
Broker address: the broker address generated by the Baidu cloud, using port 1883 with no transmission encryption
Broker port: 1883, no encryption
Client ID: RT1060BTCDShadow. Note that this name should be the same as the cloud shadow name; otherwise the connection is not detected on the Baidu web page. If the Client ID is the same as the shadow name, then when MQTT.fx connects, the web side also shows the connection as online.
User credentials: add the thing user name and password from the Baidu cloud.
After the configuration, click connect and refresh the website.
Before connection:
Pic 20
After connection:
Pic 21
4.1.2 MQTT.fx subscribe
When it comes to subscribing and publishing, what are the topics? Open your thing shadow, select "interaction", and the page shows the corresponding topics:
Pic 22
Subscribe topic: $baidu/iot/shadow/RT1060BTCDShadow/update/accepted
Publish topic: $baidu/iot/shadow/RT1060BTCDShadow/update
Pic 23
Click subscribe; it is now ready to receive data.
4.1.3 MQTT.fx publish
Publishing needs the topic: $baidu/iot/shadow/RT1060BTCDShadow/update
It also needs the content, which uses JSON data.
Pic 24
Here we can use this JSON data:
{
  "reported" : {
    "LEDstatus" : true,
    "humid" : 88,
    "temp" : 11
  }
}
The JSON data can also be checked on this website: https://www.bejson.com/jsonviewernew/
Pic 25
Input the publish data and click the publish button:
Pic 26
4.1.4 Publish data test result
Before publishing, clear the website thing data:
Pic 27
MQTT.fx publishes the data; then check the subscribed data and the website:
Pic 28
The published data can be seen both on the website and in the MQTT.fx subscribe area. The connection and data transfer test is OK.
4.2 Voice recognition and remote control test
This is the device connection picture:
Pic 29
4.2.1 Voice recognition local control
Pic 30
This is the SLN-LOCAL2-IOT print information after recognizing the voice WW and VC.
Red led on:
led cycle:
4.2.2 Voice recognition remote control
The following tests run wakeup + remote on and wakeup + remote off, with the print results and the video.
Pic 31
remote control:
How to create an RT AVB switch & endpoint platform
1. Abstract
The previous article discussed how to use one RT1170 as a talker and another RT1170 as a listener, connecting the two boards directly for AVB endpoint testing. In actual use, however, many applications are multipoint-to-multipoint, which requires an AVB switch. Therefore, building on the previous article, this article adds another listener endpoint and an AVB switch, producing an AVB platform with one talker and two listeners.
Fig 1
The AVB switch can be a third-party AVB switch product. You can also consider NXP's upcoming new product, the RT1180: this chip has an AVB/TSN switch function, and the supporting RT1180 stack has also been released.
2. Platform creation
This article uses two AVB switches for testing: one uses the NXP official MIMXRT1180-EVK as the AVB switch, the other uses the third-party MOTU AVB switch. The endpoints are three NXP MIMXRT1170-EVK boards: one configured as the talker and the other two as listeners. For configuring the RT1170 as an endpoint (talker or listener), refer to the previous article: RT1170 AVB fresh tasting
For a quick start, take the avb_app.bin provided with the stack and burn it directly to the MIMXRT1170-EVK boards for talker and listener configuration. If customized functions require source-code changes, refer to the above article to recompile, generate the avb_app.bin file, and then burn it.
2.1 Software and hardware
Hardware:
  MOTU AVB SWITCH (switch)
  MIMXRT1180-EVK*1 (switch)
  MIMXRT1170-EVK*3 (1: talker, 2: listeners); the hardware needs to be modified, refer to the previous document
Software:
  RT1170 AVB/TSN stack: genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_1: https://mcuxpresso.nxp.com/download/52643189c4d74a7b26b8e096ab28df0e
  RT1180 AVB/TSN stack: genavb_tsn-mcuxpresso-SDK_2_15_0-6_0_0: https://mcuxpresso.nxp.com/download/c584c33a8d4f55c29b5505b9be8f537a
2.2 Configure the RT1170 AVB endpoints
Burn the file from the AVB stack: genavb_tsn-mcuxpresso-SDK_2_13_0-5_6_1\binaries\genavb-avb_audio_app-evaluation-freertos_rt1176-5_6_1.tar\genavb-avb_audio_app-evaluation-freertos_rt1176-5_6_1\release\avb_app.bin to the three MIMXRT1170-EVK development boards, entering serial download mode to program:
Fig 2
The three boards are burned with the same code. After burning, let the boards enter internal boot mode and configure the talker and listeners through the serial port. After the code boots, the onboard serial port keeps printing log information; just enter INSERT on the keyboard to drop into the shell command-line state.
2.2.1 MIMXRT1170-EVK #1: talker configuration
cd ..
ls
mkdir avb_app
write avb_app/mclock_role 0
mkdir avdecc
write avdecc/btb_mode 0
mkdir fgptp
write fgptp/gmCapable 1
mkdir port0
write port0/hw_addr 00:22:33:44:55:66
2.2.2 MIMXRT1170-EVK #2 and #3: listener configuration
cd ..
ls
mkdir avb_app
write avb_app/mclock_role 1
mkdir avdecc
write avdecc/btb_mode 1
write avdecc/talker_id 0x00049f4455660000
2.3 AVB switch configuration
The following are the two switch configuration connections:
2.3.1 MOTU AVB switch
Connection block diagram using the MOTU AVB switch as the AVB switch:
Fig 3
The physical board connections are as follows:
Fig 4
The dedicated AVB switch needs no specific configuration: think of it as a switch with AVB support that forwards the AVB data. Just connect the 1G network port of the talker and the 1G network ports of the two listeners to the MOTU AVB SWITCH ports. Then, as long as the talker and listener functions are normal, the whole audio transmission works: the talker collects the microphone audio and forwards it to the two listeners for playback; each listener is connected to its own speaker.
2.3.2 RT1180 AVB switch
For configuring the RT1180 AVB switch there are two methods: quick start and self-compilation. If the source code does not change, use the bin file that comes with the stack. Note that you must select the correct bin file: the RT1180 has two cores, CM33 and CM7; the CM33 image supports the TSN/AVB bridge (switch) function, and the CM7 image supports the TSN endpoint function.
The MIMXRT1180-EVK provides multiple network ports:
Fig 5
Fig 6
When using the AVB switch ports, use the ENET0, 1, 2, and 3 ports. The connection diagram with the MIMXRT1180-EVK as the AVB switch is as follows:
Fig 7
The actual connection diagram is as follows:
Fig 8
For the RT1180 side, download the RT1180 CM33 TSN bridge code to the MIMXRT1180-EVK board. If the AVB/TSN stack source does not need modification, use the ready-made bin file for testing: genavb_tsn-mcuxpresso-SDK_2_15_0-6_0_0\binaries\genavb-tsn_app-evaluation-freertos_rt1189_cm33-6_0_0\release\tsn_app.bin
There are many ways to burn it, by tool or by command line. The tool can be MCUBootUtility or the official SEC tool. Here we choose MCUBootUtility, download link: https://github.com/JayHeng/NXP-MCUBootUtility/releases/tag/v6.2.0
If you use the SEC tool to download, refer to the stack documentation: genavb_tsn-mcuxpresso-SDK_2_15_0-6_0_0\doc\NXP_GenAVB_TSN_MCUXpresso_User_s_Guide_6_0_rev0.pdf, chapter 11 "Flash Image booting".
When using the MCUBootUtility tool, the following modification is needed in \NXP-MCUBootUtility-6.2.0\src\targets\MIMXRT1189\bltargetconfig.py.
Modify:
#flexspiNorMemBase0 = 0x38000000 # CM33 Secure
#flexspiNorMemBase0Ns = 0x28000000 # CM33 Non-Secure
To:
flexspiNorMemBase0 = 0x28000000 # CM33 Non-Secure
flexspiNorMemBase0Ns = 0x38000000 # CM33 Secure
Fig 9
Burn tsn_app.bin to the RT1180 address 0x2800b000.
Let the MIMXRT1180-EVK board enter serial download mode: SW5: 1-OFF, 2-OFF, 3-OFF, 4-ON. Then use another USB cable to connect J33 and download the code to flash. After the code is programmed, enter internal boot mode for QSPI: SW5: 1-OFF, 2-ON, 3-OFF, 4-OFF. This completes the burning of the app with the AVB switch function. Unlike the RT1170, this code does not need any shell filesystem configuration.
For the RT1180 bridge code, the switch function is built in after burning and restarting. If you need to recompile the project, refer directly to the stack documentation: NXP_GenAVB_TSN_MCUXpresso_User_s_Guide_6_0_rev0.pdf. If you compile on a Linux system, the method is the same as for the RT1170, in three steps:
(1) Patch the AVB stack for the RT1180 SDK.
(2) Add two soft links to the RT1180 AVB stack, one for the board SDK and the other for the AVB SDK source code. The structure is as follows:
Fig 10
(3) Finally, build with build_release.sh:
\genavb_tsn-mcuxpresso-SDK_2_15_0-6_0_0\genavb-apps-freertos-6_0_0.tar\genavb-apps-freertos-6_0_0\boards\evkmimxrt1180\demo_apps\avb_tsn\tsn_app\cm33\armgcc\build_release.sh
It then generates the corresponding tsn_app.bin file.
3. AVB network data packet analysis
I have always wanted to inspect the AVB network packets, so I used the following method: I found a general-purpose network switch that can mirror some of its ports to a specific port. This method is used here just to check the basic packets; in principle a general-purpose switch does not implement the AVB physical-layer functions, so it should have some impact on synchronization. Due to equipment limitations, this article only takes a basic look at the AVB packet structure.
Prepare a switch with a port-mirror function: a NETGEAR ProSAFE GS105E Plus switch. Configure the switch to mirror the data of ports 2 and 3 to port 1:
Fig 11
The whole AVB system connection diagram is then as follows:
Fig 12
The physical connection diagram is as follows:
Fig 13
Start the whole system and let it run, i.e. the talker endpoint has sound input and the amplifiers of the two listener endpoints have output. Open Wireshark on the PC and capture packets. The capture looks like this:
Fig 14
As you can see, there are many AVTP packets, with two destination addresses. To analyze AVTP packets you must first know what a standard AVTP packet looks like; the standard packet has the following structure (see also the header sketch after this section):
Fig 15
Next, open Wireshark, configure the network port to capture, and compare the captured packets:
Fig 16
The whole packet is essentially captured, but some details, such as the VLAN tag and the IEC 61883 header, are missing. This is probably because the physical layer of an ordinary switch cannot support AVB. The audio data can still be seen, and it is indeed two-channel, but the data is transmitted on only one channel. Therefore, although the RT1170 listener connects a two-channel speaker set, with the two speakers corresponding to the left and right channels, only one speaker channel has sound; this is consistent with the captured packets. The root cause is that the stack code uses one channel for microphone acquisition: although the audio is configured with two channels, only one channel actually carries data.
So far, the architecture and test of the AVB switch & endpoint platform have been realized. The test effect can be viewed in the video.
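As a reading aid for Fig 15 and Fig 16, the sketch below lists the main IEEE 1722 AVTP stream-header fields as a simplified C struct. It is illustrative only: the real wire format packs several of these fields into shared bit fields and is subtype-dependent (e.g. IEC 61883 encapsulation vs. AAF), so do not use this struct to parse real traffic.

#include <stdint.h>

/* Simplified view of an IEEE 1722 AVTP stream packet header (illustrative
 * only; the real header packs flags into bit fields). */
typedef struct {
    uint8_t  subtype;            /* e.g. 0x00 = IEC 61883/IIDC encapsulation   */
    uint8_t  sv_version_flags;   /* stream_id-valid bit, version, misc flags   */
    uint8_t  sequence_num;       /* incremented per packet, detects loss       */
    uint8_t  tu_flag;            /* timestamp-uncertain flag                   */
    uint8_t  stream_id[8];       /* 64-bit stream ID (MAC address + unique ID) */
    uint32_t avtp_timestamp;     /* gPTP-derived presentation time             */
    uint32_t gateway_info;
    uint16_t stream_data_length; /* payload length in bytes                    */
    uint16_t protocol_specific;  /* e.g. IEC 61883 tag/channel fields          */
    /* audio payload follows */
} avtp_stream_header_t;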
1 Introduction
With the rapid development of science and technology, the Internet of Things (IoT) is widely used in various areas, such as industry, agriculture, environment, transportation, logistics, security, and other infrastructure. IoT usage makes our lives more colorful and intelligent. The explosive development of the IoT cannot be separated from the cloud platform. At present, there are many cloud services on the market, such as Amazon's AWS, Microsoft's Azure, Google Cloud, China's Alibaba Cloud, Baidu Cloud, OneNET, etc.
Amazon AWS is a professional cloud computing service provided by Amazon. It offers a complete set of infrastructure and cloud solutions for customers in countries and regions around the world and is currently a cloud platform with a very large number of users. AWS IoT is a managed cloud platform that allows connected devices to easily and securely interact with cloud applications and other devices.
The NXP crossover MCU RT products come with a series of AWS sample codes. This article mainly takes the remote_control_wifi_nxp code in the official MIMXRT1060-EVK SDK as an example to realize data interaction between the AWS IoT cloud, an Android mobile APP, and the MQTT.fx client. The cloud topology of this article is as follows:
Fig.1-1
2 AWS cloud operation
2.1 Create an AWS account
Prepare a credit card, then go to the Amazon link below to create an AWS account:
https://console.aws.amazon.com/console/home
2.2 Create a Thing
Open the AWS IoT link: https://console.aws.amazon.com/iot
Choose the Things item under Manage. For first-time use, choose "Register a thing" to create the thing; otherwise, click the "Create" button in the top-right corner. Then choose "Create a single thing" to create the new thing; for more details check the following pictures.
Fig. 2-1
Fig. 2-2
Fig. 2-3
2.3 Create a certificate
Create a certificate for the newly created thing by clicking the "Create certificate" button shown in the following picture:
Fig. 2-4
After the certificate is built, the page shows the information about the created certificate, which means the certificate has been generated and can be used.
Fig. 2-5
Please note: download the certificate for this thing, the public key, and the private key. They will be used in the MQTT.fx tool configuration. Click "A root CA for AWS" under Download to download the root CA for AWS IoT; the MQTT.fx tool setting also uses it. Open the root CA download link to download the CA certificate: RSA 2048-bit key, VeriSign Class 3 Public Primary G5 root CA certificate.
Fig. 2-6
Finally, we get these files:
7abfd7a350-certificate.pem.crt
7abfd7a350-private.pem.key
7abfd7a350-public.pem.key
AmazonRootCA1.pem
Save them; they will be used later. Click the "Active" button to activate the certificate, then click the "Done" button. The policy will be attached later.
2.4 Create Policies
Back on the IoT view page: https://console.aws.amazon.com/iot/
Select Policies under Secure to create the new policy.
Fig. 2-7
Input the policy name; in the Action area fill in iot:*, in the Resource ARN area fill in *, check Allow, and click the Create button to finish the new policy creation.
Fig. 2-8
2.5 Attach relationships to the Thing
After the thing, certificate, and policy creation, attach the policy to the certificate and the certificate to the Thing.
Fig. 2-9
Choose Certificates under Secure; on the related certificate, open the "…" drop-down list, click "Attach policy", and choose the newly created policy. Then click "Attach thing" and choose the newly created thing.
Fig. 2-10
Fig. 2-11
Fig. 2-12
Now open Things under Manage and check the detailed thing-related information.
Fig. 2-13
Double-click the thing; in the Interact item we can find the REST API Endpoint. The RT code and the MQTT.fx tool use this endpoint to connect to the cloud.
Fig. 2-14
Check Security; you will find the previously created certificate, which means this thing already has the new certificate attached:
Fig. 2-15
Until now, we have finished the Thing-related configuration. It will be used for the MQTT.fx, Android APP, and RT EVK board connections and testing; we can also check the communication information through the AWS shadow web page directly.
3 Android related configuration
3.1 AWS Cognito configuration
For the Android APP to communicate with the AWS IoT cloud, the AWS side needs the Cognito service to authorize AWS IoT access and reach the device shadows. First create a new identity pool at the following link: https://console.aws.amazon.com/cognito
Fig. 3-1
Click "Manage Identity Pools"; after entering, click "Create new identity pool".
Fig. 3-2
Fig. 3-3
Fig. 3-4
Two roles are generated here:
Cognito_PoolNameAuth_Role
Cognito_PoolNameUnauth_Role
Click Allow to finish the identity pool creation.
Fig. 3-5
Please record the related identity pool ID; it will be used in the Android APP .properties configuration file.
3.2 Create policies in IAM for Cognito
Open https://console.aws.amazon.com/iam
Click the Policies item under Access management.
Fig. 3-6
Choose "Create policy" to create an IAM policy; in the policy JSON area, write the following content:
Fig. 3-7
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [ "iot:Connect" ],
      "Resource": [ "*" ]
    },
    {
      "Effect": "Allow",
      "Action": [ "iot:Publish" ],
      "Resource": [
        "arn:aws:iot:us-east-1:965396684474:topic/$aws/things/RTAWSThing/shadow/update",
        "arn:aws:iot:us-east-1:965396684474:topic/$aws/things/RTAWSThing/shadow/get"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [ "iot:Subscribe", "iot:Receive" ],
      "Resource": [ "*" ]
    }
  ]
}
Please note, in the JSON content:
"arn:aws:iot:<REGION>:<ACCOUNT ID>:topic/$aws/things/<THING NAME>/shadow/update",
"arn:aws:iot:<REGION>:<ACCOUNT ID>:topic/$aws/things/<THING NAME>/shadow/get"
REGION: the us-east-1 in Fig. 3-5.
ACCOUNT ID: can be found under My Account in the upper-right corner.
Fig 3-8
Fig 3-9
After finishing the IAM policy creation, go back to the IAM Policies page, set "Filter policies" to "Customer managed", and you can find the newly created customer policy.
Fig. 3-10
3.3 Attach the policy to the Cognito role in IAM
In IAM, choose the Roles item:
Fig. 3-11
Double-click the Cognito_PoolNameUnauth_Role generated when creating the pool in Cognito, click "Attach policies", and select the newly created policy.
Fig. 3-12
Fig. 3-13
Until now, we have finished the AWS Cognito configuration.
3.4 Android properties file configuration
Create a file with the .properties extension; the content is:
customer_specific_endpoint=<REST API ENDPOINT>
cognito_pool_id=<COGNITO POOL ID>
thing_name=<THING NAME>
region=<REGION>
Please fill in the correct content:
REST API ENDPOINT: Fig 2-14
COGNITO POOL ID: Fig 3-5
THING NAME: Fig 2-14, upper-left corner
REGION: Fig 3-5, the region part of the COGNITO POOL ID
As an example, my properties file content is:
customer_specific_endpoint=a215vehc5uw107-ats.iot.us-east-1.amazonaws.com
cognito_pool_id=us-east-1:c5ca6d11-f069-416c-81f9-fc1ec8fd8de5
thing_name=RTAWSThing
region=us-east-1
In real usage, please use your own configured data; otherwise, it will connect to my cloud endpoint.
4 MQTT.fx configuration and testing
MQTT.fx is an MQTT client tool based on Eclipse Paho and written in Java. It supports subscribing to and publishing messages through topics. You can download this tool from the following link:
http://mqttfx.jensd.de/index.php/download
The latest version is 1.7.1.
4.1 MQTT.fx configuration
Choose the connect configuration button, then enter the connection configuration page:
Fig. 4-1
Profile Name: enter the configuration name
Broker Address: the REST API ENDPOINT
Broker Port: 8883
Client ID: generate it freely
CA file: the downloaded CA certificate file
Client Certificate File: the related certificate file
Client Key File: the private key file
Check "PEM formatted". Click Apply and OK to finish the configuration.
4.2 Test the connection with the AWS cloud
To check whether the connection to the AWS cloud works, a preliminary connection test can be performed. Open the AWS page: https://console.aws.amazon.com/iot — there is a Test button on this interface, which allows testing with other clients or with itself. Both the AWS cloud test page and MQTT.fx subscribe to the topic: $aws/things/RTAWSThing/shadow/update
MQTT.fx publishes data to the topic: $aws/things/RTAWSThing/shadow/update
Both the cloud test port and the MQTT.fx subscriber can receive the data:
Fig. 4-2
Next, publishing is tested from the cloud; then both the MQTT.fx subscriber and the cloud subscriber can receive the data:
Fig. 4-3
Until now, the AWS cloud can transfer data between the AWS IoT cloud and the clients.
5 RT1060 and Wi-Fi module configuration
We mainly use the RT1060 SDK 2.8.0 remote_control_wifi_nxp as the RT test code:
SDK_2.8.0_EVK-MIMXRT1060\boards\evkmimxrt1060\aws_examples\remote_control_wifi_nxp
The test platform is: MIMXRT1060-EVK, Panasonic PAN9026 SDIO ADAPTER + SD to uSD adapter. The project uses the Panasonic PAN9026 SDIO ADAPTER by default.
5.1 Wi-Fi and AWS code configuration
The project needs a working Wi-Fi SSID and password, so prepare a working Wi-Fi access point, then add the SSID and password in aws_clientcredential.h:
#define clientcredentialWIFI_SSID       "Paste WiFi SSID here."
#define clientcredentialWIFI_PASSWORD   "Paste WiFi password here."
The AWS connection settings are also in the file aws_clientcredential.h:
#define clientcredentialMQTT_BROKER_ENDPOINT "a215vehc5uw107-ats.iot.us-east-1.amazonaws.com"
#define clientcredentialIOT_THING_NAME       "RTAWSThing"
#define clientcredentialMQTT_BROKER_PORT      8883
5.2 Certificate and key configuration
Open the following file in the SDK:
SDK_2.8.0_EVK-MIMXRT1060\rtos\freertos\tools\certificate_configuration\CertificateConfigurator.html
Fig. 5-1
Generate the new aws_clientcredential_keys.h and replace the old one.
Taking the MCUXpresso IDE project as an example, the file location is:
Fig. 5-2
Build the project and download it to the MIMXRT1060-EVK board.
6 Test result
On the Android mobile phone, download and install the APK from this folder:
SDK_2.8.0_EVK-MIMXRT1060\boards\evkmimxrt1060\aws_examples\remote_control_android\AwsRemoteControl.apk
The SDK can be downloaded from this link: Welcome | MCUXpresso SDK Builder
Then we can use the Android APP to remote-control the RT EVK onboard LED. The test results are:
6.1 APP and EVK test result
MIMXRT1060-EVK printf information:
Fig. 6-1
Turn on and turn off the LED:
Fig. 6-2    Fig. 6-3
6.2 MQTT.fx subscribe result
Turning on the LED, we can subscribe to two messages:
Fig. 6-4
Fig. 6-5
Turning off the LED, we also subscribe to two messages:
Fig. 6-6
Fig. 6-7
Of the two messages, the first is used to set the LED status; the second is the EVK reporting its LED information. MQTT.fx can also use the publish page to publish this data:
{"state":{"desired":{"LEDstate":1}}} or {"state":{"desired":{"LEDstate":0}}}
to the topic: $aws/things/RTAWSThing/shadow/update
This also turns the onboard LED on or off.
6.3 AWS cloud shadow display result
Turn on the LED:
Fig. 6-8
Turn off the LED:
Fig. 6-9
In conclusion, after the above configuration and testing, the Android mobile phone can remote-control the RT EVK onboard LED and get its information. The MQTT.fx client tool and the AWS shadow page can also be used to check the communication data.
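For reference, a minimal sketch of how a client might assemble the shadow update topic and the desired-state payload shown above. The helper and buffer sizes are illustrative, not taken from the SDK example:

#include <stdio.h>
#include <stdbool.h>

/* Illustrative helper (not from the SDK): builds the shadow update topic and
 * the desired-state JSON payload used in the tests above. */
static void build_shadow_update(const char *thing_name, bool led_on,
                                char *topic, size_t topic_len,
                                char *payload, size_t payload_len)
{
    snprintf(topic, topic_len, "$aws/things/%s/shadow/update", thing_name);
    snprintf(payload, payload_len,
             "{\"state\":{\"desired\":{\"LEDstate\":%d}}}", led_on ? 1 : 0);
}

int main(void)
{
    char topic[128], payload[64];
    build_shadow_update("RTAWSThing", true, topic, sizeof(topic), payload, sizeof(payload));
    printf("%s\n%s\n", topic, payload); /* pass these to the MQTT publish call */
    return 0;
}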
RT1170 flexSPI1 secondary QSPI flash debug flashdriver
1. Abstract
The RT1170 has two FlexSPI modules, FlexSPI1 and FlexSPI2, and each FlexSPI has a primary pin group and a secondary pin group. For the specific chip connections, refer to the following article: https://www.cnblogs.com/henjay724/p/15139381.html
The NXP-provided RT1170 programming algorithms assume booting from the FlexSPI1 primary group. In practice, however, some customers need to boot from flash on the FlexSPI1 secondary pin group and program the chip with a debugger; how do you prepare the corresponding flash algorithm, given that different debuggers use different algorithm formats? Taking RT1170 FlexSPI1 secondary pin group flash boot as an example, this article explains how to prepare the programming algorithms and debug with CMSIS-DAP and J-Link as the debugger, describes the necessary conditions for booting from the secondary port, and provides modified flash algorithms that can be used directly for debugging and programming. Here, I would like to thank the customer who provided the test platform, because the official MIMXRT1170-EVK connects its external QSPI flash to the FlexSPI1 primary interface and does not expose the secondary group.
2. Related preparation
To test the FlexSPI1 secondary pin group, first prepare a board that connects the QSPI flash to the secondary pins, then configure the FLEXSPI_PIN_GROUP_SEL fuse to 1. Since the FlexSPI1 secondary pin group is routed through the GPIO_AD port, the maximum speed is limited to 104 MHz.
Fig 1
2.1 Hardware preparation
Fig 2
2.2 Burning the FLEXSPI_PIN_GROUP_SEL fuse
The FLEXSPI_PIN_GROUP_SEL fuse address is 0x9A0[10]:
Fig 3
To burn the fuse, let the chip enter serial download mode and connect through MCUBootUtility. When connecting, select the FlexSPI1 secondary option, as follows:
Fig 4
The fuse burning result is:
Fig 5
After the fuse is burned successfully, change the board boot mode to internal boot mode; we can then program the app and boot from the flash connected to the FlexSPI1 secondary pin group.
3. Flash algorithm modification and debug test
For the RT1170 FlexSPI1 secondary pin group flash algorithms, this article focuses on MCUXpresso IDE with two debuggers: the CMSIS-DAP .cfx driver and the RT-UFL flash algorithm for J-Link. The modification principle is simple: the drivers are based on the ROM API, so the main modification point is option0 = 0xc1000005, option1 = 0x00010000 (see the sketch below).
Fig 6
3.1 CMSIS-DAP .cfx flash algorithm preparation and test
The RT1170 CMSIS-DAP algorithm source code can be found in the MCUXpresso IDE installation path:
C:\nxp\MCUXpressoIDE_11.8.0_1165\ide\Examples\Flashdrivers\NXP\iMXRT\iMXRT117x_FlexSPI_SFDP.zip
For the newer MCUXpresso IDE 11.10, use this path:
C:\NXP\MCUXpressoIDE_11.10.0_3148\ide\LinkServer\Examples\Flashdrivers\NXP\iMXRT\iMXRT117x_FlexSPI_SFDP.zip
After importing the algorithm source code, first compile the LPCXFlashDriverLib <Release_SectorHashing> configuration to get the lib that the driver needs to call.
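As an aside, the option0/option1 pair named above is what the ROM API consumes. A minimal sketch, assuming the fsl_romapi.h wrapper from the RT1170 SDK, of how a ROM-API-based flash driver could pass these option words; the instance number and error handling are illustrative:

#include "fsl_romapi.h"

#define FLEXSPI_INSTANCE 1u /* FlexSPI1 */

static flexspi_nor_config_t norConfig;

status_t flexspi1_secondary_init(void)
{
    serial_nor_config_option_t option = {
        .option0.U = 0xc1000005u, /* probe the QSPI NOR (values from the text above) */
        .option1.U = 0x00010000u, /* select the secondary pin group                  */
    };

    ROM_API_Init();
    status_t status = ROM_FLEXSPI_NorFlash_GetConfig(FLEXSPI_INSTANCE, &norConfig, &option);
    if (status != kStatus_Success)
    {
        return status;
    }
    return ROM_FLEXSPI_NorFlash_Init(FLEXSPI_INSTANCE, &norConfig);
}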
Back to the driver build: for iMXRT117x_FlexSPI_SFDP, select the MIMXRT1170_SFDP_QSPI (FlexSPI1 Port A QSPI) configuration and modify FlashConfig.h of iMXRT117x_FlexSPI_SFDP:
#define CONFIG_OPTION0 0xc1000005
#define CONFIG_OPTION1 0x00010000
Then compile the lib and compile iMXRT117x_FlexSPI_SFDP to generate the .cfx:
Fig 7
After the build, the .cfx file can be found in the project folder iMXRT117x_FlexSPI_SFDP\builds. Rename it MIMXRT1170_SFDP_QSPI1_Secondary.cfx and copy it to the IDE installation directory:
C:\nxp\MCUXpressoIDE_11.8.0_1165\ide\binaries\Flash
This is done so that the app project can later select the corresponding .cfx directly from the list.
For details on how to compile the algorithm source code to obtain a .cfx, refer to the article: https://www.cnblogs.com/henjay724/p/14190485.html
After preparing the app and the CMSIS-DAP debugger, select the compiled .cfx flash algorithm in the app project:
Fig 8
For the app, note that serialClkFreq in the FCB is configured as 100 MHz and that the LUT commands match the flash commands used. The FCB of the W25Q128 is as follows:
const flexspi_nor_config_t qspiflash_config = {
    .memConfig = {
        .tag = FLEXSPI_CFG_BLK_TAG,
        .version = FLEXSPI_CFG_BLK_VERSION,
        .readSampleClksrc = kFlexSPIReadSampleClk_LoopbackFromDqsPad,
        .csHoldTime = 3u,
        .csSetupTime = 3u,
        // Enable DDR mode, Word-addressable, Safe configuration, Differential clock
        .controllerMiscOption = 0x10,
        .deviceType = kFlexSpiDeviceType_SerialNOR,
        .sflashPadType = kSerialFlash_4Pads,
        .serialClkFreq = kFlexSpiSerialClk_100MHz, // kFlexSpiSerialClk_133MHz,
        .sflashA1Size = 16u * 1024u * 1024u,
        .lookupTable = {
            // Read LUTs
            [0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xEC, RADDR_SDR, FLEXSPI_4PAD, 0x20),
            [1] = FLEXSPI_LUT_SEQ(DUMMY_SDR, FLEXSPI_4PAD, 0x06, READ_SDR, FLEXSPI_4PAD, 0x04),
            // Read Status LUTs
            [4 * 1 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x05, READ_SDR, FLEXSPI_1PAD, 0x04),
            // Write Enable LUTs
            [4 * 3 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x06, STOP, FLEXSPI_1PAD, 0x0),
            // Erase Sector LUTs
            [4 * 5 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x21, RADDR_SDR, FLEXSPI_1PAD, 0x20),
            // Erase Block LUTs
            [4 * 8 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xD8, RADDR_SDR, FLEXSPI_1PAD, 0x18),
            // Page Program LUTs
            [4 * 9 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x12, RADDR_SDR, FLEXSPI_1PAD, 0x20),
            [4 * 9 + 1] = FLEXSPI_LUT_SEQ(WRITE_SDR, FLEXSPI_1PAD, 0x04, STOP, FLEXSPI_1PAD, 0x0),
            // Erase Chip LUTs
            [4 * 11 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x60, STOP, FLEXSPI_1PAD, 0x0),
        },
    },
    .pageSize = 256u,
    .sectorSize = 4u * 1024u,
    .ipcmdSerialClkFreq = 0x1,
    .blockSize = 64u * 1024u,
    .isUniformBlockSize = false,
};
The debug result is:
Fig 9
The modified FlexSPI1 secondary group flash algorithm is called successfully, the download succeeds, and the app runs normally.
3.2 J-Link RT-UFL flash algorithm preparation and test
Some customers prefer J-Link, but the Segger J-Link driver algorithm source code is not open, so you can use the RT-UFL algorithm, modify it to match the option words of the RT1170 FlexSPI1 secondary group, and then call it.
For information on the RT-UFL algorithm, see the following links:
https://www.cnblogs.com/henjay724/p/13951686.html
https://www.cnblogs.com/henjay724/p/14942574.html
https://www.cnblogs.com/henjay724/p/15430619.html
The RT-UFL modification points, in ufl_main.c:
case kChipId_RT116x:
case kChipId_RT117x:
    uflTargetDesc->flexspiInstance = MIMXRT117X_1st_FLEXSPI_INSTANCE;
    uflTargetDesc->flexspiBaseAddr = MIMXRT117X_1st_FLEXSPI_BASE;
    uflTargetDesc->flashBaseAddr   = MIMXRT117X_1st_FLEXSPI_AMBA_BASE;
    uflTargetDesc->configOption.option0.U = 0xc1000005;
    uflTargetDesc->configOption.option1.U = 0x00010000;
Build the code to generate MIMXRT_FLEXSPI_UV5_UFL_Flexspi1secondary_qspi.FLM and copy it to:
C:\Program Files\SEGGER\JLINKV768B\Devices\NXP\iMXRT_UFL
Please note: the SEGGER driver path is determined by your own J-Link driver version install path. In the file C:\Program Files\SEGGER\JLINKV768B\JLinkDevices.xml, add the entry that calls the new flash algorithm file:
<!------------------------>
<Device>
  <ChipInfo Vendor="NXP"
            Name="MIMXRT1170_UFL_flexspi1_2nd"
            WorkRAMAddr="0x20240000"
            WorkRAMSize="0x00040000"
            Core="JLINK_CORE_CORTEX_M7"
            JLinkScriptFile="Devices/NXP/iMXRT_UFL/iMXRT117x_CortexM7.JLinkScript"
            Aliases="MIMXRT1176xxx8_M7; MIMXRT1176xxxA_M7" />
  <FlashBankInfo Name="QSPI Flash"
                 BaseAddr="0x30000000"
                 MaxSize="0x01000000"
                 Loader="Devices/NXP/iMXRT_UFL/MIMXRT_FLEXSPI_UV5_UFL_Flexspi1secondary_qspi.FLM"
                 LoaderType="FLASH_ALGO_TYPE_OPEN" />
</Device>
<!------------------------>
In the app project debug configuration, set the J-Link device to MIMXRT1170_UFL_flexspi1_2nd.
Note: uncheck "reset before running"; otherwise execution stops in the ROM after entering debug mode.
Fig 10
The debug test result is:
Fig 11
With the modified RT-UFL, J-Link debugging also works correctly.
4. Summary
This article mainly provides the algorithm modification for the RT1170 FlexSPI1 secondary group. The algorithm modification for other flash interfaces and ports is similar; the main points are matching the fuse configuration and the algorithm option words. This article provides two modified FlexSPI1 secondary group programming algorithms:
MIMXRT1170_SFDP_QSPI1_Secondary.cfx
The RT-UFL modified MIMXRT_FLEXSPI_UV5_UFL_Flexspi1secondary_qspi.FLM
Attachments:
CMSIS-DAP: RT117X FlexSPI1 2nd flashalgo->CMSIS DAP->MIMXRT1170_SFDP_QSPI1_Secondary.cfx
J-Link RT-UFL: RT117X FlexSPI1 2nd flashalgo\JLINK RT-UFL
Copy the JLINK RT-UFL folder to the Segger J-Link install path: C:\Program Files\SEGGER\JLINKV768B
In this way, the relevant flash algorithm can be called as described above.
View full article
RT1170 Boundary Scan test based on Lauterbach   1. Abstract Boundary scan is a method of testing the interconnections on circuit boards or the internal sub-blocks of circuits. It can also be used to debug and observe the pin states of an integrated circuit, measure voltages or analyze sub-modules inside the IC, all based on the JTAG interface. NXP provides two good application notes: AN13507 (LPC) and AN12919 (RT). Based on the test method in those application notes, this article provides boundary scan test results for the NXP MIMXRT1170-EVK rev C1. Lauterbach can connect to the chip and perform a boundary scan to control the external pins. A script file is also provided; it realizes a one-click boundary scan connection and level control of external pins. 2. RT1170 test details   2.1 Hardware platform Lauterbach: LA3050 MIMXRT1170-EVK rev C1: the hardware modification is to remove the onboard resistors R187, R208, R195 and R78, so that external circuits on J6 cannot interfere with the JTAG-related pins. Disconnect J5, J6, J7, J8, that is, disconnect the onboard debugger, and connect an external Lauterbach probe to J1. The connection is as follows: Fig 1 The RT1170 directly supports both SWD and JTAG by default, so unlike the RT10XX, which needs a fuse modification to switch from SWD to JTAG, the RT1170 can use the JTAG interface directly.   2.2 Software operation Download Lauterbach's companion software and install it. After installation, open TRACE32 ICD Arm USB. If the Lauterbach device is connected, the interface opens successfully. Fig 2 At this point you can enter the relevant commands in the yellow box in the picture above. You need the chip's BSDL file, which is usually found on the chip's introduction page at nxp.com. For example, the link to the BSDL file for the RT1170 is: https://www.nxp.com/downloads/en/bsdl/i.MXRT1170_BDSL.bsdl Copy the i.MXRT1170_BDSL.bsdl file to the Lauterbach installation path: C:\T32 Next, enter the following commands in the window to open the boundary scan window and load the i.MXRT1170_BDSL.bsdl file: SYStem.Mode Down BSDL.RESet BSDL.ParkState Select-DR-Scan BSDL.state This opens the window: Fig 3 Click the FILE item, enter the downloaded i.MXRT1170_BDSL.bsdl, then in the window enter the command: BSDL.SOFTRESET Fig 4 Click check->BYPASSall, IDCODEall, SAMPLEall, and make sure all three checks pass. Fig 5 Fig 6 Fig 7 To test output control, do the following: BSDLSET 1.: instructions->EXTEST, DR mode->Set Write, Filter data->uncheck intern BSDL.state->Run: check SetAndRun, TwoStepDR, click RUN. BSDLSET 1. can then control the related pins; e.g., GPIO_AD_26 drives the onboard D34 LED: 1 = ON, 0 = OFF. Fig 8   2.3 Automation control command script As can be seen from Section 2.2, single-step operation requires manually typing commands, which is very inefficient in actual testing, so a script can be used to automate the command control. Below, taking the RT1170 as an example, we provide a script that turns the onboard D34 LED on and off. With it, once the TRACE32 software is open, you only need to open the script, enter debug mode, run it to the end with one click, and watch the onboard LED being controlled.
A script file uses the .cmm suffix. Steps: File->New Script, then enter the following script commands:

;system setup
SYStem.Mode Down
SYStem.CPU CortexM7
SYSTEM.CONFIG.DEBUGPORTTYPE JTAG
SYStem.JtagClock 1MHz

;BSDL settings
BSDL.RESet
BSDL.ParkState Select-DR-Scan
BSDL.state

;configure boundary scan chain
BSDL.FILE i.MXRT1170_BDSL.bsdl

;check boundary scan chain
BSDL.SOFTRESET
BSDL.BYPASSall
BSDL.IDCODEall
BSDL.SAMPLEall

;perform sample test
BSDL.RUN
BSDL.SetAndRun ON
BSDL.TwoStepDR ON
BSDL.SET 1.
BSDL.SET 1. IR EXTEST
BSDL.SET 1. PORT GPIO_AD_26 0
WAIT 1.s
BSDL.SET 1. PORT GPIO_AD_26 1
WAIT 1.s
BSDL.SET 1. PORT GPIO_AD_26 0
WAIT 1.s
BSDL.SET 1. PORT GPIO_AD_26 1
WAIT 1.s
BSDL.SET 1. PORT GPIO_AD_26 0
WAIT 1.s
BSDL.SET 1. PORT GPIO_AD_26 1
WAIT 1.s
BSDL.SET 1. PORT GPIO_AD_26 0
WAIT 1.s
BSDL.SET 1. PORT GPIO_AD_26 1
WAIT 1.s
BSDL.SET 1. PORT GPIO_AD_26 0
WAIT 1.s
BSDL.SET 1. PORT GPIO_AD_26 1
WAIT 1.s

The function: the LED blinks 5 times with a 1 s interval. Save the script, then run it.
Fig 9
This is the video for the testing:
It can be seen that the onboard LED D34 flashes automatically, indicating that the BSDL automated test is complete.
View full article
1. Abstract The NXP EdgeReady solution can use the RT106/RT105 S/L/A/F parts to achieve speech recognition, but the related support libraries are limited to those S/L/A/F variants. If you want to use the general-purpose RT chips, how can speech recognition be implemented? NXP provides the VIT software package in the SDK, which supports RT1060, RT1160, RT1170, RT600 and RT500 for SDK-based speech recognition. For weather information, a customer can usually connect to a third-party platform or a cloud weather API, accessing it directly with an HTTP client. Current weather API platforms let you register and then call the API directly, so the RT SDK lwIP socket client can call the corresponding weather API to obtain real-time weather forecast data for a specific location.     This article uses the MIMXRT1060-EVK to implement custom wake word (WW) and voice command (VC) recognition based on the SDK VIT library, plus an lwIP socket client to obtain real-time weather information for Shanghai and print it to the terminal. This article mainly shares the weather information by printing; for sound playback it also adds a simple method to play fixed MP3 audio data, but freely generated speech would need a real-time TTS function, which is not added here.     The system block diagram of this document is as follows:   Fig 1 System block diagram The custom VIT wake word of this system is "小恩小恩" ("Xiao En, Xiao En"); after wake-up, one of the following command words can be recognized: "开灯" ("Turn on the light"), "关灯" ("Turn off the light"), "今天天气" ("Today's weather"), "明天天气" ("Tomorrow's weather"), "后天天气" ("The day after tomorrow's weather"). Turning the light on or off controls the external red LED on the EVK board. "今天天气" gets today's weather forecast, in the following format:                     "date": "2022-05-27",                     "week": "5",                     "dayweather": "阴",                     "nightweather": "阴",                     "daytemp": "28",                     "nighttemp": "21",                     "daywind": "东南",                     "nightwind": "东南",                     "daypower": "≤3",                     "nightpower": "≤3" "明天天气" and "后天天气" return the same format, but for 1-2 days after today. To get the weather data, the MIMXRT1060-EVK board needs a network connection to reach the Gaode Map (restapi.amap.com) weather API. 2. Related preparations 2.1 Weather API platform     At present there are many third-party platforms on the Internet that provide weather data for China, such as Baidu Intelligent Cloud, Baidu Map API, Huawei Cloud, Juhe Weather, Gaode Map API, and so on. This article tried several of them and found: Baidu Intelligent Cloud allows only a small number of free calls per day and requires real-time generation of AK/SK, which is cumbersome to call; the Baidu Map API requires uploading ID card information; several others have similar restrictions. In the end, the Gaode Map API was selected for its convenient registration, generous daily call quota and relatively complete weather data.
Here we mainly cover the Gaode Map API usage; the link is: https://lbs.amap.com/api/webservice/guide/api/weatherinfo Create an account and an API key, then add the relevant parameters to call the weather API. The application for an API key looks like this: Fig 2 Gaode Map API key The following diagram shows the call volume:   Fig 3 Gaode Map API call volume This is the API calling format:   Fig 4 Weather API calling parameters So the full Gaode Map API link should look like this: https://restapi.amap.com/v3/weather/weatherInfo?key=xxxxxxx&city=xxx&extensions=all&output=JSON To check Shanghai's weather, the city code is 310000. 2.2 Postman test of the weather API     Postman is an interface testing tool. In interface testing, Postman acts as a client: it can simulate various HTTP requests initiated by users, send the request data to the server, obtain the corresponding response, and verify whether the response data matches the expected values. Postman download link: https://www.postman.com/   After choosing the weather API platform and the calling link, use Postman to do an HTTP GET and capture the weather data. Referring to Fig 4, fill the related parameters into Postman: Fig 5 Postman call of the weather API Send the GET command; the weather information can be found at position 7. The complete response is:
{
    "status": "1",
    "count": "1",
    "info": "OK",
    "infocode": "10000",
    "forecasts": [
        {
            "city": "上海市",
            "adcode": "310000",
            "province": "上海",
            "reporttime": "2022-05-27 17:34:12",
            "casts": [
                { "date": "2022-05-27", "week": "5", "dayweather": "阴", "nightweather": "阴", "daytemp": "28", "nighttemp": "21", "daywind": "东南", "nightwind": "东南", "daypower": "≤3", "nightpower": "≤3" },
                { "date": "2022-05-28", "week": "6", "dayweather": "小雨", "nightweather": "中雨", "daytemp": "24", "nighttemp": "20", "daywind": "东南", "nightwind": "东南", "daypower": "≤3", "nightpower": "≤3" },
                { "date": "2022-05-29", "week": "7", "dayweather": "大雨", "nightweather": "小雨", "daytemp": "23", "nighttemp": "20", "daywind": "南", "nightwind": "南", "daypower": "≤3", "nightpower": "≤3" },
                { "date": "2022-05-30", "week": "1", "dayweather": "小雨", "nightweather": "晴", "daytemp": "27", "nighttemp": "20", "daywind": "北", "nightwind": "北", "daypower": "≤3", "nightpower": "≤3" }
            ]
        }
    ]
}
We can see that it captures 4 consecutive days of information; with this, we can easily get the weather data we need.
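Rather than hard-coding the full request string later, the GET line can also be assembled at run time from the key and city code. The following is a minimal sketch; WEATHER_API_KEY, the buffer size and the function name are illustrative, not part of the original project:

#include <stdio.h>

#define WEATHER_API_KEY "your-key-here"   /* key from the Gaode console */

static char weather_request[256];

/* Build the HTTP GET request for a given city code, e.g. "310000". */
static int build_weather_request(const char *city_code)
{
    return snprintf(weather_request, sizeof(weather_request),
                    "GET /v3/weather/weatherInfo?key=%s&city=%s"
                    "&extensions=all&output=JSON HTTP/1.1\r\n"
                    "Host: restapi.amap.com\r\n\r\n",
                    WEATHER_API_KEY, city_code);
}

Building the string this way also avoids the HTML-escaped "&amp;" pitfall described later in section 5.1.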
From Postman we can also see the GET code, like this: Fig 6 Postman API HTTP code     With this tested API we can capture the complete weather information, so we can now add the working HTTP API call to the MIMXRT1060-EVK code.    2.3 VIT custom commands     From the maestro code in the RT1060 SDK, we know the SDK already supports the VIT library. What is VIT?     VIT stands for Voice Intelligent Technology. The library provides voice recognition services designed to wake up on and recognize specific commands, for controlling IoT devices and the smart home. Fig 7 VIT system block diagram     In the NXP RT1060 SDK code, a pre-generated wake word and command words are provided in the VIT_Model.h file. But how does a customer define their own wake word and command words in their own project? NXP provides a web page where customers can choose their own commands and then generate the corresponding VIT_Model.h file for the code to call. The VIT command word generation page is: https://vit.nxp.com/#/home     Log in with an NXP account and choose the RT chip part number, wake word and voice commands. Please note, the currently supported RT chips are: RT1060, RT1160, RT1170, RT600, RT500 The following is an example of generating a wake word and voice commands:   Fig 8 Custom VIT configuration Fig 9 Generated result Download the generated model to get VIT_Model_cn.h; open it to see the command word information and the related model data stored in the const PL_MEM_ALIGN (PL_UINT8 VIT_Model_cn[], VIT_MODEL_ALIGN_BYTES) array. The command word information is as follows: WakeWord supported : " 小恩 小恩 " Voice Commands supported     Cmd_Id : Cmd_Name       0    : UNKNOWN       1    : 开灯       2    : 关灯       3    : 今天 天气       4    : 明天 天气       5    : 后天 天气 Use the RT1060 SDK maestro_record demo to test this custom command set:   Fig 10 Custom wake word and voice command test From the test result we can see that both the wake word and the voice commands are detected. 3 Software code 3.1 LWIP socket client code to capture the weather API From chapter 2.2, we can already obtain the weather API and, through testing, successfully fetch the weather, so now we add the relevant code for our own system. The weather API acquisition is based on the RT1060 SDK lwIP code, in socket client form.
The relevant code is as follows:

#define PORT    80
#define IP_ADDR "59.82.9.133"

uint8_t get_weather[] =
    "GET /v3/weather/weatherInfo?key=xxx&city=310000&extensions=all&output=JSON HTTP/1.1\r\n"
    "Host: restapi.amap.com\r\n\r\n\r\n\r\n";

if (sys_thread_new("weather_main", weathermain_thread, NULL, HTTPD_STACKSIZE, HTTPD_PRIORITY) == NULL)
    LWIP_ASSERT("main(): Task creation failed.", 0);

static void weathermain_thread(void *arg)
{
    static struct netif netif;
    ip4_addr_t netif_ipaddr, netif_netmask, netif_gw;
    ethernetif_config_t enet_config = {
        .phyHandle  = &phyHandle,
        .macAddress = configMAC_ADDR,
    };
    LWIP_UNUSED_ARG(arg);
    mdioHandle.resource.csrClock_Hz = EXAMPLE_CLOCK_FREQ;
    IP4_ADDR(&netif_ipaddr, configIP_ADDR0, configIP_ADDR1, configIP_ADDR2, configIP_ADDR3);
    IP4_ADDR(&netif_netmask, configNET_MASK0, configNET_MASK1, configNET_MASK2, configNET_MASK3);
    IP4_ADDR(&netif_gw, configGW_ADDR0, configGW_ADDR1, configGW_ADDR2, configGW_ADDR3);
    tcpip_init(NULL, NULL);
    netifapi_netif_add(&netif, &netif_ipaddr, &netif_netmask, &netif_gw, &enet_config, EXAMPLE_NETIF_INIT_FN, tcpip_input);
    netifapi_netif_set_default(&netif);
    netifapi_netif_set_up(&netif);
    PRINTF("\r\n************************************************\r\n");
    PRINTF(" TCP client example\r\n");
    PRINTF("************************************************\r\n");
    PRINTF(" IPv4 Address : %u.%u.%u.%u\r\n", ((u8_t *)&netif_ipaddr)[0], ((u8_t *)&netif_ipaddr)[1], ((u8_t *)&netif_ipaddr)[2], ((u8_t *)&netif_ipaddr)[3]);
    PRINTF(" IPv4 Subnet mask : %u.%u.%u.%u\r\n", ((u8_t *)&netif_netmask)[0], ((u8_t *)&netif_netmask)[1], ((u8_t *)&netif_netmask)[2], ((u8_t *)&netif_netmask)[3]);
    PRINTF(" IPv4 Gateway : %u.%u.%u.%u\r\n", ((u8_t *)&netif_gw)[0], ((u8_t *)&netif_gw)[1], ((u8_t *)&netif_gw)[2], ((u8_t *)&netif_gw)[3]);
    PRINTF("************************************************\r\n");
    sys_thread_new("weather", weather_thread, NULL, DEFAULT_THREAD_STACKSIZE, DEFAULT_THREAD_PRIO);
    vTaskDelete(NULL);
}

static void weather_thread(void *arg)
{
    int sock = -1, rece;
    struct sockaddr_in client_addr;
    char *host_ip;
    ip4_addr_t dns_ip;
    err_t err;
    uint32_t *pSDRAM = pvPortMalloc(BUF_LEN);

    host_ip = HOST_NAME;
    PRINTF("host name : %s , host_ip : %s\r\n", HOST_NAME, host_ip);
    while (1)
    {
        sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0)
        {
            PRINTF("Socket error\n");
            vTaskDelay(10);
            continue;
        }
        client_addr.sin_family      = AF_INET;
        client_addr.sin_port        = htons(PORT);
        client_addr.sin_addr.s_addr = inet_addr(host_ip);
        memset(&(client_addr.sin_zero), 0, sizeof(client_addr.sin_zero));
        if (connect(sock, (struct sockaddr *)&client_addr, sizeof(struct sockaddr)) == -1)
        {
            PRINTF("Connect failed!\n");
            closesocket(sock);
            vTaskDelay(10);
            continue;
        }
        PRINTF("Connect to server successful!\r\n");
        write(sock, get_weather, sizeof(get_weather));
        while (1)
        {
            rece = recv(sock, (uint8_t *)pSDRAM, BUF_LEN, 0); // BUF_LEN
            if (rece <= 0)
                break;
            memcpy(weather_data.weather_info, pSDRAM, 1500); // max 1457
        }
        Weather_process();
        memset(pSDRAM, 0, BUF_LEN);
        closesocket(sock);
        vTaskDelay(10000);
    }
}
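One fragile point in the code above is the hard-coded IP_ADDR: the address behind restapi.amap.com can change. If LWIP_DNS is enabled in lwipopts.h and a DNS server is configured (e.g. via DHCP), the host can be resolved at run time instead. A minimal sketch, with the helper name being an assumption:

#include "lwip/netdb.h"    /* lwIP getaddrinfo()/freeaddrinfo() */
#include "lwip/sockets.h"

/* Resolve host:80 to an IPv4 socket address; returns 0 on success. */
static int resolve_host(const char *host, struct sockaddr_in *out)
{
    struct addrinfo hints = {0};
    struct addrinfo *res  = NULL;

    hints.ai_family   = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, "80", &hints, &res) != 0 || res == NULL)
        return -1;                                  /* DNS lookup failed */
    *out = *(struct sockaddr_in *)res->ai_addr;     /* take the first A record */
    freeaddrinfo(res);
    return 0;
}

weather_thread could then call resolve_host(HOST_NAME, &client_addr) before connect() instead of using inet_addr(host_ip).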
3.2 VIT custom command detection code
Put the generated VIT_Model_cn.h into the maestro_record folder path: vit\RT1060_CortexM7\Lib
The wake word and voice command handling code can be viewed in vit_pro.c, mainly involving the function: int VIT_Execute(void *arg, void *inputBuffer, int size)
The code is modified as follows, mainly to record the wake word and command word numbers for specific function control. The commands handled directly here are the local "开灯" ("turn on the light") and "关灯" ("turn off the light"); the weather commands need to call the socket client API, so they are handled in the main lwIP call area according to the recognized command word number:

if (VIT_DetectionResults == VIT_WW_DETECTED)
{
    PRINTF(" - WakeWord detected \r\n");
    weather_data.ww_flag = 1; // kerry
}
else if (VIT_DetectionResults == VIT_VC_DETECTED)
{
    // Retrieve id of the Voice Command detected
    // String of the Command can also be retrieved (when WW and CMDs strings are integrated in Model)
    VIT_Status = VIT_GetVoiceCommandFound(VITHandle, &VoiceCommand);
    if (VIT_Status != VIT_SUCCESS)
    {
        PRINTF("VIT_GetVoiceCommandFound error: %d\r\n", VIT_Status);
        return VIT_Status; // will stop processing VIT and go directly to MEM free
    }
    else
    {
        PRINTF(" - Voice Command detected %d", VoiceCommand.Cmd_Id);
        weather_data.vc_index = VoiceCommand.Cmd_Id; // kerry 1:ledon 2:ledoff 3:today weather 4:tomorrow weather 5:aftertomorrow weather
        if (weather_data.vc_index == 1)
        {
            GPIO_PinWrite(GPIO1, 3, 1U); // pull high
            PRINTF(" led on!\r\n");
        }
        else if (weather_data.vc_index == 2)
        {
            GPIO_PinWrite(GPIO1, 3, 0U); // pull low
            PRINTF(" led off!\r\n");
        }
        // Retrieve CMD Name: OPTIONAL
        // Check first if CMD string is present
        if (VoiceCommand.pCmd_Name != PL_NULL)
        {
            PRINTF(" %s\r\n", VoiceCommand.pCmd_Name);
        }
        else
        {
            PRINTF("\r\n");
        }
    }
}

3.3 Voice-triggered weather acquisition
In the weather_thread while loop, check the wake word and voice command flags; if the requirement is met, create the socket connection, send the API request and capture the weather data. The related code is:

while (1)
{
    // add the command check: only when the command is a weather request, call the API
    if (weather_data.ww_flag == 1)
    {
        if (weather_data.vc_index >= 3)
        {
            // create connection
            // write API request and read the response
            Weather_process();
        }
        memset(weather_data.weather_info, 0, sizeof(weather_data.weather_info));
        weather_data.ww_flag  = 0;
        weather_data.vc_index = 0;
    }
    vTaskDelay(10000);
}

void Weather_process(void)
{
    char *datap, *datap1;
    datap = strstr((char *)weather_data.weather_info, "date");
    if (datap != NULL)
    {
        memcpy(today_weather, datap, 184); // max 1457
        if (weather_data.vc_index == 3)
        {
            PRINTF("\r\n*******************today weather***********************************\n\r");
            PRINTF("%s\r\n", today_weather);
            return;
        }
    }
    else
        return;
    datap1 = strstr(datap + 4, "date");
    if (datap1 != NULL)
    {
        memcpy(tomorr_weather, datap1, 184); // max 1457
        if (weather_data.vc_index == 4)
        {
            PRINTF("\r\n*******************tomorrow weather*******************************\n\r");
            PRINTF("%s\r\n", tomorr_weather);
            return;
        }
    }
    else
        return;
    datap = strstr(datap1 + 4, "date");
    if (datap != NULL)
    {
        memcpy(aftertom_weather, datap, 184); // max 1457
        if (weather_data.vc_index == 5)
        {
            PRINTF("\r\n*******************after tomorrow weather**************************\n\r");
            PRINTF("%s\r\n", aftertom_weather);
        }
    }
    else
        return;
}

The Weather_process function uses the recognized command number to extract the related date's weather and print it.
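Weather_process above copies a fixed 184 bytes per forecast record, which breaks if the JSON field lengths change. A slightly more robust sketch locates the next "date" key to find the record boundary; the helper name is illustrative, and the buffer is assumed to be NUL-terminated:

#include <string.h>

/* Copy one forecast record, ending where the next "date" key starts. */
static void copy_forecast(const char *start, char *dst, size_t dstlen)
{
    const char *end = strstr(start + 4, "date");   /* next record, if any */
    size_t n = end ? (size_t)(end - start) : strlen(start);
    if (n >= dstlen)
        n = dstlen - 1;
    memcpy(dst, start, n);
    dst[n] = '\0';
}

Each strstr/memcpy pair in Weather_process could then be replaced by one copy_forecast call.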
4 Test result
The test result video is attached. The log output is shown in Figure 11. After testing, the wake word and voice commands are successfully recognized; command numbers 3, 4 and 5 trigger weather acquisition, which successfully calls the lwIP socket client API, obtains the weather information and prints it.
Fig 11 system test print result
evkmimxrt1060_maestro_weather_backup.zip is the project without sound playback; the weather information is printed to the terminal!
5 Issues encountered and solutions
5.1 lwIP failed to get the weather
When first creating the code, the HTTP code provided by Postman was used:
GET /v3/weather/weatherInfo?key=8f777fc7d867908eebbad7f96a13af10&amp; city=310000&amp; extensions=all&amp; output=JSON HTTP/1.1 Host: restapi.amap.com
and added to the socket API function:
uint8_t get_weather[]= "GET /v3/weather/weatherInfo?key=xxx&amp;city=310000&amp;extensions=all&amp;output=JSON HTTP/1.1\r\nHost: restapi.amap.com\r\n\r\n\r\n\r\n";
The test result is:
Fig 12 socket weather API return issue
We can see the server connection is OK and the HTTP response returns data, but it reports a parameter error: the string pasted from the web page contains HTML-escaped "&amp;" (and stray spaces) instead of plain "&" separators. After checking, the Postman C code was used instead:
uint8_t get_weather[]= "GET /v3/weather/weatherInfo?key=xxx&city=310000&extensions=all&output=JSON HTTP/1.1\r\nHost: restapi.amap.com\r\n\r\n\r\n\r\n";
With this, the weather data is captured, matching the Postman test result.
5.2 Merged VIT + LWIP project runs out of memory
After combining the maestro_record and lwIP socket code, compilation reports a DTCM memory overflow:
Fig 13 memory overflow
After optimization it still overflows DTCM, so in the end the FlexRAM was reconfigured as: OCRAM 192K, DTCM 256K, ITCM 64K.
Compile it, and the memory overflow disappears:
Fig 14 FlexRAM reconfiguration
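For reference, the FlexRAM split can also be selected at startup through the IOMUXC GPR registers (the MCUXpresso IDE can generate this from the project memory settings). The sketch below is a hedged example for the 192K OCRAM / 256K DTCM / 64K ITCM split above: each of the 16 RT1060 FlexRAM banks is 32 KB and takes a 2-bit code in GPR17 (01 = OCRAM, 10 = DTCM, 11 = ITCM). The bank map value assumes banks 0-5 are OCRAM, 6-13 DTCM and 14-15 ITCM, so verify it against your linker file, and run it before any of the remapped RAM is used:

#include "fsl_iomuxc.h"   /* pulls in the IOMUXC_GPR definitions */

static void flexram_reconfigure(void)
{
    /* 0xF (2x ITCM) AAAA (8x DTCM) 555 (6x OCRAM) - illustrative value */
    IOMUXC_GPR->GPR17 = 0xFAAAA555u;
    /* Switch the bank configuration source from the fuses to GPR17 */
    IOMUXC_GPR->GPR16 |= IOMUXC_GPR_GPR16_FLEXRAM_BANK_CFG_SEL_MASK;
}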
5.3 Printing Chinese characters in Tera Term
Using Tera Term directly, when the weather API returns Chinese characters the printout is garbled; after the following configuration, Chinese prints correctly:
Setup -> Terminal
Locale: american -> chinese
Codepage: 65001 -> 936
Fig 15 Tera Term Chinese word print
In summary, after all the data collection and problem solving, the MIMXRT1060-EVK board combined with the official SDK completes the function of custom VIT voice commands obtaining real-time weather, with local control. So even with the ordinary RT series, not only the S/L/A/F parts, you can use VIT to implement speech recognition functions.
6 Add the sound broadcast
This chapter mainly gives the method for adding sound broadcast using MP3 audio data stored in memory. For real-time weather data this playback is not fully flexible: it would need to check the weather data and assemble the matching MP3 clips from an audio data library, since it does not use an online TTS method.
So here we just share one example of sound broadcast, e.g.:
WW: "小恩小恩" -> "小恩来了,请吩咐!" ("Xiao En is here, at your service!")
VC: "今天天气" -> "温度32.1度" ("The temperature is 32.1 degrees")
The VC playback is fixed for now. To play real data, generate an MP3 voice data library, then assemble the correct weather MP3 data array according to the returned weather information and play it. This is a little complicated, but not difficult, so here one fixed sound is used as an example.
6.1 MP3 playback audio data preparation
For audio broadcasting, the Chinese text needs to be converted into MP3 files; online speech synthesis software can be used. Here the Baidu online speech synthesis function is used; see chapter 2.2.2 (online speech synthesis) of the previous article:
https://community.nxp.com/t5/i-MX-RT-Knowledge-Base/RT106L-S-voice-control-system-based-on-the-Baidu-cloud/ta-p/1363295
If the Baidu-generated MP3 file is converted to a C array directly, the first audio playback has issues, so Audacity is used to re-export the MP3 file with this configuration:
Fig 16 Audacity convert configuration
After regenerating the MP3, use xxd.exe to convert the MP3 file into a C array, then put it into RT-accessible memory or external flash. xxd.exe can be found at the following link:
https://github.com/baldram/ESP_VS1053_Library/issues/18
The convert command looks like this:
xxd -i your-sound.mp3 ready-to-use-header.c
Convert the xiaoencoming.mp3 and temptest.mp3 files into C arrays, then save the data as xiaoencoming.h and temptest.h. Here, take xiaoencoming as an example:

#define XIAOEN_MP3_SIZE 6847
unsigned char xiaoencoming_mp3[XIAOEN_MP3_SIZE] = {
    0x49, 0x44, 0x33, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x21, 0x54, 0x58,
    …
    0x55, 0x55, 0x55, 0x55, 0x55, 0x55, 0x55
};
unsigned int xiaoencoming1_mp3_len = XIAOEN_MP3_SIZE; // 6847

The playback audio data is now ready. Copy xiaoencoming.h and temptest.h to the project path: evkmimxrt1060_maestro_weather_mp3\source
6.2 Play the MP3 data from memory
Here is the related code.
6.2.1 app_streamer.c added code

#include "xiaoencoming.h"
#include "temptest.h"

void *voice_inBuf  = NULL;
void *voice_outBuf = NULL;

status_t STREAMER_file_Create(streamer_handle_t *handle, char *filename, int eap_par)
{
    STREAMER_CREATE_PARAM params;
    OsaThreadAttr thread_attr;
    int ret;
    ELEMENT_PROPERTY_T prop;
    MEMSRC_SET_BUFFER_T inBufInfo  = {0};
    SET_BUFFER_DESC_T   outBufInfo = {0};

    PRINTF("Kerry test begin!\r\n");
    /* Note: the original code compared the strings with '==', which only
       compares pointers in C; strcmp() is required here. */
    if (strcmp(filename, "temptest.mp3") == 0)
        inBufInfo = (MEMSRC_SET_BUFFER_T){.location = (int8_t *)temptest_mp3, .size = TEMPtest_MP3_SIZE};
    else if (strcmp(filename, "xiaoencoming.mp3") == 0)
        inBufInfo = (MEMSRC_SET_BUFFER_T){.location = (int8_t *)xiaoencoming_mp3, .size = XIAOEN_MP3_SIZE};

    /* Create message process thread */
    osa_thread_attr_init(&thread_attr);
    osa_thread_attr_set_name(&thread_attr, STREAMER_MESSAGE_TASK_NAME);
    osa_thread_attr_set_stack_size(&thread_attr, STREAMER_MESSAGE_TASK_STACK_SIZE);
    ret = osa_thread_create(&msg_thread, &thread_attr, STREAMER_MessageTask, (void *)handle);
    osa_thread_attr_destroy(&thread_attr);
    if (ERRCODE_NO_ERROR != ret)
    {
        return kStatus_Fail;
    }

    /* Create streamer */
    strcpy(params.out_mq_name, APP_STREAMER_MSG_QUEUE);
    params.stack_size    = STREAMER_TASK_STACK_SIZE;
    params.pipeline_type = STREAM_PIPELINE_MEM;
    params.task_name     = STREAMER_TASK_NAME;
    params.in_dev_name   = "buffer";
    params.out_dev_name  = "speaker";
    handle->streamer = streamer_create(&params);
    if (!handle->streamer)
    {
        return kStatus_Fail;
    }

    prop.prop = PROP_DECODER_DECODER_TYPE;
    prop.val  = (uintptr_t)DECODER_TYPE_MP3;
    ret = streamer_set_property(handle->streamer, prop, true);
    if (ret != STREAM_OK)
    {
        streamer_destroy(handle->streamer);
        handle->streamer = NULL;
        return kStatus_Fail;
    }

    prop.prop = PROP_MEMSRC_SET_BUFF;
    prop.val  = (uintptr_t)&inBufInfo;
    ret = streamer_set_property(handle->streamer, prop, true);
    if (ret != STREAM_OK)
    {
        streamer_destroy(handle->streamer);
        handle->streamer = NULL;
        return kStatus_Fail;
    }

    handle->audioPlaying = false;

error:
    PRINTF("End STREAMER_file_Create\r\n");
    PRINTF("Kerry test end!\r\n");
    return kStatus_Success;
}

This code creates the message thread, creates a streamer, defines it as playing from memory, sets the decoder property to MP3, and points the memory source at the MP3 array in memory that matches the passed file name.
6.2.2 cmd.c added code

void play_file(char *filename, int eap_par)
{
    STREAMER_Init();
    int ret = STREAMER_file_Create(&streamerHandle, filename, eap_par);
    if (ret != kStatus_Success)
    {
        PRINTF("STREAMER_file_Create failed\r\n");
        goto file_error;
    }
    STREAMER_Start(&streamerHandle);
    PRINTF("Starting playback\r\n");
    file_playing = true;
    while (streamerHandle.audioPlaying)
    {
        osa_time_delay(100);
    }
    file_playing = false;

file_error:
    PRINTF("[play_file] Cleanup\r\n");
    STREAMER_Destroy(&streamerHandle);
    osa_time_delay(100);
}

play_file calls the STREAMER_file_Create API function, starts playback, waits for it to finish, then releases the streamer.
The shellRecMIC API function adds the VIT recognition flags, which are used to play the feedback audio file:

static shell_status_t shellRecMIC(shell_handle_t shellHandle, int32_t argc, char **argv)
{
    …
    // kerry
    PRINTF("Kerry MP3 stream data test!\r\n");
    PRINTF("---weather_data.ww_flag =%d--\r\n ", weather_data.ww_flag);
    PRINTF("---weather_data.vc_inde =%d--\r\n ", weather_data.vc_index);
    PRINTF("---weather_data.mp3_flag =%d--\r\n ", weather_data.mp3_flag);
    if (weather_data.ww_flag == 1)
    {
        play_file("xiaoencoming.mp3", 0);
    }
    if (weather_data.vc_index == 3)
    {
        play_file("temptest.mp3", 0);
    }
    if (weather_data.mp3_flag != 0)
    {
        weather_data.ww_flag  = 0;
        weather_data.vc_index = 0;
    }
    weather_data.mp3_flag = 0;
    /* Delay for cleanup */
    osa_time_delay(100);
    return kStatus_SHELL_Success;
}

If the wake word "小恩小恩" is detected, the feedback audio "小恩来了请吩咐" ("Xiao En is here, at your service") is played. If the voice command "今天天气" ("Today's weather") is detected, the feedback audio "温度32.1度" ("The temperature is 32.1 degrees") is played. Please note, this playback is just an example with fixed audio. You could also create an audio word library and, according to the received weather information, combine the related word clips and play them back; this is a little complicated, but not difficult. For fully free-form audio, a real-time online TTS method can also be considered.
6.2.3 VIT WW and VC flags in the VIT_Execute function

int VIT_Execute(void *arg, void *inputBuffer, int size)
{
    …
    if (VIT_DetectionResults == VIT_WW_DETECTED)
    {
        PRINTF(" - WakeWord detected \r\n");
        weather_data.ww_flag  = 1; // kerry
        weather_data.mp3_flag = 1;
    }
    else if (VIT_DetectionResults == VIT_VC_DETECTED)
    {
        // Retrieve id of the Voice Command detected
        // String of the Command can also be retrieved (when WW and CMDs strings are integrated in Model)
        VIT_Status = VIT_GetVoiceCommandFound(VITHandle, &VoiceCommand);
        if (VIT_Status != VIT_SUCCESS)
        {
            PRINTF("VIT_GetVoiceCommandFound error: %d\r\n", VIT_Status);
            return VIT_Status; // will stop processing VIT and go directly to MEM free
        }
        else
        {
            PRINTF(" - Voice Command detected %d", VoiceCommand.Cmd_Id);
            weather_data.vc_index = VoiceCommand.Cmd_Id; // kerry 1:ledon 2:ledoff 3:today weather 4:tomorrow weather 5:aftertomorrow weather
            weather_data.mp3_flag = 2;
            if (weather_data.vc_index == 1)
            {
                GPIO_PinWrite(GPIO1, 3, 1U); // pull high
                PRINTF(" led on!\r\n");
            }
            else if (weather_data.vc_index == 2)
            {
                GPIO_PinWrite(GPIO1, 3, 0U); // pull low
                PRINTF(" led off!\r\n");
            }
            // Retrieve CMD Name: OPTIONAL
            // Check first if CMD string is present
            if (VoiceCommand.pCmd_Name != PL_NULL)
            {
                PRINTF(" %s\r\n", VoiceCommand.pCmd_Name);
            }
            else
            {
                PRINTF("\r\n");
            }
        }
    }
    return VIT_Status;
}

With this, all the code is added.
6.2.4 Playback audio test result
This is the audio playback test result:
Fig 17 playback audio log
From the test result we can see that MP3 data stored in memory can be played back as the audio response. The code project is: evkmimxrt1060_maestro_weather_mp3.zip.
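As mentioned above, free-form weather playback would need an MP3 word library. A minimal sketch of the stitching step follows, assuming hypothetical clip arrays generated the same way as xiaoencoming.h; MP3 frames with identical encoding parameters can generally be concatenated back to back (strip the ID3 tag from the intermediate clips if the decoder objects):

#include <string.h>

typedef struct { const unsigned char *data; unsigned int len; } mp3_clip_t;

/* Hypothetical clips, e.g. "温度", "三十二", "度" */
extern const mp3_clip_t clip_wendu, clip_sanshier, clip_du;

static unsigned char speech_buf[32 * 1024];

/* Concatenate n clips into speech_buf; returns the total length. */
static unsigned int build_speech(const mp3_clip_t *const clips[], int n)
{
    unsigned int off = 0;
    for (int i = 0; i < n; i++)
    {
        if (off + clips[i]->len > sizeof(speech_buf))
            break;                              /* avoid buffer overflow */
        memcpy(&speech_buf[off], clips[i]->data, clips[i]->len);
        off += clips[i]->len;
    }
    return off;
}

The resulting speech_buf/length pair would then be fed to STREAMER_file_Create's MEMSRC_SET_BUFFER_T instead of a single fixed array.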
View full article
There are two new LCD panels that are now available for i.MX RT EVKs: The original RK043FN02H-CT panel is being replaced with the newer RK043FN66HS-CTG panel, which affects the following EVKs: i.MX RT1050 i.MX RT1060 i.MX RT1064   The original RK055HDMIPI4M panel is being replaced with the newer RK055HDMIPI4MA0 panel, which affects the following EVKs: i.MX RT595 i.MX RT1160 i.MX RT1170   These changes are due to the previous panels reaching end-of-life at the LCD panel manufacturer. The new LCDs have the same dimensions and screen size as their original versions (4.3” 480x272 and 5.5” 720x1280 respectively) and the physical connections are the same. The version name can be found on the back of the LCD. However, there are software modifications that may need to be made, or else the LCD panel will be dark or blank when running MCUXpresso SDK demos on i.MX RT EVKs. This updated code is already available in the latest MCUXpresso SDK, and SDK demos are configured by default to use the new panels.   For the i.MX RT1050/1060/1064 panel RK043FN66HS-CTG: The touch controller has changed, and the SDK software has been modified to support the new touch controller. The LCD panel also has slightly different specs, but the code used for the original LCD panel also works with the new LCD panel, so no change is necessary for display-only demos. LCD demos are configured to support the new panel by default in the latest MCUXpresso SDK, so if you have the original panel you will need to change the SDK code from      #define DEMO_PANEL  DEMO_PANEL_RK043FN66HS    //new panel (default SDK setting)           to       #define DEMO_PANEL  DEMO_PANEL_RK043FN02H     //older panel   For the i.MX RT595/RT1160/RT1170 panel RK055HDMIPI4MA0: Both the touch and display SDK software had to be updated to support this new panel. LCD demos are configured to support the new panel by default in the latest MCUXpresso SDK, so if you have the original panel you will need to change the SDK code from:       #define DEMO_PANEL DEMO_PANEL_RK055MHD091    //new panel (default SDK setting)           to       #define DEMO_PANEL DEMO_PANEL_RK055AHD091    //older panel
View full article
The newly announced i.MX RT1170 is a dual-core Arm® Cortex®-M based crossover MCU that breaks the gigahertz (GHz) barrier and accelerates advanced machine learning (ML) applications at the edge.  Built using advanced 28 nm FD-SOI technology for lower active and static power requirements, the i.MX RT1170 MCU family integrates a GHz Arm Cortex-M7 and a power-efficient Cortex-M4, advanced 2D vector graphics, together with NXP’s signature EdgeLock security solution.  The i.MX RT1170 delivers a total CoreMark score of 6468 and addresses the growing performance needs of edge computing for industrial, Internet-of-Things (IoT) and automotive applications
View full article