i.MX RT Knowledge Base

Introduction

A common need for GUI applications is to implement a clock function. Whether it is to create a clock interface for the end user's benefit, or just to time animations or other actions, implementing an accurate clock is a useful and important feature for GUI applications. The aim of this document is to help you implement clock functions in your AppWizard project.

Methods

When implementing a real-time clock, there are two general methods:
1. Use an independent timer in your MCU.
2. Use animation objects.

Each of these methods has its advantages and disadvantages. If you just need a timer that doesn't require extra code and you don't require control or assurance of precision, or you can't spare another timer, an animation object (method #2) may be a good option. If your application requires an assurance of precision, or requires other real-time actions to be performed that AppWizard can't control, it is best to implement an independent timer in your MCU (method #1).

Method 1: Independent MCU Timer

Implementing a timer via an independent MCU timer allows better control and guarantees precision, because the timer isn't shared and the developer can adjust the interrupt priorities such that the timer interrupt has the highest priority. AppWizard timing uses a common timer and then time-slices activities, similar to how an operating system works. For this reason, an independent MCU timer is best when you need control over the precision of the timer or you need other real-time actions to be triggered by it. When implementing a timer using an independent MCU timer (like the RTC module), an understanding of how to interact with Text widgets is needed, so let's look at this first.

Interacting with Text Widgets

Editing Text widgets occurs through the emWin library API (the emWin library is the underlying code that AppWizard builds upon). The Text widget API functions are documented in the emWin Graphic Library User Guide and Reference Manual, UM03001. Most of the Text widget API functions require a Text widget handle. Be sure not to confuse this handle with the AppWizard ID. Imagine a clock example where there are two Text widgets in the interface: one for the minutes and one for the seconds. The AppWizard IDs of these objects might be ID_TEXT_MINS and ID_TEXT_SECONDS respectively (again, these are not to be confused with the handles to the Text widgets used by the emWin library functions). The first action software should take is to obtain the handles for the Text widgets. This can be done using the WM_GetDialogItem function. The code to get the active window handle and the handles for the two Text widgets is shown below:

activeWin = WM_GetActiveWindow();
textBoxMins = WM_GetDialogItem(activeWin, ID_TEXT_MINS);
textBoxSecs = WM_GetDialogItem(activeWin, ID_TEXT_SECONDS);

Note that this function requires the handle to the parent window of the Text widget. If your application has multiple windows or screens, you may need to be creative in how you acquire this handle, but for this example the software can simply call the WM_GetActiveWindow function (since there is only one screen). When to call these functions can be a bit tricky as well. They can be called before the MainTask() function of the application is called and the application will not crash.
However, the handles won't be correct and the Text widgets will not be updated as expected. It's recommended that these handles be initialized when the screen is initialized. An example of how this would be done is shown below:

void cbID_SCREEN_CLOCK(WM_MESSAGE * pMsg) {
    extern WM_HWIN activeWin;
    extern WM_HWIN textBoxMins;
    extern WM_HWIN textBoxSecs;
    extern WM_HWIN textBoxDbg;

    if (pMsg->MsgId == WM_INIT_DIALOG) {
        activeWin = WM_GetActiveWindow();
        textBoxMins = WM_GetDialogItem(activeWin, ID_TEXT_MINS);
        textBoxSecs = WM_GetDialogItem(activeWin, ID_TEXT_SECONDS);
        textBoxDbg = WM_GetDialogItem(activeWin, ID_TEXT_DBG);
    }
    GUI_USE_PARA(pMsg);
}

Once the Text widget handles have been acquired, the text can be updated using the TEXT_SetText() function, or in this case the TEXT_SetDec() function, because the Text widgets are configured for decimal mode (we want to display numbers). An example of the code to do this is shown below:

/* TEXT_SetDec(Text Widget Handle, Value as Int, Length, Shift, Sign, Leading Spaces) */
if (TEXT_SetDec(textBoxSecs, (int)gSecs, 2, 0, 0, 0)) {
    /* Perform action here if necessary */
}
if (TEXT_SetDec(textBoxMins, (int)gMins, 2, 0, 0, 0)) {
    /* Perform action here if necessary */
}

Method 2: Animation Objects

When implementing a real-time clock using animation objects, it is necessary to implement a loop. This could be done outside of the AppWizard GUI (in your own code), but because the timing precision can't be guaranteed there either, it is just as easy to implement the loop in the AppWizard GUI if you know how (it isn't very intuitive). Before examining the interactions, let's look at the variables and objects needed:

ID_VAR_SECS - Holds the current seconds value.
ID_VAR_SECS_1 - Holds the next seconds value.
ID_TEXT_SECONDS - Text box that displays the current seconds value.
ID_VAR_END_CNT - Holds the value at which the seconds roll over and the minute count increments.
ID_TEXT_MINS - Text box that holds the current minute count.
ID_VAR_MIN_END_CNT - Holds the value at which the minutes roll over (which would also increment the hour count if hours were implemented).
ID_BUTTON_SECS - A hidden button that initiates actions when the seconds variable has reached the end count.

Now, here are the interactions used to implement the clock feature. The heart of the loop is the set of interactions triggered by ID_VAR_SECS:
- ID_VAR_SECS -> ID_VAR_SECS_1: When ID_VAR_SECS changes, it adds one to ID_VAR_SECS_1 so that the animation will animate to one second from the current time.
- ID_VAR_SECS -> ID_TEXT_SECONDS: When ID_VAR_SECS changes, it also starts the animation from the current value to the next second (ID_VAR_SECS_1).
- A very essential part of the loop is ensuring that the animation restarts every time, so ID_TEXT_SECONDS needs to change the value of ID_VAR_SECS when the animation ends: ID_VAR_SECS is set to the current time value, ID_VAR_SECS_1.
- When the ID_TEXT_SECONDS animation ends, it must also decrement the ID_VAR_END_CNT variable. This is analogous to the control variable of a "for" loop being updated. This is done using the ADDVALUE job, adding '-1' to ID_VAR_END_CNT.
- When ID_VAR_END_CNT changes, it updates the hidden button, ID_BUTTON_SECS, with the new value.
This is analogous to a "for" loop checking whether its control variable is still within its limits. The interactions triggered by ID_BUTTON_SECS restart the loop when the seconds reach the count we desire. When the loop is restarted, the following actions must be taken:
- Set ID_VAR_SECS and ID_VAR_SECS_1 to the initial value for the next loop ('0' in this case). Note that ID_VAR_SECS_1 MUST be set before ID_VAR_SECS. Additionally, if the loop is to continue, ID_VAR_SECS and ID_VAR_SECS_1 must be set to the same value.
- Set ID_TEXT_SECONDS to the initial value. If this isn't done, the text box will try to animate from the final value to the initial value and will look "weird".
- Reset ID_VAR_END_CNT to its initial value (60 in this case).

ID_BUTTON_SECS is also responsible for updating the minutes values. In this case, it increments the ID_TEXT_MINS value (counting up in minutes) and decrements ID_VAR_MIN_END_CNT.

Adjusting the time of an animation object

Animation objects (as well as other emWin objects) use the GUI_X_DELAY function for timing. It is up to the host software to implement this function. In the i.MX RT examples, the General Purpose Timer (GPT) is used for this purpose, so how the GPT is configured will affect the timing of the application and how fast or slow the animations run. The GPT is configured in the function BOARD_InitGPT(), which resides in the main source file. The recommended way to adjust the speed of the timer is by changing the divider value of the GPT (a sketch of such an implementation follows the conclusion below).

Conclusion

We have seen two different methods of implementing a real-time clock in an AppWizard GUI application:
1. Use an independent timer in your MCU.
2. Use animation objects.

Using an independent timer in your MCU may be preferred because it allows better control over the timing, can allow for real-time actions to be performed that AppWizard can't control, and provides some assurance of precision. Using animation objects may be preferred if you just need a quick timer implementation that doesn't require you to manually add code to your project or use a second timer.
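To make the GUI_X_DELAY dependency above concrete, here is a minimal sketch of a GPT-backed implementation. It is illustrative only, not the SDK's actual code: it assumes BOARD_InitGPT() configures a 1 ms output-compare tick on GPT1, and the variable and handler names are placeholders.

#include <stdint.h>
#include "fsl_gpt.h"

static volatile uint32_t g_msTicks; /* incremented once per millisecond */

void GPT1_IRQHandler(void)
{
    GPT_ClearStatusFlags(GPT1, kGPT_OutputCompare1Flag); /* acknowledge the tick */
    g_msTicks++;
}

/* emWin calls this for all of its timing, including animation playback. */
void GUI_X_DELAY(int ms)
{
    uint32_t start = g_msTicks;
    while ((g_msTicks - start) < (uint32_t)ms)
    {
        /* busy-wait; an RTOS-based design could yield or sleep here instead */
    }
}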
RT1050 HAB Encrypted Image Generation and Analysis

1. Introduction

The NXP RT series supports multiple boot modes: unsigned image mode, HAB signed image mode, HAB encrypted image mode, and BEE encrypted image mode.

In order to understand the specific structure of a HAB encrypted app, this article generates a non-XIP app image, generates the relevant programming file through the elftosb.exe tool in the i.MX RT1050 flashloader package, and uses the MFG tool to enter serial download mode and download the .sb file.

This article focuses on the download steps for the RT1050 HAB encrypted operations and analyzes the structure of the HAB encrypted app image.

2. RT1050 HAB Encrypted Operation Procedure

First, we analyze the MFG tool programming steps and which files are needed, so that we can prepare them. Open the ucl2.xml file in the following path of the flashloader package:

Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\mfgtools-rel\Profiles\MXRT105X\OS Firmware

Because we need the HAB encrypted boot mode, we will use MXRT105X-SecureBoot. In the ucl2.xml file we find the following related code:

Fig 1. MXRT1050-SecureBoot structure

As you can see from the above, to implement the secure boot of the RT1050, you need to prepare these three files:
- ivt_flashloader_signed.bin: the signed flashloader binary file
- enable_hab.sb: used to program the SRK and HAB mode in the fuse map
- boot_image.sb: the HAB encrypted app program file

Here is a flow chart of the overall HAB encrypted operation; after checking this figure, we will follow it step by step.

Fig 2. MXRT1050 HAB encrypted image flow chart

The app image used in this article is a RAM app, so first we need to prepare a RAM-based app image. In this document, we directly use the prepared RAM-based app image evkbimxrt1050_led_softwarereset_0xa000.s19. Its function: after downloading the code to the MIMXRT1050-EVKB (QSPI flash) board, the on-board LED D18 blinks and information is printed; after pressing the WAKEUP button SW8, the code performs a software reset and prints the related information. The unsigned code test print results are as follows:

BOARD RESET start.
Helloworld.
WAKEUP key pressed, will do software system reset.
BOARD RESET start.
Helloworld.

2.1 CST tool preparation

Because the CST configuration contains a lot of steps, customers can refer to the following documents for the related configuration; this document won't give the detailed CST configuration steps. Please check these documents:
https://www.cnblogs.com/henjay724/p/10219459.html
https://community.nxp.com/docs/DOC-340904
Security Application Note AN12079

After the CST tool configuration, please copy cst.exe, the crts folder, and the keys folder from the cst folder to the same folder that holds the elftosb executable files:

Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win

Please also copy SRK_1_2_3_4_fuse.bin and SRK_1_2_3_4_table.bin to the above folder.
2.2 Sign the flashloader

Please refer to application note AN12079 chapter 3.3.1. Copy flashloader.elf from:
Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Flashloader
and imx-flexspinor-normal-signed.bd from:
Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\bd_file\imx10xx
to the folder:
Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win

Open a command window in the elftosb folder, then input this command:

elftosb.exe -f imx -V -c imx-flexspinor-normal-signed.bd -o ivt_flashloader_signed.bin flashloader.elf

Fig 3. Sign the flashloader

This step generates ivt_flashloader_signed.bin, which needs to be put in the MFG tool OS Firmware folder; it is used to enter the signed flashloader mode.

2.3 SRK and HAB mode fuse modification files

Please refer to AN12079 chapter 4.3. Copy the enable_hab.bd file from:
Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\bd_file\imx10xx
to:
Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win

Using the SRK_1_2_3_4_fuse.bin generated in chapter 2.1, modify enable_hab.bd as in the following picture:

Fig 4. enable_hab.bd SRK and HAB mode fuse modification

Then, in the elftosb window, input the following command to generate the enable_hab.sb program file:

elftosb.exe -f kinetis -V -c enable_hab.bd -o enable_hab.sb

Fig 5. SRK and HAB mode program file generation

2.4 App encrypted image

To download a HAB encrypted image, you need to prepare a non-XIP app image; here we prepared a RAM-based app .s19 file. Because the app is a RAM image, we also need the related RAM encrypted .bd file. Copy imx-itcm-encrypted.bd from:
Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\bd_file\imx10xx
to:
Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win

Open imx-itcm-encrypted.bd, then modify the following content:

options {
    flags = 0x0c;
    # Note: This is an example address, it can be any non-zero address in ITCM region
    startAddress = 0x8000;
    ivtOffset = 0x1000;
    initialLoadSize = 0x2000;
    # Note: This is required if the cst and elftosb are not in the same folder
    # Note: This is required if the default entrypoint is not the Reset_Handler
    #       Please set the entryPointAddress to Reset_Handler address
    entryPointAddress = 0x0000a2dd;
}

Here we need to note these two points:
(1) ivtOffset = 0x1000: if the external flash is FlexSPI flash, ivtOffset must be 0x1000; for NAND flash, use 0x400.
(2) entryPointAddress = 0x0000a2dd: the entryPointAddress should be the app code reset handler; it is the 32-bit word at the app start address + 4 (see the short check after this section). Using the entry address also works, but we suggest using the app Reset_Handler address.

Fig 6. App reset handler address

Then input the following command in the elftosb window:

elftosb.exe -f imx -V -c imx-itcm-encrypted.bd -o ivt_evkbimxrt1050_led_softwarereset_0xa000_encrypted.bin evkbimxrt1050_led_softwarereset_0xa000.s19

Fig 7. App HAB encrypted file generation

Please note, we need to record the generated key blob offset address, 0xA000, just like the data in the red frame above; this address will be used in the next chapter's .bd file.
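Since a Cortex-M vector table holds the initial stack pointer in its first word and the Reset_Handler address in its second, the entry point mentioned in note (2) can be sanity-checked with a tiny illustrative snippet (addresses are from this example image):

#include <stdint.h>

/* Illustrative only: this example's vector table starts at 0x0000A000, so the
 * Reset_Handler address is the word at 0x0000A004. Here that word is
 * 0x0000A2DD (bit 0 set marks Thumb mode), which is exactly the value used
 * for entryPointAddress in imx-itcm-encrypted.bd. */
uint32_t ReadEntryPoint(void)
{
    const uint32_t *vector_table = (const uint32_t *)0x0000A000;
    return vector_table[1]; /* reads 0x0000A2DD for this example image */
}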
After this step, seven files are generated:
(1) ivt_evkbimxrt1050_led_softwarereset_0xa000_encrypted.bin: includes the FDCB (filled with 0), IVT, BD, DCD, the HAB encrypted app image data, and the CSF data.
(2) ivt_evkbimxrt1050_led_softwarereset_0xa000_encrypted_nopadding.bin: compared with the file above, this file removes the zeros above the IVT range.
(3) csf.bin: the HAB data area; it contains the CSF data, which occupies 0x8000 to 0x8F80 in the generated ivt_evkbimxrt1050_led_softwarereset_0xa000_encrypted.bin.

Fig 8. CSF data and the encrypted app relationship

(4) dek.bin:

Fig 9. DEK data

The DEK is a 128-bit AES key. It is not defined by the customer; it is generated automatically at random by the HAB encryption tool.

(5) input.csf: open it, and you will find the following content:

Fig 10. Input csf file content

(6) rawbytes.bin: the app image plaintext data; it does not contain the FDCB, IVT, BOOT DATA, DCD, CSF, etc.
(7) temp.bin: a temporary file; compared with ivt_evkbimxrt1050_led_softwarereset_0xa000_encrypted.bin, it has no CSF data.

2.5 HAB encrypted QSPI program file

Here we need to prepare a program_flexspinor_image_qspinor_keyblob.bd file and put it in the same folder as elftosb. This file is used to generate the HAB encrypted program .sb file. Because the flashloader package doesn't contain it, the full content is pasted below (it is also attached to this article):

# The source block assigns file names to identifiers
sources {
    myBinFile = extern (0);
    dekFile = extern (1);
}

constants {
    kAbsAddr_Start = 0x60000000;
    kAbsAddr_Ivt   = 0x60001000;
    kAbsAddr_App   = 0x60002000;
}

# The section block specifies the sequence of boot commands to be written to the SB file
section (0) {
    #1. Prepare Flash option
    # 0xc0000006 is the tag for Serial NOR parameter selection
    # bit [31:28] Tag fixed to 0x0C
    # bit [27:24] Option size fixed to 0
    # bit [23:20] Flash type option
    #             0 - QuadSPI SDR NOR
    #             1 - QuadSPI DDR NOR
    #             2 - HyperFLASH 1V8
    #             3 - HyperFLASH 3V
    #             4 - Macronix Octal DDR
    #             6 - Micron Octal DDR
    #             8 - Adesto EcoXIP DDR
    # bit [19:16] Query pads (Pads used for query Flash Parameters)
    #             0 - 1
    #             2 - 4
    #             3 - 8
    # bit [15:12] CMD pads (Pads used for query Flash Parameters)
    #             0 - 1
    #             2 - 4
    #             3 - 8
    # bit [11:08] Quad Mode Entry Setting
    #             0 - Not Configured, apply to devices:
    #                 - With Quad Mode enabled by default or
    #                 - Compliant with JESD216A/B or later revision
    #             1 - Set bit 6 in Status Register 1
    #             2 - Set bit 1 in Status Register 2
    #             3 - Set bit 7 in Status Register 2
    #             4 - Set bit 1 in Status Register 2 by 0x31 command
    # bit [07:04] Misc. control field
    #             3 - Data Order swapped, used for Macronix OctaFLASH devices only (except MX25UM51345G)
    #             4 - Second QSPI NOR Pinmux
    # bit [03:00] Flash Frequency, device specific
    load 0xc0000006 > 0x2000;
    # Configure QSPI NOR FLASH using option at address 0x2000
    enable flexspinor 0x2000;

    #2. Erase flash as needed.
    erase 0x60000000..0x60020000;

    #3. Program config block
    # 0xf000000f is the tag to notify Flashloader to program FlexSPI NOR config block to the start of device
    load 0xf000000f > 0x3000;
    # Notify Flashloader to respond to the option at address 0x3000
    enable flexspinor 0x3000;

    #5. Program image
    load myBinFile > kAbsAddr_Ivt;
    #6. Generate KeyBlob and program it to flexspinor
    # Load DEK to RAM
    load dekFile > 0x10100;
    # Construct KeyBlob Option
    #---------------------------------------------------------------------------
    # bit [31:28] tag, fixed to 0x0b
    # bit [27:24] type, 0 - Update KeyBlob context, 1 - Program Keyblob to flexspinor
    # bit [23:20] keyblob option block size, must equal 3 if type = 0,
    #             reserved if type = 1
    # bit [19:08] Reserved
    # bit [07:04] DEK size, 0 - 128-bit, 1 - 192-bit, 2 - 256-bit, only applicable if type = 0
    # bit [03:00] Firmware Index, only applicable if type = 1
    # if type = 0, the next word indicates the address that holds the DEK,
    # and the 3rd word is the keyblob offset
    #----------------------------------------------------------------------------
    # tag = 0x0b, type = 0, block size = 3, DEK size = 128-bit
    # (decoded in the short example at the end of this chapter)
    load 0xb0300000 > 0x10200;
    # dek address = 0x10100
    load 0x00010100 > 0x10204;
    # keyblob offset in boot image
    # Note: this is only an example bd file, the value must be replaced with the actual
    # value in the user's project
    load 0x0000a000 > 0x10208;
    enable flexspinor 0x10200;

    #7. Program KeyBlob to firmware0 region
    load 0xb1000000 > 0x10300;
    enable flexspinor 0x10300;
}

Please note: the keyblob offset address we recorded in fig 7 of the previous chapter must be set in the following line of the bd file:

load 0x0000a000 > 0x10208;

Now, combining program_flexspinor_image_qspinor_keyblob.bd, ivt_evkbimxrt1050_led_softwarereset_0xa000_encrypted_nopadding.bin and dek.bin, we use the following command to generate boot_image.sb:

elftosb.exe -f kinetis -V -c program_flexspinor_image_qspinor_keyblob.bd -o boot_image.sb ivt_evkbimxrt1050_led_softwarereset_0xa000_encrypted_nopadding.bin dek.bin

Fig 11. App HAB encrypted program file generation

Until now, all the related HAB encrypted files are prepared.

2.6 MFG tool programs the HAB encrypted files to the RT1050-EVKB

Before programming, please copy the following 3 files from the elftosb folder:
ivt_flashloader_signed.bin
enable_hab.sb
boot_image.sb
to this folder:
Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\mfgtools-rel\Profiles\MXRT105X\OS Firmware

Please modify cfg.ini (file path: Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\mfgtools-rel) as follows:

[profiles]
chip = MXRT105X
[platform]
board =
[LIST]
name = MXRT105X-SecureBoot

This chooses the MXRT105X-SecureBoot program mode. Then open MfgTool2.exe. The MIMXRT1050-EVKB board (the on-board resistors must be modified to use the QSPI flash) should be in serial download mode: SW7 1-OFF, 2-OFF, 3-OFF, 4-ON. Connect two USB cables between the PC and the board's J28 and J9. After the connection, you will find that MfgTool2.exe detects the HID device:

Fig 12. MFG tool programming

After the programming is finished, power off the board and change the boot mode to internal boot, i.e. SW7: 1-OFF, 2-OFF, 3-ON, 4-OFF. Connect the COM terminal and power on the EVKB board. After reset, you will find the D18 LED blinking; after pressing SW8, you will see the following printed information:

BOARD RESET start.
Helloworld.
WAKEUP key pressed, will do software system reset.
BOARD RESET start.
Helloworld.

So, the HAB encrypted image works OK now.
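To make the keyblob option word from section 2.5 concrete, here is a small, self-contained decode of 0xB0300000 following the bit fields documented in the bd file comments:

#include <stdint.h>
#include <stdio.h>

/* Decode the keyblob option word used in program_flexspinor_image_qspinor_keyblob.bd. */
int main(void)
{
    uint32_t opt = 0xB0300000u;
    printf("tag        = 0x%X\n", (unsigned)((opt >> 28) & 0xFu)); /* 0xB: keyblob option tag   */
    printf("type       = %u\n",   (unsigned)((opt >> 24) & 0xFu)); /* 0: update keyblob context */
    printf("block size = %u\n",   (unsigned)((opt >> 20) & 0xFu)); /* 3                         */
    printf("DEK size   = %u\n",   (unsigned)((opt >> 4) & 0xFu));  /* 0: 128-bit DEK            */
    return 0;
}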
3. App HAB encrypted image structure analysis

3.1 MCUBootUtility configuration to check the RT encrypted image

Here we can also use the MCUBootUtility tool to check the RT chip's encrypted image and the fuse data. If you configured the CST yourself, please do the following first:
(1) Copy the configured cst folder to the folder NXP-MCUBootUtility-2.0.0\tools, and delete the original cst folder.
(2) Copy SRK_1_2_3_4_fuse.bin and SRK_1_2_3_4_table.bin to the folder NXP-MCUBootUtility-2.0.0\gen\hab_cert.

Now you can use the new MCUBootUtility to connect your board, which has already gone through the HAB encryption procedure.

3.2 RT1050 fuse map comparison

Before programming the HAB encrypted image, I read out the whole fuse map as follows:

Fig 13. MIMXRT1050-EVKB fuse map before HAB encrypted image
Fig 14. MIMXRT1050-EVKB fuse map after HAB encrypted image

Comparing the fuse maps before and after programming the HAB encrypted image, we find two differences:
- HAB mode: fuse 0x460 bit 1 (0 = open, 1 = closed)
- SRK area

We can see that after programming the HAB encrypted image, the SRK fuse data is the same as the SRK data defined in enable_hab.bd.

3.3 Readout HAB encrypted QSPI app image structure analysis

From the MCUBootUtility tool, the HAB encrypted image structure should look like this:

Fig 15. HAB encrypted image structure

What about a real example image? Now we use the MCUBootUtility tool to read out our HAB encrypted image from address 0x60000000, with a readout size of 0xB000. The detailed image structure is as follows:

Fig 16. HAB encrypted image example structure

1) IVT: hdr, the IVT header; for more details, check hab_hdr.
2) IVT: entry, the app entryPointAddress; it should be the Reset_Handler address. In this example it is the data at address 0xA004: the plaintext is 0x0000A2DD, but after HAB encryption the data at address 0x60002004 is encrypted.
3) IVT: reserved.
4) IVT: DCD, used for the SDRAM SEMC configuration; in this example we don't use SDRAM, so the data is 0.
5) IVT: BOOT_DATA, indicates the BOOT_DATA RAM start address, 0x9020.
6) IVT: self, the IVT's own RAM start address, 0x9000.
7) IVT: CSF, indicates the CSF start address; in this example the CSF RAM address is 0x00010000.
8) IVT: reserved.
9) BOOT_DATA: RAM image start, the whole image's RAM start address; in this RAM example BOOT_DATA is 0x8000 (0xA000 - 0x2000 = 0x8000).
10) BOOT_DATA: size, the whole app size, 0x0000A200; checking the whole generated HAB encrypted app image, you can find the image really ends at 0xA200, just like fig 16.
11) HAB encrypted app data: check the ivt_evkbimxrt1050_led_softwarereset_0xa000_encrypted.bin file; the data from 0x2000 to 0x7250 is the same.
12) HAB data: includes the CSF, certificates, etc.; compare with the data at 0x8000 to 0x8F70 of ivt_evkbimxrt1050_led_softwarereset_0xa000_encrypted.bin, it is the same.
13) DEK blob: the DEK key blob related data; its offset address is 0xA000, the same as in fig 7.

The FDCB, IVT and BOOT DATA are all plaintext, the app image area is HAB encrypted data, and the HAB data and DEK blob are generated data placed in the related memory.
Conclusion

In this document we used elftosb and the MFG tool to generate a HAB encrypted image and download it to the RT1050-EVKB board, with all the detailed steps, and then used the MCUBootUtility tool to read out the HAB encrypted image and analyze its structure against the example. After comparing with the generated intermediate files, we find that all the data is consistent and the encrypted data ranges are the same. The test result also demonstrates that the HAB encrypted code works: HAB encrypted boot has no problems. All the related files are in the attachment.
Goal

Our goal is to train a model that can take a value, x, and predict its sine, y. In a real-world application, if you needed the sine of x, you could just calculate it directly. However, by training a model to approximate the result, we can demonstrate the basics of machine learning.

TensorFlow and Keras

TensorFlow is a set of tools for building, training, evaluating, and deploying machine learning models. Originally developed at Google, TensorFlow is now an open-source project built and maintained by thousands of contributors across the world. It is the most popular and widely used framework for machine learning. Most developers interact with TensorFlow via its Python library. TensorFlow does many different things; in this post, we'll use Keras, TensorFlow's high-level API that makes it easy to build and train deep learning networks. To enable TensorFlow on mobile and embedded devices, Google developed the TensorFlow Lite framework. It gives these computationally restricted devices the ability to run inference on pre-trained TensorFlow models that were converted to TensorFlow Lite. These converted models cannot be trained any further, but they can be optimized through techniques like quantization and pruning.

Building the Model

To build the model, we should follow the steps below:
1. Obtain a simple dataset.
2. Train a deep learning model.
3. Evaluate the model's performance.
4. Convert the model to run on-device.

Please navigate to the URL in your browser to open the notebook directly in Colab; this notebook is designed to demonstrate the process of creating a TensorFlow model and converting it for use with TensorFlow Lite.

Deploy the model to the RT MCU

Hardware Board: MIMXRT1050 EVK Board

Fig 1 MIMXRT1050 EVK Board

Template demo code: evkbimxrt1050_tensorflow_lite_cifar10

Code:

/* Copyright 2017 The TensorFlow Authors. All Rights Reserved.
   Copyright 2018 NXP. All Rights Reserved.

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
==============================================================================*/
#include "board.h"
#include "pin_mux.h"
#include "clock_config.h"
#include "fsl_debug_console.h"

#include <iostream>
#include <string>
#include <vector>

#include "timer.h"

#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/optional_debug_tools.h"
#include "tensorflow/lite/string_util.h"

#include "Sine_mode.h"

int inference_count = 0;
// This is a small number so that it's easy to read the logs
const int kInferencesPerCycle = 30;
const float kXrange = 2.f * 3.14159265359f;

#define LOG(x) std::cout

void RunInference()
{
    std::unique_ptr<tflite::FlatBufferModel> model;
    std::unique_ptr<tflite::Interpreter> interpreter;
    model = tflite::FlatBufferModel::BuildFromBuffer(sine_model_quantized_tflite, sine_model_quantized_tflite_len);
    if (!model)
    {
        LOG(FATAL) << "Failed to load model\r\n";
        exit(-1);
    }
    model->error_reporter();

    tflite::ops::builtin::BuiltinOpResolver resolver;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    if (!interpreter)
    {
        LOG(FATAL) << "Failed to construct interpreter\r\n";
        exit(-1);
    }

    int input = interpreter->inputs()[0]; // index of the input tensor
    if (interpreter->AllocateTensors() != kTfLiteOk)
    {
        LOG(FATAL) << "Failed to allocate tensors!\r\n";
    }

    while (true)
    {
        // Calculate an x value to feed into the model. We compare the current
        // inference_count to the number of inferences per cycle to determine
        // our position within the range of possible x values the model was
        // trained on, and use this to calculate a value.
        float position = static_cast<float>(inference_count) / static_cast<float>(kInferencesPerCycle);
        float x_val = position * kXrange;

        float* input_tensor_data = interpreter->typed_tensor<float>(input);
        *input_tensor_data = x_val;

        Delay_time(1000);

        // Run inference, and report any error
        TfLiteStatus invoke_status = interpreter->Invoke();
        if (invoke_status != kTfLiteOk)
        {
            LOG(FATAL) << "Failed to invoke tflite!\r\n";
            return;
        }

        // Read the predicted y value from the model's output tensor
        float* y_val = interpreter->typed_output_tensor<float>(0);
        PRINTF("\r\n x_value: %f, y_value: %f \r\n", x_val, y_val[0]);

        // Increment the inference_counter, and reset it if we have reached
        // the total number per cycle
        inference_count += 1;
        if (inference_count >= kInferencesPerCycle) inference_count = 0;
    }
}

/*
 * @brief Application entry point.
 */
int main(void)
{
    /* Init board hardware */
    BOARD_ConfigMPU();
    BOARD_InitPins();
    BOARD_InitDEBUG_UARTPins();
    BOARD_BootClockRUN();
    BOARD_InitDebugConsole();
    NVIC_SetPriorityGrouping(3);
    InitTimer();

    std::cout << "The hello_world demo of TensorFlow Lite model\r\n";
    RunInference();
    std::flush(std::cout);

    for (;;)
    {
    }
}

Test result

On the MIMXRT1050 EVK board, we log the input data (x_value) and the inferred output data (y_value) via the serial port.

Fig 2 Received data

In a while loop, the application runs inference for a progression of x values in the range 0 to 2π and then repeats. Each time it runs, a new x value is calculated, the inference is run, and the data is output.

Fig 3 Test result

Further, we use Excel to plot the received data against the actual values, as the figure below shows.

Fig 4 Dot Plot

You can see that, for the most part, the dots representing predicted values form a smooth sine curve along the center of the distribution of actual values. In general, our network has learned to approximate a sine curve.
The SDK_2.7.0_EVKB-IMXRT1050 contains some eIQ machine learning demo projects, among them tensorflow_lite_kws. It is a keyword spotting example based on Keyword Spotting for Microcontrollers, and it deploys a depthwise separable convolutional neural network called MobileNet. It can classify a one-second audio clip as either silence, an unknown word, "yes", "no", "up", "down", "left", "right", "on", "off", "stop", or "go". Figure 1 shows the components that comprise it.

Fig 1

Training Our New Model

The model we are using is trained with the TensorFlow script which is designed to demonstrate how to build and train a model for audio recognition using TensorFlow. The script makes it very easy to train an audio recognition model. Among other things, it allows us to do the following:
- Download a dataset with audio featuring 20 spoken words.
- Choose which subset of words to train the model on.
- Specify what type of preprocessing to use on the audio.
- Choose from several different types of model architecture.
- Optimize the model for microcontrollers using quantization.

When we run the script, it downloads the dataset, trains a model, and outputs a file representing the trained model. We then use some other tools to convert this file into the correct form for TensorFlow Lite.

Training in a virtual machine (VM)

Preparation

Make sure TensorFlow has been installed. Since the script downloads over 2 GB of training data, you'll need a good internet connection and enough free space on the machine. Note that the training process itself can take several hours; be patient.

Training

To begin the training process, use the following command to clone ML-KWS-for-MCU:

git clone https://github.com/ARM-software/ML-KWS-for-MCU.git

The training scripts are configured via a bunch of command-line flags that control everything from the model's architecture to the words it will be trained to classify. The following command runs the script that begins training. You can see that it has a lot of command-line arguments:

python ML-KWS-for-MCU/train.py --model_architecture ds_cnn --model_size_info 5 64 10 4 2 2 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 \
--wanted_words=zero,one,two,three,four,five,six,seven,eight,nine \
--dct_coefficient_count 10 --window_size_ms 40 \
--window_stride_ms 20 --learning_rate 0.0005,0.0001,0.00002 \
--how_many_training_steps 10000,10000,10000 \
--data_dir=./speech_dataset --summaries_dir ./retrain_logs --train_dir ./speech_commands_train

Some of these, like --wanted_words=zero,one,two,three,four,five,six,seven,eight,nine, define the words the model learns. By default, the selected words are yes, no, up, down, left, right, on, off, stop, go, but we can provide any combination of the following words, all of which appear in our dataset:
- Common commands: yes, no, up, down, left, right, on, off, stop, go, backward, forward, follow, learn
- Digits zero through nine: zero, one, two, three, four, five, six, seven, eight, nine
- Random words: bed, bird, cat, dog, happy, house, Marvin, Sheila, tree, wow

Others set up the output of the script, such as --train_dir=/content/speech_commands_train, which defines where the trained model will be saved. Leave the arguments as they are and run it. The script will start off by downloading the Speech Commands dataset (Figure 2), which consists of over 105,000 WAVE audio files of people saying thirty different words.
This data was collected by Google and released under a CC BY license, and you can help improve it by contributing five minutes of your own voice. The archive is over 2 GB, so this part may take a while, but you should see progress logs, and once it's been downloaded you won't need to do this step again. You can find more information about this dataset in the Speech Commands paper.

Fig 2

Once the download has completed, some more output will appear. There might be some warnings, which you can ignore as long as the command continues running. Later, you'll see logging information that looks like this (Figure 3).

Fig 3

This shows that the initialization process is done and the training loop has begun. You'll see that it outputs information for every training step. Here's a breakdown of what it means:
- Step shows which step of the training loop we're on. In this case, there are going to be 30,000 steps in total, so you can look at the step number to get an idea of how close it is to finishing.
- rate is the learning rate that's controlling the speed of the network's weight updates. Early on this is a comparatively high number (0.0005), but for later training cycles it is reduced 5x to 0.0001, and finally to 0.00002.
- accuracy is how many classes were correctly predicted on this training step. This value will often fluctuate a lot, but should increase on average as training progresses. The model outputs an array of numbers, one for each label, and each number is the predicted likelihood of the input being that class. The predicted label is picked by choosing the entry with the highest score. The scores are always between zero and one, with higher values representing more confidence in the result.
- cross-entropy is the result of the loss function that we're using to guide the training process. This is a score obtained by comparing the vector of scores from the current training run to the correct labels, and it should trend downwards during training.
- checkpoint: after a hundred steps, you should see a line showing that the current trained weights have been saved to a checkpoint file (Figure 4). If your training script gets interrupted, you can look for the last saved checkpoint and then restart the script with --start_checkpoint=/tmp/speech_commands_train/best/ds_cnn_xxxx.ckpt-400 as a command-line argument to start from that point.

Fig 4

Confusion Matrix

After four hundred steps, this information will be logged: the first section is a confusion matrix. To understand what it means, you first need to know the labels being used, which in this case are "silence", "unknown", "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", and "nine". Each column represents a set of samples that were predicted to be each label, so the first column represents all the clips that were predicted to be silence, the second all those that were predicted to be unknown words, the third "zero", and so on. Each row represents clips by their correct, ground-truth labels. The first row is all the clips that were silence, the second clips that were unknown words, the third "zero", etc.

This matrix can be more useful than just a single accuracy score because it gives a good summary of what mistakes the network is making. In this example you can see that all of the entries in the first row are zero (Figure 5), apart from the initial one.
Because the first row is all the clips that are actually silence, this means that none of them were mistakenly labeled as words, so we have no false negatives for silence. This shows the network is already getting pretty good at distinguishing silence from words. If we look down the first column though, we see a lot of non-zero values. The column represents all the clips that were predicted to be silence, so positive numbers outside of the first cell are errors. This means that some clips of real spoken words are actually being predicted to be silence, so we do have quite a few false positives. A perfect model would produce a confusion matrix where all of the entries were zero apart from a diagonal line through the center. Spotting deviations from that pattern can help you figure out how the model is most easily confused, and once you've identified the problems you can address them by adding more data or cleaning up categories.

Fig 5

Validation

After the confusion matrix, you should see a line like Figure 5 shows. It's good practice to separate your data set into three categories. The largest (in this case roughly 80% of the data) is used for training the network; a smaller set (10% here, known as "validation") is reserved for evaluating the accuracy during training; and another set (the last 10%, "testing") is used to evaluate the accuracy once after the training is complete. The reason for this split is that there's always a danger that networks will start memorizing their inputs during training. By keeping the validation set separate, you can ensure that the model works with data it's never seen before. The testing set is an additional safeguard to make sure that you haven't just been tweaking your model in a way that happens to work for both the training and validation sets, but not a broader range of inputs. The training script automatically separates the data set into these three categories, and the logging line above shows the accuracy of the model when run on the validation set. Ideally, this should stick fairly close to the training accuracy. If the training accuracy increases but the validation accuracy doesn't, that's a sign that overfitting is occurring, and your model is only learning things about the training clips, not broader patterns that generalize.

Training Finished

In general, training is the process of iteratively tweaking a model's weights and biases until it produces useful predictions. The training script writes these weights and biases to checkpoint files (Figure 6).

Fig 6

A TensorFlow model consists of two main things:
- The weights and biases resulting from training
- A graph of operations that combine the model's input with these weights and biases to produce the model's output

At this juncture, our model's operations are defined in the Python scripts, and its trained weights and biases are in the most recent checkpoint file. We need to unite the two into a single model file with a specific format, which we can use to run inference. The process of creating this model file is called freezing: we're creating a static representation of the graph with the weights frozen into it.
To freeze our model, we run a script that is called as follows:

python ML-KWS-for-MCU/freeze.py --model_architecture ds_cnn --model_size_info 5 64 10 4 2 2 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 \
--wanted_words=zero,one,two,three,four,five,six,seven,eight,nine \
--dct_coefficient_count 10 --window_size_ms 40 \
--window_stride_ms 20 --checkpoint ./speech_commands_train/best/ds_cnn_9490.ckpt-21600 \
--output_file=./ds_cnn.pb

To point the script toward the correct graph of operations to freeze, we pass some of the same arguments we used in training. We also pass a path to the final checkpoint file, which is the one whose filename ends with the total number of training steps. The frozen graph will be output to a file named ds_cnn.pb. This file is the fully trained TensorFlow model. It can be loaded by TensorFlow and used to run inference. That's great, but it's still in the format used by regular TensorFlow, not TensorFlow Lite.

Convert to TensorFlow Lite

Conversion is an easy step: we just need to run a single command. Now that we have a frozen graph file to work with, we'll be using toco, the command-line interface for the TensorFlow Lite converter:

toco --graph_def_file=./ds_cnn.pb --output_file=./ds_cnn.tflite \
--input_shapes=1,49,10,1 --input_arrays=Reshape_1 --output_arrays='labels_softmax' \
--inference_type=QUANTIZED_UINT8 --mean_values=227 --std_dev_values=1 \
--change_concat_input_ranges=false \
--default_ranges_min=-6 --default_ranges_max=6

In the arguments, we specify the model that we want to convert, the output location for the TensorFlow Lite model file, and some other values that depend on the model architecture. We also provide some arguments (inference_type, mean_values, and std_dev_values) that instruct the converter how to map its low-precision values into real numbers. The converted model will be written to ds_cnn.tflite; this is a fully formed TensorFlow Lite model!

Create a C array

We'll use the xxd command to convert the TensorFlow Lite model into a C array:

xxd -i ./ds_cnn.tflite > ./ds_cnn.h
cat ./ds_cnn.h

The final part of the output is the file's contents, which are a C array and an integer holding its length, as follows:

Fig 7

Next, we'll integrate this newly trained model with the tensorflow_lite_kws project.

Using the Model in the tensorflow_lite_kws Project

To use the new model, we need to do two things:
1. In source/ds_cnn_s_model.h, replace the original model data with our new model.
2. Update the label names in source/kws.cpp with our new "zero", "one", "two", "three", "four", "five", "six", "seven", "eight" and "nine" labels:

const std::string labels[] = {"Silence", "Unknown", "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"};

Before running the model on the EVKB-IMXRT1050 board (Figure 8), please refer to the readme.txt to do the preparation. The file also demonstrates the steps of testing; please follow them.

Fig 8

Figure 9 shows the testing I did. I've attached the model file, please give it a try yourself.

Fig 9
Overview
========
The LPUART example for FreeRTOS demonstrates how to use the LPUART driver in the RTOS with hardware flow control. The example uses two instances of the LPUART IP and sends data between them. The UART signals must be jumpered together on the board.

Toolchain supported
===================
- MCUXpresso 11.0.0

Hardware requirements
=====================
- Mini/micro USB cable
- MIMXRT1050-EVKB board
- Personal Computer

Board settings
==============
R278 and R279 must be populated, or have pads shorted. These resistors are under the display, on the opposite side of the board from the uSD connector.
The following pins need to be jumpered together:

---------------------------------------------------------------------------------
|   | UART3 (UARTA)                       | UART8 (UARTB)                       |
|---|-------------------------------------|-------------------------------------|
| # | Signal        | Function | Jumper   | Jumper   | Function | Signal        |
|---|---------------|----------|----------|----------|----------|---------------|
| 1 | GPIO_AD_B1_07 | RX       | J22-pin1 | J23-pin1 | TX       | GPIO_AD_B1_10 |
| 2 | GPIO_AD_B1_06 | TX       | J22-pin2 | J23-pin2 | RX       | GPIO_AD_B1_11 |
| 3 | GPIO_AD_B1_04 | CTS      | J23-pin3 | J24-pin5 | RTS      | GPIO_SD_B0_03 |
| 4 | GPIO_AD_B1_05 | RTS      | J23-pin4 | J24-pin4 | CTS      | GPIO_SD_B0_02 |
---------------------------------------------------------------------------------

Prepare the Demo
================
1. Connect a USB cable between the host PC and the OpenSDA USB port on the target board.
2. Open a serial terminal with the following settings:
   - 115200 baud rate
   - 8 data bits
   - No parity
   - One stop bit
   - No flow control
3. Download the program to the target board.
4. Either press the reset button on your board or launch the debugger in your IDE to begin running the demo.

Running the demo
================
You will see the status of the example printed to the console.

Customization options
=====================
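For orientation, here is a hedged sketch of how a task can use the SDK's FreeRTOS LPUART wrapper (fsl_lpuart_freertos) that this example is built on. Field and helper names follow SDK 2.x conventions but may differ between versions, the hardware flow-control enable fields are omitted because their names vary, and BOARD_DebugConsoleSrcFreq() is assumed from the demo's board support code:

#include "FreeRTOS.h"
#include "task.h"
#include "fsl_lpuart_freertos.h"
#include "board.h"

static uint8_t background_buffer[32];
static lpuart_rtos_handle_t handle;
static struct _lpuart_handle t_handle;

static lpuart_rtos_config_t uart_config = {
    .base        = LPUART3,           /* UARTA in this example */
    .baudrate    = 115200,
    .parity      = kLPUART_ParityDisabled,
    .stopbits    = kLPUART_OneStopBit,
    .buffer      = background_buffer, /* background RX buffer  */
    .buffer_size = sizeof(background_buffer),
};

static void uart_task(void *param)
{
    uint8_t ch;
    size_t received = 0;

    uart_config.srcclk = BOARD_DebugConsoleSrcFreq(); /* assumed board helper */
    if (LPUART_RTOS_Init(&handle, &t_handle, &uart_config) != kStatus_Success)
    {
        vTaskSuspend(NULL); /* initialization failed */
    }
    LPUART_RTOS_Send(&handle, (uint8_t *)"hello\r\n", 7);
    LPUART_RTOS_Receive(&handle, &ch, 1, &received); /* blocks until a byte arrives */
    vTaskSuspend(NULL);
}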
The MCUXpresso Secure Provisioning Tool is a security-focused software tool that NXP officially released in the first half of this year. It is very simple and convenient to operate, stable and reliable, and friendly to users who are not familiar with security features. Its functionality is not yet complete, however: currently it only supports HAB-related operations, and support for features such as BEE will have to wait for later updates.

For detailed information and the user guide, please refer to the official page: MCUXpresso Secure Provisioning Tool | Software Development for NXP Microcontrollers (MCUs) | NXP

At present, not many customers seem to know about this tool; most still use MCUBootUtility. So what do you do if you have already burned the fuses with MCUBootUtility and now want to use the official tool? After studying and comparing the two, it turns out that their lowest-level executable parts are the same, so the simple replacement steps below let you start using the new tool:

First, replace the crts and keys folders. The MCUBootUtility path is:
..\NXP-MCUBootUtility-2.2.0\NXP-MCUBootUtility-2.2.0\tools\cst
The corresponding MCUXpresso Secure Provisioning path is the root directory of the workspace.

In addition, the hab_cert files used by the encrypted mode must also be replaced. The two files below need to be swapped in, and note that the two tools name them differently, so rename them accordingly.
The MCUBootUtility path is:
..\NXP-MCUBootUtility-2.2.0\NXP-MCUBootUtility-2.2.0\gen\hab_cert
The MCUXpresso Secure Provisioning path, inside the workspace, is:
..\secure_provisioning_RT1050\gen_hab_certs
In MCUBootUtility the files are named SRK_1_2_3_4_table.bin and SRK_1_2_3_4_fuse.bin; in MCUXpresso Secure Provisioning they are named SRK_hash.bin and SRK_fuses.bin respectively.

At this point, the new tool is ready to use.

Finally, note that the new tool can create different workspaces to store the keys of different projects, which helps users keep them separate. Keys can also be exchanged between projects created with the new tool; just refer to the secure provisioning part of the steps above.
This application note describes how to develop an H.264 video decoding application with the NXP i.MX RT1050 processor.
Click here to access the full application note. Click here to access the GitHub repo of FFmpeg (code, no GPL).
Note: the code is for evaluation purposes only.
When designing a project, sometimes CCM_CLKO1 needs to output different clocks to meet customer needs. The customer then does not need to buy a separate crystal, which reduces cost. This document describes how to make CCM_CLKO1 output different clocks on the i.MX RT1050.

According to the selection of the clock to be generated on CCM_CLKO1 (CLKO1_SEL) and the divider setting of CCM_CLKO1 (CLKO1_DIV) in the i.MX RT1050 reference manual, CCM_CLKO1 can output different clocks. If CCM_CLKO1 outputs via the SYS PLL clock, we can get the following frequencies for the application:

CLKO1_DIV  | 000 | 001 | 010 | 011 | 100  | 101 | 110    | 111
Freq (MHz) | 264 | 132 | 88  | 66  | 52.8 | 44  | 37.714 | 33

For example, to get an 88 MHz output via the SYS PLL clock, we can follow the steps below (based on the led_blinky project in the SDK):

1. Pinmux GPIO_SD_B0_04 as the CCM_CLKO1 signal:

IOMUXC_SetPinConfig(IOMUXC_GPIO_SD_B0_04_CCM_CLKO1, 0x10B0u);

2. Enable the CCM_CLKO1 signal:

CCM->CCOSR |= CCM_CCOSR_CLKO1_EN_MASK;

3. Set CLKO1_SEL and CLKO1_DIV to get the 88 MHz clock for the application:

CCM->CCOSR = (CCM->CCOSR & (~CCM_CCOSR_CLKO1_DIV_MASK)) | CCM_CCOSR_CLKO1_DIV(2);
CCM->CCOSR = (CCM->CCOSR & (~CCM_CCOSR_CLKO1_SEL_MASK)) | CCM_CCOSR_CLKO1_SEL(1);

4. We will get the clock as shown below (a consolidated sketch follows this note).

Note: In principle, it is not recommended to output a clock on CCM_CLKO1. If necessary, please connect an 8-10 pF capacitor on GPIO_SD_B0_04, with a 22 ohm resistor in series, to prevent interference.
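Putting the three steps together, here is a minimal consolidated sketch. The register values come from the steps above (SEL = 1 selects the SYS PLL/2 clock of 264 MHz, and DIV = 2 divides it by 3 to reach 88 MHz); the explicit IOMUXC_SetPinMux() call is an addition here, since the steps above show only the pad configuration:

#include "fsl_iomuxc.h"

/* Route an 88 MHz clock out on CCM_CLKO1 (pad GPIO_SD_B0_04). */
void BOARD_EnableClko1_88MHz(void)
{
    /* 1. Mux the pad to its CCM_CLKO1 function and apply the pad setting 0x10B0 */
    IOMUXC_SetPinMux(IOMUXC_GPIO_SD_B0_04_CCM_CLKO1, 0U);
    IOMUXC_SetPinConfig(IOMUXC_GPIO_SD_B0_04_CCM_CLKO1, 0x10B0u);

    /* 2. Enable the CLKO1 output */
    CCM->CCOSR |= CCM_CCOSR_CLKO1_EN_MASK;

    /* 3. SEL = 1 (SYS PLL/2 = 264 MHz), DIV = 2 (divide by 3): 88 MHz out */
    CCM->CCOSR = (CCM->CCOSR & ~CCM_CCOSR_CLKO1_DIV_MASK) | CCM_CCOSR_CLKO1_DIV(2);
    CCM->CCOSR = (CCM->CCOSR & ~CCM_CCOSR_CLKO1_SEL_MASK) | CCM_CCOSR_CLKO1_SEL(1);
}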
RT1050 SDRAM app code boot from SD card, programmed with 3 tools

Abstract

This document is about an RT series app running in external SDRAM but booting from an SD card. The content covers generating the SDRAM app code with an RT1050 SDK MCUXpresso IDE project and burning the code to the external SD card with the flashloader MFG tool and with MCUXpresso Secure Provisioning. The MCUBootUtility method can be found in this post: https://community.nxp.com/docs/DOC-346194

Software and hardware platform:
- SDK 2.7.0_EVKB-IMXRT1050
- MCUXpresso IDE
- Flashloader MXRT1050_GA
- MCUBootUtility
- MCUXpresso Secure Provisioning
- MIMXRT1050-EVKB

2. RT1050 SDRAM app image generation

Port the SDK_2.7.0_EVKB-IMXRT1050 iled_blinky project to MCUXpresso IDE. To generate code located in SDRAM, the configuration is modified as follows:

2.1 Copy code to RAM

2.2 Modify the memory location to SDRAM address 0x80002000

The code which boots from the SD card and runs in SDRAM is non-XIP code, so the IVT offset is 0x400. In our test, we put the image at SDRAM address 0x80002000; the configuration is:

2.3 Modify the symbol

2.4 Generate the .s19 file

After the build completes with no problems, generate the app.s19 file. Rename the app.s19 image file to evkbimxrt1050_iled_blinky_sdram_0x2000.s19 and copy it to the flashloader folder:
Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win

3. Flashloader configuration and download

This chapter uses the flashloader to configure the image and download the SDRAM app code to the external SD card with the MFG tool. We need to prepare the following files:
- SDRAM interface configuration file CFG_DCD.bin
- imx-sdram-unsigned-dcd.bd
- program_sdcard_image.bd

3.1 SDRAM DCD file preparation

The MIMXRT1050-EVKB on-board SDRAM is an IS42S16160J. We can use the attached dcd_model\ISSI_IS42S16160J\dcd.cfg and the dcdgen.exe tool to generate CFG_DCD.bin; the command is:

dcdgen -inputfile=dcd.cfg -bout -cout

Copy the CFG_DCD.bin file to the flashloader path:
Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win

3.2 imx-sdram-unsigned-dcd.bd file

Prepare the imx-sdram-unsigned-dcd.bd file content as:

options {
    flags = 0x00;
    startAddress = 0x80000000;
    ivtOffset = 0x400;
    initialLoadSize = 0x2000;
    DCDFilePath = "CFG_DCD.bin";
    # Note: This is required if the default entrypoint is not the Reset_Handler
    #       Please set the entryPointAddress to Reset_Handler address
    entryPointAddress = 0x800022f1;
}

sources {
    elfFile = extern(0);
}

section (0) {
}

The above entryPointAddress is taken from the .s19 reset handler (the 32-bit word at address 0x80002000 + 4):

Copy the imx-sdram-unsigned-dcd.bd file to the flashloader path:
Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win

Open cmd and run the following command:

elftosb.exe -f imx -V -c imx-sdram-unsigned-dcd.bd -o ivt_evkbimxrt1050_iled_blinky_sdram_0x2000.bin evkbimxrt1050_iled_blinky_sdram_0x2000.s19

After running the command, two app IVT files will be generated.

3.3 program_sdcard_image.bd file

Prepare the program_sdcard_image.bd file content as:

# The source block assigns file names to identifiers
sources {
    myBootImageFile = extern (0);
}

# The section block specifies the sequence of boot commands to be written to the SB file
section (0) {
    #1. Prepare SDCard option block
    load 0xd0000000 > 0x100;
    load 0x00000000 > 0x104;

    #2. Configure SDCard
    enable sdcard 0x100;

    #3. Erase blocks as needed.
    erase sdcard 0x400..0x14000;
    #4. Program SDCard Image
    load sdcard myBootImageFile > 0x400;

    #5. Program Efuse for optimal read performance (optional)
    # Note: It is just a template, please program the actual fuse required in the application
    # and remove the # to enable the command
    #load fuse 0x00000000 > 0x07;
}

Copy program_sdcard_image.bd to the flashloader path:
Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win

Open cmd and run the following command:

elftosb.exe -f kinetis -V -c program_sdcard_image.bd -o boot_image.sb ivt_evkbimxrt1050_iled_blinky_sdram_0x2000_nopadding.bin

Copy the generated boot_image.sb file to the following flashloader path:
\Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\mfgtools-rel\Profiles\MXRT105X\OS Firmware

3.4 MFG tool burns the code to the SD card

Prepare one SD card and insert it into J20, then let the board enter serial download mode: SW7 1-ON, 2-OFF, 3-OFF, 4-ON. Use two USB cables, one connected to J28 and the other to J9; we use HID to download the image.

Open MFGTool.exe and click the start button.

Then modify the boot mode to internal boot, booting from the external SD card: SW7 1-ON, 2-OFF, 3-ON, 4-OFF.

Power the board off and on again; you will find the on-board LED D18 blinking, which means the external SDRAM app code boots from the external SD card successfully.

4. MCUBootUtility configuration and code download

Please check this community document: https://community.nxp.com/docs/DOC-346194
Here we just give one image readout memory map, which is useful for understanding the image location information. After download, we can read out the SD card image: from 0x400 is the IVT, BD, and DCD data, and from 0x1000 is the image, which is the same as the app.s19 file.

5. MCUXpresso Secure Provisioning configuration and download

This software is released on the NXP official website. It is also a GUI tool which can download both normal and secure code, and it is easier to use than the flashloader: customers don't need to input commands, as the tool does this for them. Its function is similar to MCUBootUtility; MCUBootUtility is an open-source tool shared on GitHub, not released on the NXP official website.

Now we use the newly released official tool to download the SDRAM app code to the external SD card. The board still needs to enter serial download mode, just like with the flashloader and MCUBootUtility. The detailed operation is shown in the figures. This tool is also very easy to use: the customer just needs to provide the app.s19 and the dcd.bin and select the related boot device configuration.

After the code is downloaded successfully, modify the boot mode to internal boot, booting from the external SD card: SW7 1-ON, 2-OFF, 3-ON, 4-OFF.

Power the board off and on again; you will find the on-board LED D18 blinking, which means the external SDRAM app code boots from the external SD card successfully.

So far, all three methods of downloading the SDRAM app code to the SD card work. The flashloader is a command-line based tool, while MCUBootUtility and MCUXpresso Secure Provisioning are GUI tools, which are easier to use.
A Chinese translation is attached. Original article: https://community.nxp.com/docs/DOC-342717
A Chinese translation is attached. Original article: https://community.nxp.com/community/imx/blog/2019/04/17/do-you-have-a-minute
A Chinese translation is attached. Original article: https://community.nxp.com/docs/DOC-341317
Overview of i.MX RT1050

The i.MX RT1050 is the industry's first crossover processor. It combines the high performance and high level of integration of an applications processor with the ease of use and real-time functionality of a microcontroller. The i.MX RT1050 runs an Arm Cortex-M7 core at 600 MHz, which gives it the headroom for complicated computation such as floating-point arithmetic and matrix operations, workloads that are hard for a general MCU to handle.

Fig 1 i.MX RT series

Its rich set of peripherals makes it suitable for a variety of applications; in this demo, the PXP (Pixel Pipeline), CSI (CMOS Sensor Interface), and eLCDIF (Enhanced LCD Interface) make it easy to build a camera display system.

Fig 2 i.MX RT1050 Block Diagram

Basic concepts of Computer Vision (CV)

Machine Learning (ML) is moving to the edge for a variety of reasons, such as bandwidth constraints, latency, reliability, and security. People want edge computing capability on embedded devices to provide more advanced services, like voice recognition for smart speakers and face detection for surveillance cameras.

Fig 3 Reasons for ML at the edge

Convolutional Neural Networks (CNNs) are one of the main approaches to image recognition and image classification. CNNs use a variation of multilayer perceptrons that requires minimal pre-processing, based on their shared-weights architecture and translation-invariance characteristics.

Fig 4 Structure of a typical deep neural network

The figure above shows the original image input on the left-hand side and how it progresses through each layer to compute the class probabilities on the right-hand side.

Hardware

MIMXRT1050 EVK board
RK043FN02H-CT (LCD panel)

Fig 5 MIMXRT1050 EVK board

Reference demo code

emwin_temperature_control: demonstrates graphical widgets of the emWin library.
cmsis_nn_cifar10: demonstrates a convolutional neural network (CNN) example using the convolution, ReLU activation, pooling, and fully-connected functions from the CMSIS-NN software library. The CNN used in this example is based on the CIFAR-10 example from Caffe. The network consists of three convolution layers interspersed with ReLU activation and max-pooling layers, followed by a fully-connected layer at the end. The input to the network is a 32x32 pixel color image, which is classified into one of the 10 output classes.

Note: both of these demo projects come from the SDK library.

Deploying the neural network model

Fig 6 illustrates the steps of deploying the neural network model on the embedded platform. The cmsis_nn_cifar10 demo project already provides the quantized parameters for the three convolution layers, so this implementation uses those parameters directly. To evaluate the model's accuracy, 100 images are chosen randomly from the test set as one round of input (see the sketch below); over several rounds of testing, the measured accuracy is about 65%, as the figure below shows.

Fig 6 Deploying the neural network model
Fig 7 cmsis_nn_cifar10 demo project test result
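The evaluation loop itself is not shown in the demo sources here, but it amounts to something like the sketch below. classify(), test_images, and test_labels are hypothetical names standing in for the demo's inference pipeline and for CIFAR-10 test data stored on the device; they are assumptions, not part of the SDK demo as shipped.

#include <stdint.h>
#include <stdlib.h>

#define NUM_TEST   10000          /* CIFAR-10 test set size */
#define ROUND_SIZE 100            /* images per evaluation round */
#define IMG_BYTES  (32 * 32 * 3)  /* 32x32 RGB, one q7 value per channel */

/* Hypothetical: test data stored on the device and the demo's inference. */
extern const int8_t  test_images[NUM_TEST][IMG_BYTES];
extern const uint8_t test_labels[NUM_TEST];
extern int classify(const int8_t *image);   /* returns class index 0..9 */

/* Run one round of 100 randomly chosen test images and return accuracy. */
float evaluate_round(void)
{
    int hits = 0;

    for (int i = 0; i < ROUND_SIZE; i++) {
        int idx = rand() % NUM_TEST;
        if (classify(test_images[idx]) == test_labels[idx])
            hits++;
    }
    return (float)hits / ROUND_SIZE;   /* about 0.65 in the tests above */
}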
The CIFAR-10 dataset is a collection of images commonly used to train ML and computer vision algorithms. It consists of 60000 32x32 color images in 10 classes ("airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"), with 6000 images per class; there are 50000 training images and 10000 test images.

Embedded platform software structure

After POR, the various components are initialized: system clock, pin mux, camera, CSI, PXP, LCD, emWin, etc. The control GUI then appears on the LCD. Pressing the Play button displays the camera video on the LCD; once an object is in the camera's view, pressing the Capture button pauses the display and runs the model to identify the object (a sketch of the first inference stage is given at the end of this article). Fig 8 presents the software structure of this demo.

Fig 8 Embedded platform software structure

Object identification test

The three figures below present the testing results.

Fig 9
Fig 10
Fig 11

Future work

Use the PyTorch framework to train a better and more complicated convolutional network for object recognition.
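For reference, the first stage of the inference looks roughly like the sketch below, using the CMSIS-NN q7 kernels the demo is built on. The dimensions follow the CMSIS-NN CIFAR-10 example (32x32 RGB input, 32 feature maps, 5x5 kernel, pad 2, stride 1, then a 3x3 stride-2 max-pool); conv1_wt, conv1_bias, and the shift values come from the demo's quantized parameter files, so treat the names and values here as placeholders, not the demo's exact code.

#include "arm_nnfunctions.h"    /* CMSIS-NN q7 kernels used by the demo */

/* Quantized first-layer parameters; in the demo these come from the
 * generated parameter header, so the names here are placeholders. */
extern const q7_t conv1_wt[5 * 5 * 3 * 32];
extern const q7_t conv1_bias[32];
#define CONV1_BIAS_LSHIFT 0     /* shift values come from quantization */
#define CONV1_OUT_RSHIFT  9

static q7_t  buf_conv1[32 * 32 * 32];  /* conv1 output, 32x32x32 */
static q7_t  buf_pool1[16 * 16 * 32];  /* pool1 output, 16x16x32 */
static q15_t col_buf[2 * 5 * 5 * 3];   /* im2col scratch for the kernel */

void run_first_stage(const q7_t *input /* 32x32x3 camera patch */)
{
    /* 5x5 RGB convolution: 32x32x3 in -> 32x32x32 out */
    arm_convolve_HWC_q7_RGB(input, 32, 3, conv1_wt, 32, 5, 2, 1,
                            conv1_bias, CONV1_BIAS_LSHIFT, CONV1_OUT_RSHIFT,
                            buf_conv1, 32, col_buf, NULL);
    arm_relu_q7(buf_conv1, 32 * 32 * 32);
    /* 3x3 max-pool, stride 2: 32x32x32 -> 16x16x32 */
    arm_maxpool_q7_HWC(buf_conv1, 32, 32, 3, 1, 2, 16, NULL, buf_pool1);
    /* ...two more convolution/pooling stages and a fully connected layer
     * follow, finishing with arm_softmax_q7() over the 10 class scores. */
}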
A Chinese translation is attached. Original article: https://community.nxp.com/docs/DOC-342297
A Chinese translation is attached. Original article: https://community.nxp.com/docs/DOC-341316
A Chinese translation is attached. Original article: https://community.nxp.com/docs/DOC-340813
A Chinese translation is attached. Original article: https://community.nxp.com/docs/DOC-341985
Source code: https://github.com/JayHeng/NXP-MCUBootUtility

【v2.0.0】
Features:
> 1. Support i.MXRT5xx A0, i.MXRT6xx A0
> 2. Support i.MXRT1011, i.MXRT117x A0
> 3. [RTyyyy] Support the OTFAD encrypted secure boot cases (unique SNVS key, user-defined key)
> 4. [RTxxx] Support both UART and USB-HID ISP modes (COM port / USB device detected automatically)
> 5. [RTxxx] Support converting a bare image into a bootable image
> 6. [RTxxx] The original image may itself be a bootable image (with FDCB)
> 7. [RTxxx] Support loading a bootable image into a FlexSPI/QuadSPI NOR boot device
> 8. [RTxxx] Support the development boot cases (unsigned, CRC)
> 9. Add an Execute (jump) action to the Flash Programmer
> 10. [RTyyyy] Show the current FlexRAM configuration in the device status
Improvements:
> 1. [RTyyyy] Improve the stability of the USB connection on i.MXRT105x boards
> 2. RAM can also be written/read via the Flash Programmer
> 3. [RTyyyy] Provide a Flashloader resident option to adapt to different FlexRAM configurations
Bugfixes:
> 1. [RTyyyy] Sometimes the tool reported "xx.bat file cannot be found" during certificate generation, so the certificate could not be generated
> 2. [RTyyyy] Editing mixed eFuse fields through the visual editor did not take effect
> 3. [RTyyyy] LPSPI NOR/EEPROM devices of 32 MB or larger were not supported
> 4. The last two pages of the boot device could not be erased/read via the Flash Programmer
Source code: https://github.com/JayHeng/NXP-MCUBootUtility

【v1.3.0】
Features:
> 1. Can generate a .sb file containing only the custom eFuse burn operations specified in the eFuse operation utility window
Improvements:
> 1. HAB signed mode is not applicable to Non-XIP boot from FlexSPI NOR/SEMC NOR devices with the RT1020/1015 ROM
> 2. HAB encrypted mode is not applicable to FlexSPI NOR/SEMC NOR device boot with the RT1020/1015 ROM
> 3. Multiple .sb files (all, flash, efuse) are generated when the all-in-one action includes an eFuse operation
> 4. A .sb file can be generated without a board connection when the boot device type is NOR
> 5. The automatic image read-back can be disabled to save operation time
> 6. The language option text in the menu bar is now static and easy to understand (shown in both Chinese and English)
Bugfixes:
> 1. Could not generate a bootable image when the original image (hex/bin) was larger than 64 KB
> 2. Could not download a large image file (e.g. 6.8 MB) in some cases
> 3. Some dynamic labels (e.g. the Connect button) did not update immediately when switching the display language
> 4. Some LED demos in the /apps directory for the RT1050 EVKB board were invalid

【v1.4.0】
Features:
> 1. Support loading a bootable image into a uSDHC SD/eMMC boot device
> 2. Provide a friendlier way to view and set mixed eFuse fields
Improvements:
> 1. The default FlexSPI NOR device now matches the NXP EVK boards
> 2. Enable a real-time progress gauge for Flash Programmer actions