i.MX RT Crossover MCUs Knowledge Base

The SDK_2.7.0_EVKB-IMXRT1050 contains some eIQ machine learning demo projects, among them tensorflow_lite_kws. It is a keyword-spotting example based on Keyword spotting for Microcontrollers, and it deploys a depthwise separable convolutional neural network (DS-CNN) derived from MobileNet. It can classify a one-second audio clip as either silence, an unknown word, "yes", "no", "up", "down", "left", "right", "on", "off", "stop", or "go". Figure 1 shows the components that comprise it.

Fig 1

Training Our New Model
The model we are using is trained with the TensorFlow script which is designed to demonstrate how to build and train a model for audio recognition using TensorFlow. The script makes it very easy to train an audio recognition model. Among other things, it allows us to do the following:
- Download a dataset with audio featuring 20 spoken words.
- Choose which subset of words to train the model on.
- Specify what type of preprocessing to use on the audio.
- Choose from several different model architectures.
- Optimize the model for microcontrollers using quantization.
When we run the script, it downloads the dataset, trains a model, and outputs a file representing the trained model. We then use some other tools to convert this file into the correct form for TensorFlow Lite.

Training in a virtual machine (VM)

Preparation
Make sure TensorFlow has been installed. Since the script downloads over 2 GB of training data, the machine needs a good internet connection and enough free space. Note that the training process itself can take several hours, so be patient.

Training
To begin the training process, use the following command to clone ML-KWS-for-MCU:

git clone https://github.com/ARM-software/ML-KWS-for-MCU.git

The training scripts are configured via command-line flags that control everything from the model's architecture to the words it will be trained to classify. The following command runs the script that begins training; you can see that it has a lot of command-line arguments:

python ML-KWS-for-MCU/train.py --model_architecture ds_cnn --model_size_info 5 64 10 4 2 2 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 \
--wanted_words=zero,one,two,three,four,five,six,seven,eight,nine \
--dct_coefficient_count 10 --window_size_ms 40 \
--window_stride_ms 20 --learning_rate 0.0005,0.0001,0.00002 \
--how_many_training_steps 10000,10000,10000 \
--data_dir=./speech_dataset --summaries_dir ./retrain_logs --train_dir ./speech_commands_train

Some of these arguments select the training data. --wanted_words=zero,one,two,three,four,five,six,seven,eight,nine lists the words the model will classify. By default, the selected words are yes, no, up, down, left, right, on, off, stop, go, but we can provide any combination of the following words, all of which appear in our dataset:
- Common commands: yes, no, up, down, left, right, on, off, stop, go, backward, forward, follow, learn
- Digits zero through nine: zero, one, two, three, four, five, six, seven, eight, nine
- Random words: bed, bird, cat, dog, happy, house, Marvin, Sheila, tree, wow
Others set up the output of the script, such as --train_dir=./speech_commands_train, which defines where the trained model will be saved.
Leave the arguments as they are and run it. The script will start off by downloading the Speech Commands dataset (Figure 2), which consists of over 105,000 WAVE audio files of people saying thirty different words.
This data was collected by Google and released under a CC BY license, and you can help improve it by contributing five minutes of your own voice. The archive is over 2 GB, so this part may take a while, but you should see progress logs, and once it's been downloaded you won't need to do this step again. You can find more information about this dataset in the Speech Commands paper.

Fig 2

Once the download has completed, some more output will appear. There might be some warnings, which you can ignore as long as the command continues running. Later, you'll see logging information that looks like this (Figure 3).

Fig 3

This shows that the initialization process is done and the training loop has begun. You'll see that it outputs information for every training step. Here's a breakdown of what it means:

Step shows which step of the training loop we're on. In this case there are going to be 30,000 steps in total, so you can look at the step number to get an idea of how close training is to finishing.

rate is the learning rate that controls the speed of the network's weight updates. Early on this is a comparatively high number (0.0005), but for later training cycles it is reduced 5x to 0.0001, and finally to 0.00002.

accuracy is the fraction of samples that were correctly classified on this training step. This value will often fluctuate a lot, but should increase on average as training progresses. The model outputs an array of numbers, one for each label, and each number is the predicted likelihood of the input being that class. The predicted label is picked by choosing the entry with the highest score. The scores are always between zero and one, with higher values representing more confidence in the result.

cross-entropy is the result of the loss function that we're using to guide the training process. This is a score obtained by comparing the vector of scores from the current training run to the correct labels, and it should trend downwards during training.

checkpoint
After a hundred steps, you should see a line like this: it is saving out the current trained weights to a checkpoint file (Figure 4). If your training script gets interrupted, you can look for the last saved checkpoint and then restart the script with --start_checkpoint=/tmp/speech_commands_train/best/ds_cnn_xxxx.ckpt-400 as a command-line argument to start from that point.

Fig 4

Confusion Matrix
After four hundred steps, this information will be logged. The first section is a confusion matrix. To understand what it means, you first need to know the labels being used, which in this case are "silence", "unknown", "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", and "nine". Each column represents a set of samples that were predicted to be each label, so the first column represents all the clips that were predicted to be silence, the second all those that were predicted to be unknown words, the third "zero", and so on. Each row represents clips by their correct, ground-truth labels. The first row is all the clips that were silence, the second clips that were unknown words, the third "zero", etc.

This matrix can be more useful than just a single accuracy score because it gives a good summary of what mistakes the network is making. In this example you can see that all of the entries in the first row are zero (Figure 5), apart from the initial one.
Because the first row is all the clips that are actually silence, this means that none of them were mistakenly labeled as words, so we have no false negatives for silence. This shows the network is already getting pretty good at distinguishing silence from words. If we look down the first column, though, we see a lot of non-zero values. The column represents all the clips that were predicted to be silence, so positive numbers outside of the first cell are errors. This means that some clips of real spoken words are actually being predicted to be silence, so we do have quite a few false positives.

A perfect model would produce a confusion matrix where all of the entries were zero apart from the diagonal. Spotting deviations from that pattern can help you figure out where the model is most easily confused, and once you've identified the problems you can address them by adding more data or cleaning up categories.

Fig 5

Validation
After the confusion matrix, you should see a line like Figure 5 shows. It's good practice to separate your data set into three categories. The largest (in this case roughly 80% of the data) is used for training the network; a smaller set (10% here, known as "validation") is reserved for evaluating the accuracy during training; and another set (the last 10%, "testing") is used to evaluate the accuracy once after the training is complete. The reason for this split is that there's always a danger that networks will start memorizing their inputs during training. By keeping the validation set separate, you can ensure that the model works with data it's never seen before. The testing set is an additional safeguard to make sure that you haven't just been tweaking your model in a way that happens to work for both the training and validation sets but not a broader range of inputs.

The training script automatically separates the data set into these three categories, and the logging line above shows the accuracy of the model when run on the validation set. Ideally, this should stick fairly close to the training accuracy. If the training accuracy increases but the validation accuracy doesn't, that's a sign that overfitting is occurring: your model is learning things about the training clips only, not broader patterns that generalize.

Training Finished
In general, training is the process of iteratively tweaking a model's weights and biases until it produces useful predictions. The training script writes these weights and biases to checkpoint files (Figure 6).

Fig 6

A TensorFlow model consists of two main things:
- The weights and biases resulting from training
- A graph of operations that combine the model's input with these weights and biases to produce the model's output
At this juncture, our model's operations are defined in the Python scripts, and its trained weights and biases are in the most recent checkpoint file. We need to unite the two into a single model file with a specific format, which we can use to run inference. The process of creating this model file is called freezing: we're creating a static representation of the graph with the weights frozen into it.
To freeze our model, we run the freeze script as follows:

python ML-KWS-for-MCU/freeze.py --model_architecture ds_cnn --model_size_info 5 64 10 4 2 2 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 \
--wanted_words=zero,one,two,three,four,five,six,seven,eight,nine \
--dct_coefficient_count 10 --window_size_ms 40 \
--window_stride_ms 20 --checkpoint ./speech_commands_train/best/ds_cnn_9490.ckpt-21600 \
--output_file=./ds_cnn.pb

To point the script toward the correct graph of operations to freeze, we pass some of the same arguments we used in training. We also pass a path to the final checkpoint file, which is the one whose filename ends with the total number of training steps. The frozen graph will be output to a file named ds_cnn.pb. This file is the fully trained TensorFlow model. It can be loaded by TensorFlow and used to run inference. That's great, but it's still in the format used by regular TensorFlow, not TensorFlow Lite.

Convert to TensorFlow Lite
Conversion is an easy step: we just need to run a single command. Now that we have a frozen graph file to work with, we'll be using toco, the command-line interface for the TensorFlow Lite converter:

toco --graph_def_file=./ds_cnn.pb --output_file=./ds_cnn.tflite \
--input_shapes=1,49,10,1 --input_arrays=Reshape_1 --output_arrays='labels_softmax' \
--inference_type=QUANTIZED_UINT8 --mean_values=227 --std_dev_values=1 \
--change_concat_input_ranges=false \
--default_ranges_min=-6 --default_ranges_max=6

In the arguments, we specify the model that we want to convert, the output location for the TensorFlow Lite model file, and some other values that depend on the model architecture. We also provide some arguments (inference_type, mean_values, and std_dev_values) that instruct the converter how to map its low-precision values into real numbers. The converted model will be written to ds_cnn.tflite; this is a fully formed TensorFlow Lite model!

Create a C array
Next, we use the xxd command to convert the TensorFlow Lite model into a C array:

xxd -i ./ds_cnn.tflite > ./ds_cnn.h
cat ./ds_cnn.h

The final part of the output is the file's contents: a C array and an integer holding its length, as follows:

Fig 7

Next, we'll integrate this newly trained model with the tensorflow_lite_kws project.

Using the Model in the tensorflow_lite_kws Project
To use the new model, we need to do two things:
1. In source/ds_cnn_s_model.h, replace the original model data with our new model.
2. Update the label names in source/kws.cpp with our new "zero", "one", "two", "three", "four", "five", "six", "seven", "eight" and "nine" labels:

const std::string labels[] = {"Silence", "Unknown", "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"};

Before running the model on the EVKB-IMXRT1050 board (Figure 8), please refer to the readme.txt for the preparation steps; the file also describes the testing steps, so please follow them.

Fig 8

Figure 9 shows the testing I did. I've attached the model file, please give it a try yourself.

Fig 9
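For reference, here is a sketch of what the xxd-generated header looks like. The symbol names are derived from the input path (xxd replaces non-alphanumeric characters with underscores, so ./ds_cnn.tflite becomes __ds_cnn_tflite), and the bytes and length below are placeholders, not the real model contents:

/* ds_cnn.h -- generated by: xxd -i ./ds_cnn.tflite > ./ds_cnn.h */
unsigned char __ds_cnn_tflite[] = {
  0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33, /* FlatBuffer header, "TFL3" */
  /* ... remaining model bytes ... */
};
unsigned int __ds_cnn_tflite_len = 53184; /* placeholder length */

When pasting into source/ds_cnn_s_model.h, keep the array and length names the project already uses and replace only the initializer data.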
View full article
Overview
========
The LPUART example for FreeRTOS demonstrates how to use the LPUART driver in the RTOS with hardware flow control. The example uses two instances of the LPUART IP and sends data between them. The UART signals must be jumpered together on the board.

Toolchain supported
===================
- MCUXpresso 11.0.0

Hardware requirements
=====================
- Mini/micro USB cable
- MIMXRT1050-EVKB board
- Personal Computer

Board settings
==============
R278 and R279 must be populated, or have their pads shorted. These resistors are under the display, on the opposite side of the board from the uSD connector.
The following pins need to be jumpered together:
---------------------------------------------------------------------------------
|   |            UART3 (UARTA)            |            UART8 (UARTB)            |
|---|-------------------------------------|-------------------------------------|
| # | Signal        | Function | Jumper   | Jumper   | Function | Signal        |
|---|---------------|----------|----------|----------|----------|---------------|
| 1 | GPIO_AD_B1_07 | RX       | J22-pin1 | J23-pin1 | TX       | GPIO_AD_B1_10 |
| 2 | GPIO_AD_B1_06 | TX       | J22-pin2 | J23-pin2 | RX       | GPIO_AD_B1_11 |
| 3 | GPIO_AD_B1_04 | CTS      | J23-pin3 | J24-pin5 | RTS      | GPIO_SD_B0_03 |
| 4 | GPIO_AD_B1_05 | RTS      | J23-pin4 | J24-pin4 | CTS      | GPIO_SD_B0_02 |
---------------------------------------------------------------------------------

Prepare the Demo
================
1. Connect a USB cable between the host PC and the OpenSDA USB port on the target board.
2. Open a serial terminal with the following settings:
   - 115200 baud rate
   - 8 data bits
   - No parity
   - One stop bit
   - No flow control
3. Download the program to the target board.
4. Either press the reset button on your board or launch the debugger in your IDE to begin running the demo.

Running the demo
================
You will see the status of the example printed to the console.

Customization options
=====================
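As a sketch of what the task code for this example looks like (based on the SDK's fsl_lpuart_freertos.h API; the instance and buffer sizes below are examples, and the flow-control field names should be checked against your SDK version):

#include "FreeRTOS.h"
#include "task.h"
#include "fsl_lpuart_freertos.h"

static lpuart_rtos_handle_t handle;
static struct _lpuart_handle t_handle;
static uint8_t background_buffer[32];
static uint8_t recv_buffer[8];

static lpuart_rtos_config_t uart_config = {
    .base        = LPUART3,
    .baudrate    = 115200,
    .parity      = kLPUART_ParityDisabled,
    .stopbits    = kLPUART_OneStopBit,
    .buffer      = background_buffer,
    .buffer_size = sizeof(background_buffer),
    .enableRxRTS = true, /* hardware flow control */
    .enableTxCTS = true,
};

static void uart_task(void *pvParameters)
{
    size_t n = 0;
    uart_config.srcclk = BOARD_DebugConsoleSrcFreq(); /* assumed board helper for the LPUART clock */
    NVIC_SetPriority(LPUART3_IRQn, 5); /* must be set before LPUART_RTOS_Init */
    if (kStatus_Success != LPUART_RTOS_Init(&handle, &t_handle, &uart_config))
    {
        vTaskSuspend(NULL);
    }
    LPUART_RTOS_Send(&handle, (uint8_t *)"Hello\r\n", 7);
    LPUART_RTOS_Receive(&handle, recv_buffer, sizeof(recv_buffer), &n);
    LPUART_RTOS_Deinit(&handle);
    vTaskSuspend(NULL);
}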
View full article
Setting up the RT600 development environment

RT600 getting-started training video:
https://www.nxp.com/document/guide/getting-started-with-i-mx-rt600-evaluation-kit:GS-MIMXRT685-EVK?&tid=vanGS-MIMXRT685-EVK#title2.1

1. Download the i.MX RT600 SDK. Download link:
https://mcuxpresso.nxp.com/en/select?device=EVK-MIMXRT685

2. Download MCUXpresso IDE. Note that MCUXpresso IDE 11.1.1 or later is required.
https://www.nxp.com/webapp/swlicensing/sso/downloadSoftware.sp?catid=MCUXPRESSO

3. Download and install LPCScrypt, which can upgrade the default on-board CMSIS-DAP firmware to J-LINK. Through J-LINK, you can download and debug HiFi4 DSP firmware. Download link:
https://www.nxp.com/design/microcontrollers-developer-resources/lpc-microcontroller-utilities/lpcscrypt-v2-1-1:LPCSCRYPT?&tab=Design_Tools_Tab

4. Download and install the J-LINK driver. Download link:
https://www.segger.com/downloads/jlink/

5. Download and install Cadence HiFi 4 DSP IDE for MIMXRT600.
For the first download, register a user account at https://tensilicatools.com/register/. For users in mainland China: if the human-verification widget does not appear on the registration page, the IP address is being blocked by the GW firewall, and a proxy or other workaround is needed, otherwise the registration cannot be submitted.

6. Download the HiFi DSP Development Tools for i.MX RT600:
https://tensilicatools.com/download/rt600-download-page/

7. Apply for a license for the RT600 SDK. Note that when entering the MAC address of the bound network adapter, remove the ':' separators, otherwise the request will fail.
After the application is approved, you can download the license file.
After starting Xplorer 8.0.13, install the license file via the menu Help -> Xplorer License Keys. After successful installation it is displayed as follows:

8. Configure the Xplorer download debugger:
- Add the directory containing xt-ocd.exe to the system Path environment variable.
- Enable "Use XOCD Manager" and specify the Topology File.
- Set "Download binary" to Always, which suppresses the prompt dialog before each download and saves download time.

9. Download the HiFi4 DSP firmware through J-Link; you can then single-step debug the code.
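For the Path step, one way to append the xt-ocd.exe directory from a Windows command prompt is shown below; the install directory is an assumption, so adjust it to your actual Xtensa tools location (note that setx truncates values longer than 1024 characters, in which case use the System Properties dialog instead):

setx Path "%Path%;C:\Program Files (x86)\Tensilica\Xtensa OCD Daemon 14.08"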
View full article
The MCUXpresso Secure Provisioning Tool is an official security-focused tool released in the first half of this year. It is simple and convenient to operate, stable and reliable, and very friendly to users unfamiliar with the security features. Its functionality is not yet complete, however: at the moment it only supports HAB-related operations, and features such as BEE will have to wait for later updates. For detailed information and the user manual, please refer to the official page: MCUXpresso Secure Provisioning Tool | Software Development for NXP Microcontrollers (MCUs) | NXP

At present, not many customers seem to know about this tool; most still use MCU BOOT UTILITY. So what do you do if you have already burned the fuses with MCU BOOT UTILITY and now want to use the official tool? After studying and comparing the two, it turns out their underlying execution parts are the same, so the following simple substitutions get the new tool working:

First, replace the crts and keys. The MCU BOOT UTILITY path is:
..\NXP-MCUBootUtility-2.2.0\NXP-MCUBootUtility-2.2.0\tools\cst
The corresponding MCUXPRESSO SECURE PROVISIONING path is the root directory of the corresponding workspace.

In addition, the hab_cert files used by the encrypted mode must be replaced as well; note that the two tools name these files differently, so rename them accordingly.
The MCU BOOT UTILITY path is:
..\NXP-MCUBootUtility-2.2.0\NXP-MCUBootUtility-2.2.0\gen\hab_cert
The MCUXPRESSO SECURE PROVISIONING path is inside the workspace:
..\secure_provisioning_RT1050\gen_hab_certs
In MCU BOOT UTILITY they are named: SRK_1_2_3_4_table.bin; SRK_1_2_3_4_fuse.bin
In MCUXPRESSO SECURE PROVISIONING they are named: SRK_fuses.bin; SRK_hash.bin

With that, the new tool is ready to use.

Finally, note that the new tool can create different workspaces to store projects with different keys, which makes it easy for users to keep them apart. Projects created in the new tool can also swap keys with each other; just follow the secure provisioning part of the steps above.
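The file moves described above can be scripted. The Windows batch sketch below assumes the extract/workspace paths from the text, and the SRK rename mapping is an assumption: verify which source file corresponds to which target (for example by file size, since the SRK table is a few kilobytes while the fuse/hash files are 32 bytes) before burning any fuses.

set BOOTUTIL=..\NXP-MCUBootUtility-2.2.0\NXP-MCUBootUtility-2.2.0
set WORKSPACE=..\secure_provisioning_RT1050

:: 1. crts and keys go to the workspace root directory
xcopy /s /i "%BOOTUTIL%\tools\cst\crts" "%WORKSPACE%\crts"
xcopy /s /i "%BOOTUTIL%\tools\cst\keys" "%WORKSPACE%\keys"

:: 2. SRK files for encrypted (HAB) mode, renamed to the Secure Provisioning
::    names (the mapping below is an assumption -- verify it against your
::    generated files before burning fuses)
copy "%BOOTUTIL%\gen\hab_cert\SRK_1_2_3_4_fuse.bin"  "%WORKSPACE%\gen_hab_certs\SRK_fuses.bin"
copy "%BOOTUTIL%\gen\hab_cert\SRK_1_2_3_4_table.bin" "%WORKSPACE%\gen_hab_certs\SRK_hash.bin"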
View full article
Recently, we have often encountered customers using i.MX RT for RS485 communication, and most problems concern switching between the receive and transmit directions. Taking the i.MX RT1050 and the SN65HVD11QDR as examples, this document introduces an LPUART-to-RS485 circuit and methods of transceiver direction control. The working principle is as follows:

LPUART TXD: Transmit Data
LPUART RXD: Receive Data
LPUART RTS_B: Request To Send

The main control methods are as follows:

1. Use the TXD signal line for automatic hardware transceiver control.
According to the UART protocol, when the line is idle, TX is logic high. After the NOT gate, a LOW level is applied to the direction-control terminal, so whenever the UART is not transmitting data, the RS485 transceiver is in the receiving state.

2. Use GPIO control or LPUART_RTS.
For more detailed information, refer to: https://www.nxp.com/docs/en/application-note/AN12679.pdf
Note: with GPIO control, the software must judge the timing of switching between receiving and sending. If the timing is not handled well, it is easy to lose data. To control it correctly, the software must respond to the TX FIFO "empty" interrupt, or poll the transmit status register, and switch direction at exactly the right moment, so that sending and receiving are error-free. Combining the above methods, some customers use the following control:
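For method 2, a minimal sketch of GPIO-based direction control is shown below, assuming LPUART3 and an example GPIO pin for the transceiver's DE/RE input. It polls the transmission-complete flag, which is the simplest way to release the bus only after the last stop bit has left the shifter:

#include "fsl_gpio.h"
#include "fsl_lpuart.h"

#define RS485_DIR_GPIO GPIO1
#define RS485_DIR_PIN  20U /* hypothetical direction-control pin */

void RS485_Send(LPUART_Type *base, const uint8_t *data, size_t len)
{
    GPIO_PinWrite(RS485_DIR_GPIO, RS485_DIR_PIN, 1U); /* driver enable: transmit */
    LPUART_WriteBlocking(base, data, len);
    /* Wait until the last bit has physically left the transmit shifter;
     * switching back too early truncates the end of the frame. */
    while (0U == (LPUART_GetStatusFlags(base) & (uint32_t)kLPUART_TransmissionCompleteFlag))
    {
    }
    GPIO_PinWrite(RS485_DIR_GPIO, RS485_DIR_PIN, 0U); /* back to receive */
}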
View full article
This application note describes how to develop an H.264 video decoding application with the NXP i.MX RT1050 processor. Click here to access the full application note. Click here to access the GitHub repo of FFmpeg (code, no GPL). Note: the code is for evaluation purposes only.
View full article
When designing a project, sometimes CCM_CLKO1 needs to output different clocks to meet customer needs, so the customer does not need to buy a separate crystal, which reduces cost. This document describes how to make CCM_CLKO1 output different clocks on the i.MX RT1050. According to the selection of the clock generated on CCM_CLKO1 (CLKO1_SEL) and the divider setting of CCM_CLKO1 (CLKO1_DIV) in the i.MX RT1050 reference manual, CCM_CLKO1 can output different clocks. If CCM_CLKO1 outputs the SYS PLL clock, we can derive the following frequencies for the application:

CLKO1_DIV:  000  001  010  011  100   101  110     111
Freq (MHz): 264  132  88   66   52.8  44   37.714  33

For example, to get an 88 MHz output derived from the SYS PLL clock, follow the steps below (led_blinky project in the SDK):

1. Pinmux GPIO_SD_B0_04 as the CCM_CLKO1 signal:
IOMUXC_SetPinMux(IOMUXC_GPIO_SD_B0_04_CCM_CLKO1, 0U);
IOMUXC_SetPinConfig(IOMUXC_GPIO_SD_B0_04_CCM_CLKO1, 0x10B0u);

2. Enable the CCM_CLKO1 signal:
CCM->CCOSR |= CCM_CCOSR_CLKO1_EN_MASK;

3. Set CLKO1_DIV and CLKO1_SEL to get the 88 MHz clock for the application:
CCM->CCOSR = (CCM->CCOSR & (~CCM_CCOSR_CLKO1_DIV_MASK)) | CCM_CCOSR_CLKO1_DIV(2);
CCM->CCOSR = (CCM->CCOSR & (~CCM_CCOSR_CLKO1_SEL_MASK)) | CCM_CCOSR_CLKO1_SEL(1);

4. We will get the clock as shown below.

Note: In principle, it is not recommended to output a clock on CCM_CLKO1. If necessary, please connect an 8-10 pF capacitor on GPIO_SD_B0_04 and a 22 ohm series resistor to prevent interference.
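Putting the steps together, a sketch of a complete helper function is shown below (the function name is ours; the pad config value 0x10B0 and the CCOSR settings are taken from the steps above):

#include "fsl_clock.h"
#include "fsl_iomuxc.h"

void BOARD_OutputClko1_88MHz(void)
{
    CLOCK_EnableClock(kCLOCK_Iomuxc); /* make sure the IOMUXC is clocked */

    /* Step 1: route CCM_CLKO1 to pad GPIO_SD_B0_04 */
    IOMUXC_SetPinMux(IOMUXC_GPIO_SD_B0_04_CCM_CLKO1, 0U);
    IOMUXC_SetPinConfig(IOMUXC_GPIO_SD_B0_04_CCM_CLKO1, 0x10B0u);

    /* Step 2: enable the CLKO1 output */
    CCM->CCOSR |= CCM_CCOSR_CLKO1_EN_MASK;

    /* Step 3: SYS PLL selected, divided by 3 -> 264 MHz / 3 = 88 MHz */
    CCM->CCOSR = (CCM->CCOSR & ~CCM_CCOSR_CLKO1_DIV_MASK) | CCM_CCOSR_CLKO1_DIV(2);
    CCM->CCOSR = (CCM->CCOSR & ~CCM_CCOSR_CLKO1_SEL_MASK) | CCM_CCOSR_CLKO1_SEL(1);
}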
View full article
i.MX RT6xx
The RT6xx is a crossover MCU family, a breakthrough product combining the best of MCU and DSP functionality for ultra-low-power secure Machine Learning (ML) / Artificial Intelligence (AI) edge processing, performance-intensive far-field voice and immersive 3D audio playback applications.

Fig 1 is the block diagram for the i.MX RT600. It consists of a Cortex-M33 core that runs up to 300 MHz with 32 KB FlexSPI cache and an optional HiFi4 DSP that runs up to 600 MHz with 96 KB DSP cache and 128 KB DSP TCM. It also contains a cryptography engine and a DSP/math accelerator in the PowerQuad co-processor. The device has 4.5 MB of on-chip SRAM. Key features include the rich audio peripherals, the high-speed USB with PHY and the advanced on-chip security. There is a Flexcomm peripheral that supports the configuration of numerous UARTs, SPI, I2C, I2S, etc.

Fig 1

Create an eIQ (TensorFlow Lite library) demo
The latest version of the SDK for the i.MX RT600 still doesn't contain any Machine Learning (ML) / Artificial Intelligence (AI) demos, so developers need to create this kind of demo by themselves. The quickest way is to port the eIQ demos from the i.MX RT1050/1060 to the i.MX RT685. The steps of creating an eIQ (TensorFlow Lite library) demo are presented below.

Create a new C++ project
Install the SDK library.

Fig 2

Create a new C++ project using the installed SDK part. Chapter 5 of the MCUXpresso IDE User Guide, "Creating New Projects using installed SDK Part Support", presents how to create a new project; please refer to it for details.

Porting tensorflow-lite
Copy the TensorFlow Lite library and its corresponding files to the target project.

Fig 3

Add the include paths for the above files.

Fig 4
Fig 5
Fig 6

Porting main code
The main() code is from the post: The "Hello World" of TensorFlow Lite.

Testing
On the MIMXRT685 EVK board (Fig 7), we record the input data x_value and the inferenced output data y_value via the serial port (Fig 8).

Fig 7
Fig 8

In addition, we use Excel to display the received data against our actual values, as the figure below shows.

Fig 9

In general, it has replicated the result of The "Hello World" of TensorFlow Lite.

Troubleshooting
By default, the created project doesn't support printing floats, so this feature needs to be enabled by adding the symbols below (Fig 10).

Fig 10

When a neural network is executed, the results of one layer are fed into subsequent operations and so must be kept around for some time. The lifetimes of these activation layers vary depending on their position in the graph, and the memory size needed for each is controlled by the shape of the array that a layer writes out. These variations mean that it's necessary to calculate a plan over time to fit all these temporary buffers into as small an area of memory as possible. Currently, this is done when the model is first loaded by the interpreter, so if the area is not big enough, you'll see a crash. For this application demo, the default heap size is 4 KB, which is obviously not big enough to store the model's input, output, and intermediate tensors, so the code gets stuck in the hard-fault interrupt handler (Fig 11).

Fig 11

So, how large should we allocate the heap area? That's a good question. Unfortunately, there's not a simple answer. Different model architectures have different sizes and numbers of input, output, and intermediate tensors, so it's difficult to know how much memory we'll need.
The number doesn't need to be exact (we can reserve more memory than we need), but since microcontrollers have limited RAM, we should keep it as small as possible so there's space for the rest of our program. We can find the right size through trial and error. For this application demo, the code works well after increasing the heap to ten times its previous size (Fig 12).

Fig 12
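For context, here is a sketch of how the interpreter from the "Hello World" example is typically set up (2020-era TensorFlow Lite Micro API; header paths and namespaces vary between versions). Whether the working memory is a static tensor arena as below or comes from the heap depends on how the port allocates it, but either way this buffer is what you grow by trial and error:

#include "tensorflow/lite/micro/kernels/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

constexpr int kTensorArenaSize = 2 * 1024; // grow this if AllocateTensors() fails
static uint8_t tensor_arena[kTensorArenaSize];

void SetupModel(const unsigned char *model_data)
{
    static tflite::MicroErrorReporter error_reporter;
    const tflite::Model *model = tflite::GetModel(model_data);
    static tflite::ops::micro::AllOpsResolver resolver;
    static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                                kTensorArenaSize, &error_reporter);
    // AllocateTensors() runs the memory planner that fits the input, output
    // and intermediate tensors into the arena -- this is the step that fails
    // (or hard-faults) when the working memory is too small.
    if (interpreter.AllocateTensors() != kTfLiteOk)
    {
        // increase kTensorArenaSize and rebuild
    }
}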
View full article
MIMXRT1010 EVK (Chinese Version) Design Files and Hardware User's Guide
View full article
RT1050 SDRAM app code boot from SD card, burned with 3 tools

Abstract
This document is about an RT-series application that runs in external SDRAM but boots from an SD card. It covers generating the SDRAM app code with an RT1050 SDK MCUXpresso IDE project, and burning the code to the external SD card with the flashloader MFG tool and with MCUXpresso Secure Provisioning. The MCUBootUtility method can be found in this post: https://community.nxp.com/docs/DOC-346194

Software and hardware platform:
- SDK 2.7.0_EVKB-IMXRT1050
- MCUXpresso IDE
- Flashloader_i.MXRT1050_GA
- MCUBootUtility
- MCUXpresso Secure Provisioning
- MIMXRT1050-EVKB

2. RT1050 SDRAM app image generation
Port the SDK_2.7.0_EVKB-IMXRT1050 iled_blinky project to the MCUXpresso IDE. To generate code located in SDRAM, modify the configuration as follows:

2.1 Copy code to RAM

2.2 Modify the memory location to SDRAM address 0x80002000
The code that boots from the SD card and runs in SDRAM is non-XIP code, so the IVT offset is 0x400. In our test, we place the image at SDRAM memory address 0x80002000; the configuration is:

2.3 Modify the symbol

2.4 Generate the .s19 file
After the build completes without problems, generate the app .s19 file.
Rename the app .s19 image file to evkbimxrt1050_iled_blinky_sdram_0x2000.s19 and copy it to the flashloader folder: Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win

3. Flashloader configuration and download
This chapter uses the flashloader to configure the image and download the SDRAM app code to the external SD card with MFGTool. We need to prepare the following files:
- SDRAM interface configuration file CFG_DCD.bin
- imx-sdram-unsigned-dcd.bd
- program_sdcard_image.bd

3.1 SDRAM DCD file preparation
The MIMXRT1050-EVKB on-board SDRAM is the IS42S16160J. We can use the attached dcd_model\ISSI_IS42S16160J\dcd.cfg and the dcdgen.exe tool to generate CFG_DCD.bin; the command is:

dcdgen -inputfile=dcd.cfg -bout -cout

Copy the CFG_DCD.bin file to the flashloader path: Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win

3.2 imx-sdram-unsigned-dcd.bd file
Prepare the imx-sdram-unsigned-dcd.bd file with the following content:

options {
    flags = 0x00;
    startAddress = 0x80000000;
    ivtOffset = 0x400;
    initialLoadSize = 0x2000;
    DCDFilePath = "CFG_DCD.bin";
    # Note: This is required if the default entrypoint is not the Reset_Handler
    #       Please set the entryPointAddress to the Reset_Handler address
    entryPointAddress = 0x800022f1;
}

sources {
    elfFile = extern(0);
}

section (0) {
}

The entryPointAddress value above comes from the .s19 reset handler (the word stored at address 0x80002000 + 4 in the vector table):

Copy the imx-sdram-unsigned-dcd.bd file to the flashloader path: Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win
Open cmd and run the following command:

elftosb.exe -f imx -V -c imx-sdram-unsigned-dcd.bd -o ivt_evkbimxrt1050_iled_blinky_sdram_0x2000.bin evkbimxrt1050_iled_blinky_sdram_0x2000.s19

After running the command, two app IVT files will be generated.

3.3 program_sdcard_image.bd file
Prepare the program_sdcard_image.bd file with the following content:

# The source block assigns a file name to an identifier
sources {
    myBootImageFile = extern (0);
}

# The section block specifies the sequence of boot commands to be written to the SB file
section (0) {
    #1. Prepare SDCard option block
    load 0xd0000000 > 0x100;
    load 0x00000000 > 0x104;

    #2. Configure SDCard
    enable sdcard 0x100;

    #3. Erase blocks as needed.
    erase sdcard 0x400..0x14000;

    #4. Program SDCard Image
    load sdcard myBootImageFile > 0x400;

    #5. Program efuses for optimal read performance (optional)
    # Note: This is just a template; program the actual fuses required by the
    # application and remove the '#' to enable the command
    #load fuse 0x00000000 > 0x07;
}

Copy program_sdcard_image.bd to the flashloader path: Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win
Open cmd and run the following command:

elftosb.exe -f kinetis -V -c program_sdcard_image.bd -o boot_image.sb ivt_evkbimxrt1050_iled_blinky_sdram_0x2000_nopadding.bin

Copy the generated boot_image.sb file to the following flashloader path:
\Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\mfgtools-rel\Profiles\MXRT105X\OS Firmware

3.4 MFGTool burns the code to the SD card
Prepare an SD card and insert it into J20, then put the board into serial download mode (SW7: 1-ON, 2-OFF, 3-OFF, 4-ON). Connect two USB cables, one to J28 and one to J9; we use the HID interface to download the image.
Open MFGTool.exe and click the Start button.
Then change the boot mode to internal boot from the external SD card (SW7: 1-ON, 2-OFF, 3-ON, 4-OFF).
Power the board off and on again; the on-board LED D18 will blink, which means the external SDRAM app code booted successfully from the external SD card.

4. MCUBootUtility configuration and code download
Please check this community document: https://community.nxp.com/docs/DOC-346194
Here is one readout memory map of the programmed image, which is useful for understanding the image layout: after the download, we can read out the SD card content; from 0x400 is the IVT, BD and DCD data, and from 0x1000 is the application image, which is the same as the app .s19 file.

5. MCUXpresso Secure Provisioning configuration and download
This GUI tool is released on the NXP official website and can download both normal and secure code. It is easier to use than the flashloader: customers don't need to type commands, because the tool issues them itself. Its function is similar to MCUBootUtility, which is an open-source tool shared on GitHub but not released on the NXP official website.
Now we use this newly released official tool to download the SDRAM app code to the external SD card. The board still needs to enter serial download mode, just as with the flashloader and MCUBootUtility. The detailed operation is: provide the app .s19 file and the dcd.bin, then set the related boot device configuration.
After the code is downloaded successfully, change the boot mode to internal boot from the external SD card (SW7: 1-ON, 2-OFF, 3-ON, 4-OFF).
Power the board off and on again; the on-board LED D18 will blink, which means the external SDRAM app code booted successfully from the external SD card.
All three methods for getting the SDRAM app code onto the SD card now work: the flashloader is a command-line tool, while MCUBootUtility and MCUXpresso Secure Provisioning are GUI tools and are easier to use.
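As a convenience for section 3.2, the entryPointAddress can be read directly out of the generated .s19 file. The short Python sketch below parses the S3 records and prints the word at 0x80002004 (the reset handler slot of the vector table), assuming the word does not span two records:

TARGET = 0x80002004  # 0x80002000 (image base) + 4

def word_at(srec_path, target):
    for line in open(srec_path):
        if not line.startswith("S3"):  # S3 = data record with 32-bit address
            continue
        count = int(line[2:4], 16)                     # bytes in address+data+checksum
        addr = int(line[4:12], 16)                     # 4-byte load address
        data = bytes.fromhex(line[12:2 + 2 * count])   # payload without checksum
        if addr <= target <= addr + len(data) - 4:
            off = target - addr
            return int.from_bytes(data[off:off + 4], "little")
    raise ValueError("address not found in %s" % srec_path)

print(hex(word_at("evkbimxrt1050_iled_blinky_sdram_0x2000.s19", TARGET)))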
View full article
The RT600 is a family of dual-core microcontrollers for embedded applications featuring an Arm® Cortex®-M33 CPU combined with a Cadence® Tensilica® HiFi 4 audio DSP core. Check out this latest app note to learn about communication and debugging of these two cores. For a list of all i.MX RT600 app notes, visit: nxp.com/imxrt600
View full article
[Chinese translation] See the attachment. Original article: https://community.nxp.com/docs/DOC-342717
View full article
[Chinese translation] See the attachment. Original article: https://community.nxp.com/community/imx/blog/2019/04/17/do-you-have-a-minute
View full article
This document describes how to use I2S (Inter-IC Sound Bus) and DMA to record and play back audio using NXP's i.MX RT600 crossover MCUs. It also covers how to use the codec chip to process audio data on the i.MX RT600 Evaluation Kit (EVK) based on the Cadence® Tensilica® HiFi4 Audio DSP. Click here to access the full application note.
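As a taste of what the app note covers, here is a minimal sketch of starting I2S playback through DMA with the SDK's fsl_i2s_dma driver; it assumes the codec, pins and I2S/DMA clocks are already configured, and the I2S instance and DMA channel numbers are examples only:

#include "fsl_dma.h"
#include "fsl_i2s_dma.h"

static dma_handle_t s_dmaHandle;
static i2s_dma_handle_t s_i2sHandle;
static uint8_t s_buffer[1024]; /* audio samples to play */

static void TxCallback(I2S_Type *base, i2s_dma_handle_t *handle,
                       status_t status, void *userData)
{
    /* queue the next buffer here for gapless playback */
}

void StartPlayback(void)
{
    i2s_transfer_t xfer = {.data = s_buffer, .dataSize = sizeof(s_buffer)};

    DMA_Init(DMA0);
    DMA_EnableChannel(DMA0, 7U); /* example: DMA request channel of the TX I2S */
    DMA_CreateHandle(&s_dmaHandle, DMA0, 7U);
    I2S_TxTransferCreateHandleDMA(I2S3, &s_i2sHandle, &s_dmaHandle,
                                  TxCallback, NULL);
    I2S_TxTransferSendDMA(I2S3, &s_i2sHandle, xfer);
}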
View full article