i.MX RT Crossover MCUs Knowledge Base

The MIMXRT1050-EVK includes a CMSIS-DAP/DAPLink debug interface that provides MSD drag-and-drop programming of the HyperFlash on the board. The drag-and-drop programming functionality can be used to program applications compiled to execute-in-place (XIP) from the HyperFlash memory.

In the early SDK versions for RT1050, the projects did not include the flash configuration block (FCB) and IVT required to make a bootable image across all toolchains. Starting with the SDK 2.3.1 release, projects include XIP files that add this information to the image. This allows a bootable application to be programmed to the external flash memory directly from the debugger, so many customers might not need the drag-and-drop programming feature any more.

Because of the SDK changes, the DAPLink application has also changed:
- Early versions of the DAPLink firmware were set up to work with a raw application binary, like those generated by SDK 2.3.0 for toolchains other than the MCUXpresso IDE. These versions take the raw application binary and prepend the flash configuration block for the HyperFlash/QSPI and an IVT to make a bootable image.
- Newer versions of the DAPLink firmware are set up to work with a complete bootable binary, like those generated by SDK 2.3.1 and later. These versions do not prepend a flash configuration block and IVT to the application, because these are assumed to already be present.

The following table describes the versions of the DAPLink application that have been released. NOTE: the firmware can be updated on the board, so the version on a given board might not match what was originally programmed at manufacture time. The latest version of the firmware can be downloaded from www.nxp.com/opensda

Board Rev       DAPLink MCU   GIT SHA from details.txt file              NOTE
EVK_A2          MK20          34182e2cce4ca99073443ef29fbcfaab9e18caec   DAPLink will add FCB and IVT
EVK_A3-EVK-A5   MK20          853df431d81359e822f49363891f877f17d31efb   DAPLink will add FCB and IVT
EVKB_A          MK20          853df431d81359e822f49363891f877f17d31efb   DAPLink will add FCB and IVT
EVKB_A1         MK20          853df431d81359e822f49363891f877f17d31efb   DAPLink will add FCB and IVT
EVKB_A1         MK20          b3435dbed0ba4f09680e49d2fcfdaab32c7a4c71   DAPLink will NOT add FCB and IVT

To use the drag-and-drop programming:
1. Configure the board for serial downloader mode by setting SW7 to OFF-ON-OFF-ON.
2. Press SW3 to reset the processor.
3. Drag the application binary to the RT1050-EVK drive.
4. Put the board back in internal boot mode by setting SW7 to OFF-ON-ON-OFF.
5. Press SW3 to reset the processor, and your application should boot.

There are some limitations of drag-and-drop programming to keep in mind:
- It only works for HyperFlash/QSPI XIP applications. It does not support copying the code from HyperFlash to another memory (like ITCM) for execution.
- The application's initial stack pointer must be located in DTCM.
- It does not support DCD files.

The flashloader and ROM tools offer a second external memory programming method where the limitations above do not apply: https://www.nxp.com/downloads/en/initialization-boot-device-driver-code-generation/Flashloader_i.MXRT1050_1.0_GA.zip

Refer to AN12107 for more information: https://www.nxp.com/docs/en/application-note/AN12107.pdf?fsrch=1&sr=2&pageNum=1
This document introduces how to configure the LPSPI clock on i.MXRT1050. The purpose is to help i.MX RT customers better understand the clock tree and configure the LPSPI clock in the SDK.

Customers can configure the LPSPI clock with the following steps:
1. Select the clock source according to the clock tree.
2. Set LPSPI_CLK_SEL in the register CCM_CBCMR.
3. Enable the LPSPIn clock gate in the register CCM_CCGR1.
4. Set the PFD clock gate in the register CCM_ANALOG_PFD_480n[PFDn_CLKGATE].
5. Set LPSPI_PODF in the register CCM_CBCMR.
6. Set TCR[PRESCALE] in the LPSPIx module.
7. Set CCR[SCKDIV] in the LPSPIx module.

Following these steps gives the resulting LPSPI functional clock (LPSPI_CLK).
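As an illustration (not part of the original post), the steps above map roughly onto the SDK clock and LPSPI driver APIs as in the sketch below. The mux, PFD and divider values are assumptions chosen only for the example; check the clock tree in the reference manual for the exact encodings on your device.

#include "fsl_clock.h"
#include "fsl_lpspi.h"

void LPSPI_ClockConfigExample(void)
{
    /* Steps 1-2: select the LPSPI root clock source, CCM_CBCMR[LPSPI_CLK_SEL].
       The value 1 (assumed here to select a PLL3 PFD) is only an example. */
    CLOCK_SetMux(kCLOCK_LpspiMux, 1U);

    /* Step 4: make sure the selected PFD is configured and not gated,
       CCM_ANALOG_PFD_480[PFDn_CLKGATE]. The fractional divider 24 (-> 360 MHz) is an example. */
    CLOCK_InitUsb1Pfd(kCLOCK_Pfd0, 24U);

    /* Step 5: set the LPSPI root divider, CCM_CBCMR[LPSPI_PODF] (divide ratio = field value + 1). */
    CLOCK_SetDiv(kCLOCK_LpspiDiv, 4U);

    /* Step 3: enable the LPSPI1 clock gate in CCM_CCGR1. */
    CLOCK_EnableClock(kCLOCK_Lpspi1);

    /* Steps 6-7: TCR[PRESCALE] and CCR[SCKDIV] are normally computed by the LPSPI
       driver from the requested baud rate and the root clock configured above. */
    lpspi_master_config_t masterConfig;
    LPSPI_MasterGetDefaultConfig(&masterConfig);
    masterConfig.baudRate = 10000000U; /* example: 10 MHz SCK */

    uint32_t lpspiClkHz = CLOCK_GetFreq(kCLOCK_Usb1PllPfd0Clk) / 5U; /* PFD output / (LPSPI_PODF + 1) */
    LPSPI_MasterInit(LPSPI1, &masterConfig, lpspiClkHz);
}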
The SDK_2.7.0_EVKB-IMXRT1050 release contains several eIQ machine learning demo projects, among them tensorflow_lite_kws. It is a keyword spotting example based on Keyword Spotting for Microcontrollers, and it deploys a depthwise separable convolutional neural network called MobileNet in this demo project. It can classify a one-second audio clip as either silence, an unknown word, "yes", "no", "up", "down", "left", "right", "on", "off", "stop", or "go". Figure 1 shows the components that comprise it.

Fig 1

Training Our New Model
The model we are using is trained with the TensorFlow script which is designed to demonstrate how to build and train a model for audio recognition using TensorFlow. The script makes it very easy to train an audio recognition model. Among other things, it allows us to do the following:
- Download a dataset with audio featuring 20 spoken words.
- Choose which subset of words to train the model on.
- Specify what type of preprocessing to use on the audio.
- Choose from several different types of model architecture.
- Optimize the model for microcontrollers using quantization.
When we run the script, it downloads the dataset, trains a model, and outputs a file representing the trained model. We then use some other tools to convert this file into the correct format for TensorFlow Lite.

Training in a virtual machine (VM)

Preparation
Make sure TensorFlow has been installed. Since the script downloads over 2 GB of training data, it will need a good internet connection and enough free space on the machine. Note that the training process itself can take several hours, so be patient.

Training
To begin the training process, use the following command to clone ML-KWS-for-MCU:

git clone https://github.com/ARM-software/ML-KWS-for-MCU.git

The training scripts are configured via a number of command-line flags that control everything from the model's architecture to the words it will be trained to classify. The following command runs the script that begins training. You can see that it has a lot of command-line arguments:

python ML-KWS-for-MCU/train.py --model_architecture ds_cnn --model_size_info 5 64 10 4 2 2 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 \
--wanted_words=zero,one,two,three,four,five,six,seven,eight,nine \
--dct_coefficient_count 10 --window_size_ms 40 \
--window_stride_ms 20 --learning_rate 0.0005,0.0001,0.00002 \
--how_many_training_steps 10000,10000,10000 \
--data_dir=./speech_dataset --summaries_dir ./retrain_logs --train_dir ./speech_commands_train

Some of these arguments select what the model learns, such as --wanted_words=zero,one,two,three,four,five,six,seven,eight,nine. By default, the selected words are yes, no, up, down, left, right, on, off, stop, go, but we can provide any combination of the following words, all of which appear in our dataset:
- Common commands: yes, no, up, down, left, right, on, off, stop, go, backward, forward, follow, learn
- Digits zero through nine: zero, one, two, three, four, five, six, seven, eight, nine
- Random words: bed, bird, cat, dog, happy, house, Marvin, Sheila, tree, wow
Others set up the output of the script, such as --train_dir=/content/speech_commands_train, which defines where the trained model will be saved. Leave the arguments as they are, and run it. The script will start off by downloading the Speech Commands dataset (Figure 2), which consists of over 105,000 WAVE audio files of people saying thirty different words.
This data was collected by Google and released under a CC BY license, and you can help improve it by contributing five minutes of your own voice. The archive is over 2 GB, so this part may take a while, but you should see progress logs, and once it has been downloaded you won't need to do this step again. You can find more information about this dataset in the Speech Commands paper.

Fig 2

Once the download has completed, some more output will appear. There might be some warnings, which you can ignore as long as the command continues running. Later, you'll see logging information that looks like this (Figure 3).

Fig 3

This shows that the initialization process is done and the training loop has begun. You'll see that it outputs information for every training step. Here's a breakdown of what it means:

Step shows which step of the training loop we are on. In this case, there are going to be 30,000 steps in total, so you can look at the step number to get an idea of how close it is to finishing.

rate is the learning rate that controls the speed of the network's weight updates. Early on this is a comparatively high number (0.0005), but for later training cycles it is reduced 5x to 0.0001, and finally to 0.00002.

accuracy is how many classes were correctly predicted on this training step. This value will often fluctuate a lot, but should increase on average as training progresses. The model outputs an array of numbers, one for each label, and each number is the predicted likelihood of the input being that class. The predicted label is picked by choosing the entry with the highest score. The scores are always between zero and one, with higher values representing more confidence in the result.

cross-entropy is the result of the loss function that we're using to guide the training process. This is a score obtained by comparing the vector of scores from the current training run to the correct labels, and it should trend downwards during training.

checkpoint
After a hundred steps, you should see a line like this: this is saving out the current trained weights to a checkpoint file (Figure 4). If your training script gets interrupted, you can look for the last saved checkpoint and then restart the script with --start_checkpoint=/tmp/speech_commands_train/best/ds_cnn_xxxx.ckpt-400 as a command-line argument to start from that point.

Fig 4

Confusion Matrix
After four hundred steps, this information will be logged: the first section is a confusion matrix. To understand what it means, you first need to know the labels being used, which in this case are "silence", "unknown", "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", and "nine". Each column represents a set of samples that were predicted to be each label, so the first column represents all the clips that were predicted to be silence, the second all those that were predicted to be unknown words, the third "zero", and so on. Each row represents clips by their correct, ground-truth labels. The first row is all the clips that were silence, the second clips that were unknown words, the third "zero", etc.

This matrix can be more useful than a single accuracy score because it gives a good summary of what mistakes the network is making. In this example you can see that all of the entries in the first row are zero (Figure 5), apart from the initial one.
Because the first row is all the clips that are actually silence, this means that none of them were mistakenly labeled as words, so we have no false negatives for silence. This shows the network is already getting pretty good at distinguishing silence from words. If we look down the first column though, we see a lot of non-zero values. The column represents all the clips that were predicted to be silence, so positive numbers outside of the first cell are errors. This means that some clips of real spoken words are actually being predicted to be silence, so we do have quite a few false positives.

A perfect model would produce a confusion matrix where all of the entries were zero apart from the diagonal through the center. Spotting deviations from that pattern can help you figure out how the model is most easily confused, and once you've identified the problems you can address them by adding more data or cleaning up categories.

Fig 5

Validation
After the confusion matrix, you should see a line like Figure 5 shows. It's good practice to separate your data set into three categories. The largest (in this case roughly 80% of the data) is used for training the network; a smaller set (10% here, known as "validation") is reserved for evaluating the accuracy during training; and another set (the last 10%, "testing") is used to evaluate the accuracy once, after the training is complete.

The reason for this split is that there's always a danger that networks will start memorizing their inputs during training. By keeping the validation set separate, you can ensure that the model works with data it has never seen before. The testing set is an additional safeguard to make sure that you haven't just been tweaking your model in a way that happens to work for both the training and validation sets, but not a broader range of inputs.

The training script automatically separates the data set into these three categories, and the logging line above shows the accuracy of the model when run on the validation set. Ideally, this should stay fairly close to the training accuracy. If the training accuracy increases but the validation accuracy doesn't, that's a sign that overfitting is occurring, and your model is only learning things about the training clips, not broader patterns that generalize.

Training Finished
In general, training is the process of iteratively tweaking a model's weights and biases until it produces useful predictions. The training script writes these weights and biases to checkpoint files (Figure 6).

Fig 6

A TensorFlow model consists of two main things:
- The weights and biases resulting from training
- A graph of operations that combine the model's input with these weights and biases to produce the model's output
At this juncture, our model's operations are defined in the Python scripts, and its trained weights and biases are in the most recent checkpoint file. We need to unite the two into a single model file with a specific format, which we can use to run inference. The process of creating this model file is called freezing: we're creating a static representation of the graph with the weights frozen into it.
To freeze our model, we run a script that is called as follows:

python ML-KWS-for-MCU/freeze.py --model_architecture ds_cnn --model_size_info 5 64 10 4 2 2 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 64 3 3 1 1 \
--wanted_words=zero,one,two,three,four,five,six,seven,eight,nine \
--dct_coefficient_count 10 --window_size_ms 40 \
--window_stride_ms 20 --checkpoint ./speech_commands_train/best/ds_cnn_9490.ckpt-21600 \
--output_file=./ds_cnn.pb

To point the script toward the correct graph of operations to freeze, we pass some of the same arguments we used in training. We also pass a path to the final checkpoint file, which is the one whose filename ends with the total number of training steps. The frozen graph will be output to a file named ds_cnn.pb. This file is the fully trained TensorFlow model. It can be loaded by TensorFlow and used to run inference. That's great, but it's still in the format used by regular TensorFlow, not TensorFlow Lite.

Convert to TensorFlow Lite
Conversion is an easy step: we just need to run a single command. Now that we have a frozen graph file to work with, we'll use toco, the command-line interface for the TensorFlow Lite converter:

toco --graph_def_file=./ds_cnn.pb --output_file=./ds_cnn.tflite \
--input_shapes=1,49,10,1 --input_arrays=Reshape_1 --output_arrays='labels_softmax' \
--inference_type=QUANTIZED_UINT8 --mean_values=227 --std_dev_values=1 \
--change_concat_input_ranges=false \
--default_ranges_min=-6 --default_ranges_max=6

In the arguments, we specify the model that we want to convert, the output location for the TensorFlow Lite model file, and some other values that depend on the model architecture. We also provide some arguments (inference_type, mean_values, and std_dev_values) that instruct the converter how to map its low-precision values into real numbers. The converted model will be written to ds_cnn.tflite; this is a fully formed TensorFlow Lite model!

Create a C array
Next, we'll use the xxd command to convert the TensorFlow Lite model into a C array:

xxd -i ./ds_cnn.tflite > ./ds_cnn.h
cat ./ds_cnn.h

The final part of the output is the file's contents, which are a C array and an integer holding its length, as follows (a generic sketch of what this generated header looks like is also shown at the end of this post):

Fig 7

Next, we'll integrate this newly trained model with the tensorflow_lite_kws project.

Using the Model in the tensorflow_lite_kws Project
To use the new model, we need to do two things:
1. In source/ds_cnn_s_model.h, replace the original model data with our new model.
2. Update the label names in source/kws.cpp with our new "zero", "one", "two", "three", "four", "five", "six", "seven", "eight" and "nine" labels.

const std::string labels[] = {"Silence", "Unknown", "zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"};

Before running the model on the EVKB-IMXRT1050 board (Figure 8), please refer to the readme.txt for the preparation steps; the file also describes the test procedure, so please follow it.

Fig 8

Figure 9 shows the testing I did. I've attached the model file, so please give it a try yourself.

Fig 9
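For orientation (this snippet is illustrative and not taken from the original post), the header produced by xxd -i generally has the shape below. The symbol name is derived from the input path, and the byte values and length shown here are placeholders; your generated file will contain the real contents of ds_cnn.tflite. This is the array that replaces the model data in source/ds_cnn_s_model.h:

unsigned char __ds_cnn_tflite[] = {
  0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33,   /* placeholder bytes; your output will differ */
  /* ... many thousands of additional bytes ... */
};
unsigned int __ds_cnn_tflite_len = 49800;            /* placeholder; xxd fills in the real file size */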
Testing Boot Times – Generating a Bootable Image

Bootable Image Structure
The bootable image consists of:
- FlexSPI Configuration Block (FCB)
- Image Vector Table (IVT)
- Boot Data
- Device Configuration Data (DCD)
- Program image
- CSF, certificates, and signatures (these are optional and only come with high-assurance boot)

I'll briefly explain each below:

FlexSPI Configuration Block (FCB)
The FCB configures the settings of the FlexSPI communication. It establishes how many ports will be used, what clock speed to run the FlexSPI controller at, etc. This is the first thing that is processed, because everything else is stored in flash memory: in order to read anything else, the flash must first be configured.

Image Vector Table (IVT)
The IVT is a table that tells the ROM the addresses where everything is stored.

Boot Data
The Boot Data contains pointers to the start address of the memory.

Device Configuration Data (DCD)
The DCD contains configuration data to configure any peripherals.

Program image
The program image contains the application code you write.

CSF, Certificates, and Signatures
These are optional. They come with the high-assurance boot sequence and will not be covered in this write-up.

Below is a rough table outlining these different parts of the boot image:

Bootable Image Generation Software / Tools
- MCUXpresso – download the MCUXpresso IDE.
- Flash Loader – download from the RT1050 page > Software and Tools > Flashloader. This folder includes the MFGTool, the elftosb tool, and more documentation.
- DCD.bin file – I found mine included in this document: https://community.nxp.com/docs/DOC-340655

Bootable Image Generation Overview
MCUXpresso can generate bootable images for HyperFlash and QSPI XiP, but if you are trying to boot and execute from SDRAM or internal SRAM, or from your own memory module, then you must generate a bootable image through a combination of generating an S-record or ELF image in MCUXpresso and then using the flashloader tool suite (elftosb and MfgTool) to create a bootable image and program it to the board. The general process is described below. In the subsequent sections, a more detailed step-by-step guide is provided for HyperFlash XiP, SDRAM, and SRAM.

MCUXpresso configurations and output:
Begin with your application code in MCUXpresso. Change the memory allocation in the memory configuration editor of the MCU settings part of MCUXpresso according to where you would like to boot from. If applicable, be sure to specify in the preprocessor settings whether you would like to enable the DCD in the boot image. Generate an S-record (.s19) file from MCUXpresso using Binary Utilities.

Elftosb tool – .bin file generation:
You can find this tool in the flashloader package. Call an imx command, which takes as input the S-record file containing the program image, a dcd.bin file (which may or may not already be in the S-record), possibly an FCB, and a bd file (given to you, specific to your memory configuration). The elftosb tool generates a .bin file as its output.

Elftosb tool – bootable image generation:
Then call the elftosb tool again with a kinetis command. This takes the .bin file and converts it to a boot_image.sb file.

MFG tool:
Then we can use the MfgTool to upload the program image to the board. In order to do this, SW7 on the board must be set to OFF-ON-OFF-ON, and the jumper must be moved from J1 (5-6) to J1 (3-4). Connect the USB cable to J9 to power the board. Then connect the board to your computer and run the MfgTool.
Click start and the image will be uploaded. After this, the bootable image will be on the board and you'll be good to go! Remember to switch the pin headers back so the board can boot normally: J1 (5-6) and SW7 to OFF-ON-ON-OFF to boot from HyperFlash.

In the following sections I'll break down specific examples of what to do depending on the specific memory location you'd like to boot from.

Bootable Image Generation – XiP HyperFlash
Please note that this can also be done in MCUXpresso, but in order to use my own DCD file, I did this project using the traditional flashloader tools.

MCUXpresso configurations and output:
Begin by changing the memory allocation in the memory configuration editor (edit project settings >> C/C++ Build >> MCU settings >> edit). It should look like the one below. Then go to C/C++ Build >> Settings >> MCU C Compiler >> Preprocessor and set XIP_BOOT_HEADER_ENABLE and XIP_BOOT_HEADER_DCD_ENABLE both to 0 if you have a dcd.bin file to use; otherwise set these fields to 1. Hit apply and close. Go to the debug portion, click on Binary Utilities, and generate an S-record (.s19) file from MCUXpresso. Click build to re-build the project. Copy this .s19 file.

Elftosb tool – .bin file generation:
In the flashloader tool, go to Tools > elftosb > Win, and drop the .s19 file in this folder, as well as the dcd.bin file. Open a command prompt in administrator mode and use cd [elftosb directory] to get to this folder. To generate the .bin file, call the command:

elftosb.exe -f imx -V -c ../../bd_file/imx10xx/imx-flexspinor-normal-unsigned-dcd.bd -o ivt_flexspi_nor_led_blinky.bin HYPERFLASH_led_blinky.s19

This command uses the imx-flexspinor bd file to generate a .bin file that includes the dcd.bin header. Note that HYPERFLASH_led_blinky.s19 should be replaced with the actual file name of the S-record you got from MCUXpresso.

Elftosb tool – bootable image generation:
Then we must generate the boot_image.sb file by calling the elftosb tool again with a kinetis command. This takes the .bin file and converts it to a boot_image.sb file. The command is:

elftosb.exe -f kinetis -V -c ../../bd_file/imx10xx/program_flexspinor_image_hyperflash.bd -o boot_image.sb ivt_flexspi_nor_led_blinky_nopadding.bin

Note: the output of the imx command in the previous step must match the input of this one (ivt_flexspi_nor_led_blinky_nopadding.bin). Also, the output must remain named boot_image.sb, otherwise it will not work.

MFG tool:
Then we can use the MfgTool to upload the program image to the board. Copy the boot_image.sb file that we created in the last step and move it to the directory Tools > mfgtools-rel > Profiles > MXRT105X > OS Firmware. Next, set SW7 on the board to OFF-ON-OFF-ON, and switch the jumper from J1 (5-6) to J1 (3-4) to power the board. Then connect the USB cable to J9. This will allow you to program the board. Go back to the folder mfgtools-rel and open the MfgTool. It should say "HID-compliant vendor-defined device"; otherwise double-check SW7 and the power jumper at the bottom right.

Click start, and if successful it should look like the screenshot below. Please note that this success only means that the tool was able to upload the bootable image to the board.
This usually works well. However, if you find later that the board doesn't do what it's supposed to, then there might be a problem in one of the other steps of the process. Check to make sure you used the right linker file and that the settings in MCUXpresso are configured correctly. Press stop and exit. Remember to switch the pin headers back so the board can boot normally: J1 (5-6) and SW7 to OFF-ON-ON-OFF to boot from HyperFlash.

Bootable Image Generation – SDRAM
This guide is based on this link: https://community.nxp.com/docs/DOC-340655. Download the files provided there and unzip them. Everything is good except that the first step of that PDF document is a little unclear.

MCUXpresso configurations and output:
Begin by changing the memory allocation in the memory configuration editor of the MCU settings part of MCUXpresso. It should look like the one below. Then go to Settings > MCU C Compiler > Preprocessor and set XIP_BOOT_HEADER_ENABLE and XIP_BOOT_HEADER_DCD_ENABLE both to 0. Hit apply and close. Go to C/C++ Build >> Settings >> MCU Linker >> Manage linker script, and check the box "Link application to RAM". Go to the debug portion, click on Binary Utilities, and generate an S-record (.s19) file from MCUXpresso. Click build to re-build the project. You can check the console to verify the correct allocation of memory. Copy this .s19 file.

Elftosb tool – .bin file generation:
In the flashloader tool, go to Tools > elftosb > Win, and drop the .s19 file in this folder, as well as the dcd.bin file. There's another step for booting from SDRAM: take the file imx-sdram-normal-unsigned-dcd.bd (included in the link posted above) and drag it into Tools > bd_file > imx10xx. Open a command prompt in administrator mode and use cd [elftosb directory] to get to this folder. To generate the .bin file, call the command:

elftosb.exe -f imx -V -c ../../bd_file/imx10xx/imx-sdram-normal-unsigned-dcd.bd -o evkbimxrt1050_led_blinky.bin SDRAM1_led_blinky.s19

This command uses the imx-sdram bd file to generate a .bin file that includes the dcd.bin header. Note that SDRAM1_led_blinky.s19 should be replaced with the actual file name of the S-record you got from MCUXpresso.

Elftosb tool – bootable image generation:
Then we must generate the boot_image.sb file by calling the elftosb tool again with a kinetis command. This takes the .bin file and converts it to a boot_image.sb file. The command is:

elftosb.exe -f kinetis -V -c ../../bd_file/imx10xx/program_flexspinor_image_hyperflash.bd -o boot_image.sb evkbimxrt1050_led_blinky_nopadding.bin

The output of the imx command in the previous step must match the input of this one. Also, the output must remain named boot_image.sb, otherwise it will not work. Also note that this bd file says "flexspinor image hyperflash", but that is correct, since we still technically need to boot from HyperFlash initially before everything is copied over to SDRAM. Copy the boot_image.sb file.

MFG tool:
Then we can use the MfgTool to upload the program image to the board. Copy the boot_image.sb file that we created in the last step and move it to the directory Tools > mfgtools-rel > Profiles > MXRT105X > OS Firmware.
Next, set SW7 on the board to OFF-ON-OFF-ON, and switch the jumper at the bottom (J1) to power the board. Then connect the USB cable to J9. This will allow you to program the board. Go back to the folder mfgtools-rel and open the MfgTool. It should say "HID-compliant vendor-defined device"; otherwise double-check SW7 and the power jumper at the bottom right. Click start, and if successful it should look like the screenshot below. Please note that this success only means that the tool was able to upload the bootable image to the board. This usually works well. However, if you find later that the board doesn't do what it's supposed to, then there might be a problem in one of the other steps of the process. Check to make sure you used the right linker file and that the settings in MCUXpresso are configured correctly. Press stop and exit. Remember to switch the pin headers back so the board can boot normally: J1 (5-6) and SW7 to OFF-ON-ON-OFF to boot from HyperFlash.

Bootable Image Generation – SRAM
This one was a little trickier, as there was no app note, but it is not too bad.

MCUXpresso configurations and output:
Begin by changing the memory allocation in the memory configuration editor of the MCU settings part of MCUXpresso. It should look like the one below; we are linking so that everything is copied to DTC RAM. Then go to Settings > MCU C Compiler > Preprocessor and set XIP_BOOT_HEADER_ENABLE and XIP_BOOT_HEADER_DCD_ENABLE both to 0. Hit apply and close. Go to C/C++ Build >> Settings >> MCU Linker >> Manage linker script, and check the box "Link application to RAM". Go to the debug portion, click on Binary Utilities, and generate an S-record (.s19) file from MCUXpresso. Click build to re-build the project. You can verify in the console that the application is linked to SRAM. Copy this .s19 file.

Elftosb tool – .bin file generation:
In the flashloader tool, go to Tools > elftosb > Win, and drop the .s19 file in this folder, as well as the dcd.bin file. Open a command prompt in administrator mode and use cd [elftosb directory] to get to this folder. This step is specific to DTCM RAM: go to Tools >> bd_file >> imx10xx, open the imx-dtcm-unsigned-dcd.bd file, and change the ivtOffset address to 0x1000 (0x400 is the default, which is for SD card booting). To be clear, the bd file should look as below; a sketch of a typical .bd options block is also shown at the end of this section. To generate the .bin file, call the command:

elftosb.exe -f imx -V -c ../../bd_file/imx10xx/imx-dtcm-unsigned-dcd.bd -o evkbimxrt1050_led_blinky.bin SRAM1_led_blinky.s19

This command uses the imx-dtcm bd file to generate a .bin file that includes the dcd.bin header. Note that SRAM1_led_blinky.s19 should be replaced with the actual file name of the S-record you got from MCUXpresso.

Elftosb tool – bootable image generation:
Then we must generate the boot_image.sb file by calling the elftosb tool again with a kinetis command. This takes the .bin file and converts it to a boot_image.sb file. The command is:

elftosb.exe -f kinetis -V -c ../../bd_file/imx10xx/program_flexspinor_image_hyperflash.bd -o boot_image.sb evkbimxrt1050_led_blinky_nopadding.bin

The output of the imx command in the previous step must match the input of this one. Also, the output must remain named boot_image.sb, otherwise it will not work. Also note that this bd file says "flexspinor image hyperflash", but that is correct, since we still technically need to boot from HyperFlash initially before everything is copied over to internal SRAM.

MFG tool:
Then we can use the MfgTool to upload the program image to the board.
In order to do this, SW7 on the board must be set to OFF-ON-OFF-ON, and the jumper must be moved from J1 (5-6) to J1 (3-4). Connect the USB cable to J9 to power the board, then connect the board to your computer and run the MfgTool. Copy the boot_image.sb file that we created in the last step and move it to the directory Tools > mfgtools-rel > Profiles > MXRT105X > OS Firmware. Go back to the folder mfgtools-rel and open the MfgTool. It should say "HID-compliant vendor-defined device"; otherwise double-check SW7 and the power jumper at the bottom right. Click start, and if successful it should look like the screenshot below. Please note that this success only means that the tool was able to upload the bootable image to the board. This usually works well. However, if you find later that the board doesn't do what it's supposed to, then there might be a problem in one of the other steps of the process. Check to make sure you used the right linker file and that the settings in MCUXpresso are configured correctly. Press stop and exit. Remember to switch the pin headers back so the board can boot normally: J1 (5-6) and SW7 to OFF-ON-ON-OFF to boot from HyperFlash. After this the bootable image will be on the board and you'll be good to go!
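For reference, the .bd files passed to elftosb are small text files. The options block of a typical imx .bd file from the Flashloader package looks roughly like the excerpt below; I am quoting it from memory, so treat the exact field names and values as illustrative rather than authoritative. The ivtOffset field is the one edited above for the DTCM case (0x1000 instead of the default 0x400), and startAddress/DCDFilePath differ per memory configuration:

options {
    flags = 0x00;
    startAddress = 0x60000000;   /* where the image will live; memory-specific */
    ivtOffset = 0x1000;          /* offset of the IVT inside the bootable image */
    initialLoadSize = 0x2000;
    DCDFilePath = "dcd.bin";     /* DCD binary to embed into the image */
}

sources {
    elfFile = extern(0);         /* the .s19/.elf passed on the elftosb command line */
}

section (0) {
}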
1 Introduction

NXP-MCUBootUtility is a GUI tool specially designed for NXP MCU secure boot. Its features correspond to the BootROM functions in NXP MCUs. Currently, it mainly supports i.MXRT series MCU chips. Compared to NXP's official security enablement toolset (OpenSSL, CST, sdphost, blhost, elftosb, BD, MfgTool2), NXP-MCUBootUtility is a real one-stop tool: it includes all the features of NXP's official security enablement toolset and, what's more, it supports fully graphical operation. With NXP-MCUBootUtility, you can easily get started with NXP MCU secure boot.

The main features of NXP-MCUBootUtility include:
- Support for i.MXRT1021, i.MXRT1051/1052, i.MXRT1061/1062, i.MXRT1064 SIP
- Support for both UART and USB-HID serial downloader modes
- Support for various user application image file formats (elf/axf/srec/hex/bin)
- Can validate the range and applicability of the user application image
- Support for converting a bare image into a bootable image
- Support for loading a bootable image into FlexSPI NOR and SEMC NAND boot devices
- Support for loading a bootable image into LPSPI NOR/EEPROM recovery boot devices
- Support for DCD, which can help load the image to SDRAM
- Support for HAB encryption (Signed only, Signed and Encrypted)
- Can back up certificates with a time stamp
- Support for BEE encryption (SNVS Key, User Keys)
- Support for common eFuse memory operations (eFuse Programmer)
- Support for common boot device memory operations (Flash Programmer)
- Support for reading back and marking the bootable image (NFCB/DBBT/FDCB/EKIB/EPRDB/IVT/Boot Data/DCD/Image/CSF/DEK KeyBlob) from the boot device

2 Download

NXP-MCUBootUtility is developed in Python and is open source. The development environment is Python 2.7.15 (32-bit), wxPython 4.0.3, pySerial 3.4, pywinusb 0.4.2, bincopy 15.0.0, PyInstaller 3.3.1 (or higher).

Source code: https://github.com/nxp-mcuxpresso/mcu-boot-utility
Feedback: https://www.cnblogs.com/henjay724/p/10159925.html

NXP-MCUBootUtility is packaged with PyInstaller; all Python dependencies have been packaged into an executable file (\NXP-MCUBootUtility\bin\NXP-MCUBootUtility.exe), so if you do not want to develop new features for NXP-MCUBootUtility, there is no need to install any Python software or related libraries.

Note 1: Before using NXP-MCUBootUtility, you need to download the HAB Code Signing Tool from the NXP website, unzip it and put it in the \NXP-MCUBootUtility\tools\cst\ directory, then modify the code to enable the AES function and rebuild \NXP-MCUBootUtility\tools\cst\mingw32\bin\cst.exe; otherwise the HAB-related encryption functions cannot be used properly. See more details in "The step-by-step guide to rebuild cst.exe for HAB encryption".

Note 2: Before using NXP-MCUBootUtility, you need to rebuild the source in the \NXP-MCUBootUtility\tools\image_enc\code directory to generate image_enc.exe and put it in the \NXP-MCUBootUtility\tools\image_enc\win directory; otherwise the BEE-related encryption functions cannot be used properly. See more details in "The step-by-step guide to build image_enc.exe for BEE encryption".

3 Installation

NXP-MCUBootUtility is a portable tool that requires no installation. After downloading the source code package, double-click "\NXP-MCUBootUtility\bin\NXP-MCUBootUtility.exe" to use it. No additional software is required.

Before the NXP-MCUBootUtility.exe graphical interface is displayed, a console window will pop up first. The console works alongside the NXP-MCUBootUtility.exe graphical interface and is mainly there to show error information from NXP-MCUBootUtility.exe.
At present, NXP-MCUBootUtility is still in the development stage, and the console will be removed once NXP-MCUBootUtility is fully validated.

4 Interface

The following figure shows the main interface of the NXP-MCUBootUtility tool. The interface consists of six parts.
Introduction
NXP i.MXRT106x has two USB 2.0 OTG instances, and the RT1060 EVK has both USB interfaces on the board. However, the RT1060 SDK only has single-port USB host examples. Although the RT1060 USB host stack supports multiple devices, a USB hub is still needed when users want to connect two devices. This article shows how to use both USB instances as hosts. The RT1060 SDK has single-port host examples that support multiple devices, such as host_hid_mouse_keyboard_bm, but this application does not use those examples. Instead, the MCUXpresso Config Tools are used to build the demo from the beginning. The Config Tools are very powerful and can configure the clock, pins and peripherals, especially USB. In this application demo, they save about 95% of the coding work.

Hardware and software tools
- RT1060 EVK
- MCUXpresso 11.4.0
- MIMXRT1060 SDK 2.9.1

Step 1
This project will support a USB HID mouse and USB CDC. First, create an empty project named MIMXRT1062_usb_host_dual_port. When selecting SDK components, select "USB host CDC" and "USB host HID" under the Middleware tab. The IDE will select the other necessary components automatically.

After creating the empty project, the clock should be configured first. Both USB PHYs need a 480 MHz clock.

Step 2
The next step is to configure the USB host in the peripheral config tool. Due to a limitation of the config tool, only one host instance of the USB component is allowed. In this project, CDC VCOM is added first.

Step 3
After these settings, click "Update Code" in the control bar. This turns all the configurations into code and merges it into the project. Then click the "copy to clipboard" button. This copies the host task call; paste it into the forever while loop in the project's main(). Besides that, a call to BOARD_InitBootPeripherals() also needs to be added in main(). At this point, USB VCOM is ready. The tool not only copies the files and configures USB, but also creates a basic implementation framework. If you compile and download the project to the RT1060 EVK, it can enumerate a USB CDC VCOM device on USB1. If characters are sent from the CDC device, the project forwards them to the DAPLink UART port so that you can see the characters in a terminal on the computer.

Step 4
To get the USB HID mouse code, we need to create another USB HID project. The workflow is similar to the first project. Here is a screenshot of the USB HID configuration.

Click "Update code" and the HID mouse code will be generated. The config tool generates two files, usb_host_interface_0_hid_mouse.c and usb_host_interface_0_hid_mouse.h. Copy them to the "source" folder of the dual-host project.

Step 5
The next step is to modify some USB macro definitions in <usb_host_config.h>:

#define USB_HOST_CONFIG_EHCI 2     /* means there are two host instances */
#define USB_HOST_CONFIG_MAX_HOST 2 /* the USB driver can support two EHCI controllers */
#define USB_HOST_CONFIG_HID (1U)   /* for the mouse */

The next step is to merge usb_host_app.c. The project initializes the USB hardware and software in USB_HostApplicationInit().
usb_status_t USB_HostApplicationInit(void)
{
    usb_status_t status;

    USB_HostClockInit(kUSB_ControllerEhci0);
    USB_HostClockInit(kUSB_ControllerEhci1);

#if ((defined FSL_FEATURE_SOC_SYSMPU_COUNT) && (FSL_FEATURE_SOC_SYSMPU_COUNT))
    SYSMPU_Enable(SYSMPU, 0);
#endif /* FSL_FEATURE_SOC_SYSMPU_COUNT */

    status = USB_HostInit(kUSB_ControllerEhci0, &g_HostHandle[0], USB_HostEvent);
    status = USB_HostInit(kUSB_ControllerEhci1, &g_HostHandle[1], USB_HostEvent); /* each USB instance has its own g_HostHandle */
    if (status != kStatus_USB_Success)
    {
        return status;
    }
    else
    {
        USB_HostInterface0CicVcomInit();
        USB_HostInterface0HidMouseInit();
    }
    USB_HostIsrEnable();
    return status;
}

In USB_HostIsrEnable(), add code to enable the USB2 interrupt:

    irqNumber = usbHOSTEhciIrq[1];
    NVIC_SetPriority((IRQn_Type)irqNumber, USB_HOST_INTERRUPT_PRIORITY);
    EnableIRQ((IRQn_Type)irqNumber);

Then add and modify the USB interrupt handlers:

void USB_OTG1_IRQHandler(void)
{
    USB_HostEhciIsrFunction(g_HostHandle[0]);
}

void USB_OTG2_IRQHandler(void)
{
    USB_HostEhciIsrFunction(g_HostHandle[1]);
}

Since both USB instances share the USB stack, every USB event ends up in USB_HostEvent() in usb_host_app.c, so the HID code should also be merged into this function.

static usb_status_t USB_HostEvent(usb_device_handle deviceHandle,
                                  usb_host_configuration_handle configurationHandle,
                                  uint32_t eventCode)
{
    usb_status_t status1;
    usb_status_t status2;
    usb_status_t status = kStatus_USB_Success;
    /* Used to prevent multiple processing of one interface;
     * e.g. when class/subclass/protocol is the same, one interface on a device is processed only by one interface on the host */
    uint8_t processedInterfaces[USB_HOST_CONFIG_CONFIGURATION_MAX_INTERFACE] = {0};

    switch (eventCode & 0x0000FFFFU)
    {
        case kUSB_HostEventAttach:
            status1 = USB_HostInterface0CicVcomEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces);
            status2 = USB_HostInterface0HidMouseEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces);
            if ((status1 == kStatus_USB_NotSupported) && (status2 == kStatus_USB_NotSupported))
            {
                status = kStatus_USB_NotSupported;
            }
            break;
        case kUSB_HostEventNotSupported:
            usb_echo("Device not supported.\r\n");
            break;
        case kUSB_HostEventEnumerationDone:
            status1 = USB_HostInterface0CicVcomEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces);
            status2 = USB_HostInterface0HidMouseEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces);
            if ((status1 != kStatus_USB_Success) && (status2 != kStatus_USB_Success))
            {
                status = kStatus_USB_Error;
            }
            break;
        case kUSB_HostEventDetach:
            status1 = USB_HostInterface0CicVcomEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces);
            status2 = USB_HostInterface0HidMouseEvent(deviceHandle, configurationHandle, eventCode, processedInterfaces);
            if ((status1 != kStatus_USB_Success) && (status2 != kStatus_USB_Success))
            {
                status = kStatus_USB_Error;
            }
            break;
        case kUSB_HostEventEnumerationFail:
            usb_echo("Enumeration failed\r\n");
            break;
        default:
            break;
    }
    return status;
}

USB_HostTasks() handles all the USB messages in the main loop. Finally, the HID work should also be added to this function:

void USB_HostTasks(void)
{
    USB_HostTaskFn(g_HostHandle[0]);
    USB_HostTaskFn(g_HostHandle[1]);
    USB_HostInterface0CicVcomTask();
    USB_HostInterface0HidMouseTask();
}

After all these steps, the dual USB host function is ready. Users can plug a USB mouse and a USB CDC device into the two USB ports simultaneously.
Conclusion
All RT/LPC/Kinetis devices with two OTG or host instances can support dual USB host. With the help of the MCUXpresso Config Tools, it is easy to implement this function.
One-stop secure boot tool: NXP-MCUBootUtility v1.0.0 is released
Source code: https://github.com/JayHeng/NXP-MCUBootUtility

【v1.1.0】
Feature:
  1. Support i.MXRT1015
  2. Add Language option in Menu/View and support Chinese
Improvement:
  1. USB device auto-detection can be disabled
  2. Original image can be a bootable image (with IVT & BootData/DCD)
  3. Show boot sequence page dynamically according to action
Interest:
  1. Add sound effect (Mario)

【v1.2.0】
Feature:
  1. Can generate .sb file for MfgTool and RT-Flash
  2. Can show elapsed time along with a progress gauge
Improvement:
  1. Non-XIP image can also be supported for the BEE encryption case
  2. Display gauge in real time
Bug:
  1. Region count cannot be set to more than 1 for the Fixed OTPMK Key case
  2. Option1 field is not implemented for FlexSPI NOR configuration
RT1170 FlexSPI1 secondary QSPI flash debug flash driver

1 Abstract
RT1170 has two FlexSPI instances: FlexSPI1 and FlexSPI2. Each FlexSPI instance has a primary pin group and a secondary pin group. For the specific chip connections, please refer to the following article: https://www.cnblogs.com/henjay724/p/15139381.html

The NXP-provided flash programming algorithms for RT1170 assume the flash is connected to the FlexSPI1 primary pin group. In practice, however, some customers need to boot from a flash on the FlexSPI1 secondary pin group and program the chip with a debugger. How do you prepare the corresponding flash algorithm? Moreover, different debuggers use different programming algorithms. Taking RT1170 FlexSPI1 secondary pin group flash boot as an example, this article explains how to prepare the corresponding programming algorithms for CMSIS-DAP and J-Link debuggers and how to debug with them, as well as the necessary conditions for booting from the secondary port, and it provides modified flash algorithms that can be used directly for debugging and programming. Here, I would like to thank the customer who provided the test platform, because the official MIMXRT1170-EVK connects its external QSPI flash to the FlexSPI1 primary interface and does not expose the secondary pin group.

2 Preparation
To test the FlexSPI1 secondary pin group, you must first prepare a board that connects the QSPI flash to the secondary pins, and then configure the FLEXSPI_PIN_GROUP_SEL fuse to 1. Since the FlexSPI1 secondary pin group is routed to GPIO_AD pads, the maximum speed is limited to 104 MHz.

Fig 1

2.1 Hardware preparation

Fig 2

2.2 Burning the FLEXSPI_PIN_GROUP_SEL fuse
The FLEXSPI_PIN_GROUP_SEL fuse address is 0x9A0[10]:

Fig 3

To burn the fuse, put the chip into serial download mode and connect with MCUBootUtility. When connecting, you need to select the FlexSPI1 secondary option, as follows:

Fig 4

The fuse burning result is:

Fig 5

After the fuse is burned successfully, change the board to internal boot mode; then we can program the application and boot from the flash connected to the FlexSPI1 secondary pin group.

3 Flash algorithm modification and debug test
For the RT1170 FlexSPI1 secondary pin group flash algorithms, this article focuses on preparing and testing two debugger algorithms with MCUXpresso IDE: the .cfx algorithm for CMSIS-DAP and the RT-UFL algorithm for J-Link. Briefly, the principle behind the modification is that the flash algorithm is based on the ROM API, so the main modification point is: option0 = 0xc1000005, option1 = 0x00010000.

Fig 6

3.1 CMSIS-DAP .cfx flash algorithm preparation and test
The RT1170 CMSIS-DAP algorithm source code can be found in the MCUXpresso IDE installation path:
C:\nxp\MCUXpressoIDE_11.8.0_1165\ide\Examples\Flashdrivers\NXP\iMXRT\iMXRT117x_FlexSPI_SFDP.zip
For the newer MCUXpresso IDE 11.10, use this path:
C:\NXP\MCUXpressoIDE_11.10.0_3148\ide\LinkServer\Examples\Flashdrivers\NXP\iMXRT\iMXRT117x_FlexSPI_SFDP.zip
After importing the algorithm source code, first build the LPCXFlashDriverLib <Release_SectorHashing> configuration to get the library that needs to be linked.
For iMXRT117x_FlexSPI_SFDP, select the MIMXRT1170_SFDP_QSPI (FlexSPI1 Port A QSPI) build configuration, and modify FlashConfig.h of iMXRT117x_FlexSPI_SFDP:

#define CONFIG_OPTION0 0xc1000005
#define CONFIG_OPTION1 0x00010000

Then build the library and build iMXRT117x_FlexSPI_SFDP to generate the .cfx.

Fig 7

After the build, the .cfx file can be found in the project folder iMXRT117x_FlexSPI_SFDP\builds. Rename it MIMXRT1170_SFDP_QSPI1_Secondary.cfx and copy it to the IDE installation directory:
C:\nxp\MCUXpressoIDE_11.8.0_1165\ide\binaries\Flash
This is done so that, later, the application project can select the corresponding .cfx directly from the list.

For details on how to compile the algorithm source code to obtain a .cfx, you can refer to this article: https://www.cnblogs.com/henjay724/p/14190485.html

After preparing the application and the CMSIS-DAP debugger, select the compiled .cfx flash algorithm in the application project:

Fig 8

For the application, note that serialClkFreq in the FCB is configured as 100 MHz and that the LUT entries match the commands of the flash device used. The FCB for the W25Q128 is as follows:

const flexspi_nor_config_t qspiflash_config = {
    .memConfig = {
        .tag              = FLEXSPI_CFG_BLK_TAG,
        .version          = FLEXSPI_CFG_BLK_VERSION,
        .readSampleClksrc = kFlexSPIReadSampleClk_LoopbackFromDqsPad,
        .csHoldTime       = 3u,
        .csSetupTime      = 3u,
        // Enable DDR mode, word addressable, safe configuration, differential clock
        .controllerMiscOption = 0x10,
        .deviceType       = kFlexSpiDeviceType_SerialNOR,
        .sflashPadType    = kSerialFlash_4Pads,
        .serialClkFreq    = kFlexSpiSerialClk_100MHz, //kFlexSpiSerialClk_133MHz,
        .sflashA1Size     = 16u * 1024u * 1024u,
        .lookupTable = {
            // Read LUTs
            [0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xEC, RADDR_SDR, FLEXSPI_4PAD, 0x20),
            [1] = FLEXSPI_LUT_SEQ(DUMMY_SDR, FLEXSPI_4PAD, 0x06, READ_SDR, FLEXSPI_4PAD, 0x04),
            // Read Status LUTs
            [4 * 1 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x05, READ_SDR, FLEXSPI_1PAD, 0x04),
            // Write Enable LUTs
            [4 * 3 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x06, STOP, FLEXSPI_1PAD, 0x0),
            // Erase Sector LUTs
            [4 * 5 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x21, RADDR_SDR, FLEXSPI_1PAD, 0x20),
            // Erase Block LUTs
            [4 * 8 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0xD8, RADDR_SDR, FLEXSPI_1PAD, 0x18),
            // Page Program LUTs
            [4 * 9 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x12, RADDR_SDR, FLEXSPI_1PAD, 0x20),
            [4 * 9 + 1] = FLEXSPI_LUT_SEQ(WRITE_SDR, FLEXSPI_1PAD, 0x04, STOP, FLEXSPI_1PAD, 0x0),
            // Erase Chip LUTs
            [4 * 11 + 0] = FLEXSPI_LUT_SEQ(CMD_SDR, FLEXSPI_1PAD, 0x60, STOP, FLEXSPI_1PAD, 0x0),
        },
    },
    .pageSize           = 256u,
    .sectorSize         = 4u * 1024u,
    .ipcmdSerialClkFreq = 0x1,
    .blockSize          = 64u * 1024u,
    .isUniformBlockSize = false,
};

The debug result is:

Fig 9

You can see that the modified FlexSPI1 secondary group flash algorithm is called successfully, the download succeeds, and the application runs normally.

3.2 J-Link RT-UFL flash algorithm preparation and test
Some customers like to use J-Link, but the SEGGER J-Link flash algorithm source code is not open, so you can use the RT-UFL algorithm instead, modify it to match the RT1170 FlexSPI1 secondary group option, and then call it.
For information on the RT-UFL algorithm, please see the following links:
https://www.cnblogs.com/henjay724/p/13951686.html
https://www.cnblogs.com/henjay724/p/14942574.html
https://www.cnblogs.com/henjay724/p/15430619.html

The RT-UFL modification points in ufl_main.c are:

case kChipId_RT116x:
case kChipId_RT117x:
    uflTargetDesc->flexspiInstance = MIMXRT117X_1st_FLEXSPI_INSTANCE;
    uflTargetDesc->flexspiBaseAddr = MIMXRT117X_1st_FLEXSPI_BASE;
    uflTargetDesc->flashBaseAddr   = MIMXRT117X_1st_FLEXSPI_AMBA_BASE;
    uflTargetDesc->configOption.option0.U = 0xc1000005;
    uflTargetDesc->configOption.option1.U = 0x00010000;

Build the code to generate MIMXRT_FLEXSPI_UV5_UFL_Flexspi1secondary_qspi.FLM and copy it to:
C:\Program Files\SEGGER\JLINKV768B\Devices\NXP\iMXRT_UFL
Please note that the SEGGER driver path depends on your installed J-Link driver version and install location.

In the file C:\Program Files\SEGGER\JLINKV768B\JLinkDevices.xml, add the entry that calls the new flash algorithm file:

<!------------------------>
<Device>
<ChipInfo Vendor="NXP" Name="MIMXRT1170_UFL_flexspi1_2nd" WorkRAMAddr="0x20240000" WorkRAMSize="0x00040000" Core="JLINK_CORE_CORTEX_M7" JLinkScriptFile="Devices/NXP/iMXRT_UFL/iMXRT117x_CortexM7.JLinkScript" Aliases="MIMXRT1176xxx8_M7; MIMXRT1176xxxA_M7" />
<FlashBankInfo Name="QSPI Flash" BaseAddr="0x30000000" MaxSize="0x01000000" Loader="Devices/NXP/iMXRT_UFL/MIMXRT_FLEXSPI_UV5_UFL_Flexspi1secondary_qspi.FLM" LoaderType="FLASH_ALGO_TYPE_OPEN" />
</Device>
<!------------------------>

In the application project's debug configuration, configure the J-Link device as MIMXRT1170_UFL_flexspi1_2nd.

Note: uncheck "reset before running", otherwise execution will stop in the ROM after entering debug mode.

Fig 10

The debug result is:

Fig 11

We can see that with the modified RT-UFL algorithm, J-Link debugging also works correctly.

4 Summary
This article mainly describes the algorithm modification for the RT1170 FlexSPI1 secondary pin group. The algorithm modifications for other flash interfaces and ports are similar; the main points to watch are the fuse setting and a matching flash algorithm. This article provides two modified FlexSPI1 secondary group programming algorithms:
MIMXRT1170_SFDP_QSPI1_Secondary.cfx
RT-UFL modified MIMXRT_FLEXSPI_UV5_UFL_Flexspi1secondary_qspi.FLM

Attachments:
CMSIS DAP: RT117X FlexSPI1 2nd flashalgo->CMSIS DAP->MIMXRT1170_SFDP_QSPI1_Secondary.cfx
JLINK RT-UFL: RT117X FlexSPI1 2nd flashalgo\JLINK RT-UFL
Copy the JLINK RT-UFL folder to the SEGGER J-Link install path: C:\Program Files\SEGGER\JLINKV768B
In this way, the relevant flash algorithm can be called as described above.
Introduction
The i.MX RT ROM bootloader provides a wealth of options to let user programs start in various ways. In some cases, people want to copy the application image from flash or another storage device to SDRAM and run it there. This article records three ways to do this. Sections 2 and 3 show how to load the image from NOR flash; section 4 shows how to load it from an SD card.

Software and tools:
- MCUXpresso IDE v11.1
- NXP-MCUBootUtility 2.2.0
- MIMXRT1060-EVK
- RT1060 SDK v2.7.0
- Win10

Add DCD by MCUXpresso IDE
If customers use MCUXpresso to develop the project, they can add the DCD header with MCUXpresso. To show the workflow, we take evkmimxrt1020_iled_blinky as the example.

Step 1: Add the following to the compiler options:
XIP_BOOT_HEADER_DCD_ENABLE=1
SKIP_SYSCLK_INIT

Step 2: Modify the memory configuration.

Step 3: Select "Link application to RAM".

Step 4: Compile the project. MCUXpresso will generate the linker script automatically.

Step 5: Since the code is linked to RAM, MCUXpresso will not prefix the IVT and DCD. We can add this link information to the linker script manually. Add the code below at the head of the .ld file:

    .boot_hdr : ALIGN(4)
    {
        FILL(0xff)
        __boot_hdr_start__ = ABSOLUTE(.) ;
        KEEP(*(.boot_hdr.conf))
        . = 0x1000 ;
        KEEP(*(.boot_hdr.ivt))
        . = 0x1020 ;
        KEEP(*(.boot_hdr.boot_data))
        . = 0x1030 ;
        KEEP(*(.boot_hdr.dcd_data))
        __boot_hdr_end__ = ABSOLUTE(.) ;
        . = 0x2000 ;
    } >BOARD_SDRAM

Then deselect "Manage linker script" in the last screenshot.

Step 6: Recompile the project; the IVT/DCD/BOOT_DATA will be added to your project. Then right-click the project's .axf file -> Binary Utilities -> Create S-record.

After all these steps, you can open MCUBootUtility and download the .s19 file to NOR flash.

Add DCD by MCUBootUtility
We can also keep the linker script managed by the IDE; MCUBootUtility can add the header too. Sometimes this is more flexible than the other methods.

Step 1: This time the BOARD_SDRAM location should be changed to 0x80002000 and the size to 0x1cff000. This is because the first 8 KB of the bootable image is reserved for the IVT and DCD.

Step 2: Compile the project and generate the .s19 file.

Step 3: Open MCUBootUtility. In MCUBootUtility, we should first set the Device Configuration Data. Here I use the MIMXRT1060-EVK, so I select the DCD bin file in NXP-MCUBootUtility-2.2.0\src\targets\MIMXRT1062. After that, select the application image file and click the All-in-One Action button. MCUBootUtility does all the work without any further manual operation.

Boot from SD card to SDRAM
In some applications, customers don't want XIP. They want to keep the application image on an SD card and run the code in RAM. But if the code size is bigger than the OCRAM size, they have to copy the image into SDRAM at startup. With MCUBootUtility's help, this work is easy too. The user just needs to change the memory map so that the application is linked to 0x80001000.

In MCUBootUtility, set the Boot Device to "uSDHC SD" and insert the SD card. Then connect the target board. If the RT1060 can read the SD card, the tool will display the SD card information. Then, as in the last section, set the DCD file and the application image file. Click the All-in-One Action button and MCUBootUtility will generate the bootable image and write it to the SD card.

An SD card has a huge capacity, and it would be wasteful to store only the boot image on it. People may ask whether they can also create a FAT32 file system on the card and store data files in it. Yes, but you need some help from tools.
When booting from an SD card, the ROM code reads the IVT and DCD from SD card address 0x400. For FAT32, the first 512 bytes of the SD card hold the MBR (Master Boot Record). The field at offset 0x1C6 in the MBR records the starting address of the first partition. If the space between the MBR and the partition start address is big enough to store the boot image, then the FAT32 file system and the boot image can coexist. (A small host-side sketch for checking this is given at the end of this section.)
Conclusions:
To help the boot ROM initialize the SDRAM, the DCD must be placed at the right location and referenced correctly by the IVT. When our code does not seem to work, we should first check the IVT and DCD.
The IVT offset from the base address for each boot device type is defined in the table below. The location of the IVT is the only fixed requirement of the ROM. The remainder of the image memory map is flexible and is determined by the contents of the IVT.
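Returning to the MBR check mentioned above, the following standalone C sketch (host-side, not SDK code; the image file name is an assumption) reads the MBR of a raw SD-card image and reports where the first partition starts, so you can verify that the boot image fits in the gap before it:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Path of the raw SD-card image is an assumption for this example */
    FILE *f = fopen("sdcard.img", "rb");
    if (f == NULL) { perror("open"); return 1; }

    uint8_t mbr[512];
    if (fread(mbr, 1, sizeof(mbr), f) != sizeof(mbr)) { fclose(f); return 1; }
    fclose(f);

    /* Offset 0x1C6 = first partition entry (0x1BE) + 8: starting LBA, little-endian */
    uint32_t startLba = (uint32_t)mbr[0x1C6] | ((uint32_t)mbr[0x1C7] << 8) |
                        ((uint32_t)mbr[0x1C8] << 16) | ((uint32_t)mbr[0x1C9] << 24);
    unsigned long long gap = (unsigned long long)startLba * 512ULL;

    printf("First partition starts at LBA %u (byte offset 0x%llX)\n", startLba, gap);
    printf("Bytes available before the partition for the boot image: %llu\n", gap);
    return 0;
}

If the reported gap is larger than the bootable image (which the ROM starts reading at offset 0x400), the image and the FAT32 partition will not overlap.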
Source code: https://github.com/JayHeng/NXP-MCUBootUtility
【v1.3.0】
Features:
> 1. Can generate a .sb file containing only the custom eFuse burn operations specified in the eFuse operation utility window
Improvements:
> 1. HAB signed mode should not be applicable for FlexSPI/SEMC NOR device non-XIP boot with the RT1020/1015 ROM
> 2. HAB encrypted mode should not be applicable for FlexSPI/SEMC NOR device boot with the RT1020/1015 ROM
> 3. Multiple .sb files (all, flash, efuse) should be generated if there is an eFuse operation in the all-in-one action
> 4. Can generate a .sb file without a board connection when the boot device type is NOR
> 5. Automatic image readback can be disabled to save operation time
> 6. The text of the language option in the menu bar should be static and easy to understand
Bugfixes:
> 1. Cannot generate a bootable image when the original image (hex/bin) size is larger than 64 KB
> 2. Cannot download a large image file (e.g. 6.8 MB) in some cases
> 3. There is a language switch issue with some dynamic labels (e.g. the Connect button)
> 4. Some LED demos for the RT1050 EVKB board in the /apps directory are invalid
【v1.4.0】
Features:
> 1. Support for loading a bootable image into a uSDHC SD/eMMC boot device
> 2. Provide a friendly way to view and set mixed eFuse fields
Improvements:
> 1. Set the default FlexSPI NOR device to align with NXP EVK boards
> 2. Enable a real-time progress gauge for Flash Programmer actions
The i.MX RT1050 ROM can copy an application image from a serial NOR flash memory on the FlexSPI controller to SDRAM at boot time. If you want to run your application from SDRAM, then while debugging and developing the application you should use a debugger initialization script to set up the SDRAM so the application can be downloaded directly to SDRAM for debugging. When you are ready to have your application boot without the debugger, you will need to use the RT flashloader tools to program the application to the flash and configure it to be copied to SDRAM. The attached document contains instructions on how to program a boot image to serial NOR flash (in this case the HyperFlash on the EVK) that will be copied to SDRAM at boot time.
INTRODUCTION REQUIREMENTS INTEGRATION
1. INTRODUCTION
This document provides a step-by-step guide to migrate the webcam application explained in AN12103 "Developing a simple UVC device based on i.MX RT1050" to the EVKB-MIMXRT1050. The goal is to get the application working on rev. B silicon, using the current SDK components (v2.4.2) and MCUXpresso IDE (v10.2.1), because the original implementation from the application note uses rev. A silicon and was developed with the IAR IDE.
2. REQUIREMENTS
A) Download and install MCUXpresso IDE v10.2.1.
B) Build an MCUXpresso SDK v2.4.2 for EVKB-MIMXRT1050 from the SDK Builder web page, ensuring that the CSI and USB components are included and that MCUXpresso IDE is selected, and install it. For steps A) and B), you can refer to the following Community document: https://community.nxp.com/docs/DOC-341985
C) Download the source code related to AN12103.
D) Have the EVKB-MIMXRT1050 board with the MT9M114 camera module.
3. INTEGRATION
a) Open MCUXpresso IDE, click the "Import SDK example" shortcut, select the "evkbimxrt1050" board and click the "Next" button.
b) Select the "driver_examples->csi->csi_rgb565" and "usb_examples->dev_video_virtual_camera_bm" examples, and click the "Finish" button.
c) Copy the "fsl_csi.h", "fsl_csi.c", "fsl_lpi2c.h" and "fsl_lpi2c.c" files from the "drivers" folder of the CSI project to the "drivers" folder of the Virtual_Camera project.
d) Copy the "pin_mux.h" and "pin_mux.c" files from the "board" folder of the CSI project to the "board->src" folder of the Virtual_Camera project, replacing the already included files.
e) Copy the "camera" folder from the AN12103 software package, from the path below, to the Virtual_Camera project: <AN12103SW\boards\evkmimxrt1050\user_apps\uvc_demo\src\camera> Also copy the "main.c" file from the AN12103 software package to the "sources" folder of the Virtual_Camera project. Be sure to select the option "Copy files and folders" when copying folders/files.
f) Right click the recently added "camera" folder and select "Properties". Then, in the "C/C++ Build" menu, clear the "Exclude resource from build" checkbox and click the "Apply and Close" button.
g) Right click the Virtual_Camera project and select "Properties". Then select the "C/C++ Build -> Settings -> MCU C Compiler -> Preprocessor" menu, click the "+" button to add the following value: "SDK_I2C_BASED_COMPONENT_USED=1", and click the "OK" button.
h) Now move to the "Includes" menu of the same window and click the "+" button to add the following value: "../camera". Repeat the same procedure in the "MCU Assembler -> General" menu, and then click the "Apply and Close" button.
i) From the "usb" folder of the AN12103 software package, at the path below, copy the "video_camera.h", "video_camera.c", "usb_device_descriptor.h" and "usb_device_descriptor.c" files to the "sources" folder of the Virtual_Camera project, selecting the option "Copy files and folders" and overwriting the already included files: <AN12103SW\boards\evkmimxrt1050\user_apps\uvc_demo\src\usb>
j) Select the "video_data.h", "video_data.c", "virtual_camera.h" and "virtual_camera.c" files and the "doc" folder, then right click and select "Delete". Click the "OK" button in the confirmation window to remove these resources from the Virtual_Camera project.
k) Refer to "fsl_mt9m114.c" file from "camera" folder of Virtual_Camera project, and delete the "static" definition from functions "MT9M114_Init", "MT9M114_Deinit", "MT9M114_Start", "MT9M114_Stop", "MT9M114_Control" and "MT9M114_InitExt". l) Refer to "main.c" file from "sources" folder of Virtual_Camera project, and comment out the call to the function "BOARD_InitLPI2C1Pins". Also, refer to "board.c" file from "board->src" folder of Virtual_Camera project, and comment out the call to the function "SCB_EnableDCache". m) Refer to "camera_device.c" file from "camera" folder of Virtual_Camera project, and comment out the line "AT_NONCACHEABLE_SECTION_ALIGN(static uint16_t s_cameraFrameBuffer[CAMERA_FRAME_BUFFER_COUNT][CAMERA_VERTICAL_POINTS * CAMERA_HORIZONTAL_POINTS + 32u], FRAME_BUFFER_ALIGN);" and add the following line: static uint16_t __attribute__((section (".noinit.$BOARD_SDRAM"))) s_cameraFrameBuffer[CAMERA_FRAME_BUFFER_COUNT][CAMERA_VERTICAL_POINTS * CAMERA_HORIZONTAL_POINTS + 32u] __attribute__ ((aligned (FRAME_BUFFER_ALIGN))); n) Compile and download the application into the EVKB-MIMXRT1050 board. The memory usage is shown below: o) When running the application, if you also have the serial terminal connected, you should see the print message. Additionally, if connected to Windows OS, you could find it as "CSI Camera Device" under the "Imaging devices" category. p) Optionally, you could rename the Virtual_Camera project to any other desired name, with rigth click on Project, and selecting "Rename" option, and finally, click on "OK" button. It is also attached the migrated MCUXpresso IDE project including all the steps mentioned on the present document. Hope this will be useful for you. Best regards! Some additional references: https://community.nxp.com/thread/321587  Defining Variables at Absolute Addresses with gcc | MCU on Eclipse   
The i.MX RT1060 provides tightly coupled GPIOs that can be accessed at high frequency. The RT1060 has two sets of GPIO registers to control pad output: GPIO1 to GPIO3 are general (normal) GPIOs, and GPIO6 to GPIO8 are tightly coupled (fast) GPIOs. They share the same pads, which means a given pin can be routed to either GPIO1/2/3 or GPIO6/7/8. The registers IOMUXC_GPR_GPR26, IOMUXC_GPR_GPR27, and IOMUXC_GPR_GPR28 select between them. To choose whether a pin is controlled by GPIO1/2/3 or GPIO6/7/8 you can use the MCUXpresso Config Tools. For example, if you select pin G10 you can choose either GPIO1_IO11 for the normal GPIO or GPIO6_IO11 for the fast GPIO.
I made an example based on SDK v2.7.0 to compare the speed of the normal GPIO and the fast GPIO. For this, I used pin G10 (GPIO1_IO11 and GPIO6_IO11). First, I used the normal GPIO pin (GPIO1_IO11) and toggled it by writing directly to the GPIO_DR register. Notice that you can access this pin through J22 pin 3 on the evaluation board, so you can measure the performance of the pin. Here are the results:
With the normal GPIO pin, we reach a period of 160 ns when writing directly to the GPIO_DR register. Now, if we change to the fast GPIO and use the same instructions, we have the following results.
As you can see, when using the fast GPIO pin the period of the signal is almost one-third of the period with the normal GPIO.
In addition, the A1 silicon of the RT1060 has a new GPIO toggle feature. If we toggle the pin with the new DR_TOGGLE register instead of GPIO_DR, we get better performance with both the normal and the fast GPIO. Here are the results for the normal GPIO with the DR_TOGGLE register.
As you can see, when using the DR_TOGGLE register with the normal GPIO pin we get a period of around 53 ns, compared with 160 ns when writing to the GPIO_DR register. When using the DR_TOGGLE register with the fast GPIO we get the best performance of the pin. The results are shown below. (A short code sketch of the two toggle methods is included at the end of this post.)
Many thanks to @jorge_a_vazquez for his valuable help with this document.
Hope this helps!
Best regards, Victor.
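For reference, here is a minimal sketch of the two toggle methods discussed above. It assumes the pad is already muxed as a GPIO, the GPIO clocks are enabled, and that the GPR26 bit matching the pin index routes the pad to GPIO6 instead of GPIO1; take the exact pin and bit numbers from your own pin configuration.

#include "fsl_gpio.h"

#define DEMO_PIN 11U   /* GPIO1_IO11 / GPIO6_IO11 share the same pad (example value) */

void gpio_toggle_demo(void)
{
    /* Route the pad to GPIO6 (fast GPIO); a cleared bit keeps it on GPIO1 (normal GPIO) */
    IOMUXC_GPR->GPR26 |= (1UL << DEMO_PIN);

    gpio_pin_config_t config = {kGPIO_DigitalOutput, 0U, kGPIO_NoIntmode};
    GPIO_PinInit(GPIO6, DEMO_PIN, &config);

    for (;;)
    {
        /* Method 1: read-modify-write of DR (slower, works on all silicon revisions) */
        /* GPIO6->DR ^= (1UL << DEMO_PIN); */

        /* Method 2: single write to DR_TOGGLE (A1 silicon feature, fastest) */
        GPIO6->DR_TOGGLE = (1UL << DEMO_PIN);
    }
}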
1 Introduction
With the rapid development of science and technology, the Internet of Things (IoT) is widely used in many areas, such as industry, agriculture, environment, transportation, logistics, security, and other infrastructure. IoT makes our lives more colorful and intelligent. The explosive development of the IoT cannot be separated from cloud platforms. At present, there are many cloud services on the market, such as Amazon's AWS, Microsoft's Azure, Google Cloud, China's Alibaba Cloud, Baidu Cloud, OneNet, etc.
Amazon AWS is a professional cloud computing service provided by Amazon. It offers a complete set of infrastructure and cloud solutions for customers in countries and regions around the world and currently has a very large user base. AWS IoT is a managed cloud platform that allows connected devices to interact easily and securely with cloud applications and other devices.
The NXP crossover MCU RT product line provides a series of AWS sample codes. This article mainly uses the remote_control_wifi_nxp code in the official MIMXRT1060-EVK SDK as an example to realize data interaction with the AWS IoT cloud, an Android mobile app, and the MQTT.fx client. The cloud topology of this article is as follows:
Fig.1-1
2 AWS cloud operation
2.1 Create an AWS account
Prepare a credit card, then go to the Amazon link below to create an AWS account:
https://console.aws.amazon.com/console/home
2.2 Create a Thing
Open the AWS IoT link: https://console.aws.amazon.com/iot
Choose the Things item under Manage. If it is the first time, the customer can choose "Register a thing" to create the thing. If it has been used before, the customer can click the "Create" button in the upper right corner. Choose "Create a single thing" to create the new thing; for more details check the following pictures.
Fig. 2-1
Fig. 2-2
Fig. 2-3
2.3 Create a certificate
Create a certificate for the newly created thing by clicking the "Create certificate" button shown in the following picture:
Fig. 2-4
After the certificate is built, its details are displayed, which means the certificate has been generated and can be used.
Fig. 2-5
Please note: download the files for this thing: the certificate, the public key, and the private key. They will be used in the MQTT.fx tool configuration.
Click "A root CA for AWS Download" to download the root CA for AWS IoT; the MQTT.fx tool setting will also use it. Open the root CA download link to download the CA certificate.
RSA 2048 bit key: VeriSign Class 3 Public Primary G5 root CA certificate
Fig. 2-6
In the end, we have these files:
7abfd7a350-certificate.pem.crt
7abfd7a350-private.pem.key
7abfd7a350-public.pem.key
AmazonRootCA1.pem
Save them; they will be used later. Click the "Activate" button to activate the certificate, and click the "Done" button. The policy will be attached later.
2.4 Create Policies
Go back to the IoT view page: https://console.aws.amazon.com/iot/
Select Policies under the Secure item to create the new policy.
Fig. 2-7
Enter the policy name; in the Action field, fill in: iot:*; in the Resource ARN field, fill in: *
Check the Allow item and click the Create button to finish the new policy creation.
Fig. 2-8
2.5 Attach the thing relationships
After the thing, certificate, and policy have been created, attach the policy to the certificate and attach the certificate to the thing. Fig. 2-9
Choose Certificates under the Secure item; in the related certificate item, choose "…" to open the drop-down list, click "Attach policy", and choose the newly created policy. Then click "Attach thing" and choose the newly created thing.
Fig. 2-10
Fig. 2-11
Fig. 2-12
Now open Things under the Manage item and check the detailed thing-related information.
Fig. 2-13
Double click the thing; in the Interact item, we can find the Rest API Endpoint. The RT code and the MQTT.fx tool will use this endpoint to connect to the cloud.
Fig. 2-14
Check Security, and you will find the previously created certificate; it means this thing already has the new certificate attached:
Fig. 2-15
Up to now, we have finished the thing-related configuration. It will be used for the MQTT.fx, Android app, and RT EVK board connections and testing; we can also check the communication data directly through the AWS shadow web page.
3 Android related configuration
3.1 AWS Cognito configuration
If the Android app is used to communicate with the AWS IoT cloud, the AWS side still needs the Cognito service to authorize access to AWS IoT and then to the device shadows. First, create a new identity pool from the following link: https://console.aws.amazon.com/cognito
Fig. 3-1
Click "Manage Identity Pools", then click "Create new identity pool".
Fig. 3-2
Fig. 3-3
Fig. 3-4
Here, it will generate two roles:
Cognito_PoolNameAuth_Role
Cognito_PoolNameUnauth_Role
Click Allow to finish the identity pool creation.
Fig. 3-5
Please record the related identity pool ID; it will be used in the Android app .properties configuration file.
3.2 Create policies in IAM for Cognito
Open https://console.aws.amazon.com/iam
Click the "Policies" item under "Access management".
Fig. 3-6
Choose "Create policy" to create an IAM policy; in the policy JSON area, write the following content:
Fig. 3-7
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [ "iot:Connect" ],
      "Resource": [ "*" ]
    },
    {
      "Effect": "Allow",
      "Action": [ "iot:Publish" ],
      "Resource": [
        "arn:aws:iot:us-east-1:965396684474:topic/$aws/things/RTAWSThing/shadow/update",
        "arn:aws:iot:us-east-1:965396684474:topic/$aws/things/RTAWSThing/shadow/get"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [ "iot:Subscribe", "iot:Receive" ],
      "Resource": [ "*" ]
    }
  ]
}
Please note, in the JSON content:
"arn:aws:iot:<REGION>:<ACCOUNT ID>:topic/$aws/things/<THING NAME>/shadow/update",
"arn:aws:iot:<REGION>:<ACCOUNT ID>:topic/$aws/things/<THING NAME>/shadow/get"
REGION: the us-east-1 shown in Fig. 3-5
ACCOUNT ID: it can be found in the upper right corner under "My account".
Fig 3-8
Fig 3-9
After finishing the IAM policy creation, go back to the IAM Policies page, set "Filter policies" to "Customer managed", and you can find the newly created customer policy.
Fig. 3-10
3.3 Attach the policy to the Cognito role in IAM
In IAM, choose the Roles item:
Fig. 3-11
Double click the Cognito_PoolNameUnauth_Role, which was generated when creating the pool in Cognito, click "Attach policies", and select the newly created policy.
Fig. 3-12
Fig. 3-13
Up to now, we have finished the AWS Cognito configuration.
3.4 Android properties file configuration
Create a file with the .properties extension; the content is:
    customer_specific_endpoint=<REST API ENDPOINT>
    cognito_pool_id=<COGNITO POOL ID>
    thing_name=<THING NAME>
    region=<REGION>
Please fill in the correct content:
REST API ENDPOINT: Fig. 2-14
COGNITO POOL ID: Fig. 3-5
THING NAME: Fig. 2-14, upper left corner
REGION: Fig. 3-5, the region part of the COGNITO POOL ID
As an example, my properties file content is:
 customer_specific_endpoint=a215vehc5uw107-ats.iot.us-east-1.amazonaws.com
 cognito_pool_id=us-east-1:c5ca6d11-f069-416c-81f9-fc1ec8fd8de5
 thing_name=RTAWSThing
 region=us-east-1
In real usage, please use your own configured data; otherwise, the app will connect to my cloud endpoint.
4. MQTT.fx configuration and testing
MQTT.fx is an MQTT client tool based on Eclipse Paho and written in Java. It supports subscribing and publishing messages through topics. You can download this tool from the following link:
http://mqttfx.jensd.de/index.php/download
The latest version is 1.7.1.
4.1 MQTT.fx configuration
Choose the connection configuration button, then enter the connection configuration page:
Fig. 4-1
Profile Name: enter the configuration name
Broker Address: the REST API ENDPOINT
Broker Port: 8883
Client ID: generate it freely
CA file: the downloaded CA certificate file
Client Certificate File: the related certificate file
Client Key File: the private key file
Check "PEM formatted". Click Apply and OK to finish the configuration.
4.2 Use the AWS cloud to test the connection
In order to test whether the tool can connect to the AWS cloud, a preliminary connection test can be performed. Open the AWS page: https://console.aws.amazon.com/iot
There is a Test button in this interface, which can be used to test against other clients or against itself.
Both the AWS cloud and MQTT.fx subscribe to the topic: $aws/things/RTAWSThing/shadow/update
MQTT.fx publishes data to the topic: $aws/things/RTAWSThing/shadow/update
It can be seen that both the cloud test page and the MQTT.fx subscriber receive the data:
Fig. 4-2
Next, data is published from the cloud, and you can see that both the MQTT.fx subscriber and the cloud subscriber receive it:
Fig. 4-3
Up to now, data can be transferred between the AWS IoT cloud and the client.
5 RT1060 and Wi-Fi module configuration
We mainly use the RT1060 SDK 2.8.0 remote_control_wifi_nxp as the RT test code:
SDK_2.8.0_EVK-MIMXRT1060\boards\evkmimxrt1060\aws_examples\remote_control_wifi_nxp
The test platform is: MIMXRT1060-EVK + Panasonic PAN9026 SDIO ADAPTER + SD to uSD adapter
The project uses the Panasonic PAN9026 SDIO ADAPTER by default.
5.1 Wi-Fi and AWS code configuration
The project needs a working Wi-Fi SSID and password, so prepare a working Wi-Fi network, then add the SSID and password in aws_clientcredential.h:
#define clientcredentialWIFI_SSID       "Paste WiFi SSID here."
#define clientcredentialWIFI_PASSWORD   "Paste WiFi password here."
The connection settings for AWS are also in the file aws_clientcredential.h:
#define clientcredentialMQTT_BROKER_ENDPOINT "a215vehc5uw107-ats.iot.us-east-1.amazonaws.com"
#define clientcredentialIOT_THING_NAME       "RTAWSThing"
#define clientcredentialMQTT_BROKER_PORT      8883
5.2 Certificate and key configuration
Open the following tool in the SDK:
SDK_2.8.0_EVK-MIMXRT1060\rtos\freertos\tools\certificate_configuration\CertificateConfigurator.html
Fig. 5-1
Generate the new aws_clientcredential_keys.h and replace the old one.
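For reference, the regenerated aws_clientcredential_keys.h ends up holding the device certificate and private key as C string macros, roughly as sketched below. The PEM bodies are placeholders, and the macro names follow the FreeRTOS demo convention used by this SDK; double-check them against the file the tool actually generates.

/* aws_clientcredential_keys.h (sketch; PEM contents are placeholders) */
#define keyCLIENT_CERTIFICATE_PEM \
"-----BEGIN CERTIFICATE-----\n" \
"...device certificate created in section 2.3...\n" \
"-----END CERTIFICATE-----\n"

#define keyJITR_DEVICE_CERTIFICATE_AUTHORITY_PEM  NULL

#define keyCLIENT_PRIVATE_KEY_PEM \
"-----BEGIN RSA PRIVATE KEY-----\n" \
"...private key downloaded in section 2.3...\n" \
"-----END RSA PRIVATE KEY-----\n"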
Take the MCUXpresso IDE project as an example; the file location is:
Fig. 5-2
Build the project and download it to the MIMXRT1060-EVK board.
6 Test result
On an Android mobile phone, download and install the APK from this folder:
SDK_2.8.0_EVK-MIMXRT1060\boards\evkmimxrt1060\aws_examples\remote_control_android\AwsRemoteControl.apk
The SDK can be downloaded from this link: Welcome | MCUXpresso SDK Builder
Then we can use the Android app to remotely control the RT EVK on-board LED. The test results are as follows.
6.1 App and EVK test result
MIMXRT1060-EVK printf information:
Fig. 6-1
Turn the LED on and off:
Fig. 6-2                                        Fig. 6-3
6.2 MQTT.fx subscribe result
MQTT.fx subscribed data:
When the LED is turned on, two messages are received:
Fig. 6-4
Fig. 6-5
When the LED is turned off, two messages are also received:
Fig. 6-6
Fig. 6-7
Of the two messages, the first one is used to set the LED status; the second one is the EVK reporting its LED status.
MQTT.fx can also use the publish page to publish this data:
{"state":{"desired":{"LEDstate":1}}} or {"state":{"desired":{"LEDstate":0}}}
to the topic: $aws/things/RTAWSThing/shadow/update
This also turns the on-board LED on or off.
6.3 AWS cloud shadow display result
Turn on the LED:
Fig. 6-8
Turn off the LED:
Fig. 6-9
In conclusion, after the above configuration and testing, the Android mobile phone can remotely control the RT EVK on-board LED and read back its status. The MQTT.fx client tool and the AWS shadow page can also be used to check the communication data.
1. Abstract
This article aims to implement the simultaneous input of 4 groups of 48 kHz, 32-bit, 2-channel audio data on the RT685 platform, and then assemble the received data into one 48 kHz, 32-bit, 8-channel stream and output it through I2S. This solution was also done at the request of a customer, who kept running into harmonic problems in their own implementation. After analyzing the customer's situation, two main problems were found:
(1) Harmonic problem: after receiving 4 channels of 8 bytes, the data was copied directly to the sending buffer. This causes timing problems, because it does not buffer the data in an audio data pool: the time needed to receive enough audio data must be at least longer than the time needed for copying and sending. This is why harmonics appeared when testing the output audio waveform.
(2) Audio synchronization problem: after receiving the 4 channels of audio data and examining the receive buffers, the customer found the 4 channels of data were out of sync.
Therefore, to help the customer, I built this application demo directly, together with a matching test audio source that sends a 48 kHz, 32-bit, dual-channel stream of fixed incrementing audio data in a loop, e.g. 0x00-0xFF repeatedly. The following is the block diagram of this application platform:
Figure 1 System Block Diagram
In the figure above, one MIMXRT685-EVK outputs 48 kHz, 32-bit*2ch data, sending 0x00, 0x01, ..., 0xFF in a loop. The other MIMXRT685-EVK is the focus of this article: it implements 4 groups of I2S receiving data at a 48 kHz sample rate, 32-bit*2ch, then assembles the received data into audio data with a 48 kHz sample rate and 32-bit*8ch and sends it out. To reduce external wiring, for the BCLK and WS signals only one group is connected directly to I2S3, while I2S2, I2S4, and I2S5 share the I2S3 signals internally. For the DATA signals, an external cable with 4 heads is used and connected to the data pins of each audio interface. The following is a detailed description of this solution.
2. Hardware platform setup
The pinouts of the two boards are given below. Because the platform uses many pins, a specific allocation is required.
2.1 Audio source board
One MIMXRT685-EVK is used as the audio source; the pinout for sending 48 kHz 32-bit*2ch is as follows:
Figure 2 Pinout of audio source board
2.2 Audio transceiver board
The other MIMXRT685-EVK is used as the audio transceiver board, receiving the 4 channels of synchronized audio data sent by the audio source and assembling them into a 48 kHz 32-bit*8ch waveform for transmission.
Figure 3 Pin assignment of audio transceiver board
2.3 Two-board hardware connection
The source and target connections of the two boards are as follows:
Figure 4 Two-board pin connections
Figure 5 Two-board connection
After the hardware is ready, the software solution and code are described.
3. Software solution and implementation
In the process of writing the code, several approaches were tried:
(1) When receiving, assemble the data directly into the TDM-format buffer to be sent, and then send it. However, since it is assembled into TDM, each I2S needs to receive 8 bytes per frame and then apply an offset to receive the next one.
If reception is done 8 bytes at a time by DMA, the callbacks of the 4 I2S groups fire very frequently, resulting in a large CPU load, so this approach was abandoned.
(2) Buffer 10 ms of data for each of the 4 I2S groups, then use DMA memory-to-memory copies to assemble the data. However, the RT685 DMA can only achieve a maximum offset of 32-bit*4 words = 16 bytes, while one group of audio data is 32-bit*2 and 4 groups are 32-bit*8 = a 32-byte stride, so DMA cannot meet the requirement. The DMA memory-to-memory copy approach was therefore abandoned and memcpy is used instead.
(3) Use the I2S_RxTransferReceiveDMA function for DMA reception. In practice, as soon as one group calls this function it starts receiving, before the next I2S interface has called I2S_RxTransferReceiveDMA, which causes the channels to be out of sync. Even if the I2S enable is removed from I2S_RxTransferReceiveDMA and the 4th I2S group is enabled only after the other groups have called I2S_RxTransferReceiveDMA, this only synchronizes the first reception, because the callback then has to re-trigger reception of the next frame, and the 4 I2S callbacks calling I2S_RxTransferReceiveDMA again will inevitably cause new synchronization problems. This method was therefore abandoned in favor of two DMA descriptors per channel working as a ping-pong, so that reception continues in a loop without CPU intervention.
3.1 Solution implementation
Several approaches were described above. In the end, each of the 4 audio channels receives into a 10 ms audio buffer, and the transfer from the receive buffers to the send buffer uses memcpy. We then check that this copy time fits the real-time budget, i.e. that the receive buffer pool duration is longer than the copy time, so there is enough time to prepare the send buffer. The data transfer scheme is as follows:
Figure 6 Data buffer transfer
Each of the 4 I2S groups receives its own 10 ms of data. Each receive buffer is actually sized for 20 ms: a single DMA descriptor receives a 10 ms frame, and the other 10 ms serves as the ping-pong half. The send buffer is used to copy the 4 received I2S buffers into a 32-bit*8ch TDM-format array, also organized as two ping-pong buffers of 10 ms each. When the first 10 ms frame has been received, the second buffer takes over reception while the data of the first buffer is copied into the first send buffer and sent. After sending completes, reception and sending switch to the second buffer. As long as the timing is controlled well, there will be no data corruption. The amount of data in 10 ms is 3840 bytes: the sample rate is 48 kHz, i.e. 48000 frames per second, and each frame is 32-bit*2 = 8 bytes, so 10 ms => 480 frames * 8 bytes = 3840 bytes.
3.2 Software code implementation
The software implementation is mainly divided into the following parts: sharing of the 4 I2S receive signals, I2S DMA ping-pong configuration, data transfer, and the sending I2S. A short buffer-sizing sketch is given next, before the details of each part.
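The following small sizing sketch restates the arithmetic above; I2S_BUFFER_SIZE matches the value used in the demo, and the other macro names are only illustrative:

/* 48 kHz, 32-bit, 2-channel per I2S port; 10 ms per ping-pong half */
#define SAMPLE_RATE_HZ      48000U
#define BYTES_PER_FRAME     (2U * 4U)                              /* 2 channels x 32-bit = 8 bytes */
#define FRAMES_PER_10MS     (SAMPLE_RATE_HZ / 100U)                /* 480 frames                    */
#define I2S_BUFFER_SIZE     (FRAMES_PER_10MS * BYTES_PER_FRAME)    /* 3840 bytes per port           */
#define TDM_TX_BUFFER_SIZE  (I2S_BUFFER_SIZE * 4U)                 /* 15360 bytes, 8-channel TDM out */

The send side therefore moves four times the data of each receive port within the same 10 ms window, which is why the copy time measured later (well under 1 ms) comfortably fits the budget.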
The details are given below 3.2.1 4-way I2S receiving From the above, we can know that the 4 I2S receiving signal is not completely connected with wires, but adopts the method of sharing BCLK and WS signals and receiving DATA separately. I2S2, I2S4, I2S5 share the BCLK of I2S3, and the WS code is as follows: /* Set shared signal set 0: SCK, WS from Flexcomm1 */ I2S_BRIDGE_SetShareSignalSrc(kI2S_BRIDGE_ShareSet0, kI2S_BRIDGE_SignalSCK, kI2S_BRIDGE_Flexcomm3); I2S_BRIDGE_SetShareSignalSrc(kI2S_BRIDGE_ShareSet0, kI2S_BRIDGE_SignalWS, kI2S_BRIDGE_Flexcomm3); /* Set flexcomm3 SCK, WS from shared signal set 0 */ I2S_BRIDGE_SetFlexcommSignalShareSet(kI2S_BRIDGE_Flexcomm2, kI2S_BRIDGE_SignalSCK, kI2S_BRIDGE_ShareSet0); I2S_BRIDGE_SetFlexcommSignalShareSet(kI2S_BRIDGE_Flexcomm2, kI2S_BRIDGE_SignalWS, kI2S_BRIDGE_ShareSet0); I2S_BRIDGE_SetFlexcommSignalShareSet(kI2S_BRIDGE_Flexcomm4, kI2S_BRIDGE_SignalSCK, kI2S_BRIDGE_ShareSet0); I2S_BRIDGE_SetFlexcommSignalShareSet(kI2S_BRIDGE_Flexcomm4, kI2S_BRIDGE_SignalWS, kI2S_BRIDGE_ShareSet0); I2S_BRIDGE_SetFlexcommSignalShareSet(kI2S_BRIDGE_Flexcomm5, kI2S_BRIDGE_SignalSCK, kI2S_BRIDGE_ShareSet0); I2S_BRIDGE_SetFlexcommSignalShareSet(kI2S_BRIDGE_Flexcomm5, kI2S_BRIDGE_SignalWS, kI2S_BRIDGE_ShareSet0); 3.2.2 I2S DMA pingpong configuration In order to achieve 4-channel audio synchronization and receive 10ms audio buffer, two I2S DMA descriptors are used to implement the ping-pong function to collect data to two ping-pong buffers in turn. The code is as follows: #define I2S_BUFFER_SIZE 3840 //10ms SDK_ALIGN(static dma_descriptor_t I2S2_s_rxDmaDescriptors[2U], FSL_FEATURE_DMA_LINK_DESCRIPTOR_ALIGN_SIZE); SDK_ALIGN(static dma_descriptor_t I2S3_s_rxDmaDescriptors[2U], FSL_FEATURE_DMA_LINK_DESCRIPTOR_ALIGN_SIZE); SDK_ALIGN(static dma_descriptor_t I2S4_s_rxDmaDescriptors[2U], FSL_FEATURE_DMA_LINK_DESCRIPTOR_ALIGN_SIZE); SDK_ALIGN(static dma_descriptor_t I2S5_s_rxDmaDescriptors[2U], FSL_FEATURE_DMA_LINK_DESCRIPTOR_ALIGN_SIZE); SDK_ALIGN(static uint8_t I2S2_s_Buffer[2][I2S_BUFFER_SIZE], sizeof(uint32_t)); SDK_ALIGN(static uint8_t I2S3_s_Buffer[2][I2S_BUFFER_SIZE], sizeof(uint32_t)); SDK_ALIGN(static uint8_t I2S4_s_Buffer[2][I2S_BUFFER_SIZE], sizeof(uint32_t)); SDK_ALIGN(static uint8_t I2S5_s_Buffer[2][I2S_BUFFER_SIZE], sizeof(uint32_t)); static i2s_transfer_t I2S2_s_RxTransfer[2] = {{ .data = I2S2_s_Buffer[0], .dataSize = I2S_BUFFER_SIZE, }, { .data = I2S2_s_Buffer[1], .dataSize = I2S_BUFFER_SIZE, }}; static i2s_transfer_t I2S3_s_RxTransfer[2] = {{ .data = I2S3_s_Buffer[0], .dataSize = I2S_BUFFER_SIZE, }, { .data = I2S3_s_Buffer[1], .dataSize = I2S_BUFFER_SIZE, }}; static i2s_transfer_t I2S4_s_RxTransfer[2] = {{ .data = I2S4_s_Buffer[0], .dataSize = I2S_BUFFER_SIZE, }, { .data = I2S4_s_Buffer[1], .dataSize = I2S_BUFFER_SIZE, }}; static i2s_transfer_t I2S5_s_RxTransfer[2] = {{ .data = I2S5_s_Buffer[0], .dataSize = I2S_BUFFER_SIZE, }, { .data = I2S5_s_Buffer[1], .dataSize = I2S_BUFFER_SIZE, }}; I2S_RxGetDefaultConfig(&I2S2_s_RxConfig); I2S2_s_RxConfig.divider = DEMO_I2S_CLOCK_DIVIDER; I2S2_s_RxConfig.masterSlave = DEMO_I2S_TX_MODE;//DEMO_I2S_RX_MODE I2S_RxInit(DEMO_I2S2_RX, &I2S2_s_RxConfig); I2S_RxGetDefaultConfig(&I2S3_s_RxConfig); I2S3_s_RxConfig.divider = DEMO_I2S_CLOCK_DIVIDER; I2S3_s_RxConfig.masterSlave = DEMO_I2S_TX_MODE;//DEMO_I2S_RX_MODE I2S_RxInit(DEMO_I2S3_RX, &I2S3_s_RxConfig); I2S_RxGetDefaultConfig(&I2S4_s_RxConfig); I2S4_s_RxConfig.divider = DEMO_I2S_CLOCK_DIVIDER; I2S4_s_RxConfig.masterSlave = DEMO_I2S_TX_MODE;//DEMO_I2S_RX_MODE I2S_RxInit(DEMO_I2S4_RX, 
&I2S4_s_RxConfig); I2S_RxGetDefaultConfig(&I2S5_s_RxConfig); I2S5_s_RxConfig.divider = DEMO_I2S_CLOCK_DIVIDER; I2S5_s_RxConfig.masterSlave = DEMO_I2S_TX_MODE;//DEMO_I2S_RX_MODE I2S_RxInit(DEMO_I2S5_RX, &I2S5_s_RxConfig); DMA_Init(DEMO_DMA); DMA_EnableChannel(DEMO_DMA, DEMO_I2S2_RX_CHANNEL); DMA_SetChannelPriority(DEMO_DMA, DEMO_I2S2_RX_CHANNEL, kDMA_ChannelPriority1); DMA_CreateHandle(&I2S2_s_DmaRxHandle, DEMO_DMA, DEMO_I2S2_RX_CHANNEL); I2S_RxTransferCreateHandleDMA(DEMO_I2S2_RX, &I2S2_s_RxHandle, &I2S2_s_DmaRxHandle, I2S2_RxCallback, (void *)&I2S2_s_RxTransfer); DMA_EnableChannel(DEMO_DMA, DEMO_I2S3_RX_CHANNEL); DMA_SetChannelPriority(DEMO_DMA, DEMO_I2S3_RX_CHANNEL, kDMA_ChannelPriority1); DMA_CreateHandle(&I2S3_s_DmaRxHandle, DEMO_DMA, DEMO_I2S3_RX_CHANNEL); I2S_RxTransferCreateHandleDMA(DEMO_I2S3_RX, &I2S3_s_RxHandle, &I2S3_s_DmaRxHandle, I2S3_RxCallback, (void *)&I2S3_s_RxTransfer); DMA_EnableChannel(DEMO_DMA, DEMO_I2S4_RX_CHANNEL); DMA_SetChannelPriority(DEMO_DMA, DEMO_I2S4_RX_CHANNEL, kDMA_ChannelPriority1); DMA_CreateHandle(&I2S4_s_DmaRxHandle, DEMO_DMA, DEMO_I2S4_RX_CHANNEL); I2S_RxTransferCreateHandleDMA(DEMO_I2S4_RX, &I2S4_s_RxHandle, &I2S4_s_DmaRxHandle, I2S4_RxCallback, (void *)&I2S4_s_RxTransfer); DMA_EnableChannel(DEMO_DMA, DEMO_I2S5_RX_CHANNEL); DMA_SetChannelPriority(DEMO_DMA, DEMO_I2S5_RX_CHANNEL, kDMA_ChannelPriority2); DMA_CreateHandle(&I2S5_s_DmaRxHandle, DEMO_DMA, DEMO_I2S5_RX_CHANNEL); I2S_RxTransferCreateHandleDMA(DEMO_I2S5_RX, &I2S5_s_RxHandle, &I2S5_s_DmaRxHandle, I2S5_RxCallback, (void *)&I2S5_s_RxTransfer); I2S_TransferInstallLoopDMADescriptorMemory(&I2S2_s_RxHandle, I2S2_s_rxDmaDescriptors, 2U); I2S_TransferInstallLoopDMADescriptorMemory(&I2S3_s_RxHandle, I2S3_s_rxDmaDescriptors, 2U); I2S_TransferInstallLoopDMADescriptorMemory(&I2S4_s_RxHandle, I2S4_s_rxDmaDescriptors, 2U); I2S_TransferInstallLoopDMADescriptorMemory(&I2S5_s_RxHandle, I2S5_s_rxDmaDescriptors, 2U); if (I2S_TransferReceiveLoopDMA(DEMO_I2S2_RX, &I2S2_s_RxHandle, &I2S2_s_RxTransfer[0], 2U) != kStatus_Success) { assert(false); } if (I2S_TransferReceiveLoopDMA(DEMO_I2S3_RX, &I2S3_s_RxHandle, &I2S3_s_RxTransfer[0], 2U) != kStatus_Success) { assert(false); } if (I2S_TransferReceiveLoopDMA(DEMO_I2S4_RX, &I2S4_s_RxHandle, &I2S4_s_RxTransfer[0], 2U) != kStatus_Success) { assert(false); } if (I2S_TransferReceiveLoopDMA(DEMO_I2S5_RX, &I2S5_s_RxHandle, &I2S5_s_RxTransfer[0], 2U) != kStatus_Success) { assert(false); } I2S_Enable(DEMO_I2S2_RX); I2S_Enable(DEMO_I2S3_RX); I2S_Enable(DEMO_I2S4_RX); I2S_Enable(DEMO_I2S5_RX); Here, the code has been modified, mainly the I2S_TransferLoopDMA function in fsl_i2s_dma.c, which is blocked: I2S_Enable(base); In order to realize the function of 4-channel synchronous reception. 3.2.3 Audio data received and transferred Because when receiving, each audio interface takes turns to receive its own 2ch data, but when sending, it is necessary to send 4-channel received audio dual-channel data, that is, 32bit*8ch data, so after receiving ping, the ping data needs to be transferred to the sending ping buffer. 
The code for transfer is as follows: #define I2S_BUFFER_SIZE 3840 //10ms SDK_ALIGN(static uint8_t I2S2_s_Buffer[2][I2S_BUFFER_SIZE], sizeof(uint32_t)); SDK_ALIGN(static uint8_t I2S3_s_Buffer[2][I2S_BUFFER_SIZE], sizeof(uint32_t)); SDK_ALIGN(static uint8_t I2S4_s_Buffer[2][I2S_BUFFER_SIZE], sizeof(uint32_t)); SDK_ALIGN(static uint8_t I2S5_s_Buffer[2][I2S_BUFFER_SIZE], sizeof(uint32_t)); SDK_ALIGN(static uint8_t I2S1_s_Buffer[2][I2S_BUFFER_SIZE*4], sizeof(uint32_t)); if( s_pingpong == 1) { for(ch = 0;ch < 480; ch++) //480=I2S_BUFFER_SIZE(3840)/8 { memcpy(&I2S1_s_Buffer[0][0 + (32*ch)], &I2S2_s_Buffer[0][8*ch], 8); memcpy(&I2S1_s_Buffer[0][8 + (32*ch)], &I2S3_s_Buffer[0][8*ch], 8); memcpy(&I2S1_s_Buffer[0][16 + (32*ch)], &I2S4_s_Buffer[0][8*ch], 8); memcpy(&I2S1_s_Buffer[0][24 + (32*ch)], &I2S5_s_Buffer[0][8*ch], 8); } } else { for(ch = 0;ch < 480; ch++) { memcpy(&I2S1_s_Buffer[1][0 + (32*ch)], &I2S2_s_Buffer[1][8*ch], 8); memcpy(&I2S1_s_Buffer[1][8 + (32*ch)], &I2S3_s_Buffer[1][8*ch], 8); memcpy(&I2S1_s_Buffer[1][16 + (32*ch)], &I2S4_s_Buffer[1][8*ch], 8); memcpy(&I2S1_s_Buffer[1][24 + (32*ch)], &I2S5_s_Buffer[1][8*ch], 8); } } 3.2.4 Send TDM audio code The sending code also uses the I2S DMA method, but because there is no need to send multiple channels at the same time, only a single channel, there is no need to consider the synchronization problem, and no DMA descriptor is used. After the sending buffer is ready, the I2S_TxTransferSendDMA method is used. The code is as follows: I2S_TxGetDefaultConfig(&I2S1_s_TxConfig); I2S1_s_TxConfig.divider = DEMO_I2S1_CLOCK_DIVIDER; I2S1_s_TxConfig.masterSlave = kI2S_MasterSlaveNormalMaster; I2S1_s_TxConfig.wsPol = true; I2S1_s_TxConfig.mode = kI2S_ModeDspWsLong;//kI2S_ModeDspWsShort; I2S1_s_TxConfig.dataLength = 32U; I2S1_s_TxConfig.frameLength = 32 * 8U; I2S1_s_TxConfig.position = DEMO_TDM_DATA_START_POSITION; I2S1_s_TxConfig.pack48 = true; I2S_TxInit(DEMO_I2S1_TX, &I2S1_s_TxConfig); I2S_EnableSecondaryChannel(DEMO_I2S1_TX, kI2S_SecondaryChannel1, false, 64 + DEMO_TDM_DATA_START_POSITION); I2S_EnableSecondaryChannel(DEMO_I2S1_TX, kI2S_SecondaryChannel2, false, 128 + DEMO_TDM_DATA_START_POSITION); I2S_EnableSecondaryChannel(DEMO_I2S1_TX, kI2S_SecondaryChannel3, false, 192 + DEMO_TDM_DATA_START_POSITION); DMA_EnableChannel(DEMO_DMA, DEMO_I2S1_TX_CHANNEL); DMA_SetChannelPriority(DEMO_DMA, DEMO_I2S1_TX_CHANNEL, kDMA_ChannelPriority3); DMA_CreateHandle(&I2S1_s_DmaTxHandle, DEMO_DMA, DEMO_I2S1_TX_CHANNEL); I2S_TxTransferCreateHandleDMA(DEMO_I2S1_TX, &I2S1_s_TxHandle, &I2S1_s_DmaTxHandle, I2S1_TxCallback, (void *)&I2S1_s_TxTransfer); if( s_pingpong == 1) { I2S1_s_TxTransfer.data = I2S1_s_Buffer[0]; I2S1_s_TxTransfer.dataSize = I2S_BUFFER_SIZE*4; I2S_TxTransferSendDMA(DEMO_I2S1_TX, &I2S1_s_TxHandle, I2S1_s_TxTransfer); } else { I2S1_s_TxTransfer.data = I2S1_s_Buffer[1]; I2S1_s_TxTransfer.dataSize = I2S_BUFFER_SIZE*4; I2S_TxTransferSendDMA(DEMO_I2S1_TX, &I2S1_s_TxHandle, I2S1_s_TxTransfer); } 3.2.5 Send and receive I2S callback processing For the receiving I2S2, 3, 4, 5, there are 4 channels in total. Each time 10ms of data is received, a callback will be entered. In the callback, you only need to record the flag. When all 4 flags are recorded, it means that the 4 channels of the same 10ms data have been received, and the data can be copied to the sending buffer. Of course, in order to test whether the callback entry frequency is once every 10ms, this article makes a GPIO callback for testing. 
The following is the code for recording I2S callback static void I2S2_RxCallback(I2S_Type *base, i2s_dma_handle_t *handle, status_t completionStatus, void *userData) { s_allRXTriggerred |= 0x01; } static void I2S3_RxCallback(I2S_Type *base, i2s_dma_handle_t *handle, status_t completionStatus, void *userData) { s_allRXTriggerred |= 0x02; } static void I2S4_RxCallback(I2S_Type *base, i2s_dma_handle_t *handle, status_t completionStatus, void *userData) { s_allRXTriggerred |= 0x04; } static void I2S5_RxCallback(I2S_Type *base, i2s_dma_handle_t *handle, status_t completionStatus, void *userData) { /* Enqueue the same original buffer all over again */ s_allRXTriggerred |= 0x08; GPIO_PortToggle(GPIO, 1, 1<<0); if( s_pingpong == 0) { s_pingpong = 1; } else { s_pingpong = 0; } } static void I2S1_TxCallback(I2S_Type *base, i2s_dma_handle_t *handle, status_t completionStatus, void *userData) { GPIO_PortToggle(GPIO, 1, 1<<8); //__NOP(); } So far, all functions of a MIMXRT685-EVK for 4 I2S reception and 1 I2S TDM transmission have been completed. 3.2.6 Audio source code The audio source is made on another MIMXRT685-EVK to send 48Khz, 32bit*2ch audio data, and the data is sent in a loop from 0X00 to 0XFF. The code is as follows: int main(void) { BOARD_InitBootPins(); BOARD_InitBootClocks(); BOARD_InitDebugConsole(); BOARD_I3C_ReleaseBus(); BOARD_InitI3CPins(); CLOCK_EnableClock(kCLOCK_InputMux); /* attach main clock to I3C (500MHz / 20 = 25MHz). */ CLOCK_AttachClk(kMAIN_CLK_to_I3C_CLK); CLOCK_SetClkDiv(kCLOCK_DivI3cClk, 20); /* attach AUDIO PLL clock to FLEXCOMM1 (I2S1) */ CLOCK_AttachClk(kAUDIO_PLL_to_FLEXCOMM1); /* attach AUDIO PLL clock to FLEXCOMM3 (I2S3) */ CLOCK_AttachClk(kAUDIO_PLL_to_FLEXCOMM3); /* attach AUDIO PLL clock to MCLK */ CLOCK_AttachClk(kAUDIO_PLL_to_MCLK_CLK); CLOCK_SetClkDiv(kCLOCK_DivMclkClk, 1); SYSCTL1->MCLKPINDIR = SYSCTL1_MCLKPINDIR_MCLKPINDIR_MASK; wm8904Config.i2cConfig.codecI2CSourceClock = CLOCK_GetI3cClkFreq(); wm8904Config.mclk_HZ = CLOCK_GetMclkClkFreq(); /* Set shared signal set 0: SCK, WS from Flexcomm1 */ I2S_BRIDGE_SetShareSignalSrc(kI2S_BRIDGE_ShareSet0, kI2S_BRIDGE_SignalSCK, kI2S_BRIDGE_Flexcomm1); I2S_BRIDGE_SetShareSignalSrc(kI2S_BRIDGE_ShareSet0, kI2S_BRIDGE_SignalWS, kI2S_BRIDGE_Flexcomm1); /* Set flexcomm3 SCK, WS from shared signal set 0 */ I2S_BRIDGE_SetFlexcommSignalShareSet(kI2S_BRIDGE_Flexcomm3, kI2S_BRIDGE_SignalSCK, kI2S_BRIDGE_ShareSet0); I2S_BRIDGE_SetFlexcommSignalShareSet(kI2S_BRIDGE_Flexcomm3, kI2S_BRIDGE_SignalWS, kI2S_BRIDGE_ShareSet0); #if 1 PRINTF("Configure codec\r\n"); /* protocol: i2s * sampleRate: 48K * bitwidth:16 */ if (CODEC_Init(&codecHandle, &boardCodecConfig) != kStatus_Success) { PRINTF("codec_Init failed!\r\n"); assert(false); } /* Initial volume kept low for hearing safety. * Adjust it to your needs, 0-100, 0 for mute, 100 for maximum volume. 
*/ if (CODEC_SetVolume(&codecHandle, kCODEC_PlayChannelHeadphoneLeft | kCODEC_PlayChannelHeadphoneRight, DEMO_CODEC_VOLUME) != kStatus_Success) { assert(false); } PRINTF("Configure I2S\r\n"); #endif /* * masterSlave = kI2S_MasterSlaveNormalMaster; * mode = kI2S_ModeI2sClassic; * rightLow = false; * leftJust = false; * pdmData = false; * sckPol = false; * wsPol = false; * divider = 1; * oneChannel = false; * dataLength = 16; * frameLength = 32; * position = 0; * watermark = 4; * txEmptyZero = true; * pack48 = false; */ I2S_TxGetDefaultConfig(&s_TxConfig); s_TxConfig.divider = DEMO_I2S_CLOCK_DIVIDER; s_TxConfig.masterSlave = DEMO_I2S_TX_MODE; I2S_TxInit(DEMO_I2S_TX, &s_TxConfig); DMA_Init(DEMO_DMA); DMA_EnableChannel(DEMO_DMA, DEMO_I2S_TX_CHANNEL); DMA_SetChannelPriority(DEMO_DMA, DEMO_I2S_TX_CHANNEL, kDMA_ChannelPriority3); DMA_CreateHandle(&s_DmaTxHandle, DEMO_DMA, DEMO_I2S_TX_CHANNEL); StartSoundPlayback(); while (1) { } } static void StartSoundPlayback(void) { PRINTF("Setup looping playback of sine wave\r\n"); s_TxTransfer.data = &g_Music[0]; s_TxTransfer.dataSize = sizeof(g_Music); I2S_TxTransferCreateHandleDMA(DEMO_I2S_TX, &s_TxHandle, &s_DmaTxHandle, TxCallback, (void *)&s_TxTransfer); /* need to queue two transmit buffers so when the first one * finishes transfer, the other immediatelly starts */ I2S_TxTransferSendDMA(DEMO_I2S_TX, &s_TxHandle, s_TxTransfer); I2S_TxTransferSendDMA(DEMO_I2S_TX, &s_TxHandle, s_TxTransfer); } static void TxCallback(I2S_Type *base, i2s_dma_handle_t *handle, status_t completionStatus, void *userData) { /* Enqueue the same original buffer all over again */ i2s_transfer_t *transfer = (i2s_transfer_t *)userData; I2S_TxTransferSendDMA(base, handle, *transfer); } Audio data buffer:   Figure 7 Audio source sends buffer The corresponding test results are given:   Figure 8 Audio source sending data test It can be seen that the data sent by the audio source is cyclical and can be sent in an increasing loop. 4. Test results There are several points to verify about the test results: (1) 4-channel audio receives pingpong buffer, whether a single buffer is 10ms, that is, a 10ms audio data pool. (2) How long is the data memory copy time, whether it will exceed the length of the receiving audio data pool. (3) Whether the received 4-channel data is synchronized, whether the assembled send buffer data is the 32bit*8ch data assembled from the corresponding 4-channel 2ch data. (4) Whether the sent audio waveform is the correct 32bit*8ch TDM data. The following are the verification test results for these points. 4.1 4 I2S audio 10ms data pool This verification is very simple. Define a pin GPIO, initialize the output to 0, and then reverse it in the received callback interrupt. This article chooses to reverse it in the I2S5 callback. The test results are as follows:   Figure 9 ch1 10ms duration Channel 1 is the exact 10ms because of the callback reversal received. Here is a general picture of the test:   Figure 10 Time test overview Ch1: I2S5 callback entry frequency Ch2: memory copy time Ch3: Send callback entry frequency It can be seen that the frequency of sending and receiving is 10ms, because the sending frequency is also 48Khz, but because it is 8ch, the data volume is 4 times that of receiving, and all the data of the 4 receiving channels need to be stuffed in. 
4.2 Time consumption of receiving and copying to the sending buffer
For data copying, that is, assembling the data received in the 4 I2S buffers into the sending buffer, the time is measured on the second channel of the oscilloscope; the results are as follows:
Figure 11 Copy data time
It can be seen that the copy time is less than 500 us, which is much shorter than the 10 ms of the audio receive data pool. Therefore memcpy can be used freely without worrying that the copy time is too long. This also makes up for the fact that DMA memory-to-memory copying could not be used because of the DMA limitations mentioned earlier.
4.3 Verification of the synchronization of the data received on the 4 I2S ports
In order to verify the synchronization, the 4 I2S receive channels are stopped after receiving 100 frames of 10 ms, and the corresponding 4 I2S receive buffers are printed out. The results of the 4 I2S buffers are as follows:
Figure 12 I2S2 receive buffer
Figure 13 I2S3 receive buffer
Figure 14 I2S4 receive buffer
Figure 15 I2S5 receive buffer
It can be seen that the receive buffer data of the 4 I2S ports are completely synchronized; they all start from 0xB8.
4.4 Send buffer corresponding to 4-channel audio TDM
The send buffer is printed after 100 receptions, after memcpy to the send buffer; the printout of the send buffer data is as follows:
Figure 16 I2S1 transfer buffer
It can be seen that this buffer also starts from 0xB8 and that the 4 groups of received data are copied into the send buffer and assembled into 32-bit*8ch data, so the TDM send buffer is also correct.
4.5 Sending the 48 kHz 32-bit 8-channel audio data waveform
Figure 17 Transmitted and received audio waveforms
The upper group is the waveform of the audio source, and the lower group is the waveform of the TDM transmission. Due to the limitations of the logic analyzer's analysis software, it can only analyze up to 2 channels of 64-bit data, so only part of the data can be seen here. But from the waveform it can be seen that the transmitted TDM achieves 32-bit*8ch, and every 8-byte block within a frame is the same, which again demonstrates the synchronization of the 4-channel audio reception. In the figure above, the data of ch2 is 00, 01, 02, 03, 04, 05, 06, 07, i.e. 4 groups of the same data in one frame; the waveform also shows 4 identical groups, and 4 groups of 2ch make up the 32-bit 8ch TDM.
Finally, here is another TDM waveform measured on the oscilloscope:
Figure 18 Sending TDM waveform
It can be seen that BCLK = 12.28 MHz, which is consistent with the expected 48 kHz*32-bit*8 = 12.288 MHz. The WS signal is measured at 48 kHz, which matches the configured 48 kHz sample rate. DATA is also transmitting changing data, and the waveform pattern within a frame repeats about 4 times, which again shows that the 4 groups of received data are synchronized. So far, the function of the RT600 receiving 4 channels of 48 kHz 32-bit*2ch input and assembling them into a 48 kHz 32-bit*8ch output has been realized!
1. Abstract
When using the RT600 SDK USB composite example, the customer found that the default HS interval is 125 us in the example mimxrt685audevk_dev_composite_hid_audio_unified_bm. In the actual application, a 125 us packet interval for data transmission puts a large interrupt load on the CPU, so the customer wanted to change the interval to a larger value, such as 1 ms. After changing the interval to 1 ms, the data packets can still be sent to the RT chip, but playback on the RT side fails: the speaker produces no sound. Changing it to 500 us works in asynchronous mode, but at 500 us the customer's CPU load still reaches more than 80%, which leaves little room for further application code, so a 1 ms solution is still needed. This article describes the UAC transfers at different intervals and provides a solution for a 1 ms interval.
2. Test setup
Board: MIMXRT685-AUD-EVK
SDK: SDK_2_15_000_MIMXRT685-AUD-EVK
USB Analyzer: Lecroy USB-TOS2-A01-X
2.1 Platform connection
First, here is the connection between the MIMXRT685-AUD-EVK, the USB analyzer, and the audio source. This article mainly tests the RT685 UAC speaker function, that is, USB transmits audio source data to the RT685, which plays it through a player (for example headphones) or a speaker. The J7 USB port of the EVK is connected to port A of the Lecroy analyzer, and port B of the analyzer is connected to the audio source, which can be a PC or a mobile phone. If a PC is used, the speaker needs to be selected as "USB AUDIO+HID DEMO", because that is the name of the board after UAC enumeration. The other end of the analyzer is connected to a PC for capturing the USB bus data. The specific connection diagram is as follows:
Fig 1 EVK USB analyzer connection
2.2 Data packets at different audio intervals
This chapter gives the modifications for the different intervals and the corresponding USB analyzer captures. The clock of a synchronous/isochronous audio endpoint can be synchronized through SOF. The USB 1.0 sample rate must be locked to the 1 ms SOF beat; a USB 2.0 high-speed endpoint can be locked to the 125 us SOF beat. If you want to change the interval, it is usually a power-of-two multiple of 125 us, that is, 250 us, 500 us, 1 ms, etc. The modification is usually made directly in the USB stack's usb_audio_config.h:
#define HS_ISO_OUT_ENDP_INTERVAL (0x01)
The relationship between HS_ISO_OUT_ENDP_INTERVAL and the actual interval is: interval = 125 us * 2^(HS_ISO_OUT_ENDP_INTERVAL - 1). So:
HS_ISO_OUT_ENDP_INTERVAL=1 : 125 us
HS_ISO_OUT_ENDP_INTERVAL=2 : 250 us
HS_ISO_OUT_ENDP_INTERVAL=3 : 500 us
HS_ISO_OUT_ENDP_INTERVAL=4 : 1000 us
The four cases above are each tested below with the USB analyzer.
2.2.1 125 us interval
#define HS_ISO_OUT_ENDP_INTERVAL (0x01)
Fig 2 125 us interval
It can be seen that when the interval is configured to 125 us, the SOF beat interval before each OUT packet is 125 us, and the length of the OUT data packet is 24 bytes. The data in this 24-byte packet is actual audio data. In this case, playback is normal.
2.2.2 250 us interval
#define HS_ISO_OUT_ENDP_INTERVAL (0x02)
The capture configured for 250 us is shown in the figure below. It can be seen that the SOF beat interval between the two OUT packets is 250 us, and the length of the OUT data packet is 48 bytes.
In other words, as the interval increases, the length of the data packet increases proportionally: transfers happen less often, but each packet contains more data. The RT side then also needs a larger USB receive buffer.
Fig 3 250 us interval
In this case, the speaker plays normally and audio can be heard.
2.2.3 500 us interval
#define HS_ISO_OUT_ENDP_INTERVAL (0x03)
Fig 4 500 us interval, SYNC mode
Here the data packet has become 96 bytes and the content is normal audio data, but playback fails and no sound can be heard. However, if you configure:
#define USB_DEVICE_AUDIO_USE_SYNC_MODE (0U)
that is, not SYNC mode, the audio can be heard. The packet capture is:
Fig 5 500 us interval, non-SYNC mode
However, the asynchronous mode also has problems: after running for a long time, it may drift out of sync and playback may stutter. Therefore it is still necessary to make the 1 ms interval work in synchronous mode.
2.2.4 1 ms interval
#define HS_ISO_OUT_ENDP_INTERVAL (0x04)
Fig 6 Interval 1 ms
It can be seen that the data packet length has become 192 bytes, and the SOF beat interval before the OUT packet is indeed 1 ms. The audio data packet also looks like normal audio data, so the audio data is successfully transmitted from USB to the RT. However, playback is abnormal and no sound can be heard.
3. 1 ms interval solution
Now let's debug the project to find the problem. Because the USB analyzer shows that the audio data is actually transmitted on the USB bus, we first check whether the data packet is received in the kUSB_DeviceAudioEventStreamRecvResponse event in USB_DeviceAudioCompositeCallback of audio_unified.c, whether the packet length is correct, and whether the data content looks like audio data. The following is the debug result:
Fig 7 Received data packet
So the USB interface does receive the data packets, and the data looks like real audio data; abnormal data would be all zeros or irregular. Yet the test result is that it cannot be played. So now check the I2S playback function in the composite.h file, TxCallback function:
Fig 8 I2S play data buffer
We can see that the data transferred to the I2S data buffer is always 0, while in normal operation g_composite.audioUnified.startPlayFlag should be non-zero and the following should be used:
s_TxTransfer.dataSize = g_composite.audioUnified.audioPlayTransferSize;
s_TxTransfer.data = audioPlayDataBuff + g_composite.audioUnified.tdReadNumberPlay;
However, after testing, the audioPlayDataBuff data are also all 0. So the issue is on the USB receive side: the data is received, but saving it into audioPlayDataBuff fails. Go back to audio_unified.c:
Fig 9 USB_DeviceAudioCompositeCallback
Fig 10 USB_AudioSpeakerPutBuffer
After testing, in Fig 9, audioUnified.tdWriteNumberPlay never becomes larger than g_deviceAudioComposite->audioUnified.audioPlayTransferSize * AUDIO_CLASS_2_0_HS_LOW_LATENCY_TRANSFER_COUNT = 192*6
#define AUDIO_CLASS_2_0_HS_LOW_LATENCY_TRANSFER_COUNT \
(0x06U) /* 6 means 6 micro frames (6*125us), make sure the latency is smaller than 1ms for sync mode */
The low-latency count here defines a delay buffer, so the USB cache must at least be able to hold the delayed data. One unit represents one interval frame. With the interval now changed to 1 ms, 6 delay units mean the delay here is 6 ms.
Therefore the receive buffer needs to be larger, which is related to the following definitions:
Fig 11 USB device callback
g_composite.audioUnified.audioPlayBufferSize =
AUDIO_PLAY_BUFFER_SIZE_ONE_FRAME * AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT;
#define AUDIO_PLAY_BUFFER_SIZE_ONE_FRAME AUDIO_OUT_TRANSFER_LENGTH_ONE_FRAME
#define AUDIO_OUT_FORMAT_CHANNELS (0x02U)
#define AUDIO_OUT_FORMAT_SIZE (0x02)
#define AUDIO_OUT_SAMPLING_RATE_KHZ (48)
/* transfer length during 1 ms */
#define AUDIO_OUT_TRANSFER_LENGTH_ONE_FRAME \
(AUDIO_OUT_SAMPLING_RATE_KHZ * AUDIO_OUT_FORMAT_CHANNELS * AUDIO_OUT_FORMAT_SIZE)
#define AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT \
(2U) /* 2 units size buffer (1 unit means the size to play during 1ms) */
Here, AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT needs to be set larger so that the buffer can hold at least 6 ms of data: 1 unit is the amount of play data for 1 ms, and more than 6 units are needed, so it is set to 12 units. The modified definition is:
#define AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT 12
Build the project, download it, and capture the USB bus data again:
Fig 12 1 ms interval data packet playing successfully
This data waveform corresponds to data that plays normally. Below is the video of the MIMXRT685-AUD-EVK test. The attachment provides two demos, for the RT685 and the RT1170, both modified to a 1 ms interval; the modification method for the RT1170 is exactly the same.
4. Summary
Whether for the RT600, the RT1170, or more generally the UAC code of the entire RT series, if you need to modify the interval, the main focus is on the following macros in usb_audio_config.h; the following values are for a 1 ms interval:
#define HS_ISO_OUT_ENDP_INTERVAL (0x04)//(0x01)
#define AUDIO_CLASS_2_0_HS_LOW_LATENCY_TRANSFER_COUNT \
(0x06U) /* 6 means 6 micro frames (6*125us), make sure the latency is smaller than 1ms for ehci high speed */
#define AUDIO_SPEAKER_DATA_WHOLE_BUFFER_COUNT \
(12U)//(2U) /* 1 unit means the size to play during 1ms */
Test video:
RT1050 Boundary Scan test based on Lauterbach

1. Abstract

Boundary scan is a method of testing the interconnections on a circuit board or the internal sub-blocks of a circuit. It can also be used to debug and observe the pin state of an integrated circuit, measure voltages, or analyze internal sub-modules, all through the JTAG interface. NXP provides two good application notes: AN13507 (for LPC) and AN12919 (for RT). Based on the test method in those application notes, this article provides boundary scan test results for the NXP MIMXRT1050-EVK rev A1. A Lauterbach probe is used to connect to the chip and perform boundary scan to control the external pins. A script file is also provided that realizes a one-click boundary scan connection and level control of external pins.

2. RT1050 test details

2.1 Hardware platform

Lauterbach: LA3050
MIMXRT1050-EVK rev A1, with the following hardware modifications:
(1) Modify fuse bit 0x460[19], which is DAP_SJC_SWD_SEL, from 0 (SWD) to 1 (JTAG). To modify the fuse, enter serial download mode and use MCUBootUtility to connect and modify it.
Fig 1
(2) DNP R38, R323, R309, R152, R303.
(3) Connect JTAG_MODE to 3.3V: on the board, connect TP11 to J24_8.
(4) Fit a 100K resistor at R35.
(5) Pull the ONOFF pin up to 3.3V with an external 100K resistor: on the board, connect a 100K resistor from SW2 pin 3 or pin 4 to J24_8.
(6) Disconnect J32 and J33 to disconnect the on-board debugger, because this test uses the external Lauterbach probe.
(7) Connect the external Lauterbach probe to the JTAG interface J21. The connection looks like this:
Fig 2

2.2 Software operation

Download and install Lauterbach's supporting software. After installation, open TRACE32 ICD Arm USB. If the Lauterbach device is connected, the interface opens successfully.
Fig 3
At this point you can enter the relevant commands in the yellow box shown above. You need the .bsdl file of the chip, which is usually available on the chip's product page on nxp.com. For example, the BSDL file for RT1050 is at https://www.nxp.com/downloads/en/bsdl/RT1050.bsdl
Copy the RT1050.bsdl file to the Lauterbach installation path, C:\T32. Next, enter the following commands in the window to open the boundary scan window:

SYStem.Mode Down
BSDL.RESet
BSDL.ParkState Select-DR-Scan
BSDL.state

This opens the window:
Fig 4
Click the FILE item and load the downloaded RT1050.bsdl, then enter the command:

BSDL.SOFTRESET

Fig 5
Click check->BYPASSall, IDCODEall, SAMPLEall and make sure all three checks pass. Here, the following problem occurs when clicking IDCODEall:
Fig 6
It reports that the IDCODE read back is 188c301d while the expected IDCODE is 088c301d. So which IDCODE is correct? Check the RT1050 reference manual:
Fig 7
The value 188c301d that was read back matches the reference manual and is correct. Therefore the BSDL file downloaded from the website needs to be modified. Open the RT1050.bsdl file:
Fig 8
Change the version in line 408 from 0000 to 0001; Fig 8 shows the modified result. Save it and run the commands above again. The BYPASSall, IDCODEall, and SAMPLEall results are now:
Fig 9
Fig 10
Fig 11
To test output control, do the following:
BSDLSET 1.: instructions->EXTEST, DR mode->Set Write, Filter data->uncheck intern
BSDL.state->Run: check SetAndRun, TwoStepDR, then click the RUN button.
In the BSDLSET 1. window, you can then control the pin output status. For example, control GPIO_AD_B1_06, which is J22_2, and drive the output level: 1 is high, 0 is low.
Fig 12

2.3 Automation control command script

As can be seen from Section 2.2, single-step operation requires typing commands manually, which is very inefficient in real testing, so a script can be used to implement automated command control. Below, RT1050 is taken as an example to control the level of the on-board GPIO_AD_B1_06 pin (J22_2) and measure the high and low levels with a multimeter. This way, once the TRACE32 software is open, you only need to open the script, enter debug mode, and run it to the end with one click to observe the pin level control on the board. The script file uses the .cmm suffix. Steps: File->New Script, then enter the following script commands:

;system setup
SYStem.Mode Down
SYStem.CPU CortexM7
SYSTEM.CONFIG.DEBUGPORTTYPE JTAG
SYStem.JtagClock 1MHz
;BSDL Settings
BSDL.RESet
BSDL.ParkState Select-DR-Scan
BSDL.state
;configure boundary scan chain
BSDL.FILE RT1050.bsdl
;Check boundary scan chain
BSDL.SOFTRESET
BSDL.BYPASSall
BSDL.IDCODEall
BSDL.SAMPLEall
;Perform Sample test
BSDL.RUN
BSDL.SetAndRun ON
BSDL.TwoStepDR ON
BSDL.SET 1.
BSDL.SET 1. IR EXTEST
BSDL.RUN
BSDL.SET 1. PORT GPIO_AD_B1_06 0
BSDL.SET 1. PORT GPIO_AD_B1_06 1
BSDL.SET 1. PORT GPIO_AD_B1_06 0
WAIT 6.s
BSDL.SET 1. PORT GPIO_AD_B1_06 1
WAIT 6.s
BSDL.SET 1. PORT GPIO_AD_B1_06 0
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 1
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 0
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 1
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 0
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 1
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 0
WAIT 2.s
BSDL.SET 1. PORT GPIO_AD_B1_06 1
WAIT 2.s

Function: toggle the GPIO_AD_B1_06 pin high and low, first with no delay, then with 6 s delays, then with 2 s delays. After the script is written, save it and run it in debug mode.
Fig 13
This is the video of the testing: the on-board GPIO_AD_B1_06 (J22_2) pin can be controlled automatically, and there is no disconnection issue when the test delay is longer than 5 s, which shows that the BSDL automated test works. If you encounter problems, make sure all the hardware modification points on the board have been applied.

At last, many thanks to my colleagues @leilei_du and @albert_li for their endless help!
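As a footnote to the IDCODE mismatch discussed in section 2.2, the 32-bit JTAG IDCODE is easier to compare when split into its IEEE 1149.1 fields. The small C sketch below is a hypothetical helper (not part of the Lauterbach tooling); it shows that 0x188C301D and 0x088C301D differ only in the silicon version nibble, which is exactly what the one-line BSDL edit corrects.

#include <stdint.h>
#include <stdio.h>

/* Decode a JTAG IDCODE into its IEEE 1149.1 fields (version / part number / manufacturer). */
static void decode_idcode(uint32_t idcode)
{
    uint32_t version      = (idcode >> 28) & 0xFU;    /* silicon version        */
    uint32_t part_number  = (idcode >> 12) & 0xFFFFU; /* device part number     */
    uint32_t manufacturer = (idcode >> 1)  & 0x7FFU;  /* JEDEC manufacturer ID  */
    printf("IDCODE 0x%08lX: version=%lu part=0x%04lX manufacturer=0x%03lX\n",
           (unsigned long)idcode, (unsigned long)version,
           (unsigned long)part_number, (unsigned long)manufacturer);
}

int main(void)
{
    decode_idcode(0x088C301DU); /* value expected by the unmodified RT1050.bsdl     */
    decode_idcode(0x188C301DU); /* value actually read back, matching the RT1050 RM */
    return 0;
}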
RT1176 has two cores. Normally the CM7 core is the main core: on boot, CM7 starts first, then copies the CM4 image to RAM and kicks off the CM4 core. The details can be found in AN13264. In RT1176, CM4 has 128K ITCM and 128K DTCM. This space is not large; it is enough for CM4 to do some auxiliary work, but sometimes customers need CM4 to do more. For example, CM7 may run an ML algorithm while CM4 handles USB/ENET and a camera, which needs more code space than 128K. In this case the CM4 image should be moved to OCRAM1/2 or SDRAM. OCRAM and SDRAM are both connected to a NIC-301 AXI bus arbiter IP, so they have similar performance and characteristics. This article uses SDRAM because it is the more difficult case.

Before moving, one important point should be understood: the read/write speed of CM4 accessing OCRAM/SDRAM is slow, because CM4 requests data from SDRAM through the XB crossbar (LPSR domain, AHB protocol) and then through the NIC (WAKEUPMIX domain, AXI protocol), and the clock is limited to BUS/BUS_LPSR. If both code and data are placed in SDRAM, performance is significantly reduced: SDRAM is accessible only via the system bus, so no Harvard-style parallel code/data access is possible. If other bus masters access the same memory, performance degrades even further due to arbitration (on the XB or NIC). So the user should arrange the whole memory space carefully to eliminate access conflicts.

1 Prepare for the work
1.1 Test environment
• SDK: 2.9.1 for i.MX RT1170
• MCUXpresso: 11.4.0
• Example: SDK_root\boards\evkmimxrt1170\multicore_examples\hello_world

1.2 Set hardware and software
Set the board to XIP boot mode by setting SW1 to OFF OFF ON OFF. Import the hello_world example from the SDK and build the project.

2 Moving to SDRAM for debug option
In the CM7 project, add the SDRAM space in properties->MCU settings->Memory detail.
Then in properties->settings->Multicore, change the CM4 master memory region to SDRAM.
Unlike other IDEs, MCUXpresso can wake up the M4 project thread and download the M4 image by itself; this is called implicit linking. This field tells the IDE where to place the CM4 image.
In properties->settings->Preprocessor, add XIP_BOOT_HEADER_DCD_ENABLE. This adds a DCD to the image header; the DCD can program the SDRAM controller with optimal settings, improving boot performance.
Next, switch to the CM4 project and add the SDRAM space to its memory table, just as in the CM7 project.
In properties->Managed Linker Script, it is better to declare the heap and stack in DTCM. They can also be placed in SDRAM, but this should be done carefully.
After that, we can start debugging. You can see that the SDRAM has been filled with the CM4 image. Click the Resume button and the CM4 project will stop at the beginning of main().

3 Moving to SDRAM for release option
To compile the project in release mode, CORE1_IMAGE_COPY_TO_RAM should be added to the Defined symbols table. But that is not enough: the CM7 project in the SDK does not copy the CM4 image, so we must add this to the CM7 project.
3.1 Create a new file named incbin.S.
This is the code:

.section .core_m4slave , "ax" @progbits @preinit_array
.global dsp_text_image_start
.type dsp_text_image_start, %object
.align 2
dsp_text_image_start:
.incbin "evkmimxrt1170_hello_world_cm4.bin"
.global dsp_text_image_end
.type dsp_text_image_end, %object
dsp_text_image_end:
.global dsp_data_image_start
.type dsp_data_image_start, %object
.align 2
dsp_data_image_start:
.global dsp_text_image_size
.type dsp_text_image_size, %object
.align 2
dsp_text_image_size:
.int dsp_text_image_end - dsp_text_image_start

3.2 In hello_world_core0.c, add this code:

#ifdef CORE1_IMAGE_COPY_TO_RAM
extern const char dsp_text_image_start[];
extern int dsp_text_image_size;
#define CORE1_IMAGE_START ((uint32_t *)dsp_text_image_start)
#define CORE1_IMAGE_SIZE ((int32_t)dsp_text_image_size)
#endif

3.3 Right-click evkmimxrt1170_hello_world_cm4.axf in the Project Explorer window and select Binary Utilities->Create Binary. A binary file called evkmimxrt1170_hello_world_cm4.bin will be created. Copy it to the release folder of the CM7 project. If you want this to be done automatically, you can add the command to properties->settings->Build steps->Post-build steps.

Compile the project and download it. Press the reset button. After a while, you will see a small LED blinking. This LED is driven by CM4.

4 Debug the CM4 project on SDRAM only
As we know, CM4 can be debugged alone without CM7 starting, but normally only in ITCM and DTCM. Can it also work in SDRAM? Yes, but the original debug script file does not support this. I attached a new script file that initializes the SDRAM before downloading the CM4 code. Replace the old .scp file with this one; nothing else needs to be changed.
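For reference, section 3 above relies on the CM7 application actually copying the embedded CM4 binary before releasing the CM4 core. A minimal sketch of that copy step is shown below, assuming the CORE1_IMAGE_START/CORE1_IMAGE_SIZE macros from step 3.2 and a CORE1_BOOT_ADDRESS macro pointing at the SDRAM region selected in the Multicore settings; the exact routine in the SDK example may differ.

#include <string.h>
#include <stdint.h>

#ifdef CORE1_IMAGE_COPY_TO_RAM
/* Hypothetical boot address of the CM4 image in SDRAM; it must match the CM4
 * master memory region configured in the CM7 project's Multicore settings. */
#ifndef CORE1_BOOT_ADDRESS
#define CORE1_BOOT_ADDRESS 0x80000000U
#endif

/* Copy the CM4 binary (pulled into the CM7 image by incbin.S) into SDRAM,
 * so that the CM4 core can be started from there. */
static void APP_CopyCore1Image(void)
{
    memcpy((void *)CORE1_BOOT_ADDRESS, (const void *)CORE1_IMAGE_START,
           (size_t)CORE1_IMAGE_SIZE);
}
#endif

Call a routine like this from the CM7 main() before kicking off the CM4 core.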
1 Introduction

The RT1060 MCUXpresso SDK provides the ota_bootloader project. Download link: https://mcuxpresso.nxp.com/en/welcome
ota_bootloader path: SDK_2_10_0_EVK-MIMXRT1060\boards\evkmimxrt1060\bootloader_examples\ota_bootloader

This ota_bootloader lets the customer update the on-board application. The RT1060 OTA bootloader mainly has two functions:
- Update the APP through the ISP method.
- Provide an API to the customer that realizes swapping between different APPs and rollback.

The ISP method comes from the flashloader. The flashloader places its code in internal RAM: normally the RT chip must enter serial download mode, then sdphost downloads the flashloader to internal RAM and runs it to provide the ISP function. The ota_bootloader instead places its code directly in flash, so the RT chip can run it in internal boot mode. After every chip reset, the ota_bootloader runs first, and during this window the customer can use blhost to communicate with the RT chip over UART or USB HID and download the APP code. So the ISP function of the ota_bootloader can also be used as an ISP secondary bootloader to update the APP directly. After a 5-second timeout following reset, if the APP area is valid, the code jumps to the APP and runs it.

With the ota_bootloader swap and rollback function, the customer can call the provided API from the APP to update the APP, run a different APP, or swap and roll back to the old APP.

This document describes how to use the ota_bootloader ISP function to update the APP, how to modify the customer application to match the ota_bootloader, how to resolve issues when the customer APP uses SDRAM together with the ota_bootloader, how to use the ota_bootloader API to realize the swap and rollback functions, and how to prepare the related APPs.

2 OTA bootloader ISP usage

Some customers need an ISP secondary bootloader function. Since the flashloader must run from internal RAM, the customer can use the ota_bootloader ISP function to meet this requirement. Two points need attention:
1) The APP needs an ota_header added to meet the ota_bootloader requirements.
2) If the APP uses SDRAM, the ota_bootloader needs a DCD added (see 2.3).

2.1 Application modification

The ota_bootloader is located in flash starting at 0x60000000, and the APP starts at 0x60040000. This can be found in the ota_bootloader bootloader_config.h file:

#define BL_APP_VECTOR_TABLE_ADDRESS (0x60040000u)

From 0x60040000, the first 0x400 bytes hold the ota_header, and the real APP code starts at 0x60040400. Now, take the SDK evkmimxrt1060_iled_blinky project as an example and modify it to match the ota_bootloader. The application needs these additional files: ota_bootloader_hdr.c, ota_bootloader_board.h, ota_bootloader_supp.c, ota_bootloader_supp.h. They can be found in the evkmimxrt1060_lwip_httpssrv_ota project; put them in the led_blinky source folder.

2.1.1 Memory modification
Change the APP memory start address from 0x60000000 to 0x60040000, which is the address defined by the ota_bootloader.
Fig 1

2.1.2 Adding the ota_hdr related files
In the led_blinky project source folder, add the ota_bootloader related files mentioned above.
Fig 2

2.1.3 Linker file modification
In evkmimxrt1060_iled_blinky_debug.ld, add a boot_hdr section with length 0x400. The MCUXpresso IDE linkscripts folder mechanism can be used to modify the linker file with .ldt files: in the project, add a linkscripts folder, add the boot_hdr_MIMXRT1060.ldt file, and build; the result is shown in Fig 3 below.
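For reference, the flash layout that the modified application assumes can be summarized with a few defines. This is only an illustrative sketch based on the addresses quoted above; the macro names are hypothetical and are not part of the SDK.

/* Illustrative flash map for the ota_bootloader ISP use case (QSPI NOR mapped at 0x60000000). */
#define OTA_BOOTLOADER_BASE   0x60000000u  /* ota_bootloader image, runs first after reset          */
#define OTA_APP_HEADER_BASE   0x60040000u  /* = BL_APP_VECTOR_TABLE_ADDRESS, 0x400-byte ota_header  */
#define OTA_APP_HEADER_SIZE   0x400u
#define OTA_APP_CODE_BASE     (OTA_APP_HEADER_BASE + OTA_APP_HEADER_SIZE) /* 0x60040400, real APP    */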
Fig 3

After the build, the .ld file already contains the ota_header in the first 0x400 bytes.

Fig 4

The above is the generated evkmimxrt1060_iled_blinky_ota_0x60040000.bin file, which now contains the boot_hdr; the real APP code starts at offset 0x400.

2.2 ISP test related commands

The test board is the MIMXRT1060-EVK. First download the evkmimxrt1060_ota_bootloader project (it can be downloaded directly with the MCUXpresso IDE), then press the reset button, put blhost and evkmimxrt1060_iled_blinky_ota_0x60040000.bin in the same folder, and use the following commands:

blhost.exe -t 50000 -u 0x15a2,0x0073 -j -- get-property 1 0
blhost.exe -t 2048000 -u 0x15a2,0x0073 -j -- flash-erase-region 0x60040000 0x6000 9
blhost.exe -t 5242000 -u 0x15a2,0x0073 -j -- write-memory 0x60040000 evkmimxrt1060_iled_blinky_ota_0x60040000.bin
blhost.exe -t 5242000 -u 0x15a2,0x0073 -j -- read-memory 0x60040000 0x6000 flexspiNorCfg.dat 9

Fig 5

After the download is finished, press reset and wait 5 seconds; the bootloader jumps to the APP, and the LED on the MIMXRT1060-EVK starts blinking. This demonstrates ISP downloading from the flash-resident bootloader and the APP running. The commands above use USB HID for downloading; to use UART instead, the commands are:

blhost.exe -t 50000 -p COM45,19200 -j -- get-property 1 0
blhost.exe -t 50000 -p COM45,19200 -j -- flash-erase-region 0x60040000 0x6000 9
blhost.exe -t 50000 -p COM45,19200 -j -- write-memory 0x60040000 evkmimxrt1060_iled_blinky_ota_0x60040000.bin
blhost.exe -t 50000 -p COM45,19200 -j -- read-memory 0x60040000 0x6000 flexspiNorCfg.dat 9

For UART communication, because of auto-baud detection issues in the ota_bootloader, it is better to use a baud rate no higher than 19200 bps, or to define a fixed baud rate (for example 115200) in the ota_bootloader code.

2.3 Bootloader DCD consideration

Some customers use the ota_bootloader together with their own application that uses external SDRAM, and find that although downloading works, the APP does not run after boot. Their APP includes a DCD configuration, but because the ota_bootloader sits at the front of the QSPI flash, the APP uses the ota_hdr, which drops the DCD part. For this situation, the DCD can be added to the ota_bootloader instead; the SDK ota_bootloader does not include a DCD by default. First modify the ota_bootloader, then prepare the SDRAM APP and test it.

2.3.1 Adding the DCD to the ota_bootloader
2.3.1.1 Add the DCD files

Add dcd.c and dcd.h to the ota_bootloader; these two files can be found in the SDK hello_world project. Copy them to the ota_bootloader board folder. From the dcd.c code, the dcd_data[] array is placed in the ".boot_hdr.dcd_data" section, and the preprocessor symbol XIP_BOOT_HEADER_DCD_ENABLE=1 needs to be defined.

2.3.1.2 Add the DCD linker code

Modify the MCUXpresso IDE .ld files and add the DCD range:

.ivt : AT(ivt_begin)
{
. = 0x0000 ;
KEEP(* (.boot_hdr.ivt)) /* ivt section */
. = 0x0020 ;
KEEP(* (.boot_hdr.boot_data)) /* boot section */
. = 0x0030 ;
KEEP(*(.boot_hdr.dcd_data))
__boot_hdr_end__ = ABSOLUTE(.) ;
. = 0x1000 ;
} > m_ivt

Fig 6

0x60001000 : IVT
0x6000100c : DCD entry point
0x60001020 : boot data
0x60001030 : DCD detail data

2.3.1.3 Add the IVT DCD entry point

In the ota_bootloader MIMXRT1062 folder, hardware_init_MIMXRT1062.c, modify the image_vector_table and fill the address of the dcd_data array into the DCD entry point, as shown in Fig 7 below.
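As a reference for step 2.3.1.1, the shape of the DCD data definition in dcd.c is roughly as follows. This is a trimmed sketch: the actual SDRAM/SEMC configuration commands contained in the SDK's dcd_data[] array are omitted here.

#include <stdint.h>
#include "dcd.h"

#if defined(XIP_BOOT_HEADER_DCD_ENABLE) && (XIP_BOOT_HEADER_DCD_ENABLE == 1)
/* The DCD data is linked into the .boot_hdr.dcd_data input section, so the .ivt
 * output section above places it at offset 0x30 (0x60001030 in flash). The bytes
 * below are only a placeholder; the real array holds the DCD header followed by
 * the SEMC/SDRAM register write commands. */
__attribute__((section(".boot_hdr.dcd_data"), used))
const uint8_t dcd_data[] = {
    0xD2, /* DCD tag, then length, version, and the write-data commands ... */
};
#endif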
Fig 7

This completes adding the DCD to the ota_bootloader. Build the project and generate the image; the DCD is now part of the ota_bootloader image.

Fig 8

The DCD entry point in the IVT and the DCD data are correct, so this modified ota_bootloader can be programmed to the MIMXRT1060-EVK board.

2.3.2 SDRAM APP preparation

Still based on the evkmimxrt1060_iled_blinky project, simply place some functions in SDRAM. From the memory map, the SDRAM is RAM4:

Fig 9

In led_blinky.c, add this header:

#include <cr_section_macros.h>

Then place the SysTick delay code in RAM4, which is the SDRAM area:

__RAMFUNC(RAM4) void SysTick_DelayTicks(uint32_t n)
{
    g_systickCounter = n;
    while (g_systickCounter != 0U)
    {
    }
}

This completes the simple SDRAM APP. It can be tested by putting a breakpoint in SysTick_DelayTicks and checking that the address is indeed in the SDRAM range. Generate evkmimxrt1060_iled_blinky1_SDRAM_0x60040000.bin. If the old ota_bootloader is used to download this .bin, the LED does not blink.

2.3.3 Test result

Using the commands from chapter 2.2, download evkmimxrt1060_iled_blinky1_SDRAM_0x60040000.bin with the modified ota_bootloader programmed. After reset, the LED blinks, so the SDRAM APP works with the modified ota_bootloader.

3 OTA bootloader swap and rollback

The OTA bootloader can realize swap and rollback. This part tests these functions: prepare two APPs, download them to different partition areas, then use UART input characters to select the swap or rollback function and check which APP is running.

3.1 Memory map

OTA bootloader memory information:

Fig 10

The map above is based on the external 8 MB QSPI flash.
OTA bootloader: RT SDK ota_bootloader code.
Boot meta 0: contains the start address, size, etc. of the 3 partitions and the ISP peripheral information.
Boot meta 1: contains the start address, size, etc. of the 3 partitions and the ISP peripheral information.
Swap meta 0: the bootloader uses this meta data for the swap operation.
Swap meta 1: the bootloader uses this meta data for the swap operation.
Partition 1: APP1 location.
Partition 2: APP2 location.
Scratch area: APP1 backup location. It ends at 0x60441000 and must be large enough to hold APP1, rounded up to whole sectors; for example, if APP1 is 0x5410 bytes and the sector size is 0x1000, APP1 needs 6 sectors, so the scratch start address is 0x60441000 - 0x6000 = 0x6043b000.
User data: user data area.

3.2 Swap and rollback basics

Fig 11

APP1 and APP2, placed in partition 1 and partition 2, need to contain the ota_header required by the bootloader. The bootloader checks the APP CRC; if it passes, the APP boots, otherwise the bootloader enters ISP mode. The partition 2 image must also have a valid header, otherwise the swap will fail.

The swap function erases the scratch area, copies the partition 1 code to the scratch area, erases partition 1, and writes the partition 2 image to the partition 1 position.

The rollback function runs the previous APP1: it erases partition 2, copies the partition 1 image back to partition 2, erases partition 1, and copies the scratch image back to partition 1.

3.2.1 Boot meta

boot_meta 0: 0x6003c000, size 0x20c
boot_meta 1: 0x6003d000, size 0x20c

The OTA bootloader reads the boot meta from the two addresses. When both are valid (tag is 'B', 'L', 'M', 'T'), the bootloader chooses the meta with the larger version.
If neither meta is valid, the bootloader copies the default boot meta data to the boot meta 0 address.

The boot meta contains the start address and size information of the 3 partitions. The SDK demo can call the bootloader API to find the partition information and then program the image. The boot meta also contains the ISP peripheral information and the timeout (5 s) information. The structure is:

//!@brief Partition information table definitions
typedef struct
{
    uint32_t start;        //!< Start address of the partition
    uint32_t size;         //!< Size of the partition
    uint32_t image_state;  //!< Active/ReadyForTest/UnderTest
    uint32_t attribute;    //!< Partition Attribute - Defined for further use
    uint32_t reserved[12]; //!< Reserved for future use
} partition_t;

//!@brief Bootloader meta data structure
typedef struct
{
    struct
    {
        uint32_t wdTimeout;
        uint32_t periphDetectTimeout;
        uint32_t enabledPeripherals;
        uint32_t reserved[12];
    } features;
    partition_t partition[kPartition_Max]; //16*4*7 bytes
    uint32_t meta_version;
    uint32_t patition_entries;
    uint32_t reserved0;
    uint32_t tag;
} bootloader_meta_t;

3.2.2 Swap meta

Swap meta 0: 0x6003e000, size 0x50
Swap meta 1: 0x6003f000, size 0x50

The OTA bootloader reads the swap meta from the two places; if neither is valid, the bootloader writes the default data to swap meta 0 (0x6003e000). If both are valid, the bootloader chooses the meta with the larger version.

The bootloader performs the swap operation according to the meta data, and in some cases the meta data is modified automatically after reset. The behavior mainly depends on swap_type:

kSwapType_ReadyForTest: after reset, the bootloader performs the swap operation and changes the meta swap_type to kSwapType_Test. On the next reset, because the meta data is kSwapType_Test, the bootloader can do the rollback.
kSwapType_Test: after reset, the bootloader performs the rollback operation and swap_type changes to kSwapType_None.
kSwapType_Rollback: the bootloader writes kSwapType_Test to the meta; after reset, the bootloader acts according to kSwapType_Test.
kSwapType_Permanent: after reset, the meta data stays kSwapType_Permanent, and the APP boots from partition 1.

Swap structure:

//!@brief Swap progress definitions
typedef struct
{
    uint32_t swap_offset;    //!< Current swap offset
    uint32_t scratch_size;   //!< The scratch area size during current swapping
    uint32_t swap_status;    // 1 : A -> B scratch, 2 : B -> A
    uint32_t remaining_size; //!< Remaining size to be swapped
} swap_progress_t;

typedef struct
{
    uint32_t size;
    uint32_t active_flag;
} image_info_t;

//!@brief Swap meta information, used for the swapping operation
typedef struct
{
    image_info_t image_info[2]; //!< Image info table
#if !defined(BL_FEATURE_HARDWARE_SWAP_SUPPORTED) || (BL_FEATURE_HARDWARE_SWAP_SUPPORTED == 0)
    swap_progress_t swap_progress; //!< Swap progress
#endif
    uint32_t swap_type;    //!< Swap type
    uint32_t copy_status;  //!< Copy status
    uint32_t confirm_info; //!< Confirm Information
    uint32_t meta_version; //!< Meta version
    uint32_t reserved[7];  //!< Reserved for future use
    uint32_t tag;
} swap_meta_t;

3.3 Commonly used API

The bootloader provides an API for the customer. The commonly used functions are:

3.3.1 update_image_state
Updates the swap meta data. Before updating, it checks whether the partition 1 image is valid; if not, the swap meta data is not updated and a failure is returned.

3.3.2 get_update_partition_info
Gets the partition information, which defines the address where the new image should be programmed.

3.3.3 get_image_state
Gets the current boot image state: None, Permanent, or UnderTest.
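The sketch below shows how an application might chain these calls during an update. It is only an illustration: the bl_get_update_partition_info()/bl_update_image_state() names follow the naming pattern of the demo code in the next section and should be checked against ota_bootloader_supp.h, and flash_erase()/flash_program() stand in for the application's own flash driver.

#include "ota_bootloader_supp.h"

/* Sketch: program a new image into the update partition, then mark it for test. */
status_t ota_start_update(const uint8_t *image, uint32_t image_len)
{
    partition_t update_partition;

    /* 1. Ask the bootloader which partition should receive the new image. */
    if (bl_get_update_partition_info(&update_partition) != kStatus_Success)
    {
        return kStatus_Fail;
    }

    /* 2. Erase and program the image (including its 0x400 ota_header) into that partition. */
    if (flash_erase(update_partition.start, image_len) != kStatus_Success ||
        flash_program(update_partition.start, image, image_len) != kStatus_Success)
    {
        return kStatus_Fail;
    }

    /* 3. Request a swap on the next reset; the bootloader validates the image first. */
    if (bl_update_image_state(kSwapType_ReadyForTest) != kStatus_Success)
    {
        return kStatus_Fail;
    }

    NVIC_SystemReset(); /* the bootloader performs the swap and boots the new image */
    return kStatus_Success;
}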
3.4 Swap and rollback APP preparation

Prepare two APPs, APP1 and APP2 bin files, and use USB HID to download them to partition 1 and partition 2. After reset, APP1 runs by default; the COM port input is then used to select the swap and rollback functions. The code is:

int main(void)
{
    char ch;
    status_t status;

    /* Board pin init */
    BOARD_InitPins();
    BOARD_InitBootClocks();
    /* Update the core clock */
    SystemCoreClockUpdate();
    BOARD_InitDebugConsole();

    PRINTF("\r\n------------------hello world + led blinky demo 2.------------------\r\n");
    PRINTF("\r\nOTA bootloader test...\r\n"
           "1 - ReadyForTest\r\n"
           "3 - kSwapType_Permanent\r\n"
           "4 - kSwapType_Rollback\r\n"
           "5 - show image state\r\n"
           "6 - led blinking for 5times\r\n"
           "r - NVIC reset\r\n");
    // show swap state in swap meta
    get_image_swap_state();

    /* Set systick reload value to generate 1ms interrupt */
    if (SysTick_Config(SystemCoreClock / 1000U))
    {
        while (1)
        {
        }
    }

    while (1)
    {
        ch = GETCHAR();
        switch (ch)
        {
            case '1':
                status = bl_update_image_state(kSwapType_ReadyForTest);
                PRINTF("update_image_state to kSwapType_ReadyForTest status: %i\n", status);
                if (status != 0)
                    PRINTF("update_image_state(kSwapType_ReadyForTest): failed\n");
                else
                    NVIC_SystemReset();
                break;
            case '3':
                status = bl_update_image_state(kSwapType_Permanent);
                PRINTF("update_image_state to kSwapType_Permanent status: %i\n", status);
                if (status != 0)
                    PRINTF("update_image_state(kSwapType_Permanent): failed\n");
                else
                    NVIC_SystemReset();
                break;
            case '4':
                status = bl_update_image_state(4);
                //PRINTF("update_image_state to kSwapType_Rollback status: %i\n", status);
                if (status != 0)
                    PRINTF("update_image_state(kSwapType_Rollback): failed\n");
                else
                    NVIC_SystemReset();
                break;
            case '5':
                // show swap state in swap meta
                get_image_swap_state();
                break;
            case '6':
                Led_blink10times();
                break;
            case 'r':
                NVIC_SystemReset();
                break;
        }
    }
}

When downloading the APP, the correct ota_header needs to be in the first 0x400 bytes, otherwise the swap will fail.

const boot_image_header_t ota_header = {
    .tag = IMG_HDR_TAG,
    .load_addr = ((uint32_t)&ota_header) + BL_IMG_HEADER_SIZE,
    .image_type = IMG_TYPE_XIP,
    .image_size = 0,
    .algorithm = IMG_CHK_ALG_CRC32,
    .header_size = BL_IMG_HEADER_SIZE,
    .image_version = 0,
    .checksum = {0xFFFFFFFF},
};

This is a correct sample:

Fig 12

The image size and checksum must use the real APP image information. The attached file provides image_header_padding.exe, which takes an image without an ota_header as input and outputs a whole image whose first 0x400 bytes contain the ota_header with the image size and the image CRC data.

3.5 Test steps and result

Prepare the header-less APP1 image evkmimxrt1060_APP1_0X60040400.bin and APP2 image evkmimxrt1060_APP2_0X60240400.bin, and put blhost.exe and image_header_padding.exe in the same folder. APP1 and APP2 differ only in the printed version string:

hello world + led blinky demo 1
hello world + led blinky demo 2

The attached OTAtest folder is used for the test, but blhost.exe needs to be downloaded from the link below and copied into the OTAtest folder.
https://www.nxp.com/webapp/sps/download/license.jsp?colCode=blhost_2.6.6&appType=file1&DOWNLOAD_ID=null

Then run the following bat commands:

image_header_padding.exe evkmimxrt1060_APP1_0X60040400.bin 0x60040400
sleep 20
blhost.exe -t 50000 -u 0x15a2,0x0073 -j -- get-property 1 0
sleep 20
blhost.exe -u -t 1000000 -- flash-erase-region 0x6003c000 0x4000
sleep 50
blhost -u -t 5000 -- flash-erase-region 0x60040000 0x10000
sleep 50
blhost -u -t 5000 -- write-memory 0x60040000 boot_img_crc32.bin
sleep 100
image_header_padding.exe evkmimxrt1060_APP2_0X60240400.bin 0x60040400
sleep 20
blhost -u -t 5000 -- flash-erase-region 0x60240000 0x10000
sleep 50
blhost -u -t 5000 -- flash-erase-region 0x6043b000 0x10000
sleep 50
blhost -u -t 5000 -- write-memory 0x60240000 boot_img_crc32.bin
sleep 100
pause

These commands generate APP1 with the correct ota_header, erase partition 1, program APP1 to partition 1, then generate APP2 with the correct ota_header, erase partition 2 and the scratch area, and program APP2 to partition 2. The .bat file should be run within 5 seconds after reset, while the ota_bootloader ISP is still active; the connection is made with:

blhost.exe -t 50000 -u 0x15a2,0x0073 -j -- get-property 1 0

The ota_bootloader itself can be downloaded directly to the 0x60000000 area with the ota_bootloader project. After downloading, reset the chip and wait 5 seconds; the APP will run. This is the test result:

Fig 13

From the test result:
- On the first boot, APP1 runs; image state: none.
- Input 1 to do the swap: APP2 runs; image state: undertest.
- Input 3 to select permanent and reset: APP2 still runs; image state: permanent.
- Input 4 to choose rollback and reset: APP1 runs; image state: none.
This completes the swap and rollback function. Input 6: the SDRAM LED blinky contained in the APP works.
i.MX RT1170 crossover MCUs are a new generation of products in the NXP RT family, running at 1 GHz with rich on-chip peripherals. Within the RT1170 sub-family, RT1173/RT1175/RT1176 are dual-core devices: one Cortex-M7 core runs at 1 GHz and one Cortex-M4 core runs at 400 MHz. The two cores can be debugged through one SWD port.

On the MIMXRT1170-EVK, the FreeLink debug interface uses CMSIS-DAP as the debug probe by default. When debugging a dual-core project, for example evkmimxrt1170_hello_world_cm7 and evkmimxrt1170_hello_world_cm4, you just click the debug button in the CM7 project; once the CM7 project enters the debug state, the CM4 project starts debugging automatically. But if a developer wants to use J-Link as the debug probe, the CM4 project will not start automatically, and starting CM4 debugging manually fails. Can J-Link debug both cores simultaneously? Yes, it can, but some additional settings are needed.

IDE and SDK: MCUXpresso IDE 11.3, MIMXRT1170-EVK SDK 2.9.1, a J-Link probe version 9 or above (or the FreeLink application firmware changed to J-Link), Segger J-Link software JLink_Windows_V698a.

Import the SDK example; here we select multicore_examples/evkmimxrt1170_hello_world_cm7. MCUXpresso IDE imports both the CM4 and CM7 projects automatically. Compile both projects. Debug the CM7 project first, then switch to the CM4 project and also click the debug button. The CM4 project will not debug properly, so exit debugging. With this step, the IDE has created two debug configurations in RUN->Debug Configurations.

Click evkmimxrt1170_hello_world_cm4 JLink Debug, click the JLink Debugger tab, and add evkmimxrt1170_connect_cm4_cm4side.jlinkscript. Then unselect the "Attach to a running target" checkbox.

Set a breakpoint at the start of the main() function of the CM4 project. This is because the IDE sometimes cannot suspend at the start of main() when debugging starts, so a second breakpoint helps. Take care to set the breakpoint on BOARD_ConfigMPU() or the code below it. Do not set a breakpoint on "gpio_pin_config_t led_config…", otherwise debugging will fail.

Now we can start to debug the CM7 project. When launching the CM4 session, go through RUN->Debug Configurations->evkmimxrt1170_hello_world_cm4 JLink Debug, because the IDE enables "Attach to a running target" automatically and it must be disabled again. When the CM7 debug session is ready, switch to the CM4 project and click the debug button, then resume the CM7 project. The CM4 project will start debugging and suspend at the breakpoint.

Notes: If you follow this guide but still cannot debug both cores, try erasing the whole chip and starting again. If the CM7 project fails in MCMGR_Init(), check the boot configuration pins; they should be set to Internal Boot mode.
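For context on the MCMGR_Init() note above, the CM7 side of the SDK multicore examples brings up the secondary core with the multicore manager roughly along the following lines. This is a simplified sketch based on the MCMGR API, not the literal example code; CORE1_BOOT_ADDRESS is assumed to be the address the CM4 image was linked or copied to by the project settings.

#include "mcmgr.h"

/* Simplified sketch of how the CM7 application starts the CM4 core. */
void APP_StartCore1(void)
{
    /* Initialize the multicore manager; this is the call mentioned in the note above. */
    (void)MCMGR_Init();

    /* Release the CM4 core from reset at its boot address (TCM, OCRAM, or SDRAM,
     * depending on how the CM4 project was linked). */
    (void)MCMGR_StartCore(kMCMGR_Core1, (void *)(char *)CORE1_BOOT_ADDRESS, 0,
                          kMCMGR_Start_Synchronous);
}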