i.MX RT Crossover MCUs Knowledge Base

There is an issue with the DCD file used in the SDK 2.9.0 release for the i.MX RT1170 processor. When the included DCD file is used in a project to configure the SDRAM memory on the EVK, the refresh for the memory is not enabled. This can lead to corruption/data loss over time.   To fix the problem, replace the dcd.c file in your project with the attached file instead.   We are working on a fix, and a new revision of the SDK will be released soon.
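If you want to verify at runtime that the DCD actually enabled SDRAM auto-refresh, a minimal sketch along these lines can help. This is my addition, not part of the official fix: it assumes the RT1170 SDK device headers and that the REN bit of SEMC->SDRAMCR3 is the refresh-enable bit described in the reference manual.

#include "fsl_device_registers.h"

/* Sketch: confirm the DCD left SDRAM auto-refresh enabled, and enable it if not.
 * Assumption: SEMC SDRAM refresh is controlled by the REN bit in SDRAMCR3,
 * as documented in the i.MX RT1170 reference manual. */
void VerifySdramRefresh(void)
{
    if ((SEMC->SDRAMCR3 & SEMC_SDRAMCR3_REN_MASK) == 0U)
    {
        SEMC->SDRAMCR3 |= SEMC_SDRAMCR3_REN_MASK; /* turn refresh back on */
    }
}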
[Chinese translation] See attachment. Original post: https://community.nxp.com/t5/i-MX-RT-Knowledge-Base/Design-an-IoT-edge-node-for-CV-application-base-on-the-i/ta-p/1127423
[Chinese translation] See attachment. Original post: https://community.nxp.com/t5/i-MX-Community-Articles/Effortless-GUI-Development-with-NXP-Microcontrollers/ba-p/1131179
[Chinese translation] See attachment. Original post: https://community.nxp.com/docs/DOC-345190
[Chinese translation] See attachment. Original post: https://community.nxp.com/t5/eIQ-Machine-Learning-Software/eIQ-on-i-MX-RT1064-EVK/ta-p/1123602
[Chinese translation] See attachment. Original post: https://community.nxp.com/t5/i-MX-RT-Knowledge-Base/RT1050-HAB-Encrypted-Image-Generation-and-Analysis/ta-p/1124877
In this tutorial, I'd like to show the steps of deploying an image classification model on i.MX RT1060, enabling you to classify fashion images and categories. In the first part of this tutorial, we will review the Fashion MNIST dataset, including how to download it to your system. From there we'll define a simple CNN network using the TensorFlow platform. Next, we'll train our CNN model on the Fashion MNIST dataset and review the results. Finally, we'll optimize the model; after that, the model will be smaller and inference faster, which is valuable for resource-limited devices such as MCUs. Let's go ahead and get started!

Fashion MNIST dataset

The Fashion MNIST dataset was created by the e-commerce company Zalando.

Fig 1 Fashion MNIST dataset

As they note on the official GitHub repo for the Fashion MNIST dataset, there are a few problems with the standard MNIST digit recognition dataset:
It's far too easy for standard machine learning algorithms to obtain 97%+ accuracy.
It's even easier for deep learning models to achieve 99%+ accuracy.
The dataset is overused.
MNIST cannot represent modern computer vision tasks.
Zalando therefore created the Fashion MNIST dataset as a drop-in replacement for MNIST:
60,000 training examples
10,000 testing examples
10 classes: T-shirt/top, Trouser, Pullover, Dress, Coat, Sandal, Shirt, Sneaker, Bag, Ankle boot
28×28 grayscale images

The code below loads the Fashion MNIST dataset using TensorFlow and creates a plot of the first 25 images in the training dataset.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# For easy reset of notebook state.
tf.keras.backend.clear_session()

# Load the dataset
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

plt.figure(figsize=(8,8))
for i in range(25):
    plt.subplot(5,5,i+1)
    plt.tight_layout()
    plt.imshow(train_images[i])
    plt.xlabel(class_names[train_labels[i]])
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
plt.show()

Fig 2

Running the code loads the Fashion MNIST train and test datasets and prints their shapes.

Fig 3

We can see that there are 60,000 examples in the training dataset and 10,000 in the test dataset, and that the images are indeed square, 28×28 pixels.

Creating the model

We need to define a neural network model for image classification, and the model should have two main parts: the feature extractor and the classifier that makes a prediction.

Defining a simple Convolutional Neural Network (CNN)

For the convolutional front end, we build three convolution layers with a small filter size (3,3) and a modest number of filters, followed by max-pooling layers. The last feature map is flattened to provide features to the classifier. As this is a multi-class classification task, we require an output layer with 10 nodes in order to predict the probability distribution of an image belonging to each of the 10 classes; this calls for a softmax activation function. Between the feature extractor and the output layer, we add a dense layer to interpret the features. All layers use the ReLU activation function and the He weight initialization scheme, both best practices.
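One step worth making explicit before training: the Conv2D front end expects inputs of shape (28, 28, 1), and pixel values are normally scaled to [0, 1] first. A minimal preprocessing sketch (my addition, not from the original listing; variable names follow the code above):

# Scale pixels to [0, 1] and add the channel dimension the CNN expects
train_images = (train_images / 255.0).reshape(-1, 28, 28, 1).astype('float32')
test_images = (test_images / 255.0).reshape(-1, 28, 28, 1).astype('float32')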
We will use the Adam optimizer to minimize the sparse_categorical_crossentropy loss function, which suits multi-class classification with integer labels, and we will monitor the classification accuracy metric, which is appropriate given that we have the same number of examples in each of the 10 classes. The code below defines the model, and running it prints the structure of the model.

# Define the model
model = tf.keras.models.Sequential()
# First convolution, kernel: 16*3*3
model.add(tf.keras.layers.Conv2D(16, (3, 3), activation='relu',
                                 kernel_initializer='he_uniform', input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
# Second convolution, kernel: 32*3*3
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(tf.keras.layers.MaxPooling2D((2, 2)))
# Third convolution, kernel: 32*3*3
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(32, activation='relu', kernel_initializer='he_uniform'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
# Compile with the optimizer, loss, and metric described above
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

Fig 4

Training the model

After the model is defined, we need to train it. The model will be trained using 5-fold cross-validation. The value of k=5 was chosen to provide a baseline for repeated evaluation while not being so large as to require a long running time. Each validation set will be 20% of the training dataset, or about 12,000 examples. The training dataset is shuffled prior to being split, and the same shuffling (with a fixed random seed) is performed each time, so that any model we train has the same train and validation datasets in each fold, providing an apples-to-apples comparison.

We will train the baseline model for a modest 20 training epochs with a default batch size of 32 examples. The validation set for each fold is used to validate the model during each epoch of the training run, so we can later create learning curves, and at the end of the run we use the test dataset to estimate the performance of the model. We keep track of the resulting history from each run, as well as the classification accuracy of each fold. The train_model() function below implements these behaviors, taking the training dataset as arguments and returning a list of accuracy scores and training histories that can later be summarized.

from sklearn.model_selection import KFold

# Train a model using k-fold cross-validation
def train_model(dataX, dataY, n_folds=5):
    scores, histories = list(), list()
    # prepare cross-validation
    kfold = KFold(n_folds, shuffle=True, random_state=1)
    for train_ix, validate_ix in kfold.split(dataX):
        # select rows for train and validation
        trainX, trainY = dataX[train_ix], dataY[train_ix]
        validate_X, validate_Y = dataX[validate_ix], dataY[validate_ix]
        # fit model
        history = model.fit(trainX, trainY, epochs=20, batch_size=32,
                            validation_data=(validate_X, validate_Y), verbose=0)
        # evaluate model on the hold-out fold
        _, acc = model.evaluate(validate_X, validate_Y, verbose=0)
        print("Accuracy: {:.4f}, Total number of figures is {:0>2d}".format(acc * 100.0, len(validate_Y)))
        # append scores
        scores.append(acc)
        histories.append(history)
    return scores, histories

Model summary

After the model has been trained, we can present the results. There are two key aspects to present: the diagnostics of the learning behavior of the model during training, and the estimation of the model performance. These can be implemented using separate functions.
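For reference, the cross-validation above might be kicked off like this (a sketch; the original post does not show the call):

scores, histories = train_model(train_images, train_labels)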
First, the diagnostics involve creating a line plot showing model performance on the train and validation sets during each fold of the k-fold cross-validation. These plots are valuable for getting an idea of whether the model is overfitting, underfitting, or has a good fit for the dataset. We will create a single figure with two subplots, one for loss and one for accuracy. Blue lines indicate model performance on the training dataset and orange lines indicate performance on the hold-out validation dataset. The summarize_diagnostics() function below creates and shows this plot given the collected training histories.

# Plot diagnostic learning curves
def summarize_diagnostics(histories):
    for i in range(len(histories)):
        # plot loss
        plt.subplot(2,1,1)
        plt.title('Cross Entropy Loss')
        plt.plot(histories[i].history['loss'], color='blue', label='train')
        plt.plot(histories[i].history['val_loss'], color='orange', label='test')
        # plot accuracy
        plt.subplot(2,1,2)
        plt.title('Classification Accuracy')
        plt.plot(histories[i].history['accuracy'], color='blue', label='train')
        plt.plot(histories[i].history['val_accuracy'], color='orange', label='test')
    plt.show()

Fig 5

Next, the classification accuracy scores collected during each fold can be summarized by calculating the mean and standard deviation. This provides an estimate of the average expected performance of the model, along with an estimate of the variance of that mean. We will also summarize the distribution of scores by creating and showing a box-and-whisker plot. The summarize_performance() function below implements this for a given list of scores collected during model training.

# Summarize model performance
def summarize_performance(scores):
    # print summary
    print('Accuracy: mean={:.4f} std={:.4f}, n={:0>2d}'.format(np.mean(scores)*100, np.std(scores)*100, len(scores)))
    # box-and-whisker plot of results
    plt.boxplot(scores)
    plt.show()

Fig 6

Verifying predictions

According to the figure above, we see that the final trained model reaches around 87.6% accuracy when predicting the test dataset. With the trained model, running the code below demonstrates the predictions for some images.

def plot_image(i, predictions_array, true_label, img):
    true_label, img = true_label[i], img[i]
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])
    plt.imshow(img.reshape(28, 28), cmap=plt.cm.binary)
    predicted_label = np.argmax(predictions_array)
    if predicted_label == true_label:
        color = 'blue'
    else:
        color = 'red'
    plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
                                         100*np.max(predictions_array),
                                         class_names[true_label]), color=color)

def plot_value_array(i, predictions_array, true_label):
    true_label = true_label[i]
    plt.grid(False)
    plt.xticks(range(10))
    plt.yticks([])
    thisplot = plt.bar(range(10), predictions_array, color="#777777")
    plt.ylim([0, 1])
    predicted_label = np.argmax(predictions_array)
    thisplot[predicted_label].set_color('red')
    thisplot[true_label].set_color('blue')

predictions = model.predict(test_images)

# Plot the first X test images, their predicted labels, and the true labels.
# Color correct predictions in blue and incorrect predictions in red.
num_rows = 5
num_cols = 3
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2*num_cols, 2*i+1)
    plot_image(i, predictions[i], test_labels, test_images)
    plt.subplot(num_rows, 2*num_cols, 2*i+2)
    plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()

Fig 7

Model quantization

Post-training quantization is a conversion technique that can reduce model size while also improving CPU and hardware accelerator latency, with little degradation in model accuracy. It is especially crucial for embedded platforms, which lack compute-intensive performance and have very limited flash and RAM. TensorFlow Lite can convert an already-trained float TensorFlow model to the TensorFlow Lite format. In addition, TensorFlow Lite provides several approaches to optimize the model. Among these, integer quantization is an optimization strategy that converts 32-bit floating-point numbers (such as weights and activation outputs) to the nearest 8-bit fixed-point numbers. This results in a smaller model and increased inferencing speed, which is very valuable for low-power devices such as microcontrollers. The code below shows how to implement integer quantization of the trained model; after running it, we find that the TensorFlow Lite model is almost 64.9 KB smaller than the original model, at about 32% of the original size (Fig 8).

import os

# Representative dataset used for integer-only quantization calibration
def representative_data_gen():
    for input_value in tf.data.Dataset.from_tensor_slices(tf.cast(train_images, tf.float32)).shuffle(500).batch(1).take(150):
        yield [input_value]

# Convert using dynamic range quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model_quant = converter.convert()
# Save the model to disk
open("model_dynamic_range_quantization.tflite", "wb").write(tflite_model_quant)

## Size difference
Dynamic_range_quantization_model_size = os.path.getsize("model_dynamic_range_quantization.tflite")
print("Dynamic range quantization model is %d bytes" % Dynamic_range_quantization_model_size)

# Convert using integer-only quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Ensure that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# Set the input and output tensors to uint8 (APIs added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_advanced_quant = converter.convert()
# Save the model to disk
open("model_integer_only_quantization.tflite", "wb").write(tflite_model_advanced_quant)

Integer_only_quantization_model_size = os.path.getsize("model_integer_only_quantization.tflite")
print("Integer_only_quantization_model is %d bytes" % Integer_only_quantization_model_size)
difference = Dynamic_range_quantization_model_size - Integer_only_quantization_model_size
print("Difference is %d bytes" % difference)

Fig 8

Evaluating the TensorFlow Lite model

Now we'll run inferences using the TensorFlow Lite Interpreter to compare the model accuracies.
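Note that the comparison below also refers to model_basic_quantization.tflite, a float TensorFlow Lite model whose conversion is not shown above. A minimal sketch of that baseline conversion (no optimizations applied) would be:

# Plain float TFLite conversion, used as the accuracy baseline
converter = tf.lite.TFLiteConverter.from_keras_model(model)
open("model_basic_quantization.tflite", "wb").write(converter.convert())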
First, we need a function that runs inference with a given model and images, and then returns the predictions:

# Helper function to run inference on a TFLite model
def run_tflite_model(tflite_file, test_image_indices):
    # Initialize the interpreter
    interpreter = tf.lite.Interpreter(model_path=str(tflite_file))
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()[0]
    output_details = interpreter.get_output_details()[0]
    predictions = np.zeros((len(test_image_indices),), dtype=int)
    for i, test_image_index in enumerate(test_image_indices):
        test_image = test_images[test_image_index]
        test_label = test_labels[test_image_index]
        # If the input type is quantized, rescale input data to uint8
        if input_details['dtype'] == np.uint8:
            input_scale, input_zero_point = input_details["quantization"]
            test_image = test_image / input_scale + input_zero_point
        test_image = np.expand_dims(test_image, axis=0).astype(input_details["dtype"])
        interpreter.set_tensor(input_details["index"], test_image)
        interpreter.invoke()
        output = interpreter.get_tensor(output_details["index"])[0]
        predictions[i] = output.argmax()
    return predictions

Next, we'll compare the performance of the original model and the quantized model on one image:
model_basic_quantization.tflite is the original TensorFlow Lite model with floating-point data.
model_integer_only_quantization.tflite is the last model we converted using integer-only quantization (it uses uint8 data for input and output).
Let's create another function to print our predictions and run it for testing.

import matplotlib.pylab as plt

# Change this to test a different image
test_image_index = 1

## Helper function to test the models on one image
def test_model(tflite_file, test_image_index, model_type):
    global test_labels
    predictions = run_tflite_model(tflite_file, [test_image_index])
    plt.imshow(test_images[test_image_index].reshape(28,28))
    template = model_type + " Model \n True:{true}, Predicted:{predict}"
    _ = plt.title(template.format(true=str(test_labels[test_image_index]), predict=str(predictions[0])))
    plt.grid(False)

Fig 9
Fig 10

Then we evaluate the quantized model using all the test images we loaded at the beginning of this tutorial. Summarizing the prediction results on the test dataset shows that the quantized model's accuracy is only about 7% lower than the original model's, which is not bad.

# Helper function to evaluate a TFLite model on all images
def evaluate_model(tflite_file, model_type):
    test_image_indices = range(test_images.shape[0])
    predictions = run_tflite_model(tflite_file, test_image_indices)
    accuracy = (np.sum(test_labels == predictions) * 100) / len(test_images)
    print('%s model accuracy is %.4f%% (Number of test samples=%d)' % (model_type, accuracy, len(test_images)))

Deploying the model

Converting the TensorFlow Lite model to a C file

The following command runs xxd on the quantized model and writes the output to a file called model_quantized.cc; in that file, the model is defined as an array of bytes. The output is very long, so we won't reproduce it all here, but here's a snippet that includes just the beginning and end.

# Save the file as a C source file
xxd -i model_integer_only_quantization.tflite > model_quantized.cc
# Print the source file
cat model_quantized.cc

Fig 11

Deploying the C file to the project

We use the tensorflow_lite_cifar10 demo as a prototype, replace the original model, and make some code modifications. Below is the code in the modified main file.
#include "board.h" #include "fsl_debug_console.h" #include "pin_mux.h" #include "timer.h" #include <iomanip> #include <iostream> #include <string> #include <vector> #include "tensorflow/lite/kernels/register.h" #include "tensorflow/lite/model.h" #include "tensorflow/lite/optional_debug_tools.h" #include "tensorflow/lite/string_util.h" #include "get_top_n.h" #include "model.h" #define LOG(x) std::cout // ---------------------------- Application ----------------------------- // Lenet Mnist model input data size (bytes). #define LENET_MNIST_INPUT_SIZE 28*28*sizeof(char) // Lenet Mnist model number of output classes. #define LENET_MNIST_OUTPUT_CLASS 10 // Allocate buffer for input data. This buffer contains the input image // pre-processed and serialized as text to include here. uint8_t imageData[LENET_MNIST_INPUT_SIZE] = { #include "clothes_select.inc" }; /* Tresholds */ #define DETECTION_TRESHOLD 60 /*! * @brief Initialize parameters for inference * * @param reference to flat buffer * @param reference to interpreter * @param pointer to storing input tensor address * @param verbose mode flag. Set true for verbose mode */ void InferenceInit(std::unique_ptr<tflite::FlatBufferModel> &model, std::unique_ptr<tflite::Interpreter> &interpreter, TfLiteTensor** input_tensor, bool isVerbose) { model = tflite::FlatBufferModel::BuildFromBuffer(Fashion_MNIST_model, Fashion_MNIST_model_len); if (!model) { LOG(FATAL) << "Failed to load model\r\n"; return; } tflite::ops::builtin::BuiltinOpResolver resolver; tflite::InterpreterBuilder(*model, resolver)(&interpreter); if (!interpreter) { LOG(FATAL) << "Failed to construct interpreter\r\n"; return; } int input = interpreter->inputs()[0]; const std::vector<int> inputs = interpreter->inputs(); const std::vector<int> outputs = interpreter->outputs(); if (interpreter->AllocateTensors() != kTfLiteOk) { LOG(FATAL) << "Failed to allocate tensors!"; return; } /* Get input dimension from the input tensor metadata assuming one input only */ *input_tensor = interpreter->tensor(input); auto data_type = (*input_tensor)->type; if (isVerbose) { const std::vector<int> inputs = interpreter->inputs(); const std::vector<int> outputs = interpreter->outputs(); LOG(INFO) << "input: " << inputs[0] << "\r\n"; LOG(INFO) << "number of inputs: " << inputs.size() << "\r\n"; LOG(INFO) << "number of outputs: " << outputs.size() << "\r\n"; LOG(INFO) << "tensors size: " << interpreter->tensors_size() << "\r\n"; LOG(INFO) << "nodes size: " << interpreter->nodes_size() << "\r\n"; LOG(INFO) << "inputs: " << interpreter->inputs().size() << "\r\n"; LOG(INFO) << "input(0) name: " << interpreter->GetInputName(0) << "\r\n"; int t_size = interpreter->tensors_size(); for (int i = 0; i < t_size; i++) { if (interpreter->tensor(i)->name) { LOG(INFO) << i << ": " << interpreter->tensor(i)->name << ", " << interpreter->tensor(i)->bytes << ", " << interpreter->tensor(i)->type << ", " << interpreter->tensor(i)->params.scale << ", " << interpreter->tensor(i)->params.zero_point << "\r\n"; } } LOG(INFO) << "\r\n"; } } /*! 
 * @brief Runs inference on the input buffer and prints the result to the console
 *
 * @param pointer to image data
 * @param image data length
 * @param pointer to labels string array
 * @param reference to flat buffer model
 * @param reference to interpreter
 * @param pointer to input tensor
 */
void RunInference(const uint8_t* image, size_t image_len, const std::string* labels, std::unique_ptr<tflite::FlatBufferModel> &model, std::unique_ptr<tflite::Interpreter> &interpreter, TfLiteTensor* input_tensor)
{
    /* Copy image to tensor. */
    memcpy(input_tensor->data.uint8, image, image_len);

    /* Do inference on static image in first loop. */
    auto start = GetTimeInUS();
    if (interpreter->Invoke() != kTfLiteOk)
    {
        LOG(FATAL) << "Failed to invoke tflite!\r\n";
        return;
    }
    auto end = GetTimeInUS();

    const float threshold = (float)DETECTION_TRESHOLD / 100;
    std::vector<std::pair<float, int>> top_results;

    int output = interpreter->outputs()[0];
    TfLiteTensor *output_tensor = interpreter->tensor(output);
    TfLiteIntArray* output_dims = output_tensor->dims;
    // assume output dims to be something like (1, 1, ... , size)
    auto output_size = output_dims->data[output_dims->size - 1];

    /* Find best image candidates. */
    GetTopN<uint8_t>(interpreter->typed_output_tensor<uint8_t>(0), output_size, 1, threshold, &top_results, false);

    if (!top_results.empty())
    {
        auto result = top_results.front();
        const float confidence = result.first;
        const int index = result.second;
        if (confidence * 100 > DETECTION_TRESHOLD)
        {
            LOG(INFO) << "----------------------------------------\r\n";
            LOG(INFO) << "     Inference time: " << (end - start) / 1000 << " ms\r\n";
            LOG(INFO) << "     Detected: " << std::setw(10) << labels[index] << " (" << (int)(confidence * 100) << "%)\r\n";
            LOG(INFO) << "----------------------------------------\r\n\r\n";
        }
    }
}

/*!
 * @brief Main function
 */
int main(void)
{
    const std::string labels[] = {"T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
                                  "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"};

    /* Init board hardware. */
    BOARD_ConfigMPU();
    BOARD_InitPins();
    BOARD_BootClockRUN();
    BOARD_InitDebugConsole();
    InitTimer();

    std::unique_ptr<tflite::FlatBufferModel> model;
    std::unique_ptr<tflite::Interpreter> interpreter;
    TfLiteTensor* input_tensor = 0;
    InferenceInit(model, interpreter, &input_tensor, false);

    LOG(INFO) << "Fashion MNIST object recognition example using a TensorFlow Lite model.\r\n";
    LOG(INFO) << "Detection threshold: " << DETECTION_TRESHOLD << "%\r\n";

    /* Run inference on static image. */
    LOG(INFO) << "\r\nStatic data processing:\r\n";
    RunInference((uint8_t*)imageData, (size_t)LENET_MNIST_INPUT_SIZE, labels, model, interpreter, input_tensor);

    while(1)
    {
    }
}

Testing result

After deploying the model in the demo project, we'll run this demo on the MIMXRT1060 board (Fig 12) for testing.

Fig 12

1. Run the code below to convert a Fashion MNIST image to text. The process_image() function converts a Fashion MNIST image to an include file of static data; this file is then included in the demo project.
def process_image(image, output_path, num_batch=1):
    img_data = np.transpose(image, (2, 0, 1))
    # Repeat image for batch processing (resulting tensor is NCHW or NHWC)
    img_data = np.reshape(img_data, (num_batch, img_data.shape[0], img_data.shape[1], img_data.shape[2]))
    img_data = np.repeat(img_data, num_batch, axis=0)
    img_data = np.reshape(img_data, (num_batch, img_data.shape[1], img_data.shape[2], img_data.shape[3]))
    # Serialize image batch
    img_data_bytes = bytearray(img_data.tobytes(order='C'))
    image_bytes_per_line = 20
    with open(output_path, 'wt') as f:
        idx = 0
        for byte in img_data_bytes:
            f.write('0X%02X, ' % byte)
            if idx % image_bytes_per_line == (image_bytes_per_line - 1):
                f.write('\n')
            idx = idx + 1
    # Return serialized image size
    return len(img_data_bytes)

2. Run the demo project on board.
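As an illustration of step 1, a hypothetical invocation that serializes one test image into the include file consumed by the demo might look like this (clothes_select.inc matches the #include in the main file above; the reshape and rescale supply the HWC uint8 layout process_image() expects):

# Write one 28x28 test image into the include file used by the demo (sketch)
img = (test_images[0] * 255).reshape(28, 28, 1).astype(np.uint8)
process_image(img, "clothes_select.inc")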
Setting up the RT600 development environment

RT600 getting-started training video: https://www.nxp.com/document/guide/getting-started-with-i-mx-rt600-evaluation-kit:GS-MIMXRT685-EVK?&tid=vanGS-MIMXRT685-EVK#title2.1

Download the i.MX RT600 SDK: https://mcuxpresso.nxp.com/en/select?device=EVK-MIMXRT685

Download MCUXpresso IDE. Note that MCUXpresso IDE 11.1.1 or later is required: https://www.nxp.com/webapp/swlicensing/sso/downloadSoftware.sp?catid=MCUXPRESSO

Download and install LPCScrypt, which can be used to reflash the default on-board CMSIS-DAP firmware to J-LINK. With J-LINK, you can download and debug HiFi4 DSP firmware: https://www.nxp.com/design/microcontrollers-developer-resources/lpc-microcontroller-utilities/lpcscrypt-v2-1-1:LPCSCRYPT?&tab=Design_Tools_Tab

Download and install the J-LINK driver: https://www.segger.com/downloads/jlink/

Download and install Cadence HiFi 4 DSP IDE for MIMXRT600. For the first download, register a user account at https://tensilicatools.com/register/. For users in mainland China: if the human verification widget does not appear on the registration page, the IP is being blocked by the firewall; a proxy or other workaround is needed, otherwise the registration cannot be submitted.

Download the HiFi DSP Development Tools for i.MX RT600: https://tensilicatools.com/download/rt600-download-page/

Apply for the License for the RT600 SDK. Note that when entering the MAC address of the bound network adapter, remove the ':' separators, otherwise the request fails. After a successful application, the license file can be downloaded.

After starting Xplorer 8.0.13, install the license file via the menu Help -> Xplorer License Keys. Successful installation is shown in the tool.

Xplorer debugger configuration: add the directory containing xt-ocd.exe to the system Path environment variable; enable "Use XOCD Manager" and specify the Topology File; set "Download binary" to Always, so the confirmation dialog before each download is skipped and download time is saved.

The HiFi4 DSP firmware can then be downloaded through J-Link, and code can be single-step debugged.
The i.MX RT600 crossover MCU combines an ultra-low power MCU with a high performance DSP to enable the next generation of ML/AI, voice and audio applications. Get started today and order your MIMXRT685-EVK.
Get 500 MHz for just $1 with NXP's new i.MX RT1010 crossover MCU. Targeted at a variety of applications, this video highlights two very popular example use cases for i.MX RT1010: audio and motor control.
RT1050 SDRAM app code booting from SD card, burned with 3 tools

Abstract

This document is about running an RT series app in external SDRAM while booting from an SD card. It covers generating the SDRAM app code with an RT1050 SDK MCUXpresso IDE project and burning the code to the external SD card with the flashloader MFG tool and with MCUXpresso Secure Provisioning. The MCUBootUtility method can be found in this post: https://community.nxp.com/docs/DOC-346194

Software and hardware platform:
SDK 2.7.0_EVKB-IMXRT1050
MCUXpresso IDE
Flashloader_i.MXRT1050_GA
MCUBootUtility
MCUXpresso Secure Provisioning
MIMXRT1050-EVKB

2 RT1050 SDRAM app image generation

Port the SDK_2.7.0_EVKB-IMXRT1050 iled_blinky project to MCUXpresso IDE. To generate code located in SDRAM, modify the configuration as follows:

2.1 Copy code to RAM
2.2 Modify the memory location to SDRAM address 0x80002000. The code that boots from SD card and runs in SDRAM is non-XIP code, so the IVT offset is 0x400; in our test we place the image at SDRAM address 0x80002000. The configuration is:
2.3 Modify the symbol
2.4 Generate the .s19 file

After the build completes with no problems, generate the app .s19 file. Rename the image file to evkbimxrt1050_iled_blinky_sdram_0x2000.s19 and copy it to the flashloader folder: Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win

3 Flashloader configuration and download

This chapter uses the flashloader to build the image and download the SDRAM app code to the external SD card with MFGTool. We need to prepare the following files:
SDRAM interface configuration file CFG_DCD.bin
imx-sdram-unsigned-dcd.bd
program_sdcard_image.bd

3.1 SDRAM DCD file preparation

The MIMXRT1050-EVKB on-board SDRAM is the IS42S16160J. We can use the attached dcd_model\ISSI_IS42S16160J\dcd.cfg and the dcdgen.exe tool to generate CFG_DCD.bin; the command is:

dcdgen -inputfile=dcd.cfg -bout -cout

Copy the CFG_DCD.bin file to the flashloader path: Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win

3.2 imx-sdram-unsigned-dcd.bd file

Prepare the imx-sdram-unsigned-dcd.bd file content as:

options {
    flags = 0x00;
    startAddress = 0x80000000;
    ivtOffset = 0x400;
    initialLoadSize = 0x2000;
    DCDFilePath = "CFG_DCD.bin";
    # Note: This is required if the default entrypoint is not the Reset_Handler
    #       Please set the entryPointAddress to Reset_Handler address
    entryPointAddress = 0x800022f1;
}

sources {
    elfFile = extern(0);
}

section (0) {
}

The entryPointAddress above comes from the .s19 reset handler (the data at address 0x80002000+4). Copy the imx-sdram-unsigned-dcd.bd file to the flashloader path: Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win

Open cmd and run the following command:

elftosb.exe -f imx -V -c imx-sdram-unsigned-dcd.bd -o ivt_evkbimxrt1050_iled_blinky_sdram_0x2000.bin evkbimxrt1050_iled_blinky_sdram_0x2000.s19

After running the command, two app IVT files are generated.

3.3 program_sdcard_image.bd file

Prepare the program_sdcard_image.bd file content as:

# The source block assigns a file name to identifiers
sources {
    myBootImageFile = extern (0);
}

# The section block specifies the sequence of boot commands to be written to the SB file
section (0) {
    #1. Prepare SDCard option block
    load 0xd0000000 > 0x100;
    load 0x00000000 > 0x104;

    #2. Configure SDCard
    enable sdcard 0x100;

    #3. Erase blocks as needed.
    erase sdcard 0x400..0x14000;

    #4. Program SDCard Image
    load sdcard myBootImageFile > 0x400;

    #5. Program Efuse for optimal read performance (optional)
    # Note: It is just a template, please program the actual fuse required in the application
    # and remove the # to enable the command
    #load fuse 0x00000000 > 0x07;
}

Copy program_sdcard_image.bd to the flashloader path: Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\elftosb\win

Open cmd and run the following command:

elftosb.exe -f kinetis -V -c program_sdcard_image.bd -o boot_image.sb ivt_evkbimxrt1050_iled_blinky_sdram_0x2000_nopadding.bin

Copy the generated boot_image.sb file to the following flashloader path: \Flashloader_i.MXRT1050_GA\Flashloader_RT1050_1.1\Tools\mfgtools-rel\Profiles\MXRT105X\OS Firmware

3.4 MFGTool burns the code to the SD card

Prepare one SD card and insert it into J20. Let the board enter serial download mode: SW7 1-ON 2-OFF 3-OFF 4-ON. Connect two USB cables, one to J28 and the other to J9; we use the HID interface to download the image. Open MFGTool.exe and click the start button.

Then modify the boot mode to internal boot, booting from the external SD card: SW7 1-ON 2-OFF 3-ON 4-OFF. Power the board off and on again, and you will find the on-board LED D18 blinking, which means the external SDRAM app code boots from the external SD card successfully.

4 MCUBootUtility configuration and code download

Please check this community document: https://community.nxp.com/docs/DOC-346194

Here is one image readout memory map, useful for understanding the image location information: after the download, we can read out the SD card image. From 0x400 is the IVT, BD, and DCD data; from 0x1000 is the image, which is the same as the app .s19 file.

5 MCUXpresso Secure Provisioning configuration and download

This software is released on the NXP official website. It is also a GUI tool that can download both normal and secure code, and it is easier to use than the flashloader: the customer doesn't need to type commands, as the tool does it for them. Its function is similar to MCUBootUtility; MCUBootUtility is an open-source tool shared on GitHub, but it is not released on the NXP official website.

Now we use the newly released official tool to download the SDRAM app code to the external SD card. The board still needs to enter serial download mode, just as for the flashloader and MCUBootUtility. We can see this tool is also very easy to use: the customer only needs to provide the app .s19 file and the dcd.bin, and then set the related boot device configuration.

After the code is downloaded successfully, modify the boot mode to internal boot, booting from the external SD card: SW7 1-ON 2-OFF 3-ON 4-OFF. Power the board off and on again, and you will find the on-board LED D18 blinking, which means the external SDRAM app code boots from the external SD card successfully.

So far, all three methods of downloading the SDRAM app code to the SD card work: the flashloader is the command-line tool, while MCUBootUtility and MCUXpresso Secure Provisioning are GUI tools, which are easier to use.
Introduction

A common need for GUI applications is to implement a clock function. Whether it be to create a clock interface for the end user's benefit, or just to time animations or other actions, implementing an accurate clock is a useful and important feature for GUI applications. The aim of this document is to help you implement clock functions in your AppWizard project.

Methods

When implementing a real-time clock, there are a couple of general methods to do so:
Use an independent timer in your MCU
Use animation objects

Each of these methods has its advantages and disadvantages. If you just need a timer that doesn't require extra code and you don't require control or assurance of precision, or maybe you can't spare another timer, using an animation object (method #2) may be a good option in that application. If your application requires an assurance of precision or requires other real-time actions to be performed that AppWizard can't control, it is best to implement an independent timer in your MCU (method #1).

Method 1: Independent MCU Timer

Implementing a timer via an independent MCU timer allows better control and guarantees the precision, because it isn't a shared clock and the developer can adjust the interrupt priorities such that the timer interrupt has the highest priority. AppWizard timing uses a common timer and then time-slices activities, similar to how an operating system works. It is for this reason that implementing an independent MCU timer is best when you need control over the precision of the timer or you need other real-time actions to be triggered by this timer. When implementing a timer using an independent MCU timer (like the RTC module), an understanding of how to interact with Text widgets is needed. Let's look at this first.

Interacting with Text Widgets

Editing Text widgets occurs through the emWin library API (the emWin library is the underlying code that AppWizard builds upon). The Text widget API functions are documented in the emWin Graphic Library User Guide and Reference Manual, UM03001. Most of the Text widget API functions require a Text widget handle. Be sure not to confuse this handle with the AppWizard ID. Imagine a clock example where there are two Text widgets in the interface: one for the minutes and one for the seconds. The AppWizard IDs of these objects might be ID_TEXT_MINS and ID_TEXT_SECONDS respectively (again, these are not to be confused with the handles to the Text widgets used by emWin library functions). The first action software should take is to obtain the handles for the Text widgets. This can be done using the WM_GetDialogItem function. The code to get the active window handle and the handles for the two Text widgets is shown below:

activeWin = WM_GetActiveWindow();
textBoxMins = WM_GetDialogItem(activeWin, ID_TEXT_MINS);
textBoxSecs = WM_GetDialogItem(activeWin, ID_TEXT_SECONDS);

Note that this function requires the handle to the parent window of the Text widget. If your application has multiple windows or screens, you may need to be creative in how you acquire this handle, but for this example the software can simply call the WM_GetActiveWindow function (since there is only one screen). When to call these functions can be a bit tricky as well. They can be called before the MainTask() function of the application is called and the application will not crash.
However, the handles won't be correct and the Text widgets will not be updated as expected. It's recommended that these handles be initialized when the screen is initialized. An example of how this would be done is shown below:

void cbID_SCREEN_CLOCK(WM_MESSAGE * pMsg)
{
    extern WM_HWIN activeWin;
    extern WM_HWIN textBoxMins;
    extern WM_HWIN textBoxSecs;
    extern WM_HWIN textBoxDbg;

    if (pMsg->MsgId == WM_INIT_DIALOG)
    {
        activeWin = WM_GetActiveWindow();
        textBoxMins = WM_GetDialogItem(activeWin, ID_TEXT_MINS);
        textBoxSecs = WM_GetDialogItem(activeWin, ID_TEXT_SECONDS);
        textBoxDbg = WM_GetDialogItem(activeWin, ID_TEXT_DBG);
    }
    GUI_USE_PARA(pMsg);
}

Once the Text widget handles have been acquired, the text can be updated using the TEXT_SetText() function, or the TEXT_SetDec() function in this case, because the Text widgets are configured for decimal mode, since we want to display numbers. An example of the code to do this is shown below.

/* TEXT_SetDec(Text Widget Handle, Value as Int, Length, Shift, Sign, Leading Spaces) */
if (TEXT_SetDec(textBoxSecs, (int)gSecs, 2, 0, 0, 0))
{
    /* Perform action here if necessary */
}
if (TEXT_SetDec(textBoxMins, (int)gMins, 2, 0, 0, 0))
{
    /* Perform action here if necessary */
}

Method 2: Animation Objects

When implementing a real-time clock using animation objects, it is necessary to implement a loop. This could be done outside of the AppWizard GUI (in your code), but because the timing precision can't be guaranteed, it's just as easy to implement a loop in the AppWizard GUI if you know how (it isn't very intuitive as to how to do this). Before examining the interactions to do this, let's look at the variables and objects needed:

ID_VAR_SECS - This variable holds the current seconds value.
ID_VAR_SECS_1 - This variable holds the next second value.
ID_TEXT_SECONDS - Text box that displays the current seconds value.
ID_END_CNT - Variable that holds the value at which the seconds roll over and increment the minute count.
ID_TEXT_MINS - Text box that holds the current minute count.
ID_MIN_END_CNT - Variable that holds the value at which the minutes roll over (which would also increment the hour count if hours were implemented).
ID_BUTTON_SECS - A hidden button that initiates actions when the seconds variable has reached the end count.

Now, here are the interactions used to implement the clock feature using animation interactions. The heart of the loop is the set of interactions triggered by ID_VAR_SECS:

ID_VAR_SECS -> ID_VAR_SECS_1: When ID_VAR_SECS changes, it needs to add one to ID_VAR_SECS_1 so that the animation will animate to one second from the current time.
ID_VAR_SECS -> ID_TEXT_SECONDS: When ID_VAR_SECS changes, it also needs to start the animation from the current value to the next second (ID_VAR_SECS_1).
A very essential part of the loop is ensuring the animation restarts every time. So ID_TEXT_SECONDS needs to change the value of ID_VAR_SECS when the animation ends. ID_VAR_SECS is changed to the current time value, ID_VAR_SECS_1.
When the ID_TEXT_SECONDS animation ends, it must also decrement the ID_VAR_END_CNT variable. This is analogous to the control variable of a "for" loop being updated. This is done using the ADDVALUE job, adding '-1' to the variable ID_VAR_END_CNT.
When ID_VAR_END_CNT changes, it updates the hidden button, ID_BUTTON_SECS, with the new value.
This is analogous to a "for" loop checking whether its control variable is still within its limits. The interactions in group 5 restart the loop when the seconds reach the count that we desire. When the loop is restarted, the following actions must be taken:

Set ID_VAR_SECS and ID_VAR_SECS_1 to the initial value for the next loop ('0' in this case). Note that ID_VAR_SECS_1 MUST be set before ID_VAR_SECS. Additionally, if the loop is to continue, ID_VAR_SECS and ID_VAR_SECS_1 must be set to the same value.
ID_TEXT_SECONDS is set to the initial value. If this isn't done, the text box will try to animate from the final value to the initial value and will look "weird".
ID_VAR_END_CNT is reset to its initial value (60 in this case).

ID_BUTTON_SECS is also responsible for updating the minutes values. In this case, it increments the ID_TEXT_MINS value (counting up in minutes) and decrements ID_VAR_MIN_END_CNT.

Adjusting the time of an animation object

The animation object (as well as other emWin objects) uses the GUI_X_DELAY function for timing. It is up to the host software to implement this function. In the i.MX RT examples, the General Purpose Timer (GPT) is used for this timer, so how the GPT is configured will affect the timing of the application and how fast or slow the animations run. The GPT is configured in the function BOARD_InitGPT(), which resides in the main source file. The recommended way to adjust the speed of the timer is by changing the divider value of the GPT.

Conclusion

We have seen two different methods of implementing a real-time clock in an AppWizard GUI application:
Use an independent timer in your MCU
Use animation objects

Using an independent timer in your MCU may be preferred, as it allows better control over the timing, can allow real-time actions to be performed that AppWizard can't control, and provides some assurance of precision. Using animation objects may be preferred if you just need a quick timer implementation that doesn't require you to manually add code to your project or use a second timer.
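For reference, here is a minimal sketch of the timing hook mentioned above (GUI_X_Delay in emWin's port layer), assuming a GPT interrupt that increments a millisecond tick counter; the ISR name, compare flag, and g_msTicks variable are illustrative, not taken from the SDK example:

#include "fsl_gpt.h"

/* Hypothetical millisecond tick maintained by a GPT ISR. */
volatile uint32_t g_msTicks;

void GPT1_IRQHandler(void)
{
    /* Assumption: GPT1 configured for a 1 ms output-compare period. */
    GPT_ClearStatusFlags(GPT1, kGPT_OutputCompare1Flag);
    g_msTicks++;
}

/* emWin timing hook: block for the requested number of milliseconds. */
void GUI_X_Delay(int Period)
{
    uint32_t start = g_msTicks;
    while ((g_msTicks - start) < (uint32_t)Period)
    {
    }
}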
This document describes how to use I2S (Inter-IC Sound Bus) and DMA to record and playback audio using NXP's i.MX RT600 crossover MCUs. It also includes the process of how to use the codec chip to process audio data on the i.MX RT600 Evaluation Kit (EVK) based on the Cadence® Tensilica® HiFi4 Audio DSP. Click here to access the full application note.
When designing a project, sometimes CCM_CLKO1 needs to output different clocks to meet customer needs, so the customer does not need to buy a separate crystal, which reduces cost. This document describes how to make CCM_CLKO1 output different clocks on the i.MX RT1050. According to the selection of the clock to be generated on CCM_CLKO1 (CLKO1_SEL) and the setting of the CCM_CLKO1 divider (CLKO1_DIV) in the i.MX RT1050 reference manual, CCM_CLKO1 can output different clocks. If CCM_CLKO1 outputs via the SYS PLL clock, we can get the following frequencies for the application:

CLKO1_DIV:   000    001    010   011   100    101   110     111
Freq (MHz):  264    132    88    66    52.8   44    37.714  33

For example, to get an 88 MHz output via the SYS PLL clock, we can follow the steps below (led_blinky project in the SDK):

1. Pinmux GPIO_SD_B0_04 as the CCM_CLKO1 signal.

IOMUXC_SetPinMux(IOMUXC_GPIO_SD_B0_04_CCM_CLKO1, 0U);
IOMUXC_SetPinConfig(IOMUXC_GPIO_SD_B0_04_CCM_CLKO1, 0x10B0u);

2. Enable the CCM_CLKO1 signal.

CCM->CCOSR |= CCM_CCOSR_CLKO1_EN_MASK;

3. Set CLKO1_SEL and CLKO1_DIV to get the 88 MHz clock for the application.

CCM->CCOSR = (CCM->CCOSR & (~CCM_CCOSR_CLKO1_DIV_MASK)) | CCM_CCOSR_CLKO1_DIV(2);
CCM->CCOSR = (CCM->CCOSR & (~CCM_CCOSR_CLKO1_SEL_MASK)) | CCM_CCOSR_CLKO1_SEL(1);

4. We will get the clock as shown below.

Note: In principle, it is not recommended to output a clock on CCM_CLKO1. If necessary, please connect an 8-10 pF capacitor to GPIO_SD_B0_04, and connect a 22 ohm resistor in series to prevent interference.
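Putting the steps together, a minimal sketch of the whole sequence (assuming the RT1050 SDK's fsl_iomuxc.h helpers, and that CLKO1_SEL = 1 selects SYS PLL/2 = 264 MHz, per the table above) could look like:

#include "fsl_iomuxc.h"
#include "fsl_device_registers.h"

/* Sketch: route SYS PLL/2 (264 MHz) divided by 3 (CLKO1_DIV = 2) to CCM_CLKO1,
 * giving 88 MHz on pad GPIO_SD_B0_04. */
void EnableClko1_88MHz(void)
{
    IOMUXC_SetPinMux(IOMUXC_GPIO_SD_B0_04_CCM_CLKO1, 0U);
    IOMUXC_SetPinConfig(IOMUXC_GPIO_SD_B0_04_CCM_CLKO1, 0x10B0u);

    CCM->CCOSR = (CCM->CCOSR & ~CCM_CCOSR_CLKO1_SEL_MASK) | CCM_CCOSR_CLKO1_SEL(1);
    CCM->CCOSR = (CCM->CCOSR & ~CCM_CCOSR_CLKO1_DIV_MASK) | CCM_CCOSR_CLKO1_DIV(2);
    CCM->CCOSR |= CCM_CCOSR_CLKO1_EN_MASK;
}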
This guide will walk through how to connect the camera and LCD modules to i.MX RT boards and how to test that the camera and LCD are connected properly. Update May 2022: There are now updated versions of these LCD panels that have an impact on software. See this post for more details. The physical connections are the same for both the original and new panels, however, so there are no changes to this guide.

The first part of this guide is for the i.MX RT1050, i.MX RT1060, and i.MX RT1064 EVKs. The second part of this guide is for the i.MX RT595, i.MX RT1160, and i.MX RT1170 EVKs.

Part 1: Camera and LCD for i.MX RT1050, i.MX RT1060, and i.MX RT1064

The camera used by the RT1050, RT1060, and RT1064 EVKs is the same; however, this camera only comes with the RT1060 and RT1064 EVKs. There are alternatives available for the RT1050, as discussed in this blog post. The LCD screen compatible with these boards is the RK043FN66HS-CTG.

Camera:
1) The camera connector is on the front of the board. Flip the black connector up so it's 90 degrees from its original position.
2) Then slide in the flat ribbon connector of the camera.
3) Flip the black connector back down. It should keep the ribbon cable snug.

LCD:
1) On the back of the board, slide the black connector for the LCD ribbon forward.
2) Then slide in the flat LCD ribbon cable underneath the black connector.
3) Slide the black connector back to its original position. The cable should be snug.
4) Do the same for the touch controller connector: slide the black connector forward, then insert the cable between the black connector and the white top so that the cable is in the middle. It might take a few tries, as it's somewhat difficult. You could also use needle-nose pliers to help guide in the cable, but be careful about damaging the cable.
5) Then slide the black connector back to the original position. The cable should be snug.
6) It should look like the following when complete.

Testing:
1) To test the camera and LCD, use the CSI driver examples in the MCUXpresso SDK.
2) The camera will likely be out of focus the first time you use it. Adjust it by rotating the lens clockwise until the image is in focus. You can use your fingers or some needle-nose pliers. It could take up to two rotations, and it should turn easily. Also remove the plastic cover.
3) To test the touch controller, use the emWin temperature control example in the MCUXpresso SDK.

Tape:
1) Once the LCD has been confirmed to work, you can use two layers of thick double-sided foam tape to securely attach it to the board.

Part 2: Camera and LCD for i.MX RT1160 and i.MX RT1170 EVKs

The i.MX RT1160 and i.MX RT1170 EVKs both come with an OV5640 MIPI camera module in the box. The LCD screen compatible with the i.MX RT1160 and i.MX RT1170 EVKs is the RK055HDMIPI4MA0, and it can be found here.

i.MX RT1170-EVK Camera:
1) The camera connector is on the front of the board at J2. It connects by simply pressing the camera down onto the connector. It takes a bit of force but should not be too difficult.

i.MX RT1170-EVK LCD:
1) On the back of the board, slide the black connector (J40) for the LCD ribbon forward towards the edge of the board.
2) Then carefully slide the flat LCD ribbon cable into the connector. The blue writing should be facing up, as in the photo. It should go above the black part of the connector that you just slid out, and under the white part of the connector.
3) Slide the black plastic connector back to its original position.
The cable should be snug if pulled. It should look like the following:    i.MX RT1170-EVK Power: 1) If using the LCD, then the external power adapter must be used with the board. Connect the barrel connector to J43 on the board. 2) Also change the jumper on J38 to be on pins 1-2 so that it uses the external power.  3) Connect a micro-USB cable to J11, which will cause the board to enumerate as a COM port and as a debug interface for downloading and debugging code   i.MX RT1170-EVK Camera and LCD Testing: 1) To test the camera and LCD, use the csi_mipi_rgb_cm7 driver example that can be found in the MCUXpresso SDK for i.MX RT1170. The camera input should be displayed on the LCD screen if everything is connected properly.          
RT1015 app BEE encryption operation method

1 Introduction

NXP RT product BEE encryption can use either the master key (the fixed OTPMK SNVS key) or the user key method. The master key is fixed and cannot be modified; in practical usage, many customers need to define their own key, in which case they can use the user key method. This document takes the NXP RT1015 as an example and uses the flexible user key method to realize BEE encryption without HAB certification.

The BEE encryption test runs on the MIMXRT1015-EVK board, using three main methods: the MCUBootUtility tool, the command-line method with MFGTool, and the MCUXpresso Secure Provisioning tool to download the BEE-encrypted code.

2 Preparation
2.1 Tool preparation

MCUBootUtility download link: https://github.com/JayHeng/NXP-MCUBootUtility/archive/v2.3.0.zip
image_enc2.zip download link: https://www.cnblogs.com/henjay724.com/p/10189602.html replaced by https://www.cnblogs.com/henjay724/p/10189602.html
After unzipping image_enc2.zip, you get image_enc.exe; put it under the MCUBootUtility tool folder: NXP-MCUBootUtility-2.3.0\tools\image_enc2\win
RT1015 SDK download link: https://mcuxpresso.nxp.com/

2.2 App file preparation

This document uses the iled_blinky MCUXpresso IDE project in SDK_2.8.0_EVK-MIMXRT1015 as an example, to generate the app without the XIP boot header. The generated evkmimxrt1015_igpio_led_output.s19 will be used later.

Fig 1

3 MCUBootUtility BEE encryption with user key

This chapter uses the MCUBootUtility tool to realize app BEE encryption with the user key, without HAB certification.

3.1 MIMXRT1015-EVK original fuse map

Before doing the BEE encryption, read out the original fuse map; it will be compared with the fuse map after the BEE encryption operation. The MCUBootUtility eFuse operation utility page can read out the whole fuse map.

Fig 2

3.2 MCUBootUtility BEE encryption configuration

Fig 3

This document uses BEE encryption only, without the HAB certificate, so under "Enable Certificate for HAB (BEE/OTFAD) encryption", select: No.

See Fig 4. Select the "Key storage region" as flexible user keys; protect region 0 starts from 0x60001000 with length 0x2000. We deliberately did not encrypt the whole app region, so the original app can be compared with the BEE-encrypted app: from 0x60003000 the code is plaintext, while from 0x60001000 to 0x60002FFF it is BEE-encrypted. After the configuration, click the button "All in one action" to burn the code to the external QSPI flash.

Fig 4
Fig 5

The SW_GP2 region in the fuses can be burned separately by clicking the "burn DEK data" button.

Fig 6

Then read out the whole fuse map again. We find that in CFG1, BEE_KEY0_SEL is SW-GP2, which defines the BEE key as the flexible user key method, not the fixed master key.

Fig 7

Then read out the BEE-burned code from the flash, read out the normally burned code, and compare them. The detailed situation is:

Fig 8
Fig 9
Fig 10
Fig 11
Fig 12

We can see that after BEE encryption, 0x60001000 to 0x60002FFF is encrypted code, the 0x60000400 area adds the EKIB0 data, and the 0x60000480 area adds the EPRDB0 data. Because we selected only BEE engine 0 and not BEE engine 1, the 0x60000800 EKIB1 and EPRDB1 areas are all zeros, not valid data. From 0x60003000 the app data is plaintext, matching the configured BEE app encryption range.

So far, we have realized BEE encryption with the MCUBootUtility tool.
Exit serial download mode by configuring the MIMXRT1015-EVK on-board SW8 as 1-ON, 2-OFF, 3-ON, 4-OFF, and reset the board. The on-board user LED blinks: the BEE-encrypted code is working.

4 BEE encryption with the command-line method

In practical usage, many customers also need to use the command-line method for the BEE encryption operation, with MFGTool as the download method. So this document also shows how to use the tools in SDK_2.8.0_EVK-MIMXRT1015\middleware\mcu-boot\bin\Tools plus the image_enc tool to realize BEE encryption from the command line, and then use MFGTool to download the BEE-encrypted code to the RT1015 external QSPI flash.

Since SDK 2.8.0, blhost, elftosb, and related tools are no longer packed in the SDK middleware directly; download them from: www.nxp.com/mcuboot

4.1 Command-line file preparation

Prepare one folder and put elftosb.exe, image_enc.exe, the app file evkmimxrt1015_iled_blinky_0x60002000.s19, and RemoveBinaryBytes.exe into it. RemoveBinaryBytes.exe is used to modify the bin file; it can be downloaded from: https://community.nxp.com/pwmxy87654/attachments/pwmxy87654/imxrt/8733/2/Test.zip (https://community.nxp.com/t5/i-MX-RT/RT1015-BEE-XIP-Step-Confirm/m-p/1070076/page/2)

Then prepare the following files:
imx-flexspinor-normal-unsigned.bd
imxrt1015_app_flash_sb_gen.bd
burn_fuse.bd

4.1.1 imx-flexspinor-normal-unsigned.bd

imx-flexspinor-normal-unsigned.bd is used to generate the boot .bin files, including the IVT header, for the app evkmimxrt1015_iled_blinky_0x60002000.s19:
ivt_evkmimxrt1015_iled_blinky_0x60002000.bin
ivt_evkmimxrt1015_iled_blinky_0x60002000_nopadding.bin

The bd file content is:

/*********************file start****************************/
options {
    flags = 0x00;
    startAddress = 0x60000000;
    ivtOffset = 0x1000;
    initialLoadSize = 0x2000;
    //DCDFilePath = "dcd.bin";
    # Note: This is required if the default entrypoint is not the Reset_Handler
    #       Please set the entryPointAddress to Reset_Handler address
    // entryPointAddress = 0x60002000;
}

sources {
    elfFile = extern(0);
}

section (0) {
}
/*********************file end****************************/

4.1.2 imxrt1015_app_flash_sb_gen.bd

This file is used to configure the external QSPI flash and realize the program function. Normally this .bd file is used to generate the .sb file; MFGTool then selects this .sb file and downloads the code to the external flash.
/*********************file start****************************/
sources {
    myBinFile = extern (0);
}

section (0) {
    load 0xc0000007 > 0x20202000;
    load 0x0 > 0x20202004;
    enable flexspinor 0x20202000;
    erase 0x60000000..0x60005000;
    load 0xf000000f > 0x20203000;
    enable flexspinor 0x20203000;
    load myBinFile > 0x60000400;
}
/*********************file end****************************/

4.1.3 burn_fuse.bd

The BEE encryption operation needs to burn fuses, but burning fuse bits is a one-time operation from 0 to 1. The burn-fuse operation is therefore separated out and done only the first time, while the RT chip's fuse map has not yet been modified; in subsequent operations, only the app code is modified and the fuses don't need to be burned again. burn_fuse.bd configures the fuse data that needs to be burned into the related fuse map and generates a .sb file, which MFGTool burns together with the app.

/*********************file start****************************/
# The source block assigns file names to identifiers
sources {
}

constants {
}

#                !!!!!!!!!!!! WARNING !!!!!!!!!!!!
# The section block specifies the sequence of boot commands to be written to the SB file
# Note: this is just a template, please update it to actual values in users' project
section (0) {
    # program SW_GP2
    load fuse 0x76543210 > 0x29;
    load fuse 0xfedcba98 > 0x2a;
    load fuse 0x89abcdef > 0x2b;
    load fuse 0x01234567 > 0x2c;

    # Program BEE_KEY0_SEL
    load fuse 0x00003000 > 0x6;
}
/*********************file end****************************/

4.2 BEE command-line operation steps

Create the rt1015_bee_userkey_gp2.bat file with the following content:

elftosb.exe -f imx -V -c imx-flexspinor-normal-unsigned.bd -o ivt_evkmimxrt1015_iled_blinky_0x60002000.bin evkmimxrt1015_iled_blinky_0x60002000.s19
image_enc.exe hw_eng=bee ifile=ivt_evkmimxrt1015_iled_blinky_0x60002000.bin ofile=evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted.bin base_addr=0x60000000 region0_key=0123456789abcdeffedcba9876543210 region0_arg=1,[0x60001000,0x2000,0] region0_lock=0 use_zero_key=1 is_boot_image=1
RemoveBinaryBytes.exe evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted.bin evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted_remove1K.bin 1024
elftosb.exe -f kinetis -V -c program_imxrt1015_qspi_encrypt_sw_gp2.bd -o boot_image_encrypt.sb evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted_remove1K.bin
elftosb.exe -f kinetis -V -c burn_fuse.bd -o burn_fuse.sb
pause

Fig 13
Fig 14

It has five main steps:

4.2.1 elftosb generates the app file with IVT header

elftosb.exe -f imx -V -c imx-flexspinor-normal-unsigned.bd -o ivt_evkmimxrt1015_iled_blinky_0x60002000.bin evkmimxrt1015_iled_blinky_0x60002000.s19

This command generates two files with the IVT header: ivt_evkmimxrt1015_iled_blinky_0x60002000.bin and ivt_evkmimxrt1015_iled_blinky_0x60002000_nopadding.bin. Here we use ivt_evkmimxrt1015_iled_blinky_0x60002000.bin.

4.2.2 image_enc generates the BEE-encrypted app code

image_enc.exe hw_eng=bee
4.2.4 elftosb generates the .sb file for the BEE-encrypted app

elftosb.exe -f kinetis -V -c program_imxrt1015_qspi_encrypt_sw_gp2.bd -o boot_image_encrypt.sb evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted_remove1K.bin

This command uses evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted_remove1K.bin and program_imxrt1015_qspi_encrypt_sw_gp2.bd to generate the .sb file, which the MFGTool can then download to the external flash. After running it, we get this file: boot_image_encrypt.sb

4.2.5 elftosb generates the .sb file for burning the fuses

elftosb.exe -f kinetis -V -c burn_fuse.bd -o burn_fuse.sb

This command generates the .sb file for the BEE-related fuse bits; it will be burned together with boot_image_encrypt.sb in the MFGTool. Once the fuses have been burned, later app updates do not need the burn-fuse step any more and the app can be downloaded directly. After running it, we get this file: burn_fuse.sb
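The .bat file above runs all five commands unconditionally; for scripted builds it can help to stop at the first failing step. The following is a small, hedged Python equivalent of rt1015_bee_userkey_gp2.bat (same commands and file names as above), not an official tool:

# build_bee.py - hedged Python equivalent of rt1015_bee_userkey_gp2.bat,
# aborting at the first failed step
import subprocess
import sys

commands = [
    ["elftosb.exe", "-f", "imx", "-V", "-c", "imx-flexspinor-normal-unsigned.bd",
     "-o", "ivt_evkmimxrt1015_iled_blinky_0x60002000.bin",
     "evkmimxrt1015_iled_blinky_0x60002000.s19"],
    ["image_enc.exe", "hw_eng=bee",
     "ifile=ivt_evkmimxrt1015_iled_blinky_0x60002000.bin",
     "ofile=evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted.bin",
     "base_addr=0x60000000",
     "region0_key=0123456789abcdeffedcba9876543210",
     "region0_arg=1,[0x60001000,0x2000,0]",
     "region0_lock=0", "use_zero_key=1", "is_boot_image=1"],
    ["RemoveBinaryBytes.exe",
     "evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted.bin",
     "evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted_remove1K.bin", "1024"],
    ["elftosb.exe", "-f", "kinetis", "-V", "-c",
     "program_imxrt1015_qspi_encrypt_sw_gp2.bd",
     "-o", "boot_image_encrypt.sb",
     "evkmimxrt1015_iled_blinky_0x60002000_bee_encrypted_remove1K.bin"],
    ["elftosb.exe", "-f", "kinetis", "-V", "-c", "burn_fuse.bd",
     "-o", "burn_fuse.sb"],
]

for cmd in commands:
    print(">>", " ".join(cmd))
    if subprocess.run(cmd).returncode != 0:  # stop on the first error
        sys.exit("step failed: " + cmd[0])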
4.3 MFGTool downloading

Put the MIMXRT1015-EVK board into serial downloader mode, take two USB cables and plug them into J41 and J9 on the PC side. The MFGTool can be found in this folder: SDK_2.8.0_EVK-MIMXRT1015\middleware\mcu-boot\bin\Tools\mfgtools-rel

If burn_fuse.sb needs to be burned, modify ucl2.xml in this folder: \SDK_2.8.0_EVK-MIMXRT1015\middleware\mcu-boot\bin\Tools\mfgtools-rel\Profiles\MXRT1015\OS Firmware

Add the following list to realize it:

<LIST name="MXRT1015-beefuse_DevBoot" desc="Boot Flashloader">
<!-- Stage 1, load and execute Flashloader -->
    <CMD state="BootStrap" type="boot" body="BootStrap" file="ivt_flashloader.bin" > Loading Flashloader. </CMD>
    <CMD state="BootStrap" type="jump"  onError = "ignore"> Jumping to Flashloader. </CMD>
<!-- Stage 2, burn BEE related fuse using Flashloader -->
    <CMD state="Blhost" type="blhost" body="get-property 1" > Get Property 1. </CMD> <!--Used to test if flashloader runs successfully-->
    <CMD state="Blhost" type="blhost" body="receive-sb-file \"Profiles\\MXRT1015\\OS Firmware\\burn_fuse.sb\"" > Program Boot Image. </CMD>
    <CMD state="Blhost" type="blhost" body="reset" > Reset. </CMD> <!--Reset device-->
<!-- Stage 3, Program boot image into external memory using Flashloader -->
    <CMD state="Blhost" type="blhost" body="get-property 1" > Get Property 1. </CMD> <!--Used to test if flashloader runs successfully-->
    <CMD state="Blhost" type="blhost" timeout="15000" body="receive-sb-file \"Profiles\\MXRT1015\\OS Firmware\\boot_image_encrypt.sb\"" > Program Boot Image. </CMD>
    <CMD state="Blhost" type="blhost" body="Update Completed!">Done</CMD>
</LIST>

If the fuse bits have already been burned and only the app needs to be updated, use MXRT1015-DevBoot instead:

<LIST name="MXRT1015-DevBoot" desc="Boot Flashloader">
<!-- Stage 1, load and execute Flashloader -->
    <CMD state="BootStrap" type="boot" body="BootStrap" file="ivt_flashloader.bin" > Loading Flashloader. </CMD>
    <CMD state="BootStrap" type="jump"  onError = "ignore"> Jumping to Flashloader. </CMD>
<!-- Stage 2, Program boot image into external memory using Flashloader -->
    <CMD state="Blhost" type="blhost" body="get-property 1" > Get Property 1. </CMD> <!--Used to test if flashloader runs successfully-->
    <CMD state="Blhost" type="blhost" timeout="15000" body="receive-sb-file \"Profiles\\MXRT1015\\OS Firmware\\boot_image.sb\"" > Program Boot Image. </CMD>
    <CMD state="Blhost" type="blhost" body="Update Completed!">Done</CMD>
</LIST>

Which list is selected is determined by the name item in cfg.ini:

[profiles]
chip = MXRT1015

[platform]
board =

[LIST]
name = MXRT1015-DevBoot
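For the first-time flow that also burns the fuses, point the name entry at the fuse-burning list instead. This is a minimal sketch, reusing the list name defined above:

[profiles]
chip = MXRT1015

[platform]
board =

[LIST]
name = MXRT1015-beefuse_DevBoot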
Because I did the MCUBootUtility operation first, the fuses are already burned, so in the command line flow I just use MXRT1015-DevBoot to download the app .sb file.

Fig 16

We can see it is burned successfully. Click the stop button, configure SW8 on the MIMXRT1015-EVK as 1-ON, 2-OFF, 3-ON, 4-OFF, and reset the board. The on-board LED blinks, which means the command line flow can also finish the BEE encryption successfully.

5 MCUXpresso Secure Provisioning BEE unsigned operation

This part uses the MCUXpresso Secure Provisioning tool to finish the BEE unsigned image downloading. A BEE unsigned image just uses BEE, with no certificate.

5.1 Tool downloading

The MCUXpresso Secure Provisioning download link is: https://www.nxp.com/design/software/development-software/mcuxpresso-software-and-tools-/mcuxpresso-secure-provisioning-tool:MCUXPRESSO-SECURE-PROVISIONING

Download and install it; it is better to read the tool document first: C:\nxp\MCUX_Provi_v2.1\MCUXpresso Secure Provisioning Tool.pdf

5.2 Operation steps

Step 1: Create the new tool workspace. File->New Workspace, then select the workspace path.

Fig 17

Step 2: Chip boot related configuration.

Fig 18

Please note that the boot type needs to be selected as XIP Encrypted (BEE User Keys) unsigned, which does not add the HAB certificate function.

Step 3: USB connection. Select USB; the tool uses USB HID to connect to the board in serial download mode, so the MIMXRT1015-EVK board needs its USB cable plugged into J9 and must enter serial download mode: SW8: 1-ON, 2-OFF, 3-OFF, 4-ON. Press the Test Connection button; the connection result is:

Fig 19

We can see the connection is OK. Because this board already did the BEE operation previously, the related BEE fuses are burned, so the BEE key and the key source SW-GP2 fuse already contain data.

Step 4: Image selection. Just like the previous content, prepare one app image.

Step 5: XIP Encryption (BEE user keys) configuration.

Fig 20

Here we need to select the engine: we select Engine0, the BEE engine key uses the zero key, the key source uses SW-GP2, and the user key data 0123456789abcdeffedcba9876543210 will be written to the SW-GP2 fuse area. Because my board already did that fuse operation, the tool won't burn the fuses again here.

Step 6: Build image.

Fig 21

After this operation, the tool generates 5 files:
1) evkmimxrt1015_iled_blinky_0x60002000.bin
2) evkmimxrt1015_iled_blinky_0x60002000_bootable.bin
3) evkmimxrt1015_iled_blinky_0x60002000_bootable_nopadding.bin
4) evkmimxrt1015_iled_blinky_0x60002000_nopadding.bin
5) evkmimxrt1015_iled_blinky_0x60002000_nopadding_ehdr0.bin

Files 1), 2) and 3) are plaintext. 1) and 2) are exactly the same: they map the data from base 0, with IVT+BD+DCD from offset 0x1000 and the app from 0x2000, so they are the whole image except the FlexSPI configuration block data, which would sit at base address 0. 3) is the same as 2) with the first 0x1000 bytes deleted, so it starts directly with IVT+BD+DCD+app. 4) and 5) are the BEE-encrypted images: 4) is the encrypted counterpart of 3), while 5) contains the EKIB0 and EPRDB0 data (the BEE Encrypted Key Info Block 0 and Encrypted Protection Region Descriptor Block 0), which must be placed at the real address 0x60000400; since we only use Engine0, there is only Engine0 data.

In fact, the whole BEE image contains: FlexSPI configuration block data + IVT + BD + DCD + APP. The FlexSPI configuration block data is plaintext, but the range 0x60001000 to 0x60002fff is the encrypted image.

Step 7: Burn the encrypted image.

Fig 22

Click the Write Image button to finish the BEE image programming. If you open bee_user_key0.bin, you will find it is just the user key data defined in Fig 20, which is also written to the SW-GP2 fuses. Checking the log, the main process is:

1. Erase the image area from 0x60000000, length 0x5000.
2. Generate the FlexSPI configuration block data and download it to 0x60000000.
3. Burn evkmimxrt1015_iled_blinky_0x60002000_nopadding_ehdr0.bin to 0x60000400.
4. Burn evkmimxrt1015_iled_blinky_0x60002000_nopadding.bin to 0x60001000.

Set SW8 on the MIMXRT1015-EVK to 1-ON, 2-OFF, 3-ON, 4-OFF and reset or re-power the board; the on-board LED blinks, which means the BEE-encrypted image already runs OK.

Please note: SW8_1 is the Encrypted XIP pin and it must be enabled. Otherwise, even though the BEE-encrypted image is downloaded to the external flash, the boot will fail, because the ROM will use the normal boot instead of the BEE encrypted boot. So SW8_1 should be ON.

The following pictures compare the readout of the BEE-encrypted image with the files generated by the tool.

Fig 23
Fig 24
Fig 25
Fig 26
Fig 27

Since the MCUBootUtility lacks the BEE tool image_enc.exe, we can use the image_enc.exe from MCUXpresso Secure Provisioning instead. Copy C:\nxp\MCUX_Provi_v2.1\bin\tools\image_enc\win\image_enc.exe to the MCUBootUtility folder: NXP-MCUBootUtility-3.2.0\tools\image_enc2\win

The attachment also contains a video of this tool usage operation.
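To reproduce the readout comparison of Fig 23-Fig 27 without eyeballing hex dumps, a small script can check that the flash readout matches the generated files at the offsets documented above. This is a hedged sketch (the script name and the readout file name flash_readout.bin are hypothetical; the offsets 0x400 and 0x1000 follow the layout described in Step 6/Step 7):

# compare_layout.py - hypothetical helper: check a QSPI flash readout
# (dumped from 0x60000000) against the tool-generated files
def read_file(path):
    with open(path, "rb") as f:
        return f.read()

readout = read_file("flash_readout.bin")  # hypothetical dump starting at 0x60000000

# EKIB0/EPRDB0 block sits at offset 0x400 (flash address 0x60000400)
ehdr = read_file("evkmimxrt1015_iled_blinky_0x60002000_nopadding_ehdr0.bin")
assert readout[0x400:0x400 + len(ehdr)] == ehdr, "EKIB0/EPRDB0 mismatch at 0x400"

# encrypted IVT+BD+DCD+app image sits at offset 0x1000 (flash address 0x60001000)
enc = read_file("evkmimxrt1015_iled_blinky_0x60002000_nopadding.bin")
assert readout[0x1000:0x1000 + len(enc)] == enc, "encrypted image mismatch at 0x1000"

print("flash readout matches the generated files")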
Source code: https://github.com/JayHeng/NXP-MCUBootUtility

【v2.0.0】
Features:
> 1. Support i.MXRT5xx A0, i.MXRT6xx A0
> 2. Support i.MXRT1011, i.MXRT117x A0
> 3. [RTyyyy] Support OTFAD encryption secure boot case (SNVS Key, User Key)
> 4. [RTxxx] Support both UART and USB-HID ISP modes (COM port / USB device auto-detected)
> 5. [RTxxx] Support for converting a bare image into a bootable image
> 6. [RTxxx] Original image can be a bootable image (with FDCB)
> 7. [RTxxx] Support for loading a bootable image into a FlexSPI/QuadSPI NOR boot device
> 8. [RTxxx] Support development boot case (Unsigned, CRC)
> 9. Add Execute action support for Flash Programmer
> 10. [RTyyyy] Can show FlexRAM info in device status
Improvements:
> 1. [RTyyyy] Improve stability of the USB connection of the i.MXRT105x board
> 2. Can write/read RAM via Flash Programmer
> 3. [RTyyyy] Provide a Flashloader resident option to adapt to different FlexRAM configurations
Bugfixes:
> 1. [RTyyyy] Sometimes the tool reported the error "xx.bat file cannot be found" when generating certificates, so the certificate could not be generated
> 2. [RTyyyy] Editing mixed eFuse fields was not working as expected
> 3. [RTyyyy] Could not support 32MB or larger LPSPI NOR/EEPROM devices
> 4. Could not erase/read the last two pages of the boot device via Flash Programmer
The i.MX RT600 MCU includes a Cadence® Tensilica® HiFi 4 DSP running at frequencies of up to 600 MHz. The XOS embedded kernel from Cadence is designed for efficient operation on embedded systems built using the Xtensa architecture. Although various parts of XOS continue to be tuned for efficient performance on Xtensa hardware, most of the code is written in standard C and is not Xtensa-specific. Click here to access the full application note.