eIQ Machine Learning Software Knowledge Base

eIQ Development software for i.MX RT devices can be downloaded from https://mcuxpresso.nxp.com

The current MCUXpresso SDK 2.16 release supports the following devices:
- MCX N
- i.MX RT500
- i.MX RT600
- i.MX RT1050
- i.MX RT1060
- i.MX RT1064
- i.MX RT1160
- i.MX RT1170
- i.MX RT1180

Full details on how to download eIQ and run it with MCUXpresso IDE, VS Code, IAR, or Keil MDK can be found in the attached Getting Started guide.

For more information about eIQ and some hands-on labs for the i.MX RT family, see the following links:
- eIQ FAQ
- Getting Started with Time Series Studio
- Getting Started with MCX N Neutron NPU
- Getting Started with eIQ Toolkit
- Getting Started with TensorFlow Lite for Microcontrollers for i.MX RT
- Anomaly Detection App Note
- Handwritten Digit Recognition App Note
- Datasets and Transfer Learning App Note
- Security for Machine Learning Package
View full article
See the latest version of this document here: https://community.nxp.com/t5/eIQ-Machine-Learning-Software/eIQ-FAQ/ta-p/1099741 
View full article
When comparing NPU with CPU performance on the i.MX 8M Plus, the first impression can be that inference time is much longer on the NPU. This is because the ML accelerator spends additional time on one-time initialization steps. This initialization phase is known as warmup and is necessary only once at the beginning of the application. After this step, inference is executed in a truly accelerated manner, as expected from a dedicated NPU. The purpose of this document is to clarify the impact of the warmup time on overall performance.
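As a rough illustration of how warmup can be observed, the sketch below times the first inference separately from the following ones. It is a minimal sketch, assuming the tflite_runtime Python package is available on the board and that the NPU is reached through an external delegate shared library; the delegate path and model file name are placeholders that depend on your BSP release.

# Minimal sketch: compare warmup time vs. steady-state inference time.
# Assumptions: tflite_runtime is installed; the NPU is exposed through an
# external delegate library (path below is a placeholder for your BSP).
import time
import numpy as np
import tflite_runtime.interpreter as tflite

DELEGATE_PATH = "/usr/lib/libnpu_delegate.so"  # placeholder: adjust for your BSP

interpreter = tflite.Interpreter(
    model_path="model.tflite",
    experimental_delegates=[tflite.load_delegate(DELEGATE_PATH)])
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])

# The first inference includes the one-time warmup work on the accelerator.
interpreter.set_tensor(inp["index"], dummy)
t0 = time.monotonic()
interpreter.invoke()
print("warmup inference: %.1f ms" % ((time.monotonic() - t0) * 1000))

# Subsequent inferences run at the accelerated steady-state speed.
times = []
for _ in range(10):
    interpreter.set_tensor(inp["index"], dummy)
    t0 = time.monotonic()
    interpreter.invoke()
    times.append(time.monotonic() - t0)
print("steady-state average: %.1f ms" % (1000 * sum(times) / len(times)))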
View full article
The OpenCV Deep Neural Network (DNN) module performs inference on deep networks. It is easy to use and a great way to get started with computer vision and inferencing. OpenCV DNN supports models from many frameworks, such as:
- Caffe
- TensorFlow
- Torch
- Darknet
- Models in ONNX format

For a simple object detection example with OpenCV DNN, please check Object Detection with OpenCV. For running OpenCV DNN inference with camera input, please refer to eIQ Sample Apps - OpenCV Lab 3. For exploring different models ready for OpenCV DNN inference, please refer to Caffe and TensorFlow Pretrained Models. For more information on the OpenCV DNN module, please check Deep Learning in OpenCV.
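For readers who want a concrete starting point, below is a minimal Python sketch of the typical OpenCV DNN flow (load a network, build a blob, run forward). The model file names, the 300x300 input size, and the SSD-style output layout are assumptions for illustration; substitute the files and preprocessing of the model you actually use.

# Minimal OpenCV DNN sketch: load a TensorFlow model and run one inference.
# File names and the 300x300 input size are placeholders for illustration.
import cv2
import numpy as np

net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb",
                                    "frozen_inference_graph.pbtxt")

image = cv2.imread("input.jpg")
# Convert the image to a 4D blob with the preprocessing the model expects.
blob = cv2.dnn.blobFromImage(image, size=(300, 300), swapRB=True, crop=False)

net.setInput(blob)
detections = net.forward()

# For SSD-style detectors the output is [1, 1, N, 7]:
# [image_id, class_id, confidence, x_min, y_min, x_max, y_max]
h, w = image.shape[:2]
for det in detections[0, 0]:
    if det[2] > 0.5:  # confidence threshold
        x1, y1, x2, y2 = (det[3:7] * np.array([w, h, w, h])).astype(int)
        print("class %d at (%d, %d, %d, %d), score %.2f" %
              (int(det[1]), x1, y1, x2, y2, det[2]))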
View full article
The eIQ CMSIS-NN software for i.MX RT devices that is found in the MCUXpresso SDK package can be ported to other microcontroller devices in the RT family, as well as to some LPC and Kinetis devices. A very common question is which processors support inferencing of models. The answer is that inferencing simply means performing millions of multiply-and-accumulate math operations (the dominant operation when processing any neural network), which almost any MCU or MPU is capable of. There is no special hardware or module required to do inferencing; however, high core clock speeds and fast memory can drastically reduce inference time. Determining if a particular model can run on a specific device comes down to:
- How long will the inference take to run? The same model will take much longer to run on less powerful devices, and the maximum acceptable inference time depends on your particular application and model.
- Is there enough non-volatile memory to store the weights, the model itself, and the inference engine?
- Is there enough RAM to keep track of the intermediate calculations and output?

The attached guide walks through how to port the CMSIS-NN inference engine to the LPC55S69 family. Similar steps can be done to port eIQ to other microcontroller devices. This guide is made available as a reference for users interested in exploring eIQ on other devices; however, only the RT1050 and RT1060 are officially supported at this time for CMSIS-NN for MCUs as part of eIQ.

These other eIQ porting guides might also be of interest:
- Glow Porting Guide for MCUs
- TensorFlow Lite Porting Guide for RT685
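The sketch below gives a back-of-the-envelope feel for that multiply-and-accumulate count and the resulting inference-time question. It is a minimal illustration only (not part of the porting guide); the layer shapes and clock speed used are made-up examples.

# Rough estimate of multiply-and-accumulate (MAC) operations for a 2D
# convolution layer: every output element needs kernel_h * kernel_w * in_ch MACs.
def conv2d_macs(out_h, out_w, out_ch, kernel_h, kernel_w, in_ch):
    return out_h * out_w * out_ch * kernel_h * kernel_w * in_ch

# Example layer shapes (made up for illustration): a 3x3 convolution producing
# a 32x32x16 output from an 8-channel input.
macs = conv2d_macs(out_h=32, out_w=32, out_ch=16, kernel_h=3, kernel_w=3, in_ch=8)
print("MACs for this layer: %d" % macs)  # 1,179,648 MACs

# At one MAC per cycle (an optimistic assumption for a Cortex-M running
# CMSIS-NN), a 600 MHz core would need at least this long for the layer alone:
core_hz = 600e6
print("lower-bound time: %.3f ms" % (macs / core_hz * 1000))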
View full article
Caffe and TensorFlow provide a set of pretrained models ready for inference. They can be used as is or for retraining the models with your own dataset. Please check:
- TensorFlow Hosted Models
- TensorFlow Model Zoo
- Caffe Model Zoo
View full article
Two new LCD panels for i.MX RT EVKs are now available. However, the new LCD panels are not supported by the i.MX RT1160/RT1170 eIQ demos in MCUXpresso SDK 2.11, so some changes will need to be made to use them.

For i.MX RT1050/RT1060/RT1064 eIQ demos in MCUXpresso SDK 2.11 that use the LCD: The eIQ demos are only configured for the original panel. However, because the eIQ demos do not use the touch controller, all eIQ demos for i.MX RT1050/1060/1064 will work fine with both the original and new LCD panels without any changes.

For i.MX RT1160/RT1170 eIQ demos in MCUXpresso SDK 2.11 that use the LCD: The eIQ demos are still only configured for the original panel. MCUXpresso SDK 2.12 will support both panels when it is released later this summer. In the meantime, for those who have the new LCD panel, some changes need to be made to the eIQ demos for i.MX RT1160/RT1170, otherwise you will just get a black or blank screen.
1. Unzip the MCUXpresso SDK if not done so already.
2. Open an eIQ project.
3. Find the directory the eIQ project is located in by right-clicking on the project name and selecting Utilities->Open directory browser here.
4. Copy both the fsl_hx8394.c and fsl_hx8394.h files found in \SDK_2_11_1_MIMXRT1170-EVK\components\video\display\hx8394\ into your eIQ project. You can place them in the video folder, which would typically be located at C:\Users\username\Documents\MCUXpressoIDE_11.5.0_7232\workspace\eiq_demo_name\video
5. Overwrite the eiq_display_conf.c and eiq_display_conf.h files in the eIQ project with the updated versions attached to this post. Typically these files would be located at C:\Users\username\Documents\MCUXpressoIDE_11.5.0_7232\workspace\eiq_demo_name\source\video
6. Compile the project as normal, and the eIQ demo will now work with the new LCD panel for RT1160/RT1170.
View full article
The attached lab guide walks step-by-step through how to use the new Application Software Pack for the ML-based System State Monitor found on GitHub. This is related to AN13562 - Building and Benchmarking Deep Learning Models for Smart Sensing Appliances on MCUs. This lab guide was written for the FRDM-MCXN947, but the application software pack also supports RT1170, LPC55S69, and Kinetis K66F devices. It can also be ported to other MCX, i.MX RT, LPC, and Kinetis devices. There is also a document on dataset creation that goes into more detail on the considerations to make when gathering data. For more details, visit the ML-based System State Monitor website on NXP.com.

The lab uses the FXLS8974CF accelerometer found on the ACCEL 4 Click board or the FRDM-STBI-A8974 board. The FXLS8974CF is the latest accelerometer from NXP and is the recommended one to use. The video below walks through the steps for the FRDM-STBC-AGM01, but there may be updated details in the lab guide, so follow the lab guide if there are any differences.
View full article
The attached file serves as a table of contents for the various collateral, documents, training, etc. that support eIQ software.
View full article
The attached project enables users to capture and save the camera data captured by an i.MXRT1060-EVK board onto a microSD card. This project does not do inferencing of a model. Instead, it is meant to be used to generate images that can then be used on a PC for training a model. The images are saved in RGB NHWC format as binary files on the microSD card, and a Python script running on the PC can then convert those binary files into the PNG image format.

Software requirements
====================
- MCUXpresso IDE 11.2.x
- MCUXpresso SDK for RT1060

Hardware requirements
=====================
- Micro USB cable
- IMXRT1060-EVK board with included camera + RK043FN02H-CT LCD
- MicroSD card
- Personal computer (Windows)
- Micro SD card reader
- Python 3.x installed

Board settings
==============
Camera and LCD connected to i.MXRT1060-EVK

Prepare the demo
================
1. Insert a micro SD card into the micro SD card slot on the i.MXRT1060-EVK (J39)
2. Open MCUXpresso IDE 11.2 and import the project using "Import project(s) from file system.." from the Quickstart panel. Use the "Archive" option to select the zip file that contains this project.
3. If desired, two #define values can be modified in camera_capture.c to adjust the captured image size:
#define EXTRACT_HEIGHT 256 //Max EXTRACT_HEIGHT value possible is 271 (due to border drawing)
#define EXTRACT_WIDTH 256 //Max EXTRACT_WIDTH value possible is 480
4. Build the project by clicking on "Build" in the Quickstart Panel
5. Connect a USB cable between the host PC and the OpenSDA port (J41) on the target board.
6. Open a serial terminal with the following settings:
- 115200 baud rate
- 8 data bits
- No parity
- One stop bit
- No flow control
7. Download the program to the target board by clicking on "Debug" in the Quickstart Panel
8. Click on the "Resume" icon to begin running the demo.

Running the demo
================
The terminal will ask for a classification name. This will create a new directory on the SD card with that name. The name is limited to 5 characters because the FATFS file system supports only 8 characters in a file name, and three of those characters are used to number the images.

For best results, the subject being captured should be centered in the selection rectangle and nearly (but not completely) fill it. The camera should be stabilized with your finger or by some other means to prevent shaking. Also ensure the camera lens has been focused as described in the instructions for connecting the camera and LCD.

While running the demo, press the 'c' key to enter a new classification name, or press 'q' to quit the program and remove the micro SD card.

Transfer the .bin files created on the SD card to your PC, into the same directory as the Python script found in the "scripts" directory, to convert the images to PNG format. If the captured image is a square (width==height), the script can be called with:
python convert_image.py directory_name
which will convert all the .BIN files in the specified directory to PNG files. If the captured image is not a square, the width and height can be specified on the command line:
python convert_image.py directory_name width height

Terminal Output
==============
Camera SD Card Capture
Extracted Image: Height x Width: 256x256
Please insert a card into board.
Card inserted.
Mounting SD Card
Enter name of new class (must be less than 5 characters): test
Creating directory test......
Press any key to capture image.
Press 'c' to change class or 'q' to quit
Writing file /test/test001.bin......
Write Complete
Press any key to capture image.
Press 'c' to change class or 'q' to quit
Remove SD Card
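For reference, the sketch below shows what such a raw-RGB-to-PNG conversion looks like. It is a minimal illustration, not the convert_image.py script shipped in the project's scripts directory; it assumes the .bin file contains tightly packed 8-bit RGB pixels in height x width x channel order, matching the RGB NHWC capture format described above.

# Minimal sketch of converting a raw RGB capture (.bin) to a PNG image.
# Not the actual convert_image.py from the project; file names are placeholders.
import sys
import numpy as np
from PIL import Image

def bin_to_png(bin_path, width, height, png_path):
    raw = np.fromfile(bin_path, dtype=np.uint8)
    if raw.size != height * width * 3:
        raise ValueError("file size does not match %dx%dx3" % (height, width))
    pixels = raw.reshape((height, width, 3))  # 8-bit RGB, HWC order
    Image.fromarray(pixels, mode="RGB").save(png_path)

if __name__ == "__main__":
    # Usage: python bin_to_png.py capture.bin 256 256 capture.png
    bin_to_png(sys.argv[1], int(sys.argv[2]), int(sys.argv[3]), sys.argv[4])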
View full article
The two attached demos use models that were compiled with the Glow AOT tools and a camera connected to the i.MXRT1060-EVK to generate data for inferencing. The default MCUXpresso SDK Glow demos run inference on static images; these demos expand those projects to do inferencing on camera data. Each demo uses the default model found in the SDK. A readme.txt file in the /doc folder of each demo provides details, and a PDF in that same /doc folder has example images to point the camera at for inferencing.

Software requirements
====================
- MCUXpresso IDE 11.2.x
- MCUXpresso SDK for RT1060

Hardware requirements
=====================
- Micro USB cable
- Personal Computer
- IMXRT1060-EVK board with included camera + RK043FN02H-CT LCD

Board settings
==============
Camera and LCD connected to i.MXRT1060-EVK

Prepare the demo
================
1. Install and open MCUXpresso IDE 11.2
2. If not already done, import the RT1060 MCUXpresso SDK by dragging and dropping the zipped SDK file into the "Installed SDKs" tab.
3. Download one of the attached zip files and import the project using "Import project(s) from file system.." from the Quickstart panel. Use the "Archive" option to select the zip file that contains this project.
4. Build the project by clicking on "Build" in the Quickstart Panel
5. Connect a USB cable between the host PC and the OpenSDA port (J41) on the target board.
6. Open a serial terminal with the following settings:
- 115200 baud rate
- 8 data bits
- No parity
- One stop bit
- No flow control
7. Download the program to the target board by clicking on "Debug" in the Quickstart Panel, and then click on the "Resume" button in the Debug perspective that comes up to run the demo.

Running the demo
================
For the CIFAR10 demo: Use the camera to look at images of airplanes, ships, deer, etc. that can be recognized by the CIFAR10 model. The included PDF can be used for example images.
For the MNIST demo: Use the camera to look at handwritten digits, which can be recognized by the LeNet MNIST model. The included PDF can be used for example digits, or you can write your own.

For further details see the readme.txt file found inside each demo in the /doc directory. Also see the Glow Lab for i.MX RT for more details on how to compile neural network models with Glow.
View full article
The eIQ Glow neural network compiler software for i.MX RT devices that is found in the MCUXpresso SDK package can be ported to other microcontroller devices in the RT family, as well as to some LPC and Kinetis devices. Glow supports compiling machine learning models for Cortex-M4, Cortex-M7, and Cortex-M33 cores out of the box.

Because inferencing simply means performing millions of multiply-and-accumulate math operations (the dominant operation when processing any neural network), most embedded microcontrollers can support inferencing of a neural network model. There is no special hardware or module required to do the inferencing; however, specialized ML hardware accelerators, high core clock speeds, and fast memory can drastically reduce inference time. The minimum hardware requirements also depend heavily on the particular model being used. Determining if a particular model can run on a specific device comes down to:
- How long will the inference take to run? The same model will take much longer to run on less powerful devices, and the maximum acceptable inference time depends on your particular application and model.
- Is there enough non-volatile memory to store the weights, the model itself, and the inference engine?
- Is there enough RAM to keep track of the model's intermediate calculations and output?

The minimum memory requirements for a particular model when using Glow can be found with a simple formula using numbers from the Glow bundle header file generated when compiling your model (see the sketch below for an example calculation):
Flash: Base Project + CONSTANT_MEM_SIZE + .o object file
RAM: Base Project + MUTABLE_MEM_SIZE + ACTIVATIONS_MEM_SIZE
More details can be found in the Glow Memory Usage app note.

The attached guide walks through how to port Glow to the LPC55S69 family, which is based on the Cortex-M33 core. Similar steps can be done to port Glow to other NXP microcontroller devices. This guide is made available as a reference for users interested in exploring Glow on other devices not currently supported in the MCUXpresso SDK.

These other eIQ porting guides might also be of interest:
- TensorFlow Lite Porting Guide for RT685
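As an illustration of the flash/RAM formula above, here is a minimal sketch that simply adds up the bundle numbers. The three *_MEM_SIZE names come from the Glow bundle header mentioned above; the base-project and object-file sizes are made-up placeholder values that you would measure for your own project.

# Minimal sketch of the Glow memory estimate described above.
# CONSTANT_MEM_SIZE, MUTABLE_MEM_SIZE, and ACTIVATIONS_MEM_SIZE come from the
# bundle header that Glow generates; the other numbers are placeholders.
CONSTANT_MEM_SIZE    = 532480   # weights (take the value from your bundle header)
MUTABLE_MEM_SIZE     = 16384    # model inputs/outputs
ACTIVATIONS_MEM_SIZE = 98304    # intermediate results

BASE_PROJECT_FLASH = 60 * 1024  # placeholder: flash used by the bare project
BASE_PROJECT_RAM   = 20 * 1024  # placeholder: RAM used by the bare project
BUNDLE_OBJECT_SIZE = 40 * 1024  # placeholder: size of the compiled .o bundle

flash_needed = BASE_PROJECT_FLASH + CONSTANT_MEM_SIZE + BUNDLE_OBJECT_SIZE
ram_needed   = BASE_PROJECT_RAM + MUTABLE_MEM_SIZE + ACTIVATIONS_MEM_SIZE

print("Estimated flash: %d KB" % (flash_needed // 1024))
print("Estimated RAM:   %d KB" % (ram_needed // 1024))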
View full article
Getting Started with eIQ Software for i.MX Applications Processors
Getting Started with eIQ for i.MX RT
View full article
The eIQ demos for i.MX RT use arrays for input data for inferencing. The attached guide and scripts describe how to create custom input data (both images and audio) for use with the eIQ examples.
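As a rough illustration of what "input data as arrays" means here, the sketch below converts an image into a C array of RGB bytes that could be pasted into a source file. It is not the attached script, and the file and array names are placeholders; see the attached guide for the supported workflow, including audio input.

# Minimal sketch: turn an image into a C array of RGB bytes, similar in spirit
# to the static input arrays used by the eIQ examples. Not the attached script;
# names and sizes below are placeholders for illustration.
import numpy as np
from PIL import Image

WIDTH, HEIGHT = 128, 128  # placeholder: use the input size your model expects

img = Image.open("input.jpg").convert("RGB").resize((WIDTH, HEIGHT))
pixels = np.asarray(img, dtype=np.uint8).flatten()

with open("input_image.h", "w") as f:
    f.write("static const unsigned char input_image[%d] = {\n" % pixels.size)
    for i in range(0, pixels.size, 12):
        row = ", ".join(str(v) for v in pixels[i:i + 12])
        f.write("    %s,\n" % row)
    f.write("};\n")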
View full article
NXP BSP currently does not support running a Keras application directly on i.MX. Customers who use this approach must convert their Keras model into a format supported by one of the inference engines in eIQ. In this post we will cover converting a Keras model (.h5) to a TfLite model (.tflite).

Install the TensorFlow version matching the TfLite version supported by eIQ (you can find this information in the Linux User's Guide). For L4.19.35_1.0.0 the TfLite version is v1.12.0.
$ pip3 install tensorflow==1.12.0

Run the following commands in a python3 environment to convert the .h5 model to a .tflite model:
>>> from tensorflow.contrib import lite
>>> converter = lite.TFLiteConverter.from_keras_model_file('model.h5') # path to your model
>>> tfmodel = converter.convert()
>>> open("model.tflite", "wb").write(tfmodel)

The model can then be deployed and used by the TfLite inference engine in eIQ.
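Before copying the converted model to the target, it can be useful to sanity-check it on the host. Below is a minimal sketch assuming the same TensorFlow 1.12 environment used for the conversion; the random input is just a placeholder for real data with the shape your model expects.

# Minimal sketch: sanity-check the converted model.tflite on the host PC
# using the TfLite interpreter bundled with TensorFlow 1.12.
import numpy as np
from tensorflow.contrib import lite

interpreter = lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Feed a random tensor of the right shape/type (placeholder for real data).
dummy = np.random.random_sample(input_details["shape"]).astype(
    input_details["dtype"])
interpreter.set_tensor(input_details["index"], dummy)
interpreter.invoke()

print("output shape:", interpreter.get_tensor(output_details["index"]).shape)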
View full article
eIQ Software for i.MX application processors
eIQ Machine Learning Software for iMX Linux - 5.4.3_1.0.0 GA for i.MX6/7 and i.MX8MQ/8MM/8MN/8QM/8QXP has been released. eIQ Machine Learning Software for iMX Linux - 5.4.24_2.1.0 BETA for i.MX8QXPlus, BETA for i.MX8MP, and ALPHA 2 for i.MX8DXL has been released. It contains machine learning support for Arm NN, TensorFlow and TensorFlow Lite, ONNX, and OpenCV. For running on Arm Cortex-A cores, these inference engines are accelerated with Arm NEON instructions. For running on the NPU (of the i.MX 8M Plus) and i.MX 8 GPUs, NXP has included optimizations in the Arm NN and TensorFlow Lite inference engines. For more information and complete details, please be sure to check out the "NXP eIQ Machine Learning" chapter in the Linux User Guide (starting with the L4.19 releases; L4.14 release users should refer to NXP eIQ™ Machine Learning Software Development Environment for i.MX Applications Processors). You can access corresponding sample applications at https://source.codeaurora.org/external/imxsupport/eiq_sample_apps/.

For more information on artificial intelligence, machine learning, and eIQ Software, please visit AI & Machine Learning | NXP.

eIQ Software for i.MX RT crossover processors
eIQ is now included in the MCUXpresso SDK package for i.MX RT1050 and i.MX RT1060.
1. Go to https://mcuxpresso.nxp.com and search for the SDK for your board
2. On the SDK builder page, click on "Add software component"
3. Click on "Select All" and verify the eIQ software option is now checked. Then click on "Save Changes"
4. Download the SDK. It will be saved as a .zip file.
5. eIQ projects can be found in the \boards\<board_name>\eiq_examples folder
6. eIQ source code can be found in the \middleware\eiq folder

More details can be found in this Community post on how to get started with eIQ on i.MX RT devices.
View full article
This Lab 4 explains how to get started with the TensorFlow Lite application demo on an i.MX 8 board using inference engines for eIQ Software.
eIQ Sample Apps - Overview
eIQ Sample Apps - Introduction
Get the source code available on Code Aurora: TensorFlow Lite MobileFaceNets MIPI/USB Camera

Face Detection Using OpenCV
This application demo uses Haar Feature-based Cascade Classifiers for real-time face detection. The pre-trained Haar Feature-based Cascade Classifier for faces, stored as an XML file, is already included in OpenCV. The XML file for faces is stored in the opencv/data/haarcascades/ folder. It is also available on Code Aurora. Read Face Detection using Haar Cascades for more details.

TensorFlow Lite implementation for MobileFaceNets
The MobileFaceNets model is re-trained with a smaller batch size and input size to get a higher performance on a host PC. The trained model is loaded as a source file in this demo.

Setting Up the Board
Step 1 - Download the demo from eIQ Sample Apps and put it in the /opt/tflite folder. Then enter the src folder:
root@imx8mmevk:~# cd /opt/tflite/examples-tflite/1-example/src/
root@imx8mmevk:/opt/tflite/examples-tflite/1-example/src#
This folder should include these files:
.
├── face_detect_helpers.cpp
├── face_detect_helpers.h
├── face_detect_helpers_impl.h
├── face_recognition.cpp
├── face_recognition.h
├── haarcascade_frontalface_alt.xml
├── Makefile
├── mfn.h
├── profiling.h
└── ThreadPool.h
Step 2 - Compile the source code on the board:
root@imx8mmevk:/opt/tflite/examples-tflite/1-example/src# make
Step 3 - Run the demo:
root@imx8mmevk:/opt/tflite/examples-tflite/1-example/src# ./FaceRecognition -c 0 -h 0.85
NOTE: -c is used to specify the camera index. '0' means the MIPI/USB camera is mounted on /dev/video0. -h is the threshold for the prediction score.
Step 4 - Add a new person to the face data set.
When the demo is running, it detects the single largest face in real time. Once the face is detected, you can use the keyboard on the right of the GUI to input the new person's name, then click 'Add new person' to add the face to the data set.
In brief:
1. Detect face.
2. Input new person's name.
3. Click 'Add new person'.
NOTE: Once new faces are added, a folder named 'data' is created in the current directory. If you want to remove a new face from the data set, just delete it in 'data'.
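For readers who want to experiment with the face detection stage on its own, below is a minimal Python sketch of Haar cascade detection with OpenCV, using the same haarcascade_frontalface_alt.xml file the demo ships. The demo itself implements this in C++; the camera index and detection parameters here are illustrative defaults.

# Minimal sketch of the Haar cascade face detection step used by the demo
# (the demo itself is C++). Camera index and parameters are illustrative.
import cv2

cascade = cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")
cap = cv2.VideoCapture(0)  # 0 = /dev/video0, as with the demo's -c option

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()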
View full article
TensorFlow provides a set of pretrained models ready for inference; please check Caffe and TensorFlow Pretrained Models. You can use a model as is or retrain it with your own data to detect specific objects for your custom application. This post lists some useful links that can help with this task.

Inception models: Training Custom Object Detector — TensorFlow Object Detection API tutorial documentation
The link above also shows the steps needed to prepare your data for the retraining process (image labeling).

MobileNet models: Train your own model with SSD MobileNet · ichbinblau/tfrecord_generator Wiki · GitHub
For the link above, you also need to follow the steps to prepare your data for the retraining process, as in the Inception retraining tutorial. Make sure you exported the needed PYTHONPATH variable:
export PYTHONPATH=$PYTHONPATH:/path/to/tf_models/models/research:/path/to/tf_models/models/research/slim

A few tips:
- Retraining a model will be faster than training a model from scratch, but it can still take a long time to complete. It depends on many factors, such as the number of steps defined in the model's *.config file. Be aware of overfitting your model if your dataset is too small and the number of steps is too large. Also, TensorFlow saves checkpoints during the retraining process, which you can prepare for inference and test before the retraining process is over, to check when the model is good enough for your application. Please check "Exporting a Trained Inference Graph" in the Inception retraining tutorial and keep in mind that you can follow these steps before the training process is complete. Of course, early checkpoints may not be well trained.
- If you are running OpenCV DNN inference, you may need to run the following command to get the *.pbtxt file, where X corresponds to the number of classes trained in your model and tf_text_graph_ssd.py is an OpenCV DNN script:
python tf_text_graph_ssd.py --input frozen_inference_graph.pb --output frozen_inference_graph.pbtxt --num_classes X
View full article