eIQ Machine Learning Software Knowledge Base

eIQ Toolkit enables machine learning development with an intuitive GUI (named eIQ Portal) and development workflow tools, along with command-line host tool options, as part of the eIQ ML software development environment. Developers can create, optimize, debug, and export ML models, as well as import datasets and models, and rapidly train and deploy neural network models and ML workloads. eIQ Portal outputs TensorFlow Lite models that feed seamlessly into eIQ inference engines like TensorFlow Lite and TensorFlow Lite for Microcontrollers. Using a tool called Model Runner, eIQ Toolkit can also generate runtime insights to help optimize neural network architectures on i.MX RT and i.MX devices.

These labs go over how to use eIQ Portal. It is recommended to do them in the following order:
Data Import Lab
Model Runner Lab

The labs are written for the FRDM-MCXN947 and i.MX RT1170-EVK, but other eIQ-supported devices can be used as well:
MCX N
i.MX RT1050
i.MX RT1060
i.MX RT1064
i.MX RT1160
i.MX RT1170
i.MX RT1180
i.MX RT500
i.MX RT600

For details on the Time Series Studio tool included in eIQ Toolkit, please see the Time Series Studio lab guides.
View full article
This Docker container is designed to enable portability and leverage the i.MX 8 series GPUs. It adds flexibility on top of the Yocto image and allows the use of machine learning libraries available in Ubuntu, which is otherwise difficult. Using Docker, a user can develop and prototype GPU applications and then ship and run them anywhere using the container. This app note describes how to enable this Docker setup. The container is a wrapper that provides an application with the necessary dependencies to execute code on the GPU. This has the potential to greatly simplify many customer developments for Linux: the Yocto BSP stays intact, while customers can develop applications using the widely available neural network frameworks and libraries, at the same time leveraging the GPU without compromising on performance. It is not quite as straightforward as a full Debian image, but it is still an easy approach.
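As a rough illustration of the idea, the Python sketch below (using the docker-py package) launches an Ubuntu container with a GPU device node passed through. The image name, the device node (/dev/galcore is the typical Vivante GPU node on i.MX 8, but verify on your BSP), and the mounted driver path are all assumptions; follow the app note for the actual configuration.

```python
# Minimal sketch using docker-py; image name, device node, and volume
# paths are assumptions -- follow the app note for the real setup.
import docker

client = docker.from_env()

container = client.containers.run(
    "ubuntu:20.04",                              # hypothetical ML base image
    command="/bin/bash",
    devices=["/dev/galcore:/dev/galcore:rwm"],   # Vivante GPU node (assumed)
    volumes={"/usr/lib": {"bind": "/host/usr/lib", "mode": "ro"}},  # expose Yocto GPU drivers
    tty=True,
    detach=True,
)
print(container.id)
```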
View full article
See the latest version of this document here: https://community.nxp.com/t5/eIQ-Machine-Learning-Software/eIQ-FAQ/ta-p/1099741 
View full article
This lab will walk through how to use eIQ Time Series Studio (TSS), a new tool included as part of eIQ Toolkit for creating time series models for embedded microcontrollers. It covers how to import time series data, shows how the tool can generate multiple ML algorithms, and describes how to deploy those generated models to your development board. This lab uses the FRDM-MCXN947, but the same steps will apply to any of the devices supported by eIQ Time Series Studio:

MCX: FRDM-MCXA153, FRDM-MCXN947, MCX-N9XX-EVK
i.MX RT: i.MXRT685-EVK, i.MXRT595-EVK, MIMXRT1060-EVK, MIMXRT1170-EVK, MIMXRT1180-EVK
LPC: LPC55S69-EVK
Kinetis: FRDM-K66F, FRDM-KV31F, FRDM-K32L3A6
DSC: MC56F83000-EVK, MC56F80000-EVK

You can also view the video below for a quick overview of the Time Series Studio process. Also check out the ML Universal Datalogger on the App Code Hub for a tool to collect sensor data that can be used with Time Series Studio.
View full article
Two new LCD panels for i.MX RT EVKs are now available. However, the new panels are not supported by the i.MX RT1160/RT1170 eIQ demos in MCUXpresso SDK 2.11, so some changes need to be made to use them.

For i.MX RT1050/RT1060/RT1064 eIQ demos in MCUXpresso SDK 2.11 that use the LCD: The eIQ demos are only configured for the original panel. However, because the eIQ demos do not use the touch controller, all eIQ demos for i.MX RT1050/1060/1064 will work fine with both the original and new LCD panels without any changes.

For i.MX RT1160/RT1170 eIQ demos in MCUXpresso SDK 2.11 that use the LCD: The eIQ demos are still only configured for the original panel. MCUXpresso SDK 2.12 will support both panels when it is released later this summer. In the meantime, for those who have the new LCD panel, some changes need to be made to the eIQ demos for i.MX RT1160/RT1170; otherwise you will just get a black or blank screen.

1. Unzip the MCUXpresso SDK if not done so already.
2. Open an eIQ project.
3. Find the directory the eIQ project is located in by right-clicking on the project name and selecting Utilities->Open directory browser here.
4. Copy both the fsl_hx8394.c and fsl_hx8394.h files found in \SDK_2_11_1_MIMXRT1170-EVK\components\video\display\hx8394\ into your eIQ project. You can place them in the video folder, which would typically be located at C:\Users\username\Documents\MCUXpressoIDE_11.5.0_7232\workspace\eiq_demo_name\video
5. Overwrite the eiq_display_conf.c and eiq_display_conf.h files in the eIQ project with the updated versions attached to this post. Typically these files would be located at C:\Users\username\Documents\MCUXpressoIDE_11.5.0_7232\workspace\eiq_demo_name\source\video
6. Compile the project as normal, and the eIQ demo will now work with the new LCD panel for RT1160/RT1170.
View full article
The attached lab guide walks through, step by step, how to use the new Application Software Pack for the ML-based System State Monitor found on GitHub. It is related to AN13562 - Building and Benchmarking Deep Learning Models for Smart Sensing Appliances on MCUs. This lab guide was written for the FRDM-MCXN947, but the application software pack also supports RT1170, LPC55S69, and Kinetis K66F devices. It can also be ported to other MCX, i.MX RT, LPC, and Kinetis devices. There is also a document on dataset creation that goes into more detail on the considerations to make when gathering data. For more details, visit the ML-based System State Monitor website on NXP.com. The lab uses the FXLS8974CF accelerometer found on the ACCEL 4 Click board or the FRDM-STBI-A8974 board. The FXLS8974CF is the latest accelerometer from NXP and is the recommended one to use. The video below walks through the steps for the FRDM-STBC-AGM01, but there may be updated details in the lab guide, so follow the lab guide if there are any differences.
View full article
The attached file serves as a table of contents for the various collateral, documents, training, etc. that support eIQ software.
View full article
The attached project enables users to capture and save the camera data captured by an i.MXRT1060-EVK board onto a microSD card. This project does not do inferencing of a model. Instead, it is meant to be used to generate images that can then be used on a PC for training a model. The images are saved in RGB NHWC format as a binary file on the microSD card, and then a Python script running on the PC can convert those binary files into the PNG image format.

Software requirements
====================
- MCUXpresso IDE 11.2.x
- MCUXpresso SDK for RT1060

Hardware requirements
=====================
- Micro USB cable
- IMXRT1060-EVK board with included camera + RK043FN02H-CT LCD
- MicroSD card
- Personal computer (Windows)
- Micro SD card reader
- Python 3.x installed

Board settings
==============
Camera and LCD connected to i.MXRT1060-EVK

Prepare the demo
================
1. Insert a micro SD card into the micro SD card slot on the i.MXRT1060-EVK (J39)
2. Open MCUXpresso IDE 11.2 and import the project using "Import project(s) from file system..." from the Quickstart panel. Use the "Archive" option to select the zip file that contains this project.
3. If desired, two #define values can be modified in camera_capture.c to adjust the captured image size:
#define EXTRACT_HEIGHT 256 //Max EXTRACT_HEIGHT value possible is 271 (due to border drawing)
#define EXTRACT_WIDTH 256 //Max EXTRACT_WIDTH value possible is 480
4. Build the project by clicking on "Build" in the Quickstart Panel
5. Connect a USB cable between the host PC and the OpenSDA port (J41) on the target board.
6. Open a serial terminal with the following settings:
- 115200 baud rate
- 8 data bits
- No parity
- One stop bit
- No flow control
7. Download the program to the target board by clicking on "Debug" in the Quickstart Panel
8. Click on the "Resume" icon to begin running the demo.

Running the demo
================
The terminal will ask for a classification name. This will create a new directory on the SD card with that name. The name is limited to 5 characters because the FATFS file system supports only 8 characters in a file name, and three of those characters are used to number the images.

For best results, the subject should be centered in the selection rectangle and should nearly (but not completely) fill it. The camera should be stabilized with your finger or by some other means to prevent shaking. Also ensure the camera lens has been focused as described in the instructions for connecting the camera and LCD.

While running the demo, press the 'c' key to enter a new classification name, or press 'q' to quit the program and remove the micro SD card.

Transfer the .bin files created on the SD card to your PC, into the same directory as the Python script found in the "scripts" directory, to convert the images to PNG format. If the captured image is a square (width==height), the script can be called with:
python convert_image.py directory_name
which will convert all the .BIN files in the specified directory to PNG files. If the captured image is not a square, the width and height can be specified at the command line:
python convert_image.py directory_name width height

Terminal Output
==============
Camera SD Card Capture
Extracted Image: Height x Width: 256x256
Please insert a card into board.
Card inserted.
Mounting SD Card
Enter name of new class (must be less than 5 characters): test
Creating directory test......
Press any key to capture image.
Press 'c' to change class or 'q' to quit
Writing file /test/test001.bin......
Write Complete
Press any key to capture image.
Press 'c' to change class or 'q' to quit
Remove SD Card
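For reference, the conversion the script performs is conceptually similar to the sketch below; the shipped convert_image.py is the authoritative version. The sketch assumes the raw file holds interleaved 8-bit RGB pixels in NHWC order, as described above.

```python
# Conceptual sketch of the .bin -> .png conversion; the shipped
# convert_image.py may differ. Assumes interleaved 8-bit RGB (NHWC).
import sys
import numpy as np
from PIL import Image

def convert(bin_path, width, height):
    raw = np.fromfile(bin_path, dtype=np.uint8)
    pixels = raw.reshape(height, width, 3)   # HWC layout, 3 channels (RGB)
    Image.fromarray(pixels, mode="RGB").save(bin_path.replace(".bin", ".png"))

if __name__ == "__main__":
    # e.g. python convert_sketch.py test001.bin 256 256
    convert(sys.argv[1], int(sys.argv[2]), int(sys.argv[3]))
```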
View full article
The two attached demos are for models that were compiled using the Glow AOT tools, and they use a camera connected to the i.MXRT1060-EVK to generate data for inferencing. The default MCUXpresso SDK Glow demos inference on static images; these demos expand the capability of those projects to do inferencing on camera data. Each demo uses the default model that is found in the SDK. A readme.txt file found in the /doc folder of each demo provides details for each demo, and there is a PDF available inside that same /doc folder with example images to point the camera at for inferencing.

Software requirements
====================
- MCUXpresso IDE 11.2.x
- MCUXpresso SDK for RT1060

Hardware requirements
=====================
- Micro USB cable
- Personal computer
- IMXRT1060-EVK board with included camera + RK043FN02H-CT LCD

Board settings
==============
Camera and LCD connected to i.MXRT1060-EVK

Prepare the demo
================
1. Install and open MCUXpresso IDE 11.2
2. If not already done, import the RT1060 MCUXpresso SDK by dragging and dropping the zipped SDK file into the "Installed SDKs" tab.
3. Download one of the attached zip files and import the project using "Import project(s) from file system..." from the Quickstart panel. Use the "Archive" option to select the zip file that contains this project.
4. Build the project by clicking on "Build" in the Quickstart Panel
5. Connect a USB cable between the host PC and the OpenSDA port (J41) on the target board.
6. Open a serial terminal with the following settings:
- 115200 baud rate
- 8 data bits
- No parity
- One stop bit
- No flow control
7. Download the program to the target board by clicking on "Debug" in the Quickstart Panel, and then click on the "Resume" button in the Debug perspective that comes up to run the demo.

Running the demo
================
For the CIFAR10 demo: Use the camera to look at images of airplanes, ships, deer, etc. that can be recognized by the CIFAR10 model. The included PDF can be used for example images.
For the MNIST demo: Use the camera to look at handwritten digits which can be recognized by the LeNet MNIST model. The included PDF can be used for example digits, or you can write your own.

For further details see the readme.txt file found inside each demo in the /doc directory. Also see the Glow Lab for i.MX RT for more details on how to compile neural network models with Glow.
View full article
When comparing NPU with CPU performance on the i.MX 8M Plus, the perception can be that inference time is much longer on the NPU. This is because the ML accelerator spends more time performing overall initialization steps. This initialization phase, known as warmup, is necessary only once at the beginning of the application. After this step, inference executes in a truly accelerated manner, as expected from a dedicated NPU. The purpose of this document is to clarify the impact of the warmup time on overall performance.
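A simple way to observe this effect is to time the first inference separately from the steady-state ones. The sketch below assumes a TensorFlow Lite model delegated to the NPU through the VX delegate; the delegate library path and model name are assumptions to adjust for your BSP.

```python
# Sketch: measure warmup (first inference) vs. steady-state inference time.
# The VX delegate path and model name are assumptions; adjust for your BSP.
import time
import numpy as np
import tflite_runtime.interpreter as tflite

delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")
interpreter = tflite.Interpreter(model_path="model.tflite",
                                 experimental_delegates=[delegate])
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"],
                       np.zeros(inp["shape"], dtype=inp["dtype"]))

start = time.monotonic()
interpreter.invoke()                      # first run includes NPU warmup
print("warmup inference: %.1f ms" % ((time.monotonic() - start) * 1000))

start = time.monotonic()
for _ in range(10):
    interpreter.invoke()                  # subsequent runs are accelerated
print("steady state: %.1f ms/run" % ((time.monotonic() - start) * 100))
```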
View full article
The eIQ Glow neural network compiler software for i.MX RT devices found in the MCUXpresso SDK package can be ported to other microcontroller devices in the RT family, as well as to some LPC and Kinetis devices. Glow supports compiling machine learning models for Cortex-M4, Cortex-M7, and Cortex-M33 cores out of the box.

Because inferencing simply means doing millions of multiply-and-accumulate math calculations (the dominant operation when processing any neural network), most embedded microcontrollers can support inferencing of a neural network model. There is no special hardware or module required to do the inferencing. However, specialized ML hardware accelerators, high core clock speeds, and fast memory can drastically reduce inference time. The minimum hardware requirements are also extremely dependent on the particular model being used. Determining if a particular model can run on a specific device comes down to:
- How long the inference will take to run. The same model will take much longer to run on less powerful devices. The maximum acceptable inference time depends on your particular application and your particular model.
- Whether there is enough non-volatile memory to store the weights, the model itself, and the inference engine.
- Whether there is enough RAM to keep track of the model's intermediate calculations and output.

The minimum memory requirements for a particular model when using Glow can be found with a simple formula, using numbers found in the Glow bundle header file after compiling your model (see the worked example below):

Flash: Base Project + CONSTANT_MEM_SIZE + .o object file
RAM: Base Project + MUTABLE_MEM_SIZE + ACTIVATIONS_MEM_SIZE

More details can be found in the Glow Memory Usage app note.

The attached guide walks through how to port Glow to the LPC55S69 family, based on the Cortex-M33 core. Similar steps can be used to port Glow to other NXP microcontroller devices. This guide is made available as a reference for users interested in exploring Glow on other devices not currently supported in the MCUXpresso SDK.

These other eIQ porting guides might also be of interest: TensorFlow Lite Porting Guide for RT685
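As a worked example of the formula above, with made-up numbers (the real values come from the *_MEM_SIZE constants in your compiled bundle's header file and from the linker map file):

```python
# Worked example of the Glow memory-footprint formula.
# All numbers are hypothetical; read the real ones from your
# bundle header (*_MEM_SIZE values) and the linker map file.
KB = 1024

base_project_flash = 90 * KB      # application + inference glue code
constant_mem_size  = 430 * KB     # CONSTANT_MEM_SIZE: model weights
bundle_object      = 60 * KB      # compiled model .o object file

base_project_ram     = 30 * KB
mutable_mem_size     = 20 * KB    # MUTABLE_MEM_SIZE: inputs/outputs
activations_mem_size = 50 * KB    # ACTIVATIONS_MEM_SIZE: scratch buffers

flash_needed = base_project_flash + constant_mem_size + bundle_object
ram_needed   = base_project_ram + mutable_mem_size + activations_mem_size

print(f"Flash needed: {flash_needed // KB} KB")   # 580 KB
print(f"RAM needed:   {ram_needed // KB} KB")     # 100 KB
```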
View full article
This lab will cover how to take an existing TensorFlow Lite model and run it on NXP MCU devices using the TensorFlow Lite for Microcontrollers inference engine. It uses the Flower model generated as part of the eIQ Toolkit lab as an example, but the same process can be used for other TFLite models. eIQ provides examples that incorporate an LCD and camera alongside the inference engine, so the EVK boards can be used to identify different types of flowers. This lab can also be used without a camera+LCD, but in that scenario the flower images will need to be converted to a C array and loaded at compile time.

Attached to this post you will find:
- Photos to test out the new model
- A lab document on how to do 'transfer learning' on a TensorFlow model and then run that TFLite model on the i.MX RT family using TensorFlow Lite for Microcontrollers. The use of the camera+LCD is optional.
  - If you have the camera+LCD, use: eIQ TensorFlow Lite for Microcontrollers for i.MX RT170 - With Camera.pdf
  - If you do not have a camera or LCD, use: eIQ TensorFlow Lite for Microcontrollers for i.MX RT170 - Without Camera.pdf
  - If using the RT685, use: eIQ TensorFlow Lite for Microcontrollers for i.MX RT685 - Without Camera.pdf

This lab supports the following boards:
FRDM-MCXN947
i.MX RT685-EVK
i.MX RT1050-EVKB
i.MX RT1060-EVK
i.MX RT1064-EVK
i.MX RT1160-EVK
i.MX RT1170-EVK
i.MX RT1180-EVK

Updated November 2024 for MCUXpresso SDK 2.16 and eIQ Toolkit 1.13.1
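Before deploying to the board, the converted model can be sanity-checked on the host PC with the standard TensorFlow Lite Python interpreter. In this sketch, the model and image file names are placeholders for your own files.

```python
# Sketch: verify a .tflite model on the host before deploying to the MCU.
# File names are placeholders for your own model and test photo.
import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="flower_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

_, h, w, _ = inp["shape"]
img = np.array(Image.open("daisy.jpg").resize((int(w), int(h))),
               dtype=np.uint8)
if inp["dtype"] == np.float32:                 # handle float models too
    img = (img / 255.0).astype(np.float32)

interpreter.set_tensor(inp["index"], img[np.newaxis, ...])
interpreter.invoke()
print("class scores:", interpreter.get_tensor(out["index"])[0])
```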
View full article
The OpenCV Deep Neural Network (DNN) module performs inference in deep networks. It is easy to use, and it is a great way to get started with computer vision and inferencing. OpenCV DNN supports many frameworks, such as:
Caffe
TensorFlow
Torch
Darknet
Models in ONNX format

For simple object detection code with OpenCV DNN, please check Object Detection with OpenCV. For running OpenCV DNN inference with camera input, please refer to eIQ Sample Apps - OpenCV Lab 3. For exploring different models ready for OpenCV DNN inference, please refer to Caffe and TensorFlow Pretrained Models. For more information on the OpenCV DNN module, please check Deep Learning in OpenCV.
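For a flavor of the API, a minimal classification sketch looks like the following; the prototxt, caffemodel, and image file names are placeholders for whichever pretrained model you download.

```python
# Minimal OpenCV DNN sketch; prototxt/caffemodel/image names are placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe("model.prototxt", "model.caffemodel")

image = cv2.imread("input.jpg")
# Resize to the network's expected input and apply mean subtraction.
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0, size=(224, 224),
                             mean=(104, 117, 123))
net.setInput(blob)
scores = net.forward()

print("top class id:", int(np.argmax(scores[0])))
```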
View full article
MCUXpresso SDK 2.10 for RT1064 now includes eIQ projects for all eIQ inference engines, so this Knowledge Base article is now deprecated. The instructions are being left up in case any users on older versions of the SDK, from before i.MX RT1064 eIQ was fully supported, need these steps in the future. Users with an i.MX RT1064-EVK should just use SDK 2.10 or later, which has all the eIQ projects natively for i.MX RT1064.

1. Import an i.MX RT1060 project into the SDK. For this example, we'll use the Label Image demo.
2. Right click on the project in the workspace and select Properties.
3. Open the C/C++ Build -> MCU Settings page.
4. Change the "Location" of the BOARD_FLASH parameter to 0x70000000, which is where the flash is located on the RT1064. Also adjust the size to be 0x400000. You will need to type it out.
5. Then change the "Driver" parameter so the debugger knows to use the flash algorithm for the RT1064 board. Click on that field and you will see a "..." icon come up. Click on it.
6. Change the flash driver to MIMXRT1064.cfx.
7. Click on OK to close the dialog box, then click on Apply and Close to close the Properties dialog box.
8. Next, modify the MPU settings for the new flash address: open the board.c file and modify lines 322 and 323 to change the memory address to 0x70000000 and the memory size to a 4MB region.
9. Next, modify the clock settings code to ensure that FlexSPI2 is enabled. The clock setup code in the RT1060 SDK disables FlexSPI2, so we need to comment out that code in order to run the example on the RT1064. Open the clock_config.c file and comment out lines 264, 266, and 268.
10. Finally, open the fsl_flexspi_nor_boot.h file and modify the FLASH_BASE define on line 103 to use FlexSPI2_AMBA_BASE.
11. Compile and debug the project like normal, and this project will now run on the RT1064 board.

Updated July 2021 for SDK 2.10 release.
View full article
Getting Started with eIQ Software for i.MX Applications Processors
Getting Started with eIQ for i.MX RT
View full article
UPDATE: Note that this document describes eIQ Machine Learning Software for the NXP L4.14 BSP release. Beginning with the L4.19 BSP, eIQ Software is pre-integrated in the BSP release, and this document is no longer necessary or being maintained. For more information on eIQ Software in these releases (L4.19, L5.4, etc.), please refer to the "NXP eIQ Machine Learning" chapter in the Linux User Guide for that specific release.

Original post:
eIQ Machine Learning Software for the i.MX Linux 4.14.y kernel series is available now. The NXP eIQ™ Machine Learning Software Development Environment enables the use of ML algorithms on NXP MCUs, i.MX RT crossover processors, and i.MX family SoCs. eIQ software includes inference engines, neural network compilers, and optimized libraries, and leverages open source technologies. eIQ is fully integrated into our MCUXpresso SDK and Yocto development environments, allowing you to develop complete system-level applications with ease.

Source download, build and installation
Please refer to the document NXP eIQ(TM) Machine Learning Enablement (UM11226.pdf) for detailed instructions on how to download, build, and install eIQ software on your platform.

Sample applications
To help you get started right away, we've posted numerous how-tos and sample applications right here in the community. Please refer to eIQ Sample Apps - Overview.

Supported platforms
eIQ Machine Learning Software for i.MX Linux 4.14.y supports the L4.14.78-1.0.0 and L4.14.98-2.0.0 GA releases running on i.MX 8 Series Applications Processors.

For more information on artificial intelligence, machine learning, and eIQ Software, please visit AI & Machine Learning | NXP.
View full article
The attached labs provide a step-by-step guide on how to use the eIQ for Glow neural network compiler with a handwritten digit recognition model example. This compiler tool turns a model into a machine-executable binary for a targeted device. Both the model and the inference engine are compiled into the generated binary, which can decrease both inference time and memory usage. That binary can then be integrated into an MCUXpresso SDK software project.

The eIQ Glow Lab for RT1170.pdf can be used with the i.MX RT1170, RT1160, RT1064, RT1060, and RT1050.
The eIQ Glow Lab for RT685.pdf can be used with the RT685.

A step-by-step video is also available. You will need to download the Glow compiler tools package as well as the latest MCUXpresso SDK for the board you're using. More details on Glow can be found in the eIQ Glow Ahead of Time User Guide and on the Glow website.

Updated August 2023
View full article
The eIQ Sample Apps repository hosts machine learning application demos based on the eIQ™ ML Software Development Environment. The following examples were tested and used for training purposes. Each application contains a readme file to help the user get started with the eIQ demos.

The eIQ sample apps target the latest eIQ release and are split into lab sections. Before starting with the examples, read the introduction: eIQ Sample Apps - Introduction

Object Recognition using Arm NN
This section contains samples for running inference and predicting different objects. It also includes an extension that can recognize any given camera input/object.
eIQ Sample Apps - Object Recognition using Arm NN

Handwritten Digit Recognition
This section focuses on a comparison of inference time between different models for handwritten digit recognition.
eIQ Sample Apps - Handwritten Digit Recognition

Object Recognition using OpenCV DNN
This section uses the OpenCV DNN module for running inference and detecting objects from an image. It also includes an extension that can detect any given camera input/object.
eIQ Sample Apps - Object Recognition using OpenCV DNN

Face Recognition using TensorFlow Lite
This section uses a model for running inference and recognizing faces.
eIQ Sample Apps - Face Recognition using TF Lite

TensorFlow Lite Quantization
This tutorial demonstrates how to convert a TensorFlow model to TensorFlow Lite and then apply quantization (see the sketch after this list).
eIQ Sample Apps - TFLite Quantization

TensorFlow Transfer Learning
This lab takes a TensorFlow image classification model and re-trains it to categorize images of flowers.
eIQ Transfer Learning Lab with i.MX 8

To deploy the demos from the eIQ Sample Apps repository to an i.MX8 board, please check: Deploying the eIQ Sample Apps to an i.MX8 board

These lab sections will be updated frequently to keep all code and tutorials up to date. Check also: https://community.nxp.com/community/eiq/blog/2020/06/30/pyeiq-a-python-framework-for-eiq-on-imx-processors
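As a taste of what the quantization tutorial covers, post-training quantization with the standard TensorFlow Lite converter looks roughly like the sketch below; the saved-model path and calibration data are placeholders, and the tutorial itself is the reference.

```python
# Sketch of post-training quantization with the TFLite converter.
# The saved-model path and calibration data are placeholders.
import numpy as np
import tensorflow as tf

def representative_data():
    # Yield a handful of calibration samples shaped like the model input.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
tflite_model = converter.convert()

with open("model_quant.tflite", "wb") as f:
    f.write(tflite_model)
```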
View full article
This Lab 4 explains how to get started with the TensorFlow Lite application demo on an i.MX8 board using the inference engines for eIQ Software.

eIQ Sample Apps - Overview
eIQ Sample Apps - Introduction

Get the source code available on Code Aurora: TensorFlow Lite MobileFaceNets MIPI/USB Camera

Face Detection Using OpenCV
This application demo uses Haar feature-based cascade classifiers for real-time face detection. The pre-trained Haar feature-based cascade classifier for faces, stored as an XML file, is already contained in OpenCV. The XML file for faces is stored in the opencv/data/haarcascades/ folder as well as on Code Aurora. Read Face Detection using Haar Cascades for more details.

TensorFlow Lite implementation for MobileFaceNets
The MobileFaceNets model is re-trained on a host PC with a smaller batch size and input size to get higher performance. The trained model is loaded as a source file in this demo.

Setting Up the Board
Step 1 - Download the demo from eIQ Sample Apps and put it in the /opt/tflite folder. Then enter the src folder:
root@imx8mmevk:~# cd /opt/tflite/examples-tflite/face_recognition/src/
root@imx8mmevk:/opt/tflite/examples-tflite/face_recognition/src#
This folder should include these files:
.
├── face_detect_helpers.cpp
├── face_detect_helpers.h
├── face_detect_helpers_impl.h
├── face_recognition.cpp
├── face_recognition.h
├── haarcascade_frontalface_alt.xml
├── Makefile
├── mfn.h
├── profiling.h
└── ThreadPool.h
Step 2 - Compile the source code on the board:
root@imx8mmevk:/opt/tflite/examples-tflite/face_recognition/src# make
Step 3 - Run the demo:
root@imx8mmevk:/opt/tflite/examples-tflite/face_recognition/src# ./FaceRecognition -c 0 -h 0.85
NOTE: -c is used to specify the camera index. '0' means the MIPI/USB camera is mounted on /dev/video0. -h is a threshold for the prediction score.
Step 4 - Add a new person to the face data set.
When the demo is running, it will detect the single biggest face in real time. Once the face is detected, you can use the keyboard on the right of the GUI to input the new person's name, then click 'Add new person' to add the face to the data set.
In brief:
1. Detect face.
2. Input new person's name.
3. Click 'Add new person'.
NOTE: Once new faces are added, a folder named 'data' is created in the current directory. If you want to remove a new face from the data set, just delete it in 'data'.
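The cascade-classifier step of the demo corresponds, in Python terms, to something like the sketch below; the actual demo is C++, so this is purely illustrative, using the same XML file shipped with OpenCV.

```python
# Illustrative Python equivalent of the demo's Haar-cascade face detection
# (the actual demo is C++). Uses the same XML shipped with OpenCV.
import cv2

face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_alt.xml")

cap = cv2.VideoCapture(0)                      # camera index 0, as with -c 0
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    if len(faces):
        # Keep the biggest detection, as the demo tracks one face at a time.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        print(f"biggest face at ({x}, {y}), size {w}x{h}")
cap.release()
```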
View full article
The eIQ demos for i.MX RT use arrays for input data for inferencing. The attached guide and scripts describe how to create custom input data (both images and audio) for use with the eIQ examples.
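For image data, the conversion those scripts perform is conceptually along these lines; this is a rough sketch, the attached scripts are the reference, and the file and array names here are placeholders.

```python
# Rough sketch of turning an image into a C array for the eIQ examples.
# The attached scripts are the reference; names here are placeholders.
import numpy as np
from PIL import Image

img = np.array(Image.open("sample.jpg").resize((128, 128)), dtype=np.uint8)
flat = img.flatten()  # NHWC order: row by row, RGB interleaved

with open("input_data.h", "w") as f:
    f.write("#include <stdint.h>\n\n")
    f.write(f"const uint32_t input_data_len = {flat.size};\n")
    f.write("const uint8_t input_data[] = {\n")
    for i in range(0, flat.size, 12):                  # 12 values per line
        row = ", ".join(str(v) for v in flat[i:i + 12])
        f.write(f"    {row},\n")
    f.write("};\n")
```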
View full article