eIQ Machine Learning Software Knowledge Base

This document covers some of the most commonly asked questions about eIQ and embedded machine learning. Anything requiring more in-depth discussion or explanation will be put in a separate thread, and all new questions should go into their own thread as well.

What is eIQ?
The NXP® eIQ™ machine learning (ML) software development environment enables the use of ML algorithms on NXP EdgeVerse™ microcontrollers and microprocessors, including MCX N microcontrollers, i.MX RT crossover MCUs, and i.MX family application processors. eIQ ML software includes an ML workflow tool called eIQ Toolkit, along with inference engines, neural network compilers and optimized libraries. This software leverages open-source and proprietary technologies and is fully integrated into our MCUXpresso SDK and Yocto development environments, allowing you to develop complete system-level applications with ease. eIQ also enables models to use the new eIQ Neutron NPU found on the MCX N microcontroller devices and upcoming NPU-enabled embedded devices.

How much does eIQ cost?
Free! NXP makes eIQ freely available as basic enablement to jumpstart ML application development. It is also royalty free.

What devices are supported by eIQ?
eIQ is available for the following i.MX application processors: i.MX 8M Plus, i.MX 8M, i.MX 8M Nano, i.MX 8M Mini, i.MX 8, i.MX 8X.
eIQ is available for the following MCX MCUs: MCX N.
eIQ is available for the following i.MX RT crossover MCUs: i.MX RT1170, i.MX RT1160, i.MX RT1064, i.MX RT1060, i.MX RT1050, i.MX RT685, i.MX RT595.

What inference engines are available in eIQ?
i.MX application processors and i.MX RT MCUs support different inference engines. The best inference engine can depend on the particular model being used, so eIQ offers several inference engine options to find the best fit for your particular application.
Inference engines for i.MX: TensorFlow Lite (supported on CPU and GPU/NPU), Arm NN (supported on CPU and GPU/NPU), OpenCV (supported on CPU only), ONNX Runtime (currently supported on CPU only).
Inference engines for MCX and i.MX RT: TensorFlow Lite for Microcontrollers.

Can eIQ run on other MCU devices?
There is no special hardware module required to run eIQ inference engines, and it is possible to port the inference engines to other NXP devices.

What is eIQ Toolkit?
eIQ Toolkit enables machine learning development with an intuitive GUI (named eIQ Portal) and development workflow tools, along with command-line host tool options, as part of the eIQ ML software development environment. The eIQ Portal is an intuitive graphical user interface (GUI) that simplifies ML development. Developers can create, optimize, debug and export ML models, as well as import datasets and models, and rapidly train and deploy neural network models and ML workloads. eIQ Toolkit also includes the eIQ Neutron Converter Tool, which converts quantized TensorFlow Lite models so they can make use of the eIQ Neutron NPU found on newer NXP devices like the MCX N family.

Is eIQ Toolkit required to use eIQ inference engines?
No, eIQ Toolkit is optional enablement from NXP that makes it easier to generate vision-based models that can then be used with the eIQ inference engines. However, if you already have your model development flow in place, or want to use pre-created models from a model zoo, you can use those models with the eIQ inference engines as well.

What is the eIQ Neutron NPU?
The eIQ Neutron NPU is a neural processing unit developed by NXP that has been integrated into the MCX N and i.MX 95 devices, with many more to come. It was designed to accelerate neural network computations and significantly reduce model inference time. The scalability of this module allows NXP to integrate the NPU into a wide range of devices while keeping the same eIQ software enablement. For more details on the NPU for MCX N, see this Community post.

How can I start using the eIQ Neutron NPU?
There are hands-on NPU lab guides available that walk through the steps for converting and running a model with the eIQ Neutron NPU.

How can I get eIQ?
For MCU devices: eIQ inference engines are included as part of the MCUXpresso SDK for supported devices. Make sure to select the "eIQ" middleware option. There is an additional optional software package:
eIQ Toolkit - for model creation and conversion. Includes the GUI model creation tool eIQ Portal and the eIQ Neutron Converter Tool for eIQ Neutron NPU enabled devices like MCX N.
For i.MX devices: eIQ is distributed as part of the Yocto Linux BSP. Starting with the 4.19 release line there is a dedicated Yocto image that includes all the machine learning features: 'imx-image-full'. For pre-built binaries refer to the i.MX Linux Releases and Pre-releases pages. There is also an optional software package:
eIQ Toolkit - for model creation and conversion. Includes the GUI model creation tool eIQ Portal.

What documentation is available for eIQ?
For i.MX RT and MCX N devices: There are user guides inside the \middleware\eiq\doc folder after downloading the MCUXpresso SDK from the MCUXpresso SDK builder. Documentation for eIQ Toolkit can be found inside the eIQ Toolkit documentation folder after installation, typically at C:\NXP\eIQ_Toolkit_v1.10.0\docs.
For i.MX devices: The eIQ documentation for i.MX is integrated in the Yocto BSP documentation. Refer to the i.MX Linux Releases and Pre-releases pages.
i.MX Reference Manual: presents an overview of the NXP eIQ machine learning technology.
i.MX Linux User's Guide: presents detailed instructions on how to run and develop applications using the ML frameworks available in eIQ (currently Arm NN, TFLite, OpenCV and ONNX).
i.MX Yocto Project User's Guide: presents build instructions to include eIQ ML support (check the sections referring to 'imx-image-full', which includes all eIQ features).
It is recommended to also check the i.MX Linux Release Notes, which include eIQ details.

For i.MX devices, what type of machine learning applications can I create?
Following the BYOM (Bring Your Own Model) principle described below, you can create a wide variety of applications to run on i.MX. To help kickstart your efforts, refer to PyeIQ, a collection of demos and applications that demonstrate the machine learning capabilities available on i.MX. They are very easy to use (install with a single command, retrieve input data automatically), the implementation is easy to understand (using the Python API for TFLite, Arm NN and OpenCV), and they demonstrate several types of ML applications (e.g., object detection, classification, facial expression detection) running on the different compute units available on i.MX to execute the inference (Cortex-A, GPU, NPU).

Can I use the Python API provided by PyeIQ to develop my own application on i.MX devices?
For developing a custom application in Python, it is recommended to directly use the Python API for Arm NN, TFLite, and OpenCV. Refer to the i.MX Linux User's Guide for more details.
You can use the PyeIQ scripts as a starting point and include code snippets in a custom application (please make sure to add the right copyright terms), but you shouldn't rely on PyeIQ to entirely develop a product. The PyeIQ Python API is meant to help demo developers with the creation of new examples.

What eIQ example applications are available for i.MX RT?
eIQ example applications can be found in the <SDK DIR>\boards\<board_name>\eiq_examples directory.

What are the Glow and DeepViewRT inference engines in the MCUXpresso SDK?
These are inference engines that were supported in previous versions of eIQ but are now deprecated, as new development has focused on TensorFlow Lite for Microcontrollers. These projects are still available in MCUXpresso SDK 2.15 for legacy users, but it is highly recommended that any new projects use TensorFlow Lite for Microcontrollers.

How can I learn more about using TensorFlow Lite with eIQ?
There is a hands-on TensorFlow Lite for Microcontrollers lab available. There is also an i.MX TensorFlow Lite lab that provides a step-by-step guide on how to get started with eIQ for TensorFlow Lite on i.MX devices.

How can I learn more about using eIQ Toolkit to generate a model?
There are hands-on eIQ Toolkit labs available that provide a step-by-step guide to get started with generating a vision-based model with eIQ Toolkit.

What application notes are available to learn more about eIQ?
Anomaly Detection App Note: https://www.nxp.com/docs/en/application-note/AN12766.pdf
Handwritten Digit Recognition: https://www.nxp.com/docs/en/application-note/AN12603.pdf
Datasets and Transfer Learning App Note: https://www.nxp.com/docs/en/application-note/AN12892.pdf
Glow Memory Analysis App Note: https://www.nxp.com/docs/en/application-note/AN13001.pdf
Security for Machine Learning Package: https://www.nxp.com/docs/en/application-note/AN12867.pdf
i.MX 8M Plus NPU Warmup Time App Note: https://www.nxp.com/docs/en/application-note/AN12964.pdf

What is the advantage of using eIQ instead of using the open-source software directly from GitHub?
eIQ-supported inference engines work out of the box and are already tested and optimized, allowing for performance enhancements compared to the original code. eIQ also includes the software to capture camera or voice data from external peripherals. eIQ allows you to get up and running within minutes instead of weeks. As a comparison, rolling your own is like grinding your own flour to make a pizza from scratch, instead of just ordering a great pizza from your favorite pizza place.

Does eIQ include ML models? Do I use it to train a model?
eIQ is a collection of software that allows you to Bring Your Own Model (BYOM) and run it on NXP embedded devices. eIQ provides the ability to run your own specialized model on NXP's embedded devices. For those new to AI/ML, we also offer eIQ Toolkit, which can be used to generate new vision-based AI models using images provided to the tool. There are several inference engine options, like TensorFlow Lite and Glow, that can be used to run your model. MCUXpresso SDK and the i.MX Linux releases come with several examples that use pre-created models; these can be used to get a sense of what is possible on our platforms, and it is very easy to substitute your own model into those examples.

I'm new to AI/ML and don't know how to create a model, what can I do?
A wide variety of resources are available for creating models, from labs and tutorials, to automated model generation tools like eIQ Toolkit, Google Cloud AutoML, Microsoft Azure Machine Learning, or Amazon ML Services, to 3rd-party partners like SensiML and Au-Zone that can help you define, enhance, and create a model for your specific application. Alternatively, if you have no interest in generating models yourself, NXP also offers several pre-built voice and facial recognition solutions that include the appropriate models already created for you. There are Alexa Voice Services, local voice control, and face and emotion recognition solutions available. Note that these solutions are different from eIQ: they include the model as well as the appropriate hardware, so those devices are sold as unique part numbers and have a cost-optimized BOM to use directly in your final product. eIQ is for those who want to use their own model or generate a model themselves using eIQ Toolkit. The three solutions mentioned above are for those who want a full solution (including the model) already created for them for those specific applications.

I'm interested in anomaly detection or time-series models on microcontrollers, where can I get started?
The ML-based System State Monitor Application Software Pack provides an example of gathering time-series data, in this case vibrations picked up by an accelerometer, and includes Python scripts that use the collected data to generate a small model that can be deployed on many different microcontrollers (including i.MX RT1170, LPC55S69, K66F) for anomaly detection. The same concepts and techniques can be used for any sort of time-series data, such as magnetometer, pressure, temperature, or flow-speed readings, and much more. This can simplify the work of coming up with a custom algorithm to detect the different states of whatever system you're interested in, as you can let the power of machine learning figure all that out for you.

Troubleshooting:

Why do I get an error when running TensorFlow Lite Micro that it "Didn't find op for builtin opcode"?
The full error will look something like this:
Didn't find op for builtin opcode 'PAD' version '1'
Failed to get registration from op code ADD
Failed starting model allocation. AllocateTensors() failed
Failed initializing model
The reason is that in MCUXpresso SDK, the TFLM examples have been optimized to only support the operators necessary for the default models. If you are using your own model, it may use additional types of operators. The fix, described in the TFLM lab guide, is to use the All Ops Resolver. Add the following header file:
#include "tensorflow/lite/micro/all_ops_resolver.h"
and then comment out the micro_op_resolver and use the all-ops resolver instead:
//tflite::MicroOpResolver &micro_op_resolver =
//    MODEL_GetOpsResolver(s_errorReporter);
tflite::AllOpsResolver micro_op_resolver;

Why do I get the error "Incompatible Neutron NPU microcode and driver versions!" when using the Neutron NPU?
The version of the eIQ Neutron Converter Tool needs to be compatible with the NPU libraries used by your project. See more details in this post on using custom models with the eIQ Neutron NPU.

How do I use my GPU when training with eIQ Toolkit?
eIQ Toolkit 1.10 only supports GPU training on Linux, because the latest TensorFlow versions no longer support GPU training on Windows.

Why do I get a blank or black LCD screen when I use the eIQ demos that have camera+LCD support on RT1170 or RT1160?
There are different versions of the LCD panel, so you need to make sure the software is configured correctly for the LCD you have. See this post for more details on what to change.

Why is the inference speed slow when using TensorFlow Lite for Microcontrollers?
Make sure you are using the "Release" project configuration, which enables high compiler optimizations. This significantly reduces the inference time for TFLM projects. Glow and DeepViewRT projects are not affected by this setting because they use pre-compiled binaries and pre-compiled libraries, respectively.

General AI/ML:

What is Artificial Intelligence, Machine Learning, and Deep Learning?
Artificial intelligence is the idea of using machines to do "smart" things like a human. Machine learning is one way to implement artificial intelligence, and is the idea that if you give a computer a lot of data, it can learn how to do smart things on its own. Deep learning is a particular way of implementing machine learning by using something called a neural network. It is one of the more promising subareas of artificial intelligence today. This video series on Neural Network basics provides an excellent introduction to what a neural network is and the basics of how one works.

What are some uses for machine learning on embedded systems?
Image classification - identify what a camera is looking at: coffee pods, empty vs. full trucks, factory defects on a manufacturing line, produce on a supermarket scale.
Facial recognition - identifying faces for personalization without uploading that private information to the cloud: home personalization, appliances, toys, automotive.
Audio analysis - wake-word detection, voice commands, alarm analytics (breaking glass/crying baby).
Anomaly detection - identify factory issues before they become catastrophic, motor analysis, personalized health analysis.

What is training and inference?
Machine learning consists of two phases: training and inference.
Training is the process of creating and teaching the model. This occurs on a PC or in the cloud and requires a lot of data. eIQ is not used during the training process.
Inference is using a completed and trained model to do predictions on new data. eIQ is focused on enhancing the inferencing of models on embedded devices.

What are the benefits of "on the edge" inference?
When inference occurs on the embedded device instead of the cloud, it is called "on the edge". The biggest advantage of on-the-edge inferencing is that the data being analyzed never goes anywhere except the local embedded system, providing increased security and privacy. It also saves BOM costs because there is no need for WiFi or BLE to get data up to the cloud, and there is no charge for the cloud compute costs to do the inferencing. It also allows for faster inferencing, since there is no latency from waiting for data to be uploaded and the answer received from the cloud.

What processor do I need to do inferencing of models?
Inferencing simply means doing millions of multiply-and-accumulate math calculations (the dominant operation when processing any neural network), which any MCU or MPU is capable of. There is no special hardware or module required to do inferencing. However, specialized ML hardware accelerators, high core clock speeds, and fast memory can drastically reduce inference time.
Determining if a particular model can run on a specific device is based on:
How long the inference will take to run. The same model will take much longer to run on less powerful devices.
The maximum acceptable inference time is dependent on your particular application.
Is there enough non-volatile memory to store the weights, the model itself, and the inference engine?
Is there enough RAM to keep track of the intermediate calculations and the output?
As an example, the performance required for image recognition is very dependent on the model being used to do the image recognition. This varies depending on how many classes there are, the size of the images to be analyzed, whether multiple objects or just one will be identified, and how that particular model is structured. In general, image classification can be done on i.MX RT devices, while multiple-object detection requires i.MX devices, as those models are significantly more complex. eIQ provides several examples of image recognition for i.MX RT and i.MX devices, and your own custom models can easily be evaluated using those example projects.

How is accuracy affected when running on slower/simpler MCUs?
The same model running on different processors will give the exact same result if given the same input. It will just take longer to run the inference on a slower processor. In order to get an acceptable inference time on a simpler MCU, it may be necessary to simplify the model, which will affect accuracy. How much the accuracy is affected is extremely model dependent and also very dependent on what techniques are used to simplify the model.

What are some ways models can be simplified?
Quantization - transforming the model from its original 32-bit floating-point weights to 8-bit fixed-point weights. This requires 1/4 the space for the weights, and fixed-point math is faster than floating-point math. It often does not have much impact on accuracy, but that is model dependent (a minimal conversion sketch is shown at the end of this FAQ).
Fewer output classifications can allow for a simpler yet still accurate model.
Decreasing the input data size (e.g., a 128x128 image input instead of 256x256) can reduce complexity with a trade-off in accuracy due to the reduced resolution. How large that trade-off is depends on the model and requires experimentation to find.
Software could rotate the image to a specific position using classic image manipulation techniques, which means the neural network for identification can be much smaller while maintaining good accuracy, compared to the case where the neural network has to analyze an image that could be in any possible orientation.

What is the difference between image classification, object detection, and instance segmentation?
Image classification identifies an entire image and gives a single answer for what it thinks it is seeing. Object detection is detecting one or more objects in an image. Instance segmentation is finding the exact outline of the objects in an image. Larger and more complex models are needed to do object detection or instance segmentation compared to image classification.

What is the difference between facial detection and facial recognition?
Facial detection finds any human face. Facial recognition identifies a particular human face. A model that does facial recognition will be more complex than a model that only does facial detection.

How come I don't see 100% accuracy on the data I trained my model on?
Models need to generalize the training data in order to avoid overfitting. This means a model will not always give 100% confidence, even on the data it was trained on.

What are some resources to learn more about machine learning concepts?
Video series on Neural Network basics
Arm Embedded Machine Learning for Dummies
Google TensorFlow Lab
Google Machine Learning Crash Course
Google Image Classification Practica
YouTube series on the basics of ML and TensorFlow (ML Zero to Hero series)
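As a concrete illustration of the post-training quantization technique mentioned in the model-simplification question above, here is a minimal conversion sketch. It assumes a recent TensorFlow 2.x installation on the host PC; the model file name, input shape, and representative data are placeholders to replace with your own model and calibration samples.

import numpy as np
import tensorflow as tf

# Placeholder path - substitute your own trained Keras model.
keras_model = tf.keras.models.load_model("my_model.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# A representative dataset lets the converter calibrate the 8-bit ranges.
def representative_data_gen():
    for _ in range(100):
        # Replace with real samples shaped like the model input.
        yield [np.random.rand(1, 128, 128, 3).astype(np.float32)]

converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("my_model_int8.tflite", "wb") as f:
    f.write(tflite_model)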
View full article
The eIQ Sample Apps repository hosts machine learning application demos based on the eIQ™ ML Software Development Environment. The following examples were tested and used for training purposes. Each application contains a read-me file that allows the user to get started with the eIQ demos. The eIQ sample apps target the latest eIQ release and are split into lab sections. Before starting with the examples, read the introduction: eIQ Sample Apps - Introduction
Object Recognition using Arm NN: This section contains samples for running inference and predicting different objects. It also includes an extension that can recognize any given camera input/object. eIQ Sample Apps - Object Recognition using Arm NN
Handwritten Digit Recognition: This section focuses on a comparison of inference time between different models for handwritten digit recognition. eIQ Sample Apps - Handwritten Digit Recognition
Object Recognition using OpenCV DNN: This section uses the OpenCV DNN module for running inference and detecting objects in an image. It also includes an extension that can detect any given camera input/object. eIQ Sample Apps - Object Recognition using OpenCV DNN
Face Recognition using TensorFlow Lite: This section uses a model for running inference and recognizing faces. eIQ Sample Apps - Face Recognition using TF Lite
TensorFlow Lite Quantization: This tutorial demonstrates how to convert a TensorFlow model to TensorFlow Lite and then apply quantization. eIQ Sample Apps - TFLite Quantization
TensorFlow Transfer Learning: This lab takes a TensorFlow image classification model and re-trains it to categorize images of flowers. eIQ Transfer Learning Lab with i.MX 8
To deploy the demos from the eIQ Sample Apps repository to an i.MX 8 board, please check: Deploying the eIQ Sample Apps to an i.MX8 board
These lab sections will be updated frequently in order to keep all code and tutorials up to date. Check also: https://community.nxp.com/community/eiq/blog/2020/06/30/pyeiq-a-python-framework-for-eiq-on-imx-processors
View full article
This Docker setup is designed to enable portability and to leverage i.MX 8 series GPUs. It provides flexibility on top of the Yocto image and allows the use of libraries available in Ubuntu for machine learning, which is otherwise difficult. Using Docker, a user can develop and prototype GPU applications and then ship and run them anywhere using the container. The attached application note describes how to enable this Docker setup. The Docker container is a wrapper that provides an application with the necessary dependencies to execute code on the GPU. This has the potential to greatly simplify many customer developments for Linux: the Yocto BSP is kept intact, while customers can develop applications using widely available neural network frameworks and libraries, at the same time leveraging the GPU without compromising on performance. It is not quite as straightforward as a full Debian environment, but it is still easy to adopt.
View full article
This Lab 4 explains how to get started with the TensorFlow Lite application demo on an i.MX 8 board using the inference engines from eIQ Software.
eIQ Sample Apps - Overview
eIQ Sample Apps - Introduction
Get the source code available on Code Aurora: TensorFlow Lite MobileFaceNets MIPI/USB Camera

Face Detection Using OpenCV
This application demo uses Haar feature-based cascade classifiers for real-time face detection. The pre-trained Haar cascade classifier for faces is provided as an XML file that is already included in OpenCV. The XML file for faces is stored in the opencv/data/haarcascades/ folder as well as on Code Aurora. Read Face Detection using Haar Cascades for more details.

TensorFlow Lite implementation for MobileFaceNets
The MobileFaceNets model is re-trained on a host PC with a smaller batch size and input size to get higher performance. The trained model is loaded as a source file in this demo.

Setting Up the Board
Step 1 - Download the demo from eIQ Sample Apps and put it in the /opt/tflite folder. Then enter the src folder:
root@imx8mmevk:~# cd /opt/tflite/examples-tflite/face_recognition/src/
root@imx8mmevk:/opt/tflite/examples-tflite/face_recognition/src#
This folder should include these files:
.
├── face_detect_helpers.cpp
├── face_detect_helpers.h
├── face_detect_helpers_impl.h
├── face_recognition.cpp
├── face_recognition.h
├── haarcascade_frontalface_alt.xml
├── Makefile
├── mfn.h
├── profiling.h
└── ThreadPool.h
Step 2 - Compile the source code on the board:
root@imx8mmevk:/opt/tflite/examples-tflite/face_recognition/src# make
Step 3 - Run the demo:
root@imx8mmevk:/opt/tflite/examples-tflite/face_recognition/src# ./FaceRecognition -c 0 -h 0.85
NOTE: -c is used to specify the camera index. '0' means the MIPI/USB camera is mounted on /dev/video0. -h is a threshold for the prediction score.
Step 4 - Add a new person to the face data set.
When the demo is running, it detects the biggest face in real time. Once a face is detected, you can use the keyboard on the right of the GUI to input the new person's name, then click 'Add new person' to add the face to the data set. In brief:
1. Detect the face.
2. Input the new person's name.
3. Click 'Add new person'.
NOTE: Once new faces are added, a folder named 'data' is created in the current directory. If you want to remove a new face from the data set, just delete it in 'data'.
View full article
All the required steps for getting a full eIQ image can be found in the following documentation: NXP eIQ(TM) Machine Learning Enablement. These labs can be applied to all i.MX 8 boards; this particular tutorial describes the i.MX 8M Mini EVK board.

Hardware Requirements
i.MX 8MM EVK board
USB cable (micro-B to standard-A)
USB Type-C to A adapter
USB Type-C 45W power delivery supply
IMX-MIPI-HDMI daughter card
MINISASTOCSI camera daughter card
2x Mini-SAS cables
AI/ML BSP flashed onto the SD card
Ethernet cable
USB mouse
HDMI cable
Monitor

Software Requirements
For GNU/Linux: minicom or screen.
For Windows: PuTTY.

Preparing the Board
Connect the IMX-MIPI-HDMI daughter card to a Mini-SAS cable, plug it into the connector labeled DSI MIPI (J801), and then connect the HDMI monitor cable to it.
Warning: Do not hot-plug the Mini-SAS cables and cards or the boards will be damaged! Remove power completely before connecting or disconnecting the Mini-SAS ends.
Connect the MINISASTOCSI camera daughter card to a Mini-SAS cable and plug it into the connector labeled CSI MIPI (J802).
Connect the micro-B end of the supplied USB cable into Debug UART port J901. Connect the other end of the cable to the host computer:
For GNU/Linux: Configure minicom or screen with the /dev/ttyUSB port number and set the baud rate to 115200. The port number can be found by checking the /dev directory.
For Windows: Configure PuTTY with the board COM port number and set the baud rate to 115200. The port number can be found in the Windows Device Manager.
NOTE: This board enumerates two device port numbers; use the highest number, which is used to communicate with the Cortex-A.
Connect the MicroSD card to the MicroSD card connector J701 on the back side of the board. In order to boot the board from the MicroSD card, change the boot switches SW1101 and SW1102 according to the table below:
Boot device: MicroSD / uSDHC2 | SW1101: 0110110010 | SW1102: 0001101000
NOTE: The boot device settings above apply to the revision C i.MX 8MM EVK board. Other revisions of the board may have a different number of boot mode switches and slightly different settings. Please follow the SW1101 and SW1102 values printed on your specific board for booting from the MicroSD card.
Connect the power supply cable to the power connector J302 and power on the board by flipping the switch SW101.
NOTE: For more details on the board peripherals, please consult the i.MX 8MM EVK Getting Started page.
After these steps, everything is set to start with the eIQ Sample Apps. Go to eIQ Sample Apps - Object Recognition using Arm NN.
View full article
This Lab 2 explains how to get started with the MNIST handwritten digit application demo on an i.MX 8 board using the eIQ™ ML Software Development Environment.
eIQ Sample Apps - Overview
eIQ Sample Apps - Introduction
Get the source code available on Code Aurora: Handwritten Digit Recognition

MNIST Handwritten Digits
MNIST is a large database of handwritten digits commonly used for training various image processing systems. This section provides a comparison of Caffe and TensorFlow models for handwritten digit recognition. The data set used for these applications is from Yann LeCun. This is an MNIST data set sample:

Setting Up the Board
Step 1 - Create the following folder and grant it permissions as follows:
root@imx8mmevk:~# mkdir -p /opt/mnist
root@imx8mmevk:~# chmod 777 /opt/mnist
Step 2 - To easily deploy the demos to the board, get the board's IP address using the ifconfig command, then set the IMX_INET_ADDR environment variable as follows:
$ export IMX_INET_ADDR=<imx_ip>

Setting Up the Host
Step 1 - Obtain the eIQ toolchain (3.2.9 Generating the Toolchain) from NXP eIQ(TM) Machine Learning Enablement.
Step 2 - Install the toolchain:
$ chmod +x <toolchain>.sh
$ ./<toolchain>.sh
This provides all the needed setup for building ARM64 applications on an x86 machine.
Step 3 - Download the application from eIQ Sample Apps.
Step 4 - Get the models and dataset. The following command lines create the needed folder structure for the demos and retrieve the MNIST dataset and the Caffe and TensorFlow models:
$ mkdir -p bin data model
$ wget -qN https://github.com/ARM-software/ML-examples/raw/master/armnn-mnist/data/t10k-images-idx3-ubyte -P data/
$ wget -qN https://github.com/ARM-software/ML-examples/raw/master/armnn-mnist/data/t10k-labels-idx1-ubyte -P data/
$ wget -qN https://github.com/ARM-software/ML-examples/raw/master/armnn-mnist/model/lenet_iter_9000.caffemodel -P model/
$ wget -qN https://github.com/ARM-software/ML-examples/raw/master/armnn-mnist/model/simple_mnist_tf.pb -P model/
$ wget -qN https://github.com/ARM-software/ML-examples/raw/master/armnn-mnist/model/simple_mnist_tf.prototxt -P model/
$ wget -qN https://github.com/ARM-software/Tool-Solutions/raw/master/ml-tool-examples/mnist-draw/model/optimized_mnist_tf.pb -P model/
Step 5 - Compile the source code using the eIQ toolchain:
$ ${CXX} -Wall -Wextra -O3 -std=c++14 caffe_inference.cpp -o caffe_inference -larmnn -larmnnCaffeParser
$ ${CXX} -Wall -Wextra -O3 -std=c++14 tensorflow_inference.cpp -o tensorflow_inference -larmnn -larmnnTfParser
Step 6 - Deploy the built files to the board:
$ scp -r caffe_inference tensorflow_inference data/ model/ root@${IMX_INET_ADDR}:/opt/mnist

Inference Comparison Applications
Step 1 - In user space, enter the mnist folder which holds the demo files:
root@imx8mmevk:/opt/mnist#
This is what the mnist folder structure should look like:
│...
├── caffe_inference
├── tensorflow_inference
├── data
│├── t10k-images-idx3-ubyte
│└── t10k-labels-idx1-ubyte
├── model
│├── lenet_iter_9000.caffemodel
│├── optimized_mnist_tf.pb
│├── simple_mnist_tf.pb
│└── simple_mnist_tf.prototxt
Step 2 - Run the applications:
NOTE: For running these applications, please provide the desired number of predictions, which can vary from 0 to 9999 since the dataset has 10K images.
1 - Handwritten Digit Recognition using Caffe
root@imx8mmevk:/opt/mnist# ./caffe_inference 10
[0] Caffe >> Actual: 7 Predict: 7 Time: 0.0336484s
[1] Caffe >> Actual: 2 Predict: 2 Time: 0.028399s
[2] Caffe >> Actual: 1 Predict: 1 Time: 0.0283713s
[3] Caffe >> Actual: 0 Predict: 0 Time: 0.0284133s
[4] Caffe >> Actual: 4 Predict: 4 Time: 0.0280637s
[5] Caffe >> Actual: 1 Predict: 1 Time: 0.0281574s
[6] Caffe >> Actual: 4 Predict: 4 Time: 0.0285136s
[7] Caffe >> Actual: 9 Predict: 9 Time: 0.0283779s
[8] Caffe >> Actual: 5 Predict: 5 Time: 0.0283902s
[9] Caffe >> Actual: 9 Predict: 9 Time: 0.0283282s
Total Time: 0.296081s
Sucessfull: 10 Failed: 0
2 - Handwritten Digit Recognition using TensorFlow
root@imx8mmevk:/opt/mnist# ./tensorflow_inference 10
[0] Tensor >> Actual: 7 Predict: 7 Time: 0.00670075s
[1] Tensor >> Actual: 2 Predict: 2 Time: 0.00377025s
[2] Tensor >> Actual: 1 Predict: 1 Time: 0.0036785s
[3] Tensor >> Actual: 0 Predict: 0 Time: 0.0036815s
[4] Tensor >> Actual: 4 Predict: 4 Time: 0.00372875s
[5] Tensor >> Actual: 1 Predict: 1 Time: 0.003669s
[6] Tensor >> Actual: 4 Predict: 4 Time: 0.00367825s
[7] Tensor >> Actual: 9 Predict: 9 Time: 0.0036955s
[8] Tensor >> Actual: 5 Predict: 6 Time: 0.00367488s FAILED
[9] Tensor >> Actual: 9 Predict: 9 Time: 0.0036025s
Total Time: 0.0414569s
Sucessfull: 10 Failed: 1
NOTE: The argument 10 refers to the number of predictions for each test.
These tests run the inference on the input MNIST dataset images (Actual), showing the inference results (Predict) and how long it took to complete each prediction. The input images for this test are in binary form and can be found in the t10k-images-idx3-ubyte.gz package from Yann LeCun. From the output results, it is possible to notice that the Caffe model is slower than the TensorFlow one; however, it is also more accurate. Change the argument to compare further results between the two models.
Go to the eIQ Sample Apps - Object Recognition using OpenCV DNN.
View full article
After setting up a Yocto build environment as described in the L4.19.35_1.0.0 BSP Yocto Project User's Guide, apply the attached patch to the meta-fsl-bsp-release layer:
<yocto_dir>/sources/meta-fsl-bsp-release$ git am eiq-sample-apps-Add-recipe.patch
To include the applications in the image, add the following line to local.conf:
IMAGE_INSTALL_append += "eiq-sample-apps"
This will include all applications from the eIQ Sample Apps repository in the built image.
View full article
This Lab 3 explains how to get started with OpenCV DNN application demos on an i.MX 8 board using the eIQ™ ML Software Development Environment.
eIQ Sample Apps - Overview
eIQ Sample Apps - Introduction
Get the source code available on Code Aurora: OpenCV DNN example - File-Based and MIPI Camera

OpenCV Inference
OpenCV offers a unified solution for both neural network inference (DNN module) and classic machine learning algorithms (ML module). Moreover, it includes many computer vision functions, making it easier to build complex machine learning applications in a short amount of time and without having dependencies on other libraries. The OpenCV DNN module is basically an inference engine. It does not aim to provide any model training capabilities. For training, one should use dedicated solutions, such as machine learning frameworks. The inference engine from OpenCV supports a wide set of input model formats: TensorFlow, Caffe, Torch/PyTorch.

Comparison with Arm NN
Arm NN is a library deeply focused on neural networks. It offers acceleration for Arm Neon, while Vivante GPUs are not currently supported. Arm NN does not support classical non-neural machine learning algorithms. OpenCV is a more complex library focused on computer vision. Besides image- and vision-specific algorithms, it offers support for neural network machine learning, but also for traditional non-neural machine learning algorithms. OpenCV is the best choice in case your application needs a neural network inference engine, but also other computer vision functionality.

Setting Up the Board
Step 1 - Create the following folders and grant them permissions as follows:
root@imx8mmevk:# mkdir -p /opt/opencv/model
root@imx8mmevk:# mkdir -p /opt/opencv/media
root@imx8mmevk:# chmod 777 /opt/opencv
Step 2 - To easily deploy the demos to the board, get the board's IP address using the ifconfig command, then set the IMX_INET_ADDR environment variable as follows:
$ export IMX_INET_ADDR=<imx_ip>
Step 3 - In the target device, export the required variables:
root@imx8mmevk:~# export LD_LIBRARY_PATH=/usr/local/lib
root@imx8mmevk:~# export PYTHONPATH=/usr/local/lib/python3.5/site-packages/

Setting Up the Host
Step 1 - Download the application from eIQ Sample Apps.
Step 2 - Get the models and dataset. The following command lines create the needed folder structure for the demos and retrieve all the needed data and model files:
$ mkdir -p model
$ wget -qN https://github.com/diegohdorta/models/raw/master/caffe/MobileNetSSD_deploy.caffemodel -P model/
$ wget -qN https://github.com/diegohdorta/models/raw/master/caffe/MobileNetSSD_deploy.prototxt -P model/
Step 3 - Deploy the built files to the board:
$ scp -r src/* model/ media/ root@${IMX_INET_ADDR}:/opt/opencv

OpenCV DNN Applications
This application was based on: SSD: Single Shot MultiBox Detector and the Caffe SSD implementation.

1 - OpenCV DNN example: File-Based
The folder structure must be equal to:
├── file.py
├── camera.py
├── media
└── ...
├── model
│├── MobileNetSSD_deploy.caffemodel
│└── MobileNetSSD_deploy.prototxt
This example runs on a single picture, but you can pass as many pictures as you want and save them inside the media/ folder. The application tries to recognize all the objects in each picture.
Step 1 - To copy new images to the media/ folder:
root@imx8mmevk:/opt/opencv/media# cp <path_to_image> .
Step 2 - Run the example on the images:
root@imx8mmevk:/opt/opencv# ./file.py
NOTE: If a GPU is available, the example shows: [INFO:0] Initialize OpenCL runtime
This demo runs the inference using a Caffe model to recognize a few types of objects in all the images inside the media/ folder. It adds labels for each recognized object in the input images. The processed images are available in the media-labeled/ folder. See before and after labeling:
Step 3 - Display the labeled image with the following line:
root@imx8mmevk:/opt/opencv/media-labeled# gst-launch-1.0 filesrc location=<image> ! jpegdec ! imagefreeze ! autovideosink

2 - OpenCV DNN example: MIPI Camera
This example is the same as above, except that it uses camera input. It enables the MIPI camera and runs an inference on each captured frame, then displays it in a window interface in real time:
root@imx8mmevk:/opt/opencv# ./camera.py

3 - OpenCV DNN example: MIPI Camera improved
This example differs from the one above due to the additional GStreamer support applied to it. Using the leaky bucket algorithm idea, the GStreamer pipeline lets the camera keep running in its own thread (the bucket overflows when full), even if a frame was not processed by the inference thread (the bucket's drain capacity). As a result, this demo has smooth camera video at the expense of having some frames dropped in the inference process.
root@imx8mmevk:/opt/opencv# ./camera_improved.py
Go to the eIQ Sample Apps - Face Recognition using TF Lite.
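For reference, the core of a file-based detection script like file.py boils down to a few OpenCV DNN calls. The sketch below is a minimal, self-contained approximation rather than the demo's exact code; the image file name is a placeholder, and the 0.007843/127.5 values are the usual MobileNet-SSD Caffe preprocessing constants.

import cv2

# Load the Caffe MobileNet-SSD model downloaded in the steps above.
net = cv2.dnn.readNetFromCaffe("model/MobileNetSSD_deploy.prototxt",
                               "model/MobileNetSSD_deploy.caffemodel")

image = cv2.imread("media/example.jpg")  # placeholder image name
h, w = image.shape[:2]

# MobileNet-SSD expects 300x300 inputs, mean-subtracted and scaled.
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)),
                             0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()

# detections shape: [1, 1, N, 7] -> (image_id, class_id, confidence, x1, y1, x2, y2)
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:
        box = detections[0, 0, i, 3:7] * [w, h, w, h]
        x1, y1, x2, y2 = box.astype(int)
        cv2.rectangle(image, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)

cv2.imwrite("media-labeled/example.jpg", image)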
View full article
This Lab 1 explains how to get started with the Arm NN application demo on an i.MX 8 board using the eIQ™ ML Software Development Environment.
eIQ Sample Apps - Overview
eIQ Sample Apps - Introduction
Get the source code available on Code Aurora: Arm NN example - File-Based and MIPI Camera

Setting Up the Board
Step 1 - Create the following folders and grant them permissions as follows:
root@imx8mmevk:# mkdir -p /opt/armnn/model
root@imx8mmevk:# mkdir -p /opt/armnn/data
root@imx8mmevk:# chmod 777 /opt/armnn
Step 2 - To easily deploy the demos to the board, get the board's IP address using the ifconfig command, then set the IMX_INET_ADDR environment variable as follows:
$ export IMX_INET_ADDR=<imx_ip>

Setting Up Arm NN
Step 1 - Install TensorFlow on the host PC for preparing the model for inference:
$ apt-get install python-pip
$ pip install tensorflow
$ git clone https://github.com/tensorflow/tensorflow.git
NOTE: You may need root privileges (sudo) for running the apt-get command.
Step 2 - Generate the graph used to prepare the TensorFlow InceptionV3 model for inference:
$ mkdir checkpoints
$ git clone https://github.com/tensorflow/models.git
$ cd models/research/slim/
$ python export_inference_graph.py --model_name=inception_v3 --output_file=../../../checkpoints/inception_v3_inf_graph.pb
Step 3 - Download the pre-trained model and prepare it for inference with the generated graph:
$ cd ../../../checkpoints
$ wget http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz -qO- | tar -xvz # download pretrained model
$ python <path_to_tensorflow_repo>/tensorflow/python/tools/freeze_graph.py \
--input_graph=inception_v3_inf_graph.pb --input_checkpoint=inception_v3.ckpt \
--input_binary=true --output_graph=inception_v3_2016_08_28_frozen_transformed.pb \
--output_node_names=InceptionV3/Predictions/Reshape_1
NOTE: <path_to_tensorflow_repo> refers to the TensorFlow path cloned in Step 1.
Step 4 - Copy the prepared model inception_v3_2016_08_28_frozen_transformed.pb to /opt/armnn/model:
$ scp inception_v3_2016_08_28_frozen_transformed.pb root@<imx_ip>:/opt/armnn/model
Step 5 - Find three .jpg images on Google, one containing a dog, one with a cat, and one with a shark. Rename them to Dog.jpg, Cat.jpg and shark.jpg accordingly (case sensitive) and copy them to the /opt/armnn/data folder on the device:
$ scp Dog.jpg Cat.jpg shark.jpg root@<imx_ip>:/opt/armnn/data
NOTE: For the modified demo, download it from eIQ Sample Apps and put it in the /opt/armnn folder.

1 - Arm NN example: File-Based
Step 1 - In user space, enter the armnn folder which holds the demo files:
root@imx8mmevk:~# cd /opt/armnn
root@imx8mmevk:/opt/armnn#
Here is what the armnn folder should look like:
│...
├── data
│├── Cat.jpg
│├── Dog.jpg
│└── shark.jpg
├── model
│└── inception_v3_2016_08_28_frozen_transformed.pb
│...
Step 2 - Run the demo:
root@imx8mmevk:/opt/armnn# TfInceptionV3-Armnn --data-dir=data --model-dir=models
= Prediction values for test #0
Top(1) prediction is 208 with confidence: 93.5791%
Top(2) prediction is 209 with confidence: 2.06653%
Top(3) prediction is 223 with confidence: 0.693557%
Top(4) prediction is 170 with confidence: 0.210818%
Top(5) prediction is 232 with confidence: 0.177887%
= Prediction values for test #1
Top(1) prediction is 283 with confidence: 72.4617%
Top(2) prediction is 282 with confidence: 22.5384%
Top(3) prediction is 286 with confidence: 0.838241%
Top(4) prediction is 288 with confidence: 0.0822042%
Top(5) prediction is 841 with confidence: 0.05987%
= Prediction values for test #2
Top(1) prediction is 3 with confidence: 62.0632%
Top(2) prediction is 4 with confidence: 12.8319%
Top(3) prediction is 5 with confidence: 1.25482%
Top(4) prediction is 154 with confidence: 0.177708%
Top(5) prediction is 149 with confidence: 0.116998%
Total time for 3 test cases: 2.369 seconds
Average time per test case: 789.765 ms
Overall accuracy: 1.000
The TfInceptionV3-Armnn demo runs the inference on the three expected input images: one containing a dog, one with a cat, and one with a shark. The output shows the top 5 inference results and their confidence percentages. The higher the confidence, the better the input image fits the expected content.
There is a chance of getting the following result when running the demo:
Prediction for test case 0 ( x ) is incorrect (should be y)
One or more test cases failed
NOTE: ( x ) refers to the ID of the detected object, ( y ) refers to the ID of the expected object.
This is not an execution error. It occurs because the TfInceptionV3-Armnn test expects a specific type of dog, cat and shark to be found, so if a different type/breed of these animals is passed to the test, it returns a failed case. The expected inputs for this test are:
ID 208 - Golden Retriever - Dog.jpg
ID 283 - Tiger Cat - Cat.jpg
ID 3 - White Shark - shark.jpg
The complete list of supported objects can be found here. Try passing different .jpg images to the test, including the expected types as well as other types, and see the confidence percentage increase when you match the expected breeds. Remember to rename the images according to the expected input (Dog.jpg, Cat.jpg, shark.jpg, case sensitive). To rename a file, use the mv command:
root@imx8mmevk:/opt/armnn/data# mv <name>.jpg <new_name>.jpg
The next section shows how to modify this demo to identify any object.

2 - Arm NN example: MIPI Camera
This section shows how to use the TfInceptionV3-Armnn test from eIQ for general object detection. The list of all objects supported by this model can be found here.
Step 1 - Enter the demo directory and run the demo:
root@imx8mmevk:/opt/armnn# python3 camera.py
This runs the TfInceptionV3-Armnn test and parses the inference results to return any recognized object, not only the three expected types of animals.
Step 2 - Show the provided flash cards to the camera and wait for the detection message: "Image captured, wait." The flash cards should not be twisted or curved during this step.
Step 3 - After a few seconds, the demo returns the detected object.
NOTE: This can return False if the image was not correctly captured. In this case, try showing the flash card again.
Go to the next eIQ Sample Apps - Handwritten Digit Recognition.
View full article
This demo shows a low-power smart door running eIQ heterogeneously on the i.MX 8M Mini:
The Cortex-A53 performs face recognition using eIQ OpenCV.
The Cortex-M4 performs keyword spotting using eIQ CMSIS-NN.
The demo application is built around the Django framework running on the board. It has two main usage scenarios. The first is to manage users and inspect the access logs through a dashboard; this dashboard is accessed from a web browser on the host PC. The second is the smart door application itself running on the board. The scenario is the following: the Cortex-A cores and connected peripherals stay in low-power mode while the Cortex-M is active, waiting for the keyword 'GO'. When the word is detected, the Cortex-M sends an MU interrupt to the Cortex-A and the system wakes up. The Cortex-A then performs face recognition and allows access for registered users. In addition to face recognition, the MPU cores are able to run a Django server to manage the user database, a Qt5 application for the graphical interface, and training on the edge. The face recognition algorithm running on the Cortex-A and the keyword spotting algorithm running on the Cortex-M are both implemented using eIQ. For the MPU, eIQ support is integrated in Yocto. For the MCU, the support was ported to i.MX 8 from the MCUXpresso SDK for i.MX RT for the purpose of this demo.

Software Environment
Ubuntu 16 host PC
SD card image with Yocto BSP 4.14.98/sumo 2.0.0 GA for the i.MX 8M Mini platform with eIQ OpenCV and the eIQ heterogeneous demo. See the detailed steps in the Build Yocto Image section.
CMSIS-NN
MCUXpresso SDK version 2.6.0 for i.MX 8M Mini (SDK_2.6.0_EVK-MIMX8MM-ARMGCC). See the detailed build steps in the Build Cortex-M4 executable section.

Hardware Environment
i.MX 8M Mini kit
Touch screen display (preferred resolution 1920x1080), tested with an HDMI connection to the board. NOTE: if the display does not support touch, a mouse can be connected to the board and used instead.
MIPI-CSI camera module
Microphone: Synaptics CONEXANT AudioSmart® DS20921
Ribbon, 4 female-female wires and a 60-pin connector to connect the mic to the board
Optional: headphones (used to test recording on the M4 - everything recorded by the mic will be played to the headphones)
Host PC for remote access to the demo application (tested with the Chrome browser)
NOTE: The board and the host PC should be on the same network to communicate.

Build Yocto image:
Step 1 - Project initialization:
$: mkdir imx-linux-bsp
$: cd imx-linux-bsp
$: repo init -u https://source.codeaurora.org/external/imx/imx-manifest -b imx-linux-sumo -m imx-4.14.98-2.0.0_machinelearning.xml
$: repo sync
Step 2 - Set up the project build:
$: MACHINE=imx8mmevk DISTRO=fsl-imx-xwayland source ./fsl-setup-release.sh -b bld-xwayland
Step 3 - Download the project layer in ${BSPDIR}/sources/:
$: git clone https://source.codeaurora.org/external/imxsupport/meta-eiq-heterogenous
Step 4 - Add the project layer into bblayers. Add the following line into ${BSPDIR}/sources/base/conf/bblayers.conf:
BBLAYERS += " ${BSPDIR}/sources/meta-eiq-heterogenous "
Step 5 - Enable eIQ and other dependencies.
Add the following lines into conf/local.conf:
EXTRA_IMAGE_FEATURES = " dev-pkgs debug-tweaks tools-debug \
tools-sdk ssh-server-openssh"
IMAGE_INSTALL_append = " net-tools iputils dhcpcd which gzip \
python3 python3-pip wget cmake gtest \
git zlib patchelf nano grep vim tmux \
swig tar unzip parted \
e2fsprogs e2fsprogs-resize2fs"
IMAGE_INSTALL_append = " python3-pytz python3-django-cors-headers"
IMAGE_INSTALL_append = " opencv python3-opencv"
PACKAGECONFIG_append_pn-opencv_mx8 = " dnn python3 qt5 jasper \
openmp test neon"
PACKAGECONFIG_remove_pn-opencv_mx8 = "opencl"
TOOLCHAIN_HOST_TASK_append = " nativesdk-cmake nativesdk-make"
PREFERRED_VERSION_opencv = "4.0.1%"
PREFERRED_VERSION_python3-django = "2.1%"
IMAGE_ROOTFS_EXTRA_SPACE = "20971520"
Step 6 - Bake the image:
$: bitbake image-eiq-hetero

Build Cortex-M4 executable
Download MCUXpresso SDK version 2.6.0 for i.MX 8M Mini (SDK_2.6.0_EVK-MIMX8MM-ARMGCC). OS: Linux, Toolchain: GCC ARM Embedded. Components: Amazon-FreeRTOS, CMSIS DSP Library, multicore. SDK Version: 2.6.0 (2019-06-14). SDK Tag: REL_2.6.0_REL10_RFP_RC3_4.
Download CMSIS-NN and copy the "CMSIS\NN" folder to "$MCUXpressoSDK_ROOT\CMSIS".
Go to "$MCUXpressoSDK_ROOT\boards\evkmimx8mm\demo_apps\".
Get the M4 app from CAF:
git clone https://source.codeaurora.org/external/imxsupport/eiq-heterogenous-cortexm4
[Win]: Open the ARM GCC console and go to "$MCUXpressoSDK_ROOT\boards\evkmimx8mm\demo_apps\eiq-heterogenous-cortexm4\armgcc\"
[Win]: Call "build_ddr_release.bat" to obtain "eiq-kws.bin".
Deploy "eiq-kws.bin" to the boot partition of the Yocto image.

Prepare the Demo
1. Connect the 12V power supply to the board and switch SW101 to power on the board.
2. Connect a USB cable between the host PC and the J901 USB port on the target board.
3. Open two serial terminals, one for the A53 core and one for the M4 core, with the following settings:
- 115200 baud rate
- 8 data bits
- No parity
- One stop bit
- No flow control
4. Connect a display to the board (tested with a 1920x1080 HDMI display connected to the board through an IMX-MIPI-HDMI adapter).
NOTE: depending on the display, you might want to change the config in "/etc/xdg/weston/weston.ini". The demo was tested by uncommenting the following section in this file:
[output]
name=HDMI-A-1
mode=1920x1080@60
transform=90
5. Connect the MIPI-CSI camera to the board.
6. Connect the Synaptics microphone to the board using a 60-pin connector with a ribbon. SAI3 is used for record and playback on the Cortex-M4. The following pins are used:
Pin 44 (connector) <-> I2S_TX_Data1 (mic board)
Pin 43 (connector) <-> I2S_TX_LRCLK (mic board)
Pin 41 (connector) <-> I2S_TX_CLK (mic board)
Pin 60 (connector) <-> GND (mic board)
7. Use U-Boot commands to run the eiq-kws.bin file. For details, please refer to "Getting Started with MCUXpresso SDK for i.MX 8M Mini.pdf".
8. After running the binary, use the "boot" command on the A-core terminal to boot the kernel.
9. After the kernel boots, use "root" to log in.
10. After login, make sure the imx_rpmsg_pingpong kernel module is inserted (lsmod) or insert it (modprobe imx_rpmsg_pingpong).

Run the Demo
Start keyword spotting on the Cortex-M4. Stop in U-Boot and run the eiq-kws.bin executable in DDR:
u-boot=>fatload mmc 0 0x80000000 eiq-kws.bin
u-boot=>dcache flush
u-boot=>bootaux 0x80000000
u-boot=>boot
After the boot process succeeds, the ARM Cortex-M4 terminal displays the following information:
RPMSG Ping-Pong FreeRTOS RTOS API Demo...
RPMSG Share Base Addr is 0xb8000000
While the kernel boots, the ARM Cortex-M4 terminal displays the following information:
Link is up!
Nameservice announce sent.
Start face recognition on the Cortex-A. Insert the updated rpmsg driver:
$: modprobe imx_rpmsg_pingpong
After the Linux RPMsg pingpong module is installed, the ARM Cortex-M4 terminal displays the following information:
Looping forever...
Waiting for ping...
Sending pong...
96% go
First time only:
$: cd ~/eiq-heterogenous-cortexa
$: python3 wrap_migrate.py
$: python3 wrap_createsuperuser.py
Start:
$: cd ~/eiq-heterogenous-cortexa
$: python3 manage.py runserver 0.0.0.0:8000 --noreload &
$: /opt/src/bin/src
NOTE: the first instruction starts the Django server, the second instruction shows the pin-pad on the display.
Browser access from the host PC:
- http://$BOARD_IP:8000/dashboard/: dashboard that facilitates managing users and viewing the access logs
- http://$BOARD_IP:8000/admin/: manage the users database
View full article
This lab will take an existing TensorFlow image classification model and re-train it to categorize images of flowers. This is known as transfer learning. The updated model will then be converted into a TensorFlow Lite file. By using that file with the TensorFlow Lite inference engine that is part of NXP's eIQ package, the model can be run on an i.MX 8QM-MEK board. This lab can also be used with other i.MX 8 devices.
Please check the attached PDFs:
'eIQ Transfer Learning Lab iMX8.pdf': detailed lab instructions
'TransferLearningOverview.pdf': theoretical background of transfer learning and results for a customized data set
Please also check: eIQ Transfer Learning Lab with i.MX RT.
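For a rough idea of what the lab's retraining step looks like in code, here is a minimal transfer-learning sketch using tf.keras. This is an illustrative outline under assumptions, not the lab's actual script: it assumes a recent TensorFlow 2.x install, and the dataset directory, image size, class count, and output file name are placeholders.

import tensorflow as tf

IMG_SIZE = (224, 224)
NUM_CLASSES = 5  # e.g., five flower categories (placeholder)

# Load training images from a directory of class-named subfolders (placeholder path).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "flower_photos", image_size=IMG_SIZE, batch_size=32)

# Reuse a pre-trained feature extractor and train only a new classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 preprocessing
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# Convert the retrained model to a TensorFlow Lite file for use with eIQ.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open("flowers.tflite", "wb") as f:
    f.write(converter.convert())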
View full article
See the latest version of this document here: https://community.nxp.com/t5/eIQ-Machine-Learning-Software/eIQ-FAQ/ta-p/1099741 
View full article
When comparing NPU with CPU performance on the i.MX 8M Plus, the first inference can appear to take much longer on the NPU. This is because the ML accelerator spends more time performing overall initialization steps. This initialization phase is known as warmup and is necessary only once at the beginning of the application. After this step, inference is executed in a truly accelerated manner, as expected from a dedicated NPU. The purpose of this document is to clarify the impact of the warmup time on overall performance.
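As a rough way to observe the effect described above, the sketch below times the first (warmup) inference separately from the steady-state ones using the TensorFlow Lite Python API. The model path and the delegate library path are assumptions: depending on the BSP release, the i.MX 8M Plus NPU is reached through an NNAPI or external (VX) delegate, so check your release's Linux User's Guide for the exact mechanism.

import time
import numpy as np
import tflite_runtime.interpreter as tflite

MODEL = "mobilenet_v1_1.0_224_quant.tflite"  # placeholder model path

# Assumption: the BSP exposes the NPU through an external delegate library.
# Fall back to CPU if the delegate is not present.
try:
    delegates = [tflite.load_delegate("/usr/lib/libvx_delegate.so")]
except (ValueError, OSError):
    delegates = []

interpreter = tflite.Interpreter(model_path=MODEL, experimental_delegates=delegates)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
data = np.zeros(inp["shape"], dtype=inp["dtype"])

for i in range(5):
    interpreter.set_tensor(inp["index"], data)
    start = time.monotonic()
    interpreter.invoke()
    elapsed_ms = (time.monotonic() - start) * 1000
    label = "warmup" if i == 0 else "steady-state"
    print(f"run {i} ({label}): {elapsed_ms:.1f} ms")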
View full article
The OpenCV Deep Neural Network (DNN) module performs inference on deep networks. It is easy to use and a great way to get started with computer vision and inferencing. OpenCV DNN supports models from many frameworks, such as:
Caffe
TensorFlow
Torch
Darknet
Models in ONNX format
For a simple object detection example with OpenCV DNN, please check Object Detection with OpenCV. For running OpenCV DNN inference with camera input, please refer to eIQ Sample Apps - OpenCV Lab 3. For exploring different models ready for OpenCV DNN inference, please refer to Caffe and TensorFlow Pretrained Models. For more information on the OpenCV DNN module, please check Deep Learning in OpenCV.
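As a quick illustration of the framework support listed above, these are the corresponding OpenCV loader calls (the file names are placeholders for your own model files):

import cv2

# Each loader returns a network object ready for setInput()/forward().
caffe_net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "weights.caffemodel")
tf_net = cv2.dnn.readNetFromTensorflow("frozen_graph.pb", "graph.pbtxt")
torch_net = cv2.dnn.readNetFromTorch("model.t7")
darknet_net = cv2.dnn.readNetFromDarknet("yolo.cfg", "yolo.weights")
onnx_net = cv2.dnn.readNetFromONNX("model.onnx")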
View full article
Caffe and TensorFlow provide sets of pretrained models ready for inference. They can be used as-is, or retrained with your own dataset. Please check:
TensorFlow Hosted Models
TensorFlow Model Zoo
Caffe Model Zoo
View full article
The attached file serves as a table of contents for the various collateral, documents, training, etc. that support eIQ software.
View full article
Getting Started with eIQ Software for i.MX Applications Processors Getting Started with eIQ for i.MX RT
View full article
The NXP BSP currently does not support running a Keras application directly on i.MX. Customers who use this approach must convert their Keras model into one of the formats supported by the inference engines in eIQ. This post covers converting a Keras model (.h5) to a TensorFlow Lite model (.tflite).
Install TensorFlow with the same TFLite version supported by eIQ (you can find this information in the Linux User's Guide). For L4.19.35_1.0.0 the TFLite version is v1.12.0:
$ pip3 install tensorflow==1.12.0
Run the following commands in a python3 environment to convert the .h5 model to a .tflite model:
>>> from tensorflow.contrib import lite
>>> converter = lite.TFLiteConverter.from_keras_model_file('model.h5') # path to your model
>>> tfmodel = converter.convert()
>>> open("model.tflite", "wb").write(tfmodel)
The model can then be deployed and used by the TFLite inference engine in eIQ.
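Note that the snippet above matches the TensorFlow 1.12 API. If your eIQ release ships a TensorFlow 2.x based TFLite runtime (an assumption to verify against your release's Linux User's Guide), the equivalent conversion would look like this:

import tensorflow as tf  # TensorFlow 2.x

# Load the trained Keras model and convert it to TensorFlow Lite.
model = tf.keras.models.load_model("model.h5")  # path to your model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)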
View full article
eIQ Software for i.MX application processors
eIQ Machine Learning Software for i.MX Linux - 5.4.3_1.0.0 GA for i.MX 6/7 and i.MX 8MQ/8MM/8MN/8QM/8QXP has been released. eIQ Machine Learning Software for i.MX Linux - 5.4.24_2.1.0 BETA for i.MX 8QXPlus, BETA for i.MX 8MP, and ALPHA 2 for i.MX 8DXL has been released. It contains machine learning support for Arm NN, TensorFlow and TensorFlow Lite, ONNX, and OpenCV. For running on Arm Cortex-A cores, these inference engines are accelerated with Arm NEON instructions. For running on the NPU (of the i.MX 8M Plus) and i.MX 8 GPUs, NXP has included optimizations in the Arm NN and TensorFlow Lite inference engines. For more information and complete details, please be sure to check out the "NXP eIQ Machine Learning" chapter in the Linux User Guide (starting with the L4.19 releases; users of L4.14 releases should refer to NXP eIQ™ Machine Learning Software Development Environment for i.MX Applications Processors). You can access the corresponding sample applications at https://source.codeaurora.org/external/imxsupport/eiq_sample_apps/.
For more information on artificial intelligence, machine learning and eIQ Software, please visit AI & Machine Learning | NXP.

eIQ Software for i.MX RT crossover processors
eIQ is now included in the MCUXpresso SDK package for i.MX RT1050 and i.MX RT1060.
Go to https://mcuxpresso.nxp.com and search for the SDK for your board.
On the SDK builder page, click on "Add software component".
Click on "Select All" and verify the eIQ software option is now checked. Then click on "Save Changes".
Download the SDK. It will be saved as a .zip file.
eIQ projects can be found in the \boards\<board_name>\eiq_examples folder.
eIQ source code can be found in the \middleware\eiq folder.
More details on how to get started with eIQ on i.MX RT devices can be found in this Community post.
View full article
TensorFlow provides a set of pretrained models ready for inference; please check Caffe and TensorFlow Pretrained Models. You can use a model as it is, or you can retrain the model with your own data to detect specific objects for your custom application. This post shows some useful links that can help you with this task.
Inception models: Training Custom Object Detector - TensorFlow Object Detection API tutorial documentation. The link above also shows the steps needed to prepare your data for the retraining process (image labeling).
MobileNet models: Train your own model with SSD MobileNet · ichbinblau/tfrecord_generator Wiki · GitHub. For the link above, you also need to follow the steps to prepare your data for the retraining process, as in the Inception retraining tutorial. Make sure you exported the needed PYTHONPATH variable:
export PYTHONPATH=$PYTHONPATH:/path/to/tf_models/models/research:/path/to/tf_models/models/research/slim
A few tips:
- Retraining a model will be faster than training a model from scratch, but it can still take a long time to complete. It depends on many factors, such as the number of steps defined in the model's *.config file. You need to be aware of overfitting your model if your dataset is too small and the number of steps is too large. Also, TensorFlow saves checkpoints during the retraining process, which you can prepare for inference and test before the retraining is over, to check whether the model is already good enough for your application. Please check "Exporting a Trained Inference Graph" in the Inception retraining tutorial and keep in mind that you can follow these steps before the training process is complete. Of course, early checkpoints may not be well trained.
- If you are running OpenCV DNN inference, you may need to run the following command to get the *.pbtxt file, where X corresponds to the number of classes trained in your model and tf_text_graph_ssd.py is an OpenCV DNN script:
python tf_text_graph_ssd.py --input frozen_inference_graph.pb --output frozen_inference_graph.pbtxt --num_classes X
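Once the frozen graph and the generated .pbtxt are available, the retrained detector can be loaded with OpenCV DNN roughly as follows. This is a minimal sketch; the input image name and the 300x300 input size are placeholders typical of SSD graphs exported this way.

import cv2

# Load the retrained SSD graph together with the .pbtxt generated above.
net = cv2.dnn.readNetFromTensorflow("frozen_inference_graph.pb",
                                    "frozen_inference_graph.pbtxt")

image = cv2.imread("test.jpg")  # placeholder input image
blob = cv2.dnn.blobFromImage(image, size=(300, 300), swapRB=True)
net.setInput(blob)
detections = net.forward()

# Each row of detections[0, 0] is (image_id, class_id, confidence, x1, y1, x2, y2)
# in normalized coordinates; parse it the same way as in the OpenCV DNN lab above.
print(detections[0, 0, :5, 1:3])  # first few class ids and confidence scores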
View full article