eIQ Machine Learning Software Knowledge Base

This document covers some of the most commonly asked questions about eIQ and embedded machine learning. Anything requiring more in-depth discussion or explanation is covered in a separate thread, and all new questions should go into their own thread as well.

What is eIQ?
The NXP® eIQ™ machine learning (ML) software development environment enables the use of ML algorithms on NXP EdgeVerse™ microcontrollers and microprocessors, including i.MX RT crossover MCUs and i.MX family application processors. eIQ ML software includes an ML workflow tool called eIQ Toolkit, along with inference engines, neural network compilers and optimized libraries. This software leverages open-source and proprietary technologies and is fully integrated into our MCUXpresso SDK and Yocto development environments, allowing you to develop complete system-level applications with ease.

How much does eIQ cost?
Free! NXP is making eIQ freely available as a basic enablement to jumpstart ML application development. It is also royalty free.

What devices are supported by eIQ?
eIQ is available for the following i.MX application processors:
- i.MX 8M Plus
- i.MX 8M
- i.MX 8M Nano
- i.MX 8M Mini
- i.MX 8
- i.MX 8X

eIQ is available for the following i.MX RT crossover MCUs:
- i.MX RT1170
- i.MX RT1160
- i.MX RT1064
- i.MX RT1060
- i.MX RT1050
- i.MX RT685
- i.MX RT595

What inference engines are available in eIQ?
i.MX application processors and i.MX RT MCUs support different inference engines. The best inference engine can depend on the particular model being used, so eIQ offers several inference engine options to find the best fit for your particular application.
Inference engines for i.MX:
- TensorFlow Lite (supported on CPU and GPU/NPU)
- Arm NN (supported on CPU and GPU/NPU)
- OpenCV (supported on CPU only)
- ONNX Runtime (currently supported on CPU only)

Inference engines for i.MX RT1170, i.MX RT1160, i.MX RT1064, i.MX RT1060, i.MX RT1050:
- DeepViewRT
- Glow
- TensorFlow Lite for Microcontrollers

Inference engines for i.MX RT685:
- Glow (with DSP acceleration)
- TensorFlow Lite for Microcontrollers (with DSP acceleration)

Inference engines for i.MX RT595:
- TensorFlow Lite for Microcontrollers (with DSP acceleration)

Can eIQ run on other MCU devices?
Porting guides are available as a reference for users interested in using eIQ on other devices. No special hardware module is required to run eIQ inference engines, and it is possible to port the inference engines to other NXP devices. A guide for porting Glow to other NXP MCUs is already available.

What is eIQ Toolkit?
eIQ Toolkit enables machine learning development with an intuitive GUI (named eIQ Portal) and development workflow tools, along with command-line host tool options, as part of the eIQ ML software development environment. The eIQ Portal, developed in exclusive partnership with Au-Zone Technologies, is an intuitive graphical user interface (GUI) that simplifies ML development. Developers can create, optimize, debug and export ML models, as well as import datasets and models, and rapidly train and deploy neural network models and ML workloads. The eIQ Portal provides output software that seamlessly feeds into the DeepViewRT™, TensorFlow™ Lite, TensorFlow Lite Micro, Glow, Arm NN and ONNX Runtime inference engines. Using a tool called Model Runner, eIQ Toolkit can also provide graph-level profiling with runtime insights to help optimize neural network architectures on i.MX RT and i.MX devices.

Is eIQ Toolkit required to use eIQ inference engines?
No, eIQ Toolkit is optional enablement from NXP to make it easier to generate vision-based models that can then be used with the eIQ inference engines. However, if you already have your own model development flow in place, or want to use pre-created models from a model zoo, you can use those with the eIQ inference engines as well.

How can I get eIQ?
For i.MX RT devices: eIQ inference engines are included as part of the MCUXpresso SDK for supported devices. Make sure to select the "eIQ" middleware option. There are two additional optional software packages:
- eIQ Toolkit - for model creation and conversion. Includes the GUI model creation tool eIQ Portal.
- Glow - a neural network compiler that creates a compiled binary that can be integrated into MCUXpresso SDK projects.

For i.MX devices: eIQ is distributed as part of the Yocto Linux BSP. Starting with the 4.19 release line there is a dedicated Yocto image that includes all the machine learning features: 'imx-image-full'. For pre-built binaries refer to the i.MX Linux Releases and Pre-releases pages. There is also an optional software package:
- eIQ Toolkit - for model creation and conversion. Includes the GUI model creation tool eIQ Portal.

What documentation is available for eIQ?
For i.MX RT devices: There are user guides for Glow and TensorFlow Lite for Microcontrollers inside the SDK documentation package when downloading the SDK with the MCUXpresso SDK builder. The Glow user guide can also be found here. Documentation for DeepViewRT can be found inside the eIQ Toolkit documentation folder after installation.

For i.MX devices: The eIQ documentation for i.MX is integrated into the Yocto BSP documentation. Refer to the i.MX Linux Releases and Pre-releases pages.
- i.MX Reference Manual: presents an overview of the NXP eIQ machine learning technology.
- i.MX Linux User's Guide: presents detailed instructions on how to run and develop applications using the ML frameworks available in eIQ (currently Arm NN, TFLite, OpenCV and ONNX).
- i.MX Yocto Project User's Guide: presents build instructions to include eIQ ML support (check the sections referring to 'imx-image-full', which includes all eIQ features).
It is also recommended to check the i.MX Linux Release Notes, which include eIQ details.

For i.MX devices, what type of machine learning applications can I create?
Following the BYOM principle described above, you can create a wide variety of applications to run on i.MX. To help kickstart your efforts, refer to PyeIQ - a collection of demos and applications that demonstrate the machine learning capabilities available on i.MX:
- They are very easy to use (install with a single command, retrieve input data automatically).
- The implementation is very easy to understand (using the Python API for TFLite, Arm NN and OpenCV).
- They demonstrate several types of ML applications (e.g., object detection, classification, facial expression detection) running on the different compute units available on i.MX to execute the inference (Cortex-A, GPU, NPU).

Can I use the Python API provided by PyeIQ to develop my own application on i.MX devices?
For developing a custom application in Python, it is recommended to directly use the Python API for Arm NN, TFLite, and OpenCV. Refer to the i.MX Linux User's Guide for more details. You can use the PyeIQ scripts as a starting point and include code snippets in a custom application (please make sure to add the right copyright terms), but you shouldn't rely on PyeIQ to develop a product in its entirety. The PyeIQ Python API is meant to help demo developers with the creation of new examples.

What eIQ example applications are available for i.MX RT?
eIQ example applications can be found in the <SDK DIR>\boards\<board_name>\eiq_examples directory.

What is the difference between TensorFlow, TensorFlow Lite, and TensorFlow Lite for Microcontrollers?
Google created the TensorFlow framework for designing and building neural network models. TensorFlow Lite was created to run those models on embedded systems and phones. TensorFlow Lite for Microcontrollers (abbreviated TFLM and also known as TFLite Micro) was created to run models on even more resource-constrained devices like MCUs. eIQ includes TensorFlow Lite for the i.MX microprocessor family and TensorFlow Lite for Microcontrollers for the i.MX RT microcontroller family. Note that previous versions of eIQ also included a version of TensorFlow Lite for MCUs that NXP had created, but with SDK 2.10, eIQ has transitioned fully to TFLM for the i.MX RT family.

How can I learn more about using TensorFlow Lite with eIQ?
There is a hands-on TensorFlow Lite for Microcontrollers lab available. There is also an i.MX TensorFlow Lite lab that provides a step-by-step guide on how to get started with eIQ for TensorFlow Lite on i.MX devices.

What is Glow?
Glow is a compiler developed by Facebook that turns a model into a machine-executable binary for the target device. Both the model and the inference engine are compiled into the generated binary, which can then be integrated into an MCUXpresso SDK software project. The advantage of using a model compiler is that, like any other compiler, it can apply optimizations for that particular model, and it can use software acceleration libraries like CMSIS-NN or dispatch instructions to hardware accelerators like the HiFi4 DSP on the i.MX RT685. Glow supports models in the TFLite and ONNX formats as well as Caffe2.

How can I learn more about using Glow?
There are hands-on Glow labs available that provide a step-by-step guide to get started with eIQ for Glow.
There is also a video available on using Glow with the RT1170, as well as a Glow app note that dives into how to calculate Glow memory usage.

How can I learn more about using the DeepViewRT inference engine?
There are hands-on DeepViewRT labs available that provide a step-by-step guide to get started with eIQ for DeepViewRT as well as eIQ Toolkit.

What application notes are available to learn more about eIQ?
- Anomaly Detection App Note: https://www.nxp.com/docs/en/application-note/AN12766.pdf
- Handwritten Digit Recognition: https://www.nxp.com/docs/en/application-note/AN12603.pdf
- Datasets and Transfer Learning App Note: https://www.nxp.com/docs/en/application-note/AN12892.pdf
- Glow Memory Analysis App Note: https://www.nxp.com/docs/en/application-note/AN13001.pdf
- Security for Machine Learning Package: https://www.nxp.com/docs/en/application-note/AN12867.pdf
- i.MX 8M Plus NPU Warmup Time App Note: https://www.nxp.com/docs/en/application-note/AN12964.pdf

Which inference engine should I use for my model?
There are several options, and there is no single correct answer, since which engine performs best can be very model dependent. However, there are some basic guidelines:
- eIQ Toolkit can be used to convert models into the different formats required by different inference engines.
- DeepViewRT can be used with models in the .rtm format generated by eIQ Toolkit.
- Glow is very flexible and can be used with models in the .tflite format as well as the universal ONNX format.
- Only models written in TensorFlow can use the TensorFlow Lite for Microcontrollers inference engine.

What is the advantage of using eIQ instead of using the open-source software directly from GitHub?
eIQ-supported inference engines work out of the box and are already tested and optimized, allowing for performance enhancements compared to the original code. eIQ also includes the software to capture camera or voice data from external peripherals.
eIQ allows you to get up and running within minutes instead of weeks. As a comparison, rolling your own is like grinding your own flour to make a pizza from scratch, instead of just ordering a great pizza from your favorite pizza place.

Does eIQ include ML models? Do I use it to train a model?
eIQ is a collection of software that allows you to Bring Your Own Model (BYOM) and run it on NXP embedded devices. eIQ provides the ability to run your own specialized model on NXP's embedded devices. For those new to AI/ML, we also now offer eIQ Toolkit, which can be used to generate new vision-based AI models using images provided to the tool. There are several inference engine options, like TensorFlow Lite and Glow, that can be used to run your model. The MCUXpresso SDK and the i.MX Linux releases come with several examples that use pre-created models; these can be used to get a sense of what is possible on our platforms, and it is very easy to substitute your own model into those examples.

I'm new to AI/ML and don't know how to create a model. What can I do?
A wide variety of resources are available for creating models, from labs and tutorials, to automated model generation tools like eIQ Toolkit, Google Cloud AutoML, Microsoft Azure Machine Learning, or Amazon ML Services, to 3rd-party partners like SensiML and Au-Zone that can help you define, enhance, and create a model for your specific application. Alternatively, if you have no interest in generating models yourself, NXP also offers several pre-built voice and facial recognition solutions that include the appropriate models already created for you. There are Alexa Voice Services, local voice control, and face and emotion recognition solutions available. Note that these solutions are different from eIQ: they include the model as well as the appropriate hardware, and so those devices are sold as unique part numbers with a cost-optimized BOM to use directly in your final product.
eIQ is for those who want to use their own model or generate a model themselves using eIQ Toolkit. The three solutions mentioned above are for those who want a full solution (including the model) already created for them for those specific applications.

Troubleshooting:

Why do I get an error when running TensorFlow Lite Micro that it "Didn't find op for builtin opcode"?
The full error will look something like this:

    Didn't find op for builtin opcode 'PAD' version '1'
    Failed to get registration from op code ADD
    Failed starting model allocation.
    AllocateTensors() failed
    Failed initializing model

The reason is that in the MCUXpresso SDK, the TFLM examples have been optimized to only support the operators necessary for the default models. If you are using your own model, it may use additional operator types. The fix, described in the TFLM lab guide, is to use the All Ops Resolver. Add the following header file:

    #include "tensorflow/lite/micro/all_ops_resolver.h"

Then comment out the micro_op_resolver and use this instead:

    //tflite::MicroOpResolver &micro_op_resolver =
    //    MODEL_GetOpsResolver(s_errorReporter);
    tflite::AllOpsResolver micro_op_resolver;

How do I use my GPU when training with eIQ Toolkit?
eIQ Toolkit 1.0.5 uses TensorFlow 2.3.2, so to use the GPU when training you will need to install cuDNN v7.6 and CUDA 10.2. If you have newer versions of those tools installed on your PC, you may need to uninstall them first.

Why do I get a blank/black LCD screen when I use the eIQ demos that have camera+LCD support on the RT1170 or RT1160?
The LCD driver needs to be updated to support the new LCD. See this post for more information on how to fix this issue in MCUXpresso SDK 2.11. It will be fixed in SDK 2.12.

Why is the inference speed slower when using TensorFlow Lite for Microcontrollers?
Make sure you are using the "Release" project configuration, which enables high compiler optimizations.
This significantly reduces the inference time for TFLM projects. Glow and DeepViewRT projects are not affected by this setting because they use pre-compiled binaries and pre-compiled libraries, respectively.

General AI/ML:

What is Artificial Intelligence, Machine Learning, and Deep Learning?
Artificial intelligence is the idea of using machines to do "smart" things like a human. Machine learning is one way to implement artificial intelligence; it is the idea that if you give a computer a lot of data, it can learn how to do smart things on its own. Deep learning is a particular way of implementing machine learning by using something called a neural network, and it is one of the more promising subareas of artificial intelligence today. This video series on neural network basics provides an excellent introduction to what a neural network is and the basics of how one works.

What are some uses for machine learning on embedded systems?
- Image classification - identify what a camera is looking at:
  - Coffee pods
  - Empty vs. full trucks
  - Factory defects on a manufacturing line
  - Produce on a supermarket scale
- Facial recognition - identifying faces for personalization without uploading that private information to the cloud:
  - Home personalization
  - Appliances
  - Toys
  - Auto
- Audio analysis:
  - Wake-word detection
  - Voice commands
  - Alarm analytics (breaking glass/crying baby)
- Anomaly detection:
  - Identify factory issues before they become catastrophic
  - Motor analysis
  - Personalized health analysis

What is training and inference?
Machine learning consists of two phases: training and inference.

Training is the process of creating and teaching the model. This occurs on a PC or in the cloud and requires a lot of data. eIQ is not used during the training process.

Inference is using a completed and trained model to do predictions on new data. eIQ is focused on enhancing the inferencing of models on embedded devices.

What are the benefits of "on the edge" inference?
When inference occurs on the embedded device instead of in the cloud, it's called "on the edge". The biggest advantage of on-the-edge inferencing is that the data being analyzed never goes anywhere except the local embedded system, providing increased security and privacy. It also saves BOM cost, because there's no need for Wi-Fi or BLE to get data up to the cloud, and there's no charge for the cloud compute to do the inferencing. It also allows for faster inferencing, since there's no latency from waiting for data to be uploaded and the answer to be received from the cloud.

What processor do I need to do inferencing of models?
Inferencing simply means doing millions of multiply-and-accumulate math calculations - the dominant operation when processing any neural network - which any MCU or MPU is capable of. There is no special hardware or module required to do inferencing. However, specialized ML hardware accelerators, high core clock speeds, and fast memory can drastically reduce inference time.

Determining if a particular model can run on a specific device is based on:
- How long the inference will take to run. The same model will take much longer to run on less powerful devices. The maximum acceptable inference time depends on your particular application.
- Whether there is enough non-volatile memory to store the weights, the model itself, and the inference engine.
- Whether there is enough RAM to keep track of the intermediate calculations and output.

As an example, the performance required for image recognition will be very dependent on the model being used to do the image recognition. This will vary depending on how many classes there are, the size of the images to be analyzed, whether multiple objects or just one will be identified, and how that particular model is structured. In general, image classification can be done on i.MX RT devices, while multiple-object detection requires i.MX devices, as those models are significantly more complex.
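To make the multiply-and-accumulate point above concrete, here is a minimal plain-Python sketch of one fully connected neural-network layer, with made-up weights and no ML framework; it is an illustration of the arithmetic, not code from eIQ:

```python
# A fully connected layer reduces to multiply-accumulate (MAC) loops.
# The weights, biases, and inputs below are made-up illustrative values.

def dense_layer(inputs, weights, biases):
    """Compute outputs[j] = sum_i(inputs[i] * weights[i][j]) + biases[j]."""
    outputs = []
    for j in range(len(biases)):
        acc = biases[j]
        for i, x in enumerate(inputs):
            acc += x * weights[i][j]   # one multiply-accumulate operation
        outputs.append(acc)
    return outputs

# 3 inputs -> 2 outputs: 3 * 2 = 6 MAC operations for this tiny layer
inputs = [1.0, 2.0, 3.0]
weights = [[0.1, 0.4],
           [0.2, 0.5],
           [0.3, 0.6]]
biases = [0.5, -0.5]
print(dense_layer(inputs, weights, biases))
```

Counting the inner-loop iterations gives the MAC count (here 6); a real vision model runs millions of such operations per inference, which is why accelerators, clock speed, and memory bandwidth dominate inference time.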
eIQ provides several examples of image recognition for i.MX RT and i.MX devices, and your own custom models can be easily evaluated using those example projects.

How is accuracy affected when running on slower/simpler MCUs?
The same model running on different processors will give the exact same result when given the same input; it will just take longer to run the inference on a slower processor. In order to get an acceptable inference time on a simpler MCU, it may be necessary to simplify the model, which will affect accuracy. How much the accuracy is affected is extremely model dependent and also very dependent on what techniques are used to simplify the model.

What are some ways models can be simplified?
- Quantization - transforming the model from its original 32-bit floating-point weights to 8-bit fixed-point weights. This requires 1/4 the space for the weights, and fixed-point math is faster than floating-point math. It often does not have much impact on accuracy, but that is model dependent.
- Fewer output classifications can allow for a simpler yet still accurate model.
- Decreasing the input data size (e.g., a 128x128 image input instead of 256x256) can reduce complexity, with a trade-off in accuracy due to the reduced resolution. How large that trade-off is depends on the model and requires experimentation to find.
- Software could rotate an image to a specific position using classic image manipulation techniques, so the neural network for identification can be much smaller while maintaining good accuracy, compared to the case where the neural network has to analyze an image that could be in any possible orientation.

What is the difference between image classification, object detection, and instance segmentation?
Image classification identifies an entire image and gives a single answer for what it thinks it is seeing. Object detection is detecting one or more objects in an image. Instance segmentation is finding the exact outline of the objects in an image.
Larger and more complex models are needed to do object detection or instance segmentation compared to image classification.

What is the difference between facial detection and facial recognition?
Facial detection finds any human face. Facial recognition identifies a particular human face. A model that does facial recognition will be more complex than a model that only does facial detection.

How come I don't see 100% accuracy on the data I trained my model on?
Models need to generalize the training data in order to avoid overfitting. This means a model will not always give 100% confidence, even on the data it was trained on.

What are some resources to learn more about machine learning concepts?
- Video series on neural network basics
- ARM Embedded Machine Learning for Dummies
- Google TensorFlow lab
- Google Machine Learning Crash Course
- Google Image Classification Practica
- YouTube series on the basics of ML and TensorFlow (ML Zero to Hero series)
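The 8-bit quantization described in the model-simplification answer above can be sketched in plain Python. The sketch below uses the common affine scale/zero-point scheme (the approach TensorFlow Lite uses for 8-bit quantization); the weight values are made up for illustration:

```python
# Affine 8-bit quantization sketch: map float weights to signed int8
# values via a scale and zero-point. Weight values are illustrative only.

def quantize_params(weights, num_bits=8):
    """Compute the scale and zero-point for mapping floats to int8."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1  # -128..127
    wmin = min(min(weights), 0.0)  # the representable range must include 0
    wmax = max(max(weights), 0.0)
    scale = (wmax - wmin) / (qmax - qmin)
    zero_point = round(qmin - wmin / scale)
    return scale, zero_point

def quantize(w, scale, zero_point):
    return max(-128, min(127, round(w / scale) + zero_point))

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

weights = [-0.42, 0.0, 0.17, 0.91, -1.3]
scale, zp = quantize_params(weights)
# Each int8 weight needs 1/4 the storage of a float32, at the cost of a
# small rounding error (at most about half the scale) per weight:
for w in weights:
    r = dequantize(quantize(w, scale, zp), scale, zp)
    print(f"{w:+.4f} -> {r:+.4f} (error {abs(w - r):.4f})")
```

This is why quantization often costs little accuracy: the per-weight error is bounded by the quantization step, which is small relative to typical weight magnitudes.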
The attached lab guide walks through step-by-step how to use the new Application Software Pack for the ML-based System State Monitor found on GitHub. This is related to AN13562 - Building and Benchmarking Deep Learning Models for Smart Sensing Appliances on MCUs. This lab guide was written for the RT1170, but the application software pack now also supports the LPC55S69 and Kinetis K66F devices. It can also be ported to other i.MX RT, LPC, and Kinetis devices. There's also a document on dataset creation that goes into more detail on the considerations to make when gathering data. For more details, visit the ML-based System State Monitor website on NXP.com.
Two new LCD panels for i.MX RT EVKs are now available. However, the new LCD panel is not supported by the i.MX RT1160/RT1170 eIQ demos in MCUXpresso SDK 2.11, so some changes need to be made to use the new panels.

For i.MX RT1050/RT1060/RT1064 eIQ demos in MCUXpresso SDK 2.11 that use the LCD: The eIQ demos are only configured for the original panel. However, because the eIQ demos do not use the touch controller, all eIQ demos for i.MX RT1050/1060/1064 will work fine with both the original and new LCD panels without any changes.

For i.MX RT1160/RT1170 eIQ demos in MCUXpresso SDK 2.11 that use the LCD: The eIQ demos are still only configured for the original panel. MCUXpresso SDK 2.12 will support both panels when it is released later this summer. In the meantime, for those who have the new LCD panel, some changes need to be made to the eIQ demos for i.MX RT1160/RT1170, otherwise you will just get a black or blank screen.
1. Unzip the MCUXpresso SDK if not done so already.
2. Open an eIQ project.
3. Find the directory the eIQ project is located in by right-clicking on the project name and selecting Utilities->Open directory browser here.
4. Copy both the fsl_hx8394.c and fsl_hx8394.h files found in \SDK_2_11_1_MIMXRT1170-EVK\components\video\display\hx8394\ into your eIQ project. You can place them in the video folder, which would typically be located at C:\Users\username\Documents\MCUXpressoIDE_11.5.0_7232\workspace\eiq_demo_name\video
5. Overwrite the eiq_display_conf.c and eiq_display_conf.h files in the eIQ project with the updated versions attached to this post. Typically these files would be located at C:\Users\username\Documents\MCUXpressoIDE_11.5.0_7232\workspace\eiq_demo_name\source\video
6. Compile the project as normal, and the eIQ demo will now work with the new LCD panel for the RT1160/RT1170.
The eIQ Glow neural network compiler software for i.MX RT devices found in the MCUXpresso SDK package can be ported to other microcontroller devices in the RT family, as well as to some LPC and Kinetis devices. Glow supports compiling machine learning models for Cortex-M4, Cortex-M7, and Cortex-M33 cores out of the box. Because inferencing simply means doing millions of multiply-and-accumulate math calculations - the dominant operation when processing any neural network - most embedded microcontrollers can support inferencing of a neural network model. There is no special hardware or module required to do the inferencing. However, specialized ML hardware accelerators, high core clock speeds, and fast memory can drastically reduce inference time. The minimum hardware requirements are also extremely dependent on the particular model being used.

Determining if a particular model can run on a specific device is based on:
- How long the inference will take to run. The same model will take much longer to run on less powerful devices. The maximum acceptable inference time depends on your particular application and your particular model.
- Whether there is enough non-volatile memory to store the weights, the model itself, and the inference engine.
- Whether there is enough RAM to keep track of the model's intermediate calculations and output.

The minimum memory requirements for a particular model when using Glow can be found with a simple formula, using numbers found in the Glow bundle header file after compiling your model:

    Flash: Base Project + CONSTANT_MEM_SIZE + .o object file
    RAM:   Base Project + MUTABLE_MEM_SIZE + ACTIVATIONS_MEM_SIZE

More details can be found in the Glow Memory Usage app note.

The attached guide walks through how to port Glow to the LPC55S69 family, based on the Cortex-M33 core. Similar steps can be done to port Glow to other NXP microcontroller devices.
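The flash/RAM formula above can be expressed as a small budgeting helper. Note that every number in the example call below is an illustrative placeholder; the real CONSTANT_MEM_SIZE, MUTABLE_MEM_SIZE, and ACTIVATIONS_MEM_SIZE values come from your compiled bundle's header file, and the base project sizes come from your own map file:

```python
# Glow bundle memory budgeting, following the formula in the text:
#   Flash = base project + CONSTANT_MEM_SIZE + .o object file
#   RAM   = base project + MUTABLE_MEM_SIZE + ACTIVATIONS_MEM_SIZE

def glow_memory_estimate(base_flash, object_file_size, constant_mem_size,
                         base_ram, mutable_mem_size, activations_mem_size):
    """Return (flash_bytes, ram_bytes) needed for a Glow bundle."""
    flash = base_flash + constant_mem_size + object_file_size
    ram = base_ram + mutable_mem_size + activations_mem_size
    return flash, ram

# Illustrative numbers only -- substitute values from your bundle header
# and linker map file.
flash, ram = glow_memory_estimate(
    base_flash=64 * 1024,            # base project code size
    object_file_size=24 * 1024,      # compiled .o bundle
    constant_mem_size=96 * 1024,     # model weights (CONSTANT_MEM_SIZE)
    base_ram=16 * 1024,
    mutable_mem_size=8 * 1024,       # model inputs/outputs
    activations_mem_size=48 * 1024)  # intermediate results
print(f"Flash: {flash // 1024} KB, RAM: {ram // 1024} KB")
```

Comparing these totals against the target device's flash and SRAM sizes is a quick first check of whether a given model can fit before attempting a port.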
This guide is made available as a reference for users interested in exploring Glow on other devices not currently supported in the MCUXpresso SDK.  These other eIQ porting guides might also be of interest: TensorFlow Lite Porting Guide for RT685
This lab covers how to take an existing TensorFlow image classification model named MobileNet and re-train it to categorize images of flowers. This is known as transfer learning. The updated model is then saved as a TensorFlow Lite file. By using that file with the TensorFlow Lite for Microcontrollers inference engine that is part of NXP's eIQ package, the model can be run on an i.MX RT embedded device. A camera attached to the board can then be used to look at photos of flowers, and the model will determine what type of flower the camera is looking at. These same steps can then be used for classifying other types of images too.

This lab can also be used without a camera+LCD, but in that scenario the flower images will need to be converted to a C array and loaded at compile time.

Attached to this post you will find:
- Photos to test out the new model
- A script for retraining a model
- A lab document on how to do transfer learning on a TensorFlow model and then run that TFLite model on the i.MX RT family using TensorFlow Lite for Microcontrollers. The use of the camera+LCD is optional.
  - If you have a camera+LCD, use: eIQ TensorFlow Lite for Microcontrollers for i.MX RT170 - With Camera.pdf
  - If you do not have a camera or LCD, use: eIQ TensorFlow Lite for Microcontrollers for i.MX RT170 - Without Camera.pdf
  - If using the RT685, use: eIQ TensorFlow Lite for Microcontrollers for i.MX RT685 - Without Camera.pdf

This lab supports the following boards:
- i.MX RT685-EVK
- i.MX RT1050-EVKB
- i.MX RT1060-EVK
- i.MX RT1064-EVK
- i.MX RT1160-EVK
- i.MX RT1170-EVK

Updated April 2022 for Python 3.9 and TF 2.6. Also added the RT685 HiFi4 DSP lab.
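For the no-camera path above, test images are compiled into the project as C arrays. A minimal Python sketch of that conversion is below (similar in spirit to what `xxd -i` produces); the function and array names are illustrative, not part of the lab materials:

```python
# Render raw image bytes as a C array that can be compiled into a project.
# Names ("image_data") and the dummy payload are illustrative only.

def bytes_to_c_array(data, name="image_data", per_line=12):
    """Return C source declaring `data` as a const unsigned char array."""
    lines = []
    for i in range(0, len(data), per_line):
        chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + per_line])
        lines.append("    " + chunk + ",")
    body = "\n".join(lines)
    return (f"const unsigned char {name}[] = {{\n{body}\n}};\n"
            f"const unsigned int {name}_len = {len(data)};\n")

# In practice the input would be the raw pixels of a resized RGB image,
# e.g. data = open("flower.bmp", "rb").read()
data = bytes(range(16))  # dummy payload for illustration
print(bytes_to_c_array(data))
```

The generated .c file can then be added to the MCUXpresso project and the array passed to the inference code in place of a camera frame.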
The attached labs provide a step-by-step guide on how to use the eIQ for Glow neural network compiler with a handwritten digit recognition model example. This compiler tool turns a model into a machine-executable binary for a targeted device. Both the model and the inference engine are compiled into the generated binary, which can decrease both inference time and memory usage. That binary can then be integrated into an MCUXpresso SDK software project.

The eIQ Glow Lab for RT1170.pdf can be used with the i.MX RT1170, RT1160, RT1064, RT1060, and RT1050. The eIQ Glow Lab for RT685.pdf can be used with the RT685.

A step-by-step video is also available. You will need to download the Glow compiler tools package as well as the latest MCUXpresso SDK for the board you're using. More details on Glow can be found in the eIQ Glow Ahead of Time User Guide and on the Glow website.
Updated April 2022 for Python 3.9
eIQ Toolkit enables machine learning development with an intuitive GUI (named eIQ Portal) and development workflow tools, along with command-line host tool options, as part of the eIQ ML software development environment. The eIQ Portal, developed in exclusive partnership with Au-Zone Technologies, is an intuitive graphical user interface (GUI) that simplifies ML development. Developers can create, optimize, debug and export ML models, as well as import datasets and models, and rapidly train and deploy neural network models and ML workloads. The eIQ Portal provides output software that seamlessly feeds into the DeepViewRT™, TensorFlow™ Lite, TensorFlow Lite Micro, Glow, Arm NN and ONNX Runtime inference engines. Using a tool called Model Runner, eIQ Toolkit can also provide graph-level profiling with runtime insights to help optimize neural network architectures on i.MX RT and i.MX devices.

These labs go over how to use eIQ Portal and the DeepViewRT inference engine on i.MX RT devices. It is recommended to do them in the following order:
1. Getting Started Lab
2. Data Import Lab
3. Model Runner Lab

The labs are written for the i.MX RT1170 EVK, but other devices that support DeepViewRT in MCUXpresso SDK 2.10 can be used as well:
- i.MX RT1050
- i.MX RT1060
- i.MX RT1064
- i.MX RT1160
- i.MX RT1170

Note that the models generated with eIQ Portal can also be used with the TensorFlow Lite for Microcontrollers and Glow inference engines, which are supported by eIQ on the above boards as well as:
- i.MX RT500
- i.MX RT600
Convolutional Neural Networks are the most popular NN approach to image recognition. Image recognition can be used for a wide variety of tasks like facial recognition for monitoring and security, car vision for safety and traffic sign recognition or augmented reality. All of these tasks require low latency, great security, and privacy, which can’t be guaranteed when using Cloud-based solutions. NXP eIQ makes it possible to run Deep Neural Network inference directly on an MCU. This enables intelligent, powerful, and affordable edge devices everywhere.   As a case study about CNNs on MCUs, a handwritten digit recognition example was created. It runs on the i.MX RT1060 and uses an LCD touch screen as the input interface. The application can recognize digits drawn with a finger on the LCD.   Handwritten digit recognition is a popular “hello world” project for machine learning. It is usually based on the MNIST dataset, which contains 70000 images of handwritten digits. Many machine learning algorithms and techniques have been benchmarked on this dataset since its creation. Convolutional Neural Networks are among the most successful.   The code is also accompanied by an application note describing how it was created and explaining the technologies it uses. The note talks about the MNIST dataset, TensorFlow, the application’s accuracy and other topics.     Application note URL: https://www.nxp.com/docs/en/application-note/AN12603.pdf (can be found at the documentation page for the i.MX RT1060)   Application code is in the attached zip files: *_eiq_mnist is the basic application from the first image and *_eiq_mnist_lock is the extended version from the second image. The applications are provided in the form of MCUXpresso projects and require an existing installation of the i.MX RT1060/RT1170 SDK with the eIQ component included.   
The software for this AN was also ported to CMSIS-NN, using a Caffe version of the MNIST model, in a follow-up AN, which can be found here: https://www.nxp.com/docs/en/application-note/AN12781.pdf
eIQ for i.MX RT devices can be downloaded from https://mcuxpresso.nxp.com

The current MCUXpresso SDK 2.10 release supports the following devices:
i.MX RT500
i.MX RT600
i.MX RT1050
i.MX RT1060
i.MX RT1064
i.MX RT1160
i.MX RT1170

Full details on how to download eIQ and run it with MCUXpresso IDE, IAR, or Keil MDK can be found in the attached Getting Started guide.

For more information about eIQ and some hands-on labs for the i.MX RT family, see the following links:
eIQ FAQ
Getting Started with eIQ Portal and DeepViewRT for i.MX RT
Getting Started with Glow for i.MX RT
Getting Started with TensorFlow Lite for Microcontrollers for i.MX RT
eIQ Porting Guide for Glow
Anomaly Detection App Note
Handwritten Digit Recognition App Note
Datasets and Transfer Learning App Note
Security for Machine Learning Package: https://www.nxp.com/docs/en/application-note/AN12867.pdf
MCUXpresso SDK 2.10 for RT1064 now includes eIQ projects for all eIQ inference engines, so this Knowledge Base article is now deprecated. The instructions are being left up in case users on older versions of the SDK, from before i.MX RT1064 eIQ was fully supported, need these steps in the future. Users with an i.MX RT1064 EVK should simply use SDK 2.10 or later, which includes all the eIQ projects natively for i.MX RT1064.

1. Import an i.MX RT1060 project into the SDK. For this example, we'll use the Label Image demo.
2. Right-click on the project in the workspace and select Properties.
3. Open the C/C++ Build -> MCU Settings page.
4. Change the "Location" of the BOARD_FLASH parameter to 0x70000000, which is where the flash is located on the RT1064. Also adjust the size to be 0x400000. You will need to type it out.
5. Next, change the "Driver" parameter so the debugger knows to use the flash algorithm for the RT1064 board. Click on that field and a "..." icon will appear. Click on it.
6. Change the flash driver to MIMXRT1064.cfx.
7. Click OK to close the dialog box, then click Apply and Close to close the Properties dialog box.
8. Next, modify the MPU settings for the new flash address. Open the board.c file and modify lines 322 and 323 to change the memory address to start at 0x70000000 and use a 4 MB region size.
9. Next, modify the clock settings code to ensure that FlexSPI2 is enabled. The clock setup code in the RT1060 SDK disables FlexSPI2, so we need to comment out that code in order to run the example on the RT1064. Open the clock_config.c file and comment out lines 264, 266, and 268.
10. Finally, open the fsl_flexspi_nor_boot.h file and modify the FLASH_BASE define on line 103 to use FlexSPI2_AMBA_BASE.
11. Compile and debug the project as normal, and it will now run on the RT1064 board.
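To illustrate the address math behind the MPU step above, here is a hedged Python sketch of how the Cortex-M7 MPU encodes a region size in its RASR SIZE field. The helper name is ours; CMSIS-Core provides equivalent `ARM_MPU_REGION_SIZE_*` macros that board.c typically uses.

```python
def mpu_region_size_field(size_bytes):
    """Cortex-M7 MPU RASR SIZE field: region size = 2**(SIZE + 1),
    so SIZE = log2(size_bytes) - 1. The size must be a power of two
    and at least 32 bytes."""
    assert size_bytes >= 32 and (size_bytes & (size_bytes - 1)) == 0
    # bit_length() of 2**n is n + 1, so subtracting 2 gives n - 1
    return size_bytes.bit_length() - 2

flash_base = 0x70000000   # RT1064 internal flash (FlexSPI2 AMBA base)
flash_size = 0x400000     # 4 MB
print(hex(flash_base), mpu_region_size_field(flash_size))  # 0x70000000 21
```

The value 21 matches the CMSIS `ARM_MPU_REGION_SIZE_4MB` encoding, which is what the memory-region lines in board.c should end up specifying after the edit.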
Updated July 2021 for SDK 2.10 release. 
See the latest version of this document here: https://community.nxp.com/t5/eIQ-Machine-Learning-Software/eIQ-FAQ/ta-p/1099741 
The attached file serves as a table of contents for the various collateral, documents, training, etc. that support eIQ software.
Transfer learning is one of the most important techniques in machine learning. It gives machine learning models the ability to apply past experience to learn to solve new problems more quickly and accurately. This approach is most commonly used in natural language processing and image recognition. However, even with transfer learning, if you don't have the right dataset, you will not get very far.

This application note aims to explain transfer learning and the importance of datasets in deep learning. The first part of the AN goes through the theoretical background of both topics. The second part describes a use case example based on the application from AN12603. It shows how a dataset of handwritten digits can be collected to match the input style of the handwritten digit recognition application. Afterwards, it illustrates how transfer learning can be used with a model trained on the original MNIST dataset to retrain it on the smaller custom dataset collected in the use case.

In the end, the AN shows that although handwritten digit recognition is a simple task for neural networks, it can still benefit from transfer learning. Training a model from scratch is slower and yields worse accuracy, especially if a very small number of examples is used for training.

Application note URL: https://www.nxp.com/docs/en/application-note/AN12892.pdf
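As a toy, framework-free illustration of the idea (not the AN's actual TensorFlow workflow), the sketch below keeps a "pretrained" feature extractor frozen and fits only a new output layer on a small dataset. All names and the random data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor: weights learned on a
# source task (e.g. MNIST) and kept frozen during transfer learning.
W_frozen = rng.standard_normal((64, 16))

def features(x):
    return np.tanh(x @ W_frozen)   # frozen layers: never updated

# Small custom dataset for the new task (stand-in for the AN's
# custom handwritten-digit data).
X_new = rng.standard_normal((20, 64))
y_new = rng.standard_normal((20, 1))

# Transfer learning in miniature: only the final layer is trained,
# here with a closed-form least-squares fit on the frozen features.
F = features(X_new)
W_head, *_ = np.linalg.lstsq(F, y_new, rcond=None)

pred = F @ W_head
print(pred.shape)  # (20, 1)
```

The point mirrors the AN's conclusion: because only the small head is fit, far fewer examples are needed than when training every layer from scratch.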
The two attached demos use models that were compiled with the Glow AOT tools and a camera connected to the i.MXRT1060-EVK to generate data for inferencing. The default MCUXpresso SDK Glow demos run inference on static images; these demos expand those projects to run inference on camera data. Each demo uses the default model found in the SDK. A readme.txt file in the /doc folder of each demo provides details, and a PDF in that same /doc folder has example images to point the camera at for inferencing.

Software requirements
====================
- MCUXpresso IDE 11.2.x
- MCUXpresso SDK for RT1060

Hardware requirements
=====================
- Micro USB cable
- Personal computer
- IMXRT1060-EVK board with included camera + RK043FN02H-CT LCD

Board settings
==============
Camera and LCD connected to i.MXRT1060-EVK

Prepare the demo
================
1. Install and open MCUXpresso IDE 11.2
2. If not already done, import the RT1060 MCUXpresso SDK by dragging and dropping the zipped SDK file into the "Installed SDKs" tab.
3. Download one of the attached zip files and import the project using "Import project(s) from file system..." from the Quickstart panel. Use the "Archive" option to select the zip file that contains this project.
4. Build the project by clicking on "Build" in the Quickstart Panel.
5. Connect a USB cable between the host PC and the OpenSDA port (J41) on the target board.
6. Open a serial terminal with the following settings:
- 115200 baud rate
- 8 data bits
- No parity
- One stop bit
- No flow control
7. Download the program to the target board by clicking on "Debug" in the Quickstart Panel, then click on the "Resume" button in the Debug perspective that comes up to run the demo.

Running the demo
================
For the CIFAR10 demo: Use the camera to look at images of airplanes, ships, deer, etc. that can be recognized by the CIFAR10 model.
The included PDF can be used for example images.

For the MNIST demo: Use the camera to look at handwritten digits which can be recognized by the LeNet MNIST model. The included PDF can be used for example digits, or you can write your own.

For further details, see the readme.txt file found inside each demo in the /doc directory. Also see the Glow Lab for i.MX RT for more details on how to compile neural network models with Glow.
The eIQ CMSIS-NN software for i.MX RT devices found in the MCUXpresso SDK package can be ported to other microcontroller devices in the RT family, as well as to some LPC and Kinetis devices.

A very common question is which processors support inferencing of models. The answer is that inferencing simply means doing millions of multiply-and-accumulate math operations (the dominant operation when processing any neural network), which almost any MCU or MPU is capable of. There's no special hardware or module required to do inferencing, though high core clock speeds and fast memory can drastically reduce inference time. Determining if a particular model can run on a specific device comes down to:
How long the inference will take to run. The same model will take much longer on less powerful devices, and the maximum acceptable inference time depends on your particular application and the particular model.
Whether there is enough non-volatile memory to store the weights, the model itself, and the inference engine.
Whether there is enough RAM to keep track of the intermediate calculations and output.

The attached guide walks through how to port the CMSIS-NN inference engine to the LPC55S69 family. Similar steps can be followed to port eIQ to other microcontroller devices. This guide is made available as a reference for users interested in exploring eIQ on other devices; however, only the RT1050 and RT1060 are officially supported for CMSIS-NN for MCUs as part of eIQ at this time.

These other eIQ porting guides might also be of interest:
Glow Porting Guide for MCUs
TensorFlow Lite Porting Guide for RT685
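A minimal sketch of why inference reduces to multiply-accumulate work: a fully connected layer is nothing more than a MAC loop. Plain Python is used here for clarity; real engines like CMSIS-NN run the same loop with optimized fixed-point kernels.

```python
def dense_layer(inputs, weights, biases):
    """A fully connected layer expressed as plain multiply-accumulate
    (MAC) operations -- the dominant work in any NN inference."""
    outputs = []
    for w_row, b in zip(weights, biases):
        acc = b
        for x, w in zip(inputs, w_row):
            acc += x * w          # one MAC
        outputs.append(acc)
    return outputs

# 4 inputs -> 3 outputs costs 4 * 3 = 12 MACs; a full model simply
# chains many such layers, hence millions of MACs per inference.
x = [1, 2, 3, 4]
W = [[1, 0, 0, 0], [0, 1, 0, 0], [1, 1, 1, 1]]
b = [0, 0, 0]
print(dense_layer(x, W, b))  # [1, 2, 10]
```

Counting the MACs this way is also a quick back-of-the-envelope estimate of inference time on a given core clock.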
Getting Started with eIQ Software for i.MX Applications Processors Getting Started with eIQ for i.MX RT
eIQ Software for i.MX application processors

eIQ Machine Learning Software for iMX Linux - 5.4.3_1.0.0 GA for i.MX6/7 and i.MX8MQ/8MM/8MN/8QM/8QXP has been released. eIQ Machine Learning Software for iMX Linux - 5.4.24_2.1.0 BETA for i.MX8QXPlus, BETA for i.MX8MP, and ALPHA 2 for i.MX8DXL has been released. It contains machine learning support for Arm NN, TensorFlow and TensorFlow Lite, ONNX, and OpenCV. For running on Arm Cortex-A cores, these inference engines are accelerated with Arm NEON instructions. For running on the NPU (of the i.MX 8M Plus) and i.MX 8 GPUs, NXP has included optimizations with the Arm NN and TensorFlow Lite inference engines.

For more information and complete details, please see the "NXP eIQ Machine Learning" chapter in the Linux User Guide (starting with the L4.19 releases; users of L4.14 releases should refer to NXP eIQ™ Machine Learning Software Development Environment for i.MX Applications Processors). You can access corresponding sample applications at https://source.codeaurora.org/external/imxsupport/eiq_sample_apps/

For more information on artificial intelligence, machine learning, and eIQ Software, please visit AI & Machine Learning | NXP.

eIQ Software for i.MX RT crossover processors

eIQ is now included in the MCUXpresso SDK package for i.MX RT1050 and i.MX RT1060.
Go to https://mcuxpresso.nxp.com and search for the SDK for your board
On the SDK builder page, click on "Add software component"
Click on "Select All" and verify the eIQ software option is now checked. Then click on "Save Changes"
Download the SDK. It will be saved as a .zip file.
eIQ projects can be found in the \boards\<board_name>\eiq_examples folder
eIQ source code can be found in the \middleware\eiq folder

More details can be found in this Community post on how to get started with eIQ on i.MX RT devices.
The eIQ demos for i.MX RT use arrays for input data for inferencing. The attached guide and scripts describe how to create custom input data (both images and audio) for use with the eIQ examples.
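For instance, turning raw bytes into a C array initializer of the kind the demos embed can be done with a few lines of Python. The function name and output format below are illustrative, not the exact output of the attached scripts.

```python
def bytes_to_c_array(data, name="input_data"):
    """Emit a C array initializer from raw bytes (e.g. pixels or
    audio samples), the general form the eIQ demos use for static
    input data."""
    body = ", ".join(f"0x{b:02x}" for b in data)
    return f"const unsigned char {name}[{len(data)}] = {{ {body} }};"

print(bytes_to_c_array(b"\x01\x02\xff"))
# const unsigned char input_data[3] = { 0x01, 0x02, 0xff };
```

The generated line can be pasted into (or written out as) a header file and compiled into the demo in place of the stock input array. See the attached guide for the demo-specific preprocessing (resizing, channel order, normalization) each example expects.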
The attached project enables users to capture the camera data from an i.MXRT1060-EVK board and save it onto a microSD card. This project does not run inference on a model; instead, it is meant to generate images that can then be used on a PC for training a model. The images are saved in RGB NHWC format as binary files on the microSD card, and a Python script running on the PC can then convert those binary files into the PNG image format.

Software requirements
====================
- MCUXpresso IDE 11.2.x
- MCUXpresso SDK for RT1060

Hardware requirements
=====================
- Micro USB cable
- IMXRT1060-EVK board with included camera + RK043FN02H-CT LCD
- MicroSD card
- Personal computer (Windows)
- MicroSD card reader
- Python 3.x installed

Board settings
==============
Camera and LCD connected to i.MXRT1060-EVK

Prepare the demo
================
1. Insert a microSD card into the microSD card slot on the i.MXRT1060-EVK (J39)
2. Open MCUXpresso IDE 11.2 and import the project using "Import project(s) from file system..." from the Quickstart panel. Use the "Archive" option to select the zip file that contains this project.
3. If desired, two #define values can be modified in camera_capture.c to adjust the captured image size:
#define EXTRACT_HEIGHT 256 //Max EXTRACT_HEIGHT value possible is 271 (due to border drawing)
#define EXTRACT_WIDTH 256 //Max EXTRACT_WIDTH value possible is 480
4. Build the project by clicking on "Build" in the Quickstart Panel
5. Connect a USB cable between the host PC and the OpenSDA port (J41) on the target board.
6. Open a serial terminal with the following settings:
- 115200 baud rate
- 8 data bits
- No parity
- One stop bit
- No flow control
7. Download the program to the target board by clicking on "Debug" in the Quickstart Panel
8. Click on the "Resume" icon to begin running the demo.

Running the demo
================
The terminal will ask for a classification name.
This will create a new directory on the SD card with that name. The name is limited to 5 characters because the FATFS file system supports only 8 characters in a file name, and three of those characters are used to number the images.

For best results, the selection rectangle should be centered on the subject, which should nearly (but not completely) fill up the whole rectangle. The camera should be stabilized with your finger or by some other means to prevent shaking. Also ensure the camera lens has been focused as described in the instructions for connecting the camera and LCD.

While running the demo, press the 'c' key to enter a new classification name, or press 'q' to quit the program and remove the microSD card.

Transfer the .bin files created on the SD card to your PC, into the same directory as the Python script found in the "scripts" directory, then run the script to convert the images to PNG format. If the captured image is a square (width == height), the script can be called with:
python convert_image.py directory_name
which will convert all the .BIN files in the specified directory to PNG files. If the captured image is not a square, the width and height can be specified on the command line:
python convert_image.py directory_name width height

Terminal Output
==============
Camera SD Card Capture
Extracted Image: Height x Width: 256x256
Please insert a card into board.
Card inserted.
Mounting SD Card
Enter name of new class (must be less than 5 characters): test
Creating directory test......
Press any key to capture image. Press 'c' to change class or 'q' to quit
Writing file /test/test001.bin......
Write Complete
Press any key to capture image. Press 'c' to change class or 'q' to quit
Remove SD Card
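As a simplified sketch of the first half of what convert_image.py does, the stdlib-only helper below reads a raw RGB NHWC capture, assuming the .bin layout described above (height * width * 3 bytes, RGB order). The function name is ours, and the real script additionally writes out the PNG.

```python
import struct

def load_rgb_nhwc(path, height, width):
    """Read a raw RGB image saved by the capture demo as H*W*3 bytes
    and return it as nested [row][col] = (r, g, b) tuples."""
    with open(path, "rb") as f:
        raw = f.read()
    expected = height * width * 3
    if len(raw) != expected:
        raise ValueError(f"expected {expected} bytes, got {len(raw)}")
    # Each pixel is three consecutive unsigned bytes: R, G, B
    pixels = list(struct.iter_unpack("BBB", raw))
    return [pixels[r * width:(r + 1) * width] for r in range(height)]

# A default 256x256 capture should therefore be 256*256*3 = 196,608 bytes.
```

The byte-count check is a quick way to confirm the width/height arguments passed to convert_image.py match how EXTRACT_HEIGHT and EXTRACT_WIDTH were set on the board.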