eIQ FAQ


This document will cover some of the most commonly asked questions we've gotten about eIQ and embedded machine learning.

Anything requiring more in-depth discussion/explanation will be put in a separate thread. All new questions should go into their own thread as well.

 

What is eIQ?

eIQ is a collection of libraries and development tools for building machine learning applications for NXP MCUs and applications processors. It allows users to run machine learning models on embedded devices. It is Bring Your Own Model (BYOM) enablement: the focus is on running inference of models on an embedded device, with a variety of open-source options.

 

How much does eIQ cost?

Free! NXP is making eIQ freely available as a basic enablement to jumpstart ML application development. It is also royalty-free.

 

What devices are supported by eIQ?

eIQ is available for the following i.MX application processors:

 

eIQ is available for the following i.MX RT crossover MCUs:

 

What inference engines are available in eIQ?

i.MX apps processors and i.MX RT MCUs support different inference engines.

 

Inference engines for i.MX:

  • TensorFlow Lite (supported on both CPU and GPU/NPU)
  • Arm NN (supported on both CPU and GPU/NPU)
  • OpenCV (supported on CPU only)
  • ONNX Runtime (currently supported on CPU only)

Inference engines for i.MX RT1060 and RT1050:

Inference engines for i.MX RT685:

 

 

Can eIQ run on other MCU devices?

Porting guides have been made available as a reference for users interested in using eIQ on other devices. However, only the RT1060, RT1050, and RT685 are officially supported at this time as part of eIQ for MCUs.

 

How can I get eIQ?

For i.MX RT devices:

eIQ is included as part of MCUXpresso SDK. Make sure to select the “eIQ” middleware option for RT1050 and RT1060 devices, or the “DSP Neural network” middleware option for RT685. If interested in using Glow, the Glow tools package must also be installed.

 

For i.MX devices:

eIQ is distributed as part of the Yocto Linux BSP. Starting with the 4.19 release line, there is a dedicated Yocto image that includes all the machine learning features: ‘imx-image-full’. For pre-built binaries, refer to the i.MX Linux Releases and Pre-releases pages.

 

What documentation is available for eIQ?

For i.MX RT devices: 

There are user guides for Glow, TensorFlow Lite, and CMSIS-NN inside the SDK documentation package when downloading the SDK with the MCUXpresso SDK builder. The Glow user guide can also be found here.


 

 

For i.MX devices:

The eIQ documentation for i.MX is integrated in the Yocto BSP documentation. Refer to i.MX Linux Releases and Pre-releases pages.

  • i.MX Reference Manual: presents an overview of the NXP eIQ Machine Learning technology.
  • i.MX Linux User's Guide: presents detailed instructions on how to run and develop applications using the ML frameworks available in eIQ (currently ArmNN, TFLite, OpenCV, and ONNX).
  • i.MX Yocto Project User's Guide: presents build instructions to include eIQ ML support (check sections referring to ‘imx-image-full’ that includes all eIQ features).

It is recommended to also check the i.MX Linux Release Notes, which include eIQ details.

 

For i.MX devices, what type of Machine Learning applications can I create? 

Following the BYOM principle described above, you can create a wide variety of applications to run on i.MX. To help kickstart your efforts, refer to PyeIQ – a collection of demos and applications that demonstrate the machine learning capabilities available on i.MX.

  • They are very easy to use (install with a single command, retrieve input data automatically)
  • The implementation is very easy to understand (using the Python API for TFLite, ArmNN, and OpenCV)
  • They demonstrate several types of ML applications (e.g., object detection, classification, facial expression detection) running on the different compute units available on i.MX to execute the inference (Cortex-A, GPU, NPU).

 

Can I use the Python API provided by PyeIQ to develop my own application on i.MX devices?

For developing a custom application in Python, it is recommended to directly use the Python API for ArmNN, TFLite, and OpenCV. Refer to the i.MX Linux User’s Guide for more details.
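
As a rough illustration, here is a minimal sketch of running inference with the TFLite Python API. It assumes a quantized image-classification model; the model and image file names are placeholders to replace with your own files:

    # Minimal sketch: classify one image with the TFLite Python API.
    import numpy as np
    from PIL import Image
    import tflite_runtime.interpreter as tflite

    interpreter = tflite.Interpreter(model_path="mobilenet_v1.tflite")
    interpreter.allocate_tensors()
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Resize the image to the model's expected NHWC input shape.
    _, height, width, _ = input_details[0]["shape"]
    image = Image.open("test.jpg").resize((width, height))
    input_data = np.expand_dims(
        np.array(image, dtype=input_details[0]["dtype"]), axis=0)

    interpreter.set_tensor(input_details[0]["index"], input_data)
    interpreter.invoke()

    # For a classification model, the output is a vector of class scores.
    scores = interpreter.get_tensor(output_details[0]["index"])[0]
    print("Top class index:", int(np.argmax(scores)))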

 

You can use the PyeIQ scripts as a starting point and include code snippets in a custom application (please make sure to add the right copyright terms), but you shouldn’t rely on PyeIQ to develop an entire product.

 

The PyeIQ Python API is meant to help demo developers create new examples.

 

What eIQ example applications are available for i.MX RT1060 and RT1050?

For RT1060 and RT1050 there are several options, all located in the \SDK_2.8.0_EVK-MIMXRT1060\boards\evkmimxrt1060\eiq_examples directory:


 

 

What eIQ example applications are available for i.MX RT685?

The RT685 SDK contains three different eIQ projects, all located in the \SDK_2.8.0_EVK-MIMXRT685\boards\evkmimxrt685\dsp_examples directory: 

  • dsp_cifar10 – CIFAR-10 model compiled with Glow. This is the recommended project for exploring eIQ on the RT685.
  • dsp_lenet_demo – Manually implemented neural network model for handwritten digit recognition with HiFi4 DSP calls for acceleration. Does not use Glow.
  • dsp_nn – Basic neural network unit tests for HiFi4 DSP calls. Does not use Glow.

 

What is the difference between TensorFlow, eIQ for TensorFlow Lite, and TensorFlow Micro?

Google created the TensorFlow framework for designing and building neural network models. TensorFlow Lite includes a converter tool that allows those models to run on embedded systems by using a TensorFlow Lite inference engine running on the embedded system.

 

  • eIQ for TensorFlow Lite is NXP’s implementation of TF Lite for MCUs.
  • TensorFlow Micro is TensorFlow’s implementation of TF Lite for MCUs.

Both eIQ for TensorFlow Lite and TensorFlow Micro allow users to run converted TensorFlow .tflite models on embedded devices. eIQ for TensorFlow Lite was created before TF Micro was available and contains optimizations for NXP devices.
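
To make the converter step concrete, here is a hedged sketch of converting a trained Keras model to a .tflite flatbuffer (the model path is a placeholder):

    # Sketch: convert a trained TensorFlow/Keras model to .tflite.
    import tensorflow as tf

    model = tf.keras.models.load_model("my_trained_model")  # placeholder path

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()

    # The resulting flatbuffer is what the TFLite / TF Micro engines load.
    with open("my_trained_model.tflite", "wb") as f:
        f.write(tflite_model)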

 

How can I learn more about using TensorFlow Lite with eIQ?

There is a hands-on TensorFlow Lite lab available for the RT1060, including an associated video.

There is also an i.MX TensorFlow Lite lab that provides a step-by-step guide on how to get started with eIQ for TensorFlow Lite on i.MX devices.

 

What is Glow?

Glow is a compiler developed by Facebook that turns a model into a machine-executable binary for the target device. Both the model and the inference engine are compiled into the generated binary, which can then be integrated into an MCUXpresso SDK software project. The advantage of using a model compiler is that, like any other compiler, it can apply optimizations specific to that particular model, and it can use software acceleration libraries like CMSIS-NN or dispatch instructions to hardware accelerators like the HiFi4 DSP on the i.MX RT685. Glow supports models in the ONNX format as well as Caffe2, and most models can be converted to the universal ONNX format.

 

How can I learn more about using Glow with eIQ?

There are hands-on Glow labs for the RT1060 and RT685 that provide a step-by-step guide to get started with eIQ for Glow. There is also a video available for using Glow with the RT1060.

 

What application notes are available to learn more about eIQ?

 

Which inference engine should I use for my model? 

There are several options, and there's no one correct answer. However, there are some basic guidelines:

  • Glow is very flexible and can be used with models converted to the universal ONNX format.
  • Only models written in TensorFlow can use the TensorFlow Lite inference engine.
  • CMSIS-NN implements specific NN features and can be used to implement a neural network directly with the API. There is also a script to convert Caffe models to CMSIS-NN. Typically CMSIS-NN is used to speed up other inference engines rather than to implement a model directly, but it is made available with eIQ for those who wish to use it. CMSIS-NN also targets only Cortex-M class devices.

 

What is the advantage of using eIQ instead of using the open-source software directly from GitHub?

eIQ-supported inference engines work out of the box and are already tested and optimized, allowing for performance enhancements compared to the original code. eIQ also includes the software to capture camera or voice data from external peripherals. eIQ allows you to get up and running within minutes instead of weeks. As a comparison, rolling your own is like grinding your own flour to make a pizza from scratch, instead of just ordering a great pizza from your favorite pizza place.


 

Does eIQ include ML models? Do I use it to train a model?

eIQ is a collection of software that allows you to Bring Your Own Model (BYOM) and run it on NXP embedded devices. We believe our customers can create the model best suited for their particular application as every AI use-case is unique. eIQ provides the ability to run that specialized model on NXP’s embedded devices. 



There are several inference engine options, like TensorFlow Lite and Glow, that can be used to run your model. MCUXpresso SDK and the i.MX Linux releases come with several examples that use pre-created models, which can be used to get a sense of what is possible on our platforms, and it is easy to substitute your own model into those examples.

 

eIQ is not software for generating or training models. eIQ is for running inference of models on an embedded system. However, we do provide some examples of using transfer learning techniques to repurpose or enhance existing models.
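
As a generic illustration of the transfer-learning idea (not one of the eIQ examples themselves), a pretrained feature extractor can be reused and only a small new classification head retrained; the class count and input size below are placeholders:

    # Sketch: transfer learning with a pretrained MobileNetV2 backbone.
    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(
        input_shape=(128, 128, 3), include_top=False, weights="imagenet")
    base.trainable = False  # keep the pretrained weights fixed

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(3, activation="softmax"),  # e.g. 3 new classes
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    # model.fit(train_images, train_labels, epochs=5)  # train on your data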

 

I’m new to AI/ML and don’t know how to create a model. What can I do?

A wide variety of resources are available for creating models, from labs and tutorials, to automated model generation tools like Google Cloud AutoML, Microsoft Azure Machine Learning, or Amazon ML Services, to third-party partners like SensiML and Au-Zone that can help you define, enhance, and create a model for your specific application.

Alternatively, if you have no interest in generating models yourself, NXP also offers several pre-built voice and facial recognition solutions that include the appropriate models already created for you. There are Alexa Voice Service, local voice control, and face and emotion recognition solutions available. Note that these solutions are different from eIQ: they include the model as well as the appropriate hardware, so those devices are sold under unique part numbers and have a cost-optimized BOM for direct use in your final product.

  • eIQ is for those who want to use their own model.
  • The three solutions mentioned above are for those who want a full solution (including model) already created for those specific applications.

 

 

 

General AI/ML:

What is Artificial Intelligence, Machine Learning, and Deep Learning?

Artificial intelligence is the idea of using machines to do “smart” things like a human. Machine Learning is one way to implement artificial intelligence, and is the idea that if you give a computer a lot of data, it can learn how to do smart things on its own. Deep Learning is a particular way of implementing machine learning by using something called a neural network. It’s one of the more promising subareas of artificial intelligence today.

 

This video series on Neural Network basics provides an excellent introduction into what a neural network is and the basics of how one works. 

 

What are some uses for machine learning on embedded systems?

Image classification – identify what a camera is looking at

  • Coffee pods
  • Empty vs full trucks
  • Factory defects on manufacturing line
  • Produce on supermarket scale

Facial recognition – identifying faces for personalization without uploading that private information to the cloud

  • Home Personalization
  • Appliances
  • Toys
  • Auto

Audio Analysis

  • Wake-word detection
  • Voice commands
  • Alarm Analytics (Breaking glass/crying baby)

Anomaly Detection

  • Identify factory issues before they become catastrophic
  • Motor analysis
  • Personalized health analysis

 

What is training and inference?

Machine learning consists of two phases: training and inference.

 

Training is the process of creating and teaching the model. This occurs on a PC or in the cloud and requires a lot of data to do the training. eIQ is not used during the training process.

 

Inference is using a completed and trained model to do predictions on new data. eIQ is focused on enhancing the inferencing of models on embedded devices.

 

What are the benefits of “on the edge” inference?

When inference occurs on the embedded device instead of the cloud, it’s called “on the edge”. The biggest advantage of on-the-edge inferencing is that the data being analyzed never goes anywhere except the local embedded system, providing increased security and privacy. It also saves BOM cost, because there’s no need for Wi-Fi or BLE to get data up to the cloud, and there’s no charge for the cloud compute to do the inferencing. It also allows for faster inferencing, since there’s no latency from uploading data and waiting for the answer to come back from the cloud.

 

What processor do I need to do inferencing of models?

Inferencing simply means doing millions of multiply-accumulate (MAC) math operations – the dominant operation when processing any neural network – which any MCU or MPU is capable of. There’s no special hardware or module required to do inferencing. However, specialized ML hardware accelerators, high core clock speeds, and fast memory can drastically reduce inference time.
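
To give a feel for the scale, the MAC count of a single convolution layer can be estimated from its shape using the standard formula (this is generic arithmetic, not anything eIQ-specific):

    # Sketch: estimate MACs for one convolution layer.
    def conv_macs(out_h, out_w, k_h, k_w, in_ch, out_ch):
        # One MAC per kernel element per input channel, per output element.
        return out_h * out_w * k_h * k_w * in_ch * out_ch

    # e.g. a 3x3 convolution, 32->64 channels, on a 56x56 feature map:
    print(conv_macs(56, 56, 3, 3, 32, 64))  # 57,802,752 (~58 million MACs)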

 

Determining if a particular model can run on a specific device is based on the following (a rough sizing sketch follows the list):

  • How long the inference will take to run. The same model will take much longer to run on less powerful devices, and the maximum acceptable inference time depends on your particular application.
  • Whether there is enough non-volatile memory to store the weights, the model itself, and the inference engine.
  • Whether there is enough RAM to keep track of the intermediate calculations and output.
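
As a rough sanity check (file name below is a placeholder), the flash cost of a TFLite model is approximately the size of the .tflite flatbuffer, and the tensor shapes give a feel for the activation RAM needed:

    # Sketch: rough flash/RAM sanity check for a .tflite model.
    import os
    import tflite_runtime.interpreter as tflite

    model_path = "my_model.tflite"  # placeholder path
    print("Approx. flash for model (bytes):", os.path.getsize(model_path))

    interpreter = tflite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    # Input/output tensor shapes hint at RAM needed for activations.
    for detail in (interpreter.get_input_details()
                   + interpreter.get_output_details()):
        print(detail["name"], detail["shape"], detail["dtype"])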

 

As an example, the performance required for image recognition is very dependent on the model being used to do the image recognition. This varies with how many classes there are, the size of the images to be analyzed, whether multiple objects or just one will be identified, and how that particular model is structured. In general, image classification can be done on i.MX RT devices, while multiple-object detection requires i.MX devices, as those models are significantly more complex.

 

eIQ provides several examples of image recognition for i.MX RT and i.MX devices, and your own custom models can easily be evaluated using those example projects.

 

How is accuracy affected when running on slower/simpler MCUs?

The same model running on different processors will give the exact same result if given the same input. It will just take longer to run the inference on a slower processor.

 

In order to get an acceptable inference time on a simpler MCU, it may be necessary to simplify the model, which will affect accuracy. How much the accuracy is affected is extremely model dependent and also very dependent on what techniques are used to simplify the model.

 

What are some ways models can be simplified?

  • Quantization – transforming the model from its original 32-bit floating-point weights to 8-bit fixed-point weights. This requires a quarter of the space for the weights, and fixed-point math is faster than floating-point math. It often does not have much impact on accuracy, but that is model dependent (see the sketch after this list).
  • Fewer output classifications can allow for a simpler yet still accurate model.
  • Decreasing the input data size (e.g. a 128x128 image input instead of 256x256) can reduce complexity, with an accuracy trade-off due to the reduced resolution. How large that trade-off is depends on the model and requires experimentation to find.
  • Software could rotate an image to a specific position using classic image manipulation techniques, so the neural network for identification can be much smaller while maintaining good accuracy, compared to a neural network that has to analyze an image in all possible orientations.
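
As a hedged sketch of the quantization step with the TFLite converter (the model path and input shape are placeholders, and the random calibration data must be replaced with real preprocessed samples):

    # Sketch: post-training quantization with the TFLite converter.
    import numpy as np
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model("my_trained_model")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    def representative_data_gen():
        # Placeholder calibration data; use real preprocessed inputs.
        for _ in range(100):
            yield [np.random.rand(1, 128, 128, 3).astype(np.float32)]

    converter.representative_dataset = representative_data_gen
    quantized_model = converter.convert()  # weights reduced to 8-bit

    with open("my_model_int8.tflite", "wb") as f:
        f.write(quantized_model)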

 

What is the difference between image classification, object detection, and instance segmentation?

Image classification identifies an entire image and gives a single answer for what it thinks it is seeing. Object detection is detecting one or more objects in an image. Instance segmentation is finding the exact outline of the objects in an image.

 

Larger and more complex models are needed to do object detection or instance segmentation compared to image classification.


 

What is the difference between Facial Detection and Facial Recognition?

Facial detection finds any human face. Facial recognition identifies a particular human face. A model that does facial recognition will be more complex than a model that only does facial detection. 


 

How come I don’t see 100% accuracy on the data I trained my model on?

Models need to generalize the training data in order to avoid overfitting. This means a model will not always give 100% confidence, even on the data it was trained on.

 

What are some resources to learn more about machine learning concepts? 
