eIQ Frequently Asked Questions (FAQ)

Document created by Megan Hansen on Jun 20, 2019. Last modified by Anthony Huereca on Jun 27, 2019. Version 5.

General eIQ Questions

 

What is eIQ™ Machine Learning Software?

eIQ Machine Learning Software is a collection of software and development tools for NXP microprocessors and microcontrollers to do inference of neural network AI models on embedded systems.

 

How much does eIQ cost?

Free! NXP is making eIQ freely available as a basic enablement to jumpstart ML application development.

 

Which devices are supported today?

For i.MX devices:

  • eIQ for i.MX processors officially supports all devices in the i.MX 8 Series.

For i.MX RT devices:

  • i.MX RT1060
  • i.MX RT1050

Check back frequently, as this list will expand over time.

 

Which processor should I use for my application? 

Choosing the appropriate processor or microcontroller for your application can certainly be a challenge, and it is difficult to answer this question without the context of your specific requirements for inference time, model accuracy, cost, and more. However, with eIQ, NXP provides enablement and tools to help developers rapidly prototype and evaluate ML solutions, so you can pick the correct device for the job, whether a microcontroller or an applications processor.

 

Are there benchmarks available from NXP?

Inference time and memory usage are heavily dependent on the particular model being used: larger models take longer to run and require more memory. All eIQ examples print out their inference time, so the cost of a new model can be measured easily. Inference time and memory requirements will also differ depending on the framework and inference engine used.
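Since every eIQ example prints its inference time, the simplest benchmark is to average several timed runs of the model. As an illustration, here is a host-side Python sketch of that measure-and-average approach; `run_inference` is a hypothetical stand-in for whatever invokes your model (for example, a TensorFlow Lite interpreter call), not an eIQ API:

```python
import time

def time_inference(run_inference, warmup=2, runs=10):
    """Average wall-clock time of one inference call, in milliseconds."""
    for _ in range(warmup):          # warm-up runs absorb one-time setup costs
        run_inference()
    start = time.perf_counter()
    for _ in range(runs):
        run_inference()
    elapsed = time.perf_counter() - start
    return elapsed / runs * 1000.0

# Stand-in workload instead of a real interpreter.invoke() call:
avg_ms = time_inference(lambda: sum(range(10_000)))
print(f"average inference time: {avg_ms:.3f} ms")
```

On the MCU targets, the eIQ examples report this measurement for you directly on the serial console.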

 

 

As a rough estimate of inference times with different models, here are the inference times for the eIQ i.MX RT demos on the RT1060, built with IAR using the Release (highly optimized) compiler settings. Note that the models are very different between CMSIS-NN and TensorFlow Lite, so this is not an apples-to-apples comparison of which inference engine is faster:

Demo                     CMSIS-NN (ms)   TensorFlow Lite (ms)
CIFAR-10                 102             70
Keyword Spotting         19              6
Label Image              N/A             237
 

As for memory usage, the KWS CMSIS-NN demo requires 160KB of Flash and 36KB of RAM. The KWS TensorFlow Lite demo requires ~550KB of Flash (80KB for the model, 450KB for the inference engine) and 33KB of RAM.  

 

What other machine learning and artificial intelligence solutions are available to support application development using NXP microcontrollers and processors?

NXP provides comprehensive support tools designed to work with its processing platforms, helping customers implement machine learning across the landscape of automotive, industrial, and IoT applications.

Automotive

 

Industrial and IoT Third Party Software and Hardware options include:

 

NXP’s EdgeScale™ Solution includes:

  • Secure deployment of applications (including AI/ML) through docker containers for Layerscape devices.
  • Turnkey Solutions
  • AVS Solution (Alexa Voice Service)
    • i.MX RT106A (part# SLN-ALEXA-IOT)
    • Enabled with OTA, execution from encrypted flash, far-field, Amazon® wake-word, Alexa for MCU client (NXP developed)
    • Use case: voice-based IoT products for consumer applications
  • Other turnkey solutions targeting vision and vibration (anomaly detection), coming soon for broad market
  • Anomaly detection and facial recognition solutions based on i.MX RT, i.MX 8M Mini

 

Which Inference Engines are included as part of eIQ?

At launch (June 2019), eIQ supports TensorFlow Lite and CMSIS-NN on the i.MX RT family; and eIQ supports Arm NN, ONNX, OpenCV DNN, TensorFlow, and TensorFlow Lite on the i.MX 8 Series applications processor family. Check back frequently as eIQ is under continual development.

 

For the i.MX RT microcontroller family:

  • TensorFlow Lite – used with TensorFlow framework models
  • CMSIS-NN – used with Caffe framework models

 

For the i.MX applications processor family:

  • Arm NN
  • ONNX
  • OpenCV DNN
  • TensorFlow
  • TensorFlow Lite

 

Why does NXP not create its own pre-trained models?

We believe our customers can create the model best suited for their particular application as every AI use-case is unique. eIQ provides the ability to run that specialized model on NXP’s embedded devices.

 

Does eIQ provide any tools for model training, quantization or pruning?

At this time, eIQ only provides tools for running inference on edge devices such as i.MX RT microcontrollers and i.MX applications processors. Training is assumed to happen elsewhere, most typically in the cloud on “big iron” server-class hardware.
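Although eIQ does not quantize models itself, the post-training quantization that training frameworks apply before deployment boils down to simple affine arithmetic. Here is a minimal NumPy sketch of the common min/max int8 scheme; the values and function names are illustrative, not an eIQ or TensorFlow API:

```python
import numpy as np

def quantize(x, scale, zero_point):
    # Affine (asymmetric) quantization: real values -> int8
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    # Approximate reconstruction of the real values
    return (q.astype(np.float32) - zero_point) * scale

# Hypothetical weight tensor
weights = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)

# Scale and zero point from the observed min/max range
scale = (weights.max() - weights.min()) / 255.0
zero_point = int(round(-128 - weights.min() / scale))

q = quantize(weights, scale, zero_point)
err = np.abs(dequantize(q, scale, zero_point) - weights).max()
print(q, err)  # reconstruction error stays within one quantization step
```

The payoff on an MCU is that each weight shrinks from 4 bytes (float32) to 1 byte (int8), at the cost of a small, bounded reconstruction error.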

 

Is there training material available for eIQ? 

There are presentations available that focus on eIQ for i.MX application processors as well as on eIQ for i.MX RT crossover processors. Keep an eye out for future training events near you as part of NXP Tech Days.  

 

 

 

eIQ Software for i.MX application processors

 

How do I get eIQ software for i.MX processors?

eIQ for i.MX processors is supported on the current L4.14.y series Yocto Linux release. For detailed instructions on how to add eIQ software to your Yocto build please visit eIQ Machine Learning Software for i.MX Linux 4.14.y.

 

Which devices are supported today?

eIQ for i.MX processors officially supports all devices in the i.MX 8 Series. Check back frequently as this list will expand over time.

 

Can eIQ be used with other NXP application processors such as i.MX 6 or Layerscape?

At this time, eIQ officially supports all devices in the i.MX 8 Series. It is possible to build eIQ for other NXP applications processors; however, only the i.MX 8 Series is officially supported.

 

Are non-Yocto Linux build environments or other distributions (e.g. Debian, Ubuntu) supported?

At this time eIQ officially supports the NXP Yocto Linux release. Non-Yocto build environments or other Linux distributions are not currently supported.

 

 

 

 

eIQ Software for i.MX RT crossover processors

 

How do I get eIQ software for microcontrollers?

eIQ is now included in the MCUXpresso SDK package for i.MX RT1050 and i.MX RT1060.

  1. Go to https://mcuxpresso.nxp.com and search for the SDK for your board
  2. On the SDK builder page, click on “Add software component”
  3. Click on “Select All” and verify the eIQ software option is now checked. Then click on “Save Changes”
  4. Download the SDK. It will be saved as a .zip file.

eIQ projects can be found in the \boards\<board_name>\eiq_examples folder

eIQ source code can be found in the \middleware\eiq folder

 

More details on how to get started with eIQ on i.MX RT devices can be found in this Community post.

 

Which devices are supported today?

eIQ for i.MX RT microcontrollers officially supports the i.MX RT1050 and i.MX RT1060. Check back frequently as this list will expand over time.

 

Can eIQ be used with other NXP MCUs?

It is possible to run eIQ on other i.MX RT devices, as well as on other microcontroller families; however, only the RT1060 and RT1050 devices are officially supported. The limiting factors are memory requirements and processor speed.

 

See this Community post for a porting guide which walks through porting eIQ to an LPC microcontroller. This same process could be used for other microcontroller families. 

 

For eIQ on i.MX RT crossover processors, how is the input data used by the example projects generated?

Camera and microphone input, and LCD output, for i.MX RT eIQ projects are under development and will be available later this year. Currently, the input data used in the examples is stored as an array in a header file.

 

See this Community post for how to generate your own input data that can be used with the eIQ i.MX RT examples. 
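To illustrate the array-in-a-header approach the examples use, here is a hypothetical Python helper that renders raw input bytes (for example, a resized camera image) as a C header. The variable names and formatting are placeholders, not part of the eIQ SDK:

```python
def bytes_to_c_header(data: bytes, var_name: str = "input_data") -> str:
    """Render raw bytes as a C header declaring a byte array."""
    macro = var_name.upper() + "_LEN"
    lines = [f"#define {macro} {len(data)}",
             f"const unsigned char {var_name}[{macro}] = {{"]
    for i in range(0, len(data), 12):           # 12 bytes per source line
        chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        lines.append("    " + chunk + ",")
    lines.append("};")                          # trailing comma is valid C
    return "\n".join(lines)

# Example: four dummy pixel bytes instead of a real resized image
header = bytes_to_c_header(bytes([0, 127, 255, 64]), "image_data")
print(header)
```

The generated text can be saved as a `.h` file and included in an eIQ example project in place of the shipped input array.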

 

How can I get started with image recognition for the i.MX RT family? 

There is a lab available which describes how to do transfer learning on a model to teach it to recognize new categories of images. It can be customized to recognize other types of images depending on what the model is trained with.  
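Conceptually, transfer learning keeps a pretrained feature extractor frozen and trains only a small new classification head on the new categories. Here is a toy NumPy sketch of that idea using synthetic data; it is not the lab's actual TensorFlow code, and the frozen random projection merely stands in for a pretrained backbone:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Frozen" feature extractor: stands in for a pretrained CNN backbone
W_frozen = rng.normal(size=(16, 8))
def extract_features(x):
    return np.tanh(x @ W_frozen)        # fixed weights, never updated

# Tiny synthetic labeled dataset for the *new* categories
x = rng.normal(size=(64, 16))
y = (x[:, 0] > 0).astype(float)

# Train only the new head (logistic regression) by gradient descent
w, b = np.zeros(8), 0.0
feats = extract_features(x)
losses = []
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))      # sigmoid output
    w -= 0.5 * (feats.T @ (p - y)) / len(y)          # update head only
    b -= 0.5 * np.mean(p - y)
    losses.append(-np.mean(y * np.log(p + 1e-9)
                           + (1 - y) * np.log(1 - p + 1e-9)))

print(f"head training loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because only the small head is trained, this kind of retraining needs far less data and compute than training the whole network, which is what makes the lab's approach practical.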
