Getting Started with eIQ Software for i.MX Applications Processors

Ragan_Dunham
NXP Employee

Machine Learning at the Edge:

eIQ Software for i.MX Applications Processors

Developing machine learning (ML) applications for embedded devices can be a daunting task. For the traditional embedded developer, the learning curve can be quite steep, as there are numerous decisions that must be made and new jargon that must be learned. Which framework should I choose? Which model best meets the requirements of my application and how do I know when it’s good enough? What “size” microcontroller or application processor do I need? The questions are many and figuring out how to get started can prove to be a challenge.

At NXP, we’re eager to be at the heart of your ML application development. We’ve just released our first machine learning software that integrates the industry-leading technologies required to deploy ML-based applications to embedded devices. Whether you prefer to start with TensorFlow, Keras, or Caffe, our new eIQ™ ML software development environment supports these popular frameworks and more, running on four inference engines: OpenCV, Arm® NN, Arm CMSIS-NN, and TensorFlow Lite. Our goal with eIQ software is to provide broad enablement that helps inform your decision-making and allows you to create the best solution possible for your application.

To help you get started with eIQ software for i.MX applications processors, we’ve created a series of step-by-step tutorials that take you from unboxing a board to deploying a model and running inference at the edge using the i.MX 8M Mini EVK. We have examples of object detection, handwriting recognition, face detection and more, all implemented in a variety of frameworks and published with source code to help get you running as quickly as possible.
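Whichever framework and inference engine you pick, the classification samples share the same final step: turning the model's raw output scores into human-readable labels. As a minimal sketch of that step in plain Python (the label list and score values below are hypothetical, not taken from the eIQ samples):

```python
import math

def softmax(scores):
    """Convert raw model outputs (logits) into probabilities."""
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(scores, labels, k=3):
    """Return the k most likely (label, probability) pairs, best first."""
    probs = softmax(scores)
    ranked = sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Hypothetical logits from a three-class classifier
labels = ["cat", "dog", "bird"]
scores = [2.0, 1.0, 0.1]
print(top_k(scores, labels, k=2))
```

The same postprocessing works regardless of which engine produced the scores; only the shape and scale of the output tensor differ between models.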

To get started, follow the link to the eIQ Software Sample Apps Overview, where you’ll find detailed instructions. If you get stuck, visit the eIQ Machine Learning Software community, where we’ll be waiting and ready to help.

Be sure to check back as we continue to explore the questions raised above and dive deeper into the key challenges that embedded developers face in creating ML-based applications. If there’s anything specific you’d like us to address, please let us know in the comments.

Happy inferencing!

7 Comments
msd11
Contributor II

Hello, is the machine learning model inference done by a CPU runtime or a GPU runtime?

vanessa_maegima
NXP Employee

Hi Dinesh,

At this time, the inference runs on CPU only.

msd11
Contributor II

In my application, the CPU is busy with various other processes, so we want to run inference on the GPU. Can you suggest a way out?

vanessa_maegima
NXP Employee

Hi Dinesh,

Inference on GPU is planned for future releases. For now, the suggestion is to wait for this support.

Thanks,
Vanessa

msd11
Contributor II

Hi Vanessa, thanks for the reply. When can I expect the release with GPU inference support? I mean a tentative schedule...

Ragan_Dunham
NXP Employee

Dinesh, we are not ready to comment on the GPU schedule at this time. Thanks, Ragan

msd11
Contributor II

Hey Ragan, thanks for the reply! Does "future release" mean you will release new hardware that supports GPU inference, or software that does the same on the current hardware? (The one I am using is an i.MX 8. We tried to run a Caffe model on it; it works on the CPU but not the GPU. Any thoughts about this will be helpful.)
