
Getting Started with eIQ Software for i.MX Applications Processors

Blog Post created by Ragan_Dunham Employee on Jul 1, 2019

Machine Learning at the Edge: eIQ Software for i.MX Applications Processors

Developing machine learning (ML) applications for embedded devices can be a daunting task. For the traditional embedded developer, the learning curve can be quite steep, as there are numerous decisions that must be made and new jargon that must be learned. Which framework should I choose? Which model best meets the requirements of my application and how do I know when it’s good enough? What “size” microcontroller or application processor do I need? The questions are many and figuring out how to get started can prove to be a challenge.

At NXP, we’re eager to be at the heart of your ML application development. We’ve just released our first machine learning software that integrates industry-leading technologies required to deploy ML-based applications to embedded devices. Whether you prefer to start with TensorFlow, Keras, or Caffe, our new eIQ™ ML software development environment supports these popular frameworks and more, running on four inference engines: OpenCV, Arm® NN, Arm CMSIS-NN, and TensorFlow Lite. Our goal with eIQ software is to provide broad enablement that helps inform your decision-making and allows you to create the best solution possible for your application.

To help you get started with eIQ software for i.MX applications processors, we’ve created a series of step-by-step tutorials that take you all the way from unboxing a board to deploying a model and running inference at the edge on the i.MX 8M Mini EVK. We have examples of object detection, handwriting recognition, face detection and more – all implemented in a variety of frameworks and published with source code to help get you running as quickly as possible.
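Whichever of the four inference engines runs the model, classification demos like the handwriting-recognition example share the same last step: converting the raw scores a network emits into ranked, human-readable labels. A minimal, framework-agnostic sketch of that step in Python (NumPy only; the label set and scores below are illustrative, not taken from an eIQ sample app):

```python
import numpy as np

def softmax(logits):
    """Convert raw model outputs (logits) into probabilities."""
    e = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return e / e.sum()

def top_k(probabilities, labels, k=3):
    """Return the k most likely (label, probability) pairs, best first."""
    idx = np.argsort(probabilities)[::-1][:k]
    return [(labels[i], float(probabilities[i])) for i in idx]

# Illustrative output for a hypothetical 4-class detector
labels = ["cat", "dog", "person", "background"]
logits = np.array([2.0, 0.5, 1.0, -1.0])
probs = softmax(logits)
print(top_k(probs, labels, k=2))
```

The same two helpers work no matter which engine produced the logits, which is handy when comparing the same model across OpenCV, Arm NN, CMSIS-NN, and TensorFlow Lite.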


To get started, follow the link to the eIQ Software Sample Apps Overview, where you’ll find detailed instructions. If you get stuck, visit the eIQ Machine Learning Software community, where we’ll be waiting and ready to help.

Be sure to check back as we continue to explore the questions raised above and dive deeper into the key challenges that embedded developers face in creating ML-based applications. If there’s anything specific you’d like us to address, please let us know in the comments.

Happy inferencing!
