What are suggested steps for Object Detection app on i.MXRT1170 using eIQ Portal


Contributor IV


Could you please provide some guidance on how to proceed with an object detection application once the model has already been trained with eIQ Portal?

I created an example eIQ Portal project (*.eiqp), available on my GitHub, where I describe the workflow I followed. I believe it can serve as a starting point.

The repo also contains the final checkpoint from training, so the model can be exported. To see the model's performance and explore the project:

  1. Clone or download the above repo
  2. Open eIQ Portal and choose the eiqp project (in this case fruits_object_detection.eiqp)
  3. Hit Select Model -> Restore Model -> choose the checkpoint (only one is available), after which you should see the following:


The model can be exported using the Deploy button.

The questions are:

  1. Is there a tutorial describing which export file type to choose (i.e., which format is currently best supported by NXP)?
  2. Is there a tutorial on how to run an object detection model on the i.MX RT1170? For example: input from a camera attached to the i.MX RT1170 kit is processed through the model, and the predictions (bounding boxes with labels) are displayed together with the camera data on the attached LCD.
    • Or do the eIQ examples in the i.MX RT1170 SDK (2.10.0) require major modifications to work with object detection models?

Thanks in advance for your support!


NXP Employee

Hi Marcin,

For question #1: 

Which format to export the model in depends on the inference engine you want to use.

  • For TensorFlow Lite Micro - Must be .tflite file
  • For DeepViewRT  - Must be .rtm file
  • For Glow - Can be either .tflite or .onnx; TFLite is recommended for best compatibility
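As a quick sanity check after export, a .tflite file is a FlatBuffer and carries the file identifier "TFL3" at byte offset 4, so host-side tooling can verify an exported file before flashing it. A minimal sketch (the function name `is_tflite_file` is my own, not part of any SDK):

```c
#include <stdio.h>
#include <string.h>

/* Returns 1 if the file at `path` looks like a TensorFlow Lite
 * FlatBuffer (file identifier "TFL3" at byte offset 4), else 0. */
static int is_tflite_file(const char *path)
{
    unsigned char header[8];
    FILE *f = fopen(path, "rb");
    if (f == NULL) {
        return 0;
    }
    size_t n = fread(header, 1, sizeof(header), f);
    fclose(f);
    return (n == sizeof(header)) && (memcmp(header + 4, "TFL3", 4) == 0);
}
```

The same check is what tools like the TFLite converter rely on to recognize their own output; .onnx files are protobuf-encoded and have no such simple fixed magic.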

The "best" inference engine depends on your particular application needs, and the fastest inference time can also be model dependent, so it can be worth trying all three to see which works best for your particular model.

For Question #2: 

The default i.MX RT SDK examples in MCUXpresso SDK 2.10 all use image classification models, so they do not display bounding boxes out of the box. However, code for drawing a box and labels (the selection box used by those models) already exists, and that same code could be reused to draw multiple bounding boxes once you get the x,y coordinates from the object detection model. Some additional code would be needed to extract that extra data from the model's output tensors. So while it would require some work to implement, it should be fairly straightforward. To answer your question directly: there is no tutorial for object detection models at the moment, but that is something we will look at writing up in the future.
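As a sketch of the two missing pieces (scaling the model's coordinates and drawing the box), assume an SSD-style postprocessed output where each detection comes back as normalized [ymin, xmin, ymax, xmax] in the 0..1 range, which is the layout TFLite's detection postprocess op produces; your model's output layout may differ. The function names and the RGB565 framebuffer layout below are illustrative assumptions, not SDK APIs:

```c
#include <stdint.h>

/* Draw a 1-pixel rectangle outline into an RGB565 framebuffer.
 * fb is fb_w x fb_h pixels, row-major. Coordinates are clamped. */
static void draw_box(uint16_t *fb, int fb_w, int fb_h,
                     int x0, int y0, int x1, int y1, uint16_t color)
{
    if (x0 < 0) x0 = 0;
    if (y0 < 0) y0 = 0;
    if (x1 >= fb_w) x1 = fb_w - 1;
    if (y1 >= fb_h) y1 = fb_h - 1;
    if (x0 > x1 || y0 > y1) return;

    for (int x = x0; x <= x1; x++) {
        fb[y0 * fb_w + x] = color;   /* top edge */
        fb[y1 * fb_w + x] = color;   /* bottom edge */
    }
    for (int y = y0; y <= y1; y++) {
        fb[y * fb_w + x0] = color;   /* left edge */
        fb[y * fb_w + x1] = color;   /* right edge */
    }
}

/* Scale one normalized [ymin, xmin, ymax, xmax] detection to pixel
 * coordinates and draw it if it passes the score threshold. */
static void draw_detection(uint16_t *fb, int fb_w, int fb_h,
                           const float box[4], float score,
                           float threshold, uint16_t color)
{
    if (score < threshold) return;

    int y0 = (int)(box[0] * (float)fb_h);
    int x0 = (int)(box[1] * (float)fb_w);
    int y1 = (int)(box[2] * (float)fb_h);
    int x1 = (int)(box[3] * (float)fb_w);
    draw_box(fb, fb_w, fb_h, x0, y0, x1, y1, color);
}
```

On the real board the framebuffer pointer and resolution come from the SDK's display driver, and the class label next to each box could be rendered with the same font-drawing code the classification examples already use.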

