Hi,
could you please provide some guidance on how to proceed with an object detection application once the model has already been trained with eIQ Portal?
I created an example eIQ Portal project (*.eiqp), available on my GitHub, where I described the workflow I followed. I believe it can be a starting point.
The repo also contains the final checkpoint from training, so the model can be exported. To see the model's performance and explore the project:
The model can be exported using the Deploy button.
The questions are:
1. Which format should the model be exported to, and which inference engine should be used on the target?
2. How can the detected bounding boxes be displayed in an i.MX RT SDK example application?
Thanks in advance for your support!
Hi Marcin,
For question #1:
What you choose to export the model as will depend on the inference engine you want to use.
The "best" inference engine will depend on your particular application's needs, and the fastest inference time can also be model-dependent. So it can be worth trying all three engines (TensorFlow Lite for Microcontrollers, Glow, and DeepViewRT) to see which works best for your particular model.
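When comparing inference engines, a small timing harness makes the comparison fair. Here is a minimal sketch in Python; `average_inference_ms` and the stand-in workload are illustrative names and not part of eIQ Portal. On the desktop you would wrap your exported model's invoke call; on the i.MX RT target you would use a cycle counter instead.

```python
import time

def average_inference_ms(run_inference, warmup=5, runs=50):
    """Time a zero-argument inference callable; return mean latency in ms.

    `run_inference` is any callable that performs one inference pass,
    e.g. a wrapper around your exported model's invoke() method.
    """
    for _ in range(warmup):
        run_inference()  # discard warm-up runs (caches, lazy allocation)
    start = time.perf_counter()
    for _ in range(runs):
        run_inference()
    elapsed = time.perf_counter() - start
    return elapsed / runs * 1000.0

# Stand-in workload for illustration; replace with your model's invoke call.
print(average_inference_ms(lambda: sum(i * i for i in range(10_000))))
```

Averaging over many runs after a warm-up phase avoids one-off startup costs skewing the comparison between engines.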
For Question #2:
The default i.MX RT examples in MCUXpresso SDK 2.10 all use image classification models, so they do not support displaying bounding boxes out of the box. However, there is already code available for drawing a box and labels (the selection box those models display), and that same code could be reused to draw multiple bounding boxes once you get the x,y coordinates from the object detection model. Some code would also need to be added to read that extra data from the model's output. So while it would require some work to implement, it should be fairly straightforward. To answer your question directly: there is no tutorial for object detection models at the moment, but that's something we'll look at writing up in the future.