i.MX8M Plus: DeepViewRT Installation, ONNX Conversion, and CPU vs NPU Inference


sunghyun96
Contributor I

Hello,

I would like to perform inference using DeepViewRT on the i.MX8M Plus board.

   1. How can I install DeepViewRT on the i.MX8M Plus and run inference on it?

   2. I have an ONNX model — how can I convert it into an RTM model for DeepViewRT?

   3. If I run inference directly with an ONNX model on the i.MX8M Plus, is it limited to CPU execution only? (The sketch after this list shows the kind of direct call I mean.)
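For reference, by "directly" I mean a plain ONNX Runtime call like the minimal sketch below, where model.onnx is a placeholder for my actual model:

# Minimal sketch: running an ONNX model directly with ONNX Runtime.
# Without an accelerator-specific execution provider, this runs on the CPU.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

inp = session.get_inputs()[0]
# Use 1 for any dynamic (non-integer) dimensions when building a dummy input.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
outputs = session.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
print([o.shape for o in outputs])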

Thank you in advance for your support.


danielchen
NXP TechSupport

Hi Sunghyun96:

Please check UG10166: i.MX Machine Learning User's Guide.

The DeepViewRT inference engine has been removed from recent i.MX Machine Learning BSP releases, so there is no supported way to install it or to run RTM models on current images, and the ONNX-to-RTM conversion flow is no longer needed. For inference on the i.MX8M Plus, UG10166 describes the supported engines: TensorFlow Lite with the VX Delegate is the documented path to the NPU, while running an ONNX model directly through ONNX Runtime typically executes on the CPU.
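If you need the NPU, the TFLite route looks roughly like the following. This is a minimal sketch, assuming a quantized model.tflite on the target and the delegate library at /usr/lib/libvx_delegate.so, its usual location on recent BSP images (both names are placeholders, so check your image):

# Minimal sketch: TFLite inference on the i.MX8M Plus NPU via the VX Delegate.
import numpy as np
import tflite_runtime.interpreter as tflite

# Load the VX Delegate so supported ops are offloaded to the NPU.
vx_delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")

interpreter = tflite.Interpreter(
    model_path="model.tflite",
    experimental_delegates=[vx_delegate],
)
interpreter.allocate_tensors()

# Run one inference with a dummy input of the expected shape and dtype.
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()

out = interpreter.get_output_details()[0]
print(interpreter.get_tensor(out["index"]))

Without the experimental_delegates argument, the same model runs on the Cortex-A CPUs, which is a quick way to compare CPU and NPU latency. Note that the first NPU inference includes graph compilation, so measure from the second run onward.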

Regards

Daniel
