i.MX8M Plus: DeepViewRT Installation, ONNX Conversion, and CPU vs NPU Inference

135 Views
sunghyun96
Contributor I

Hello,

I would like to perform inference using DeepViewRT on the i.MX8M Plus board.

   1. How can I install DeepViewRT on the i.MX8M Plus and run inference on it?

   2. I have an ONNX model — how can I convert it into an RTM model for DeepViewRT?

   3. If I run inference directly with an ONNX model on the i.MX8M Plus, is it limited to CPU execution only?

Thank you in advance for your support.

0 Kudos
Reply
1 Reply

34 Views
danielchen
NXP TechSupport

Hi sunghyun96,

Please check UG10166: i.MX Machine Learning User's Guide.

The DeepViewRT inference engine was removed.

[Attached screenshot: danielchen_0-1760239877158.png]
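Regarding your first two questions: since DeepViewRT and its RTM format are gone, the supported way to use the NPU on the i.MX 8M Plus is TensorFlow Lite. Convert the ONNX model to a .tflite file first (for example with the eIQ Toolkit or an ONNX-to-TFLite converter), then run it with the VX delegate. Below is a minimal sketch, assuming the delegate sits at the usual /usr/lib/libvx_delegate.so location from the i.MX Yocto BSP; "model.tflite" is a placeholder for your converted model.

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Load the VX delegate so that supported ops are offloaded to the NPU.
# The path below is the usual BSP location; adjust it if your image differs.
delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")

# "model.tflite" is a placeholder for your converted model.
interpreter = tflite.Interpreter(
    model_path="model.tflite",
    experimental_delegates=[delegate],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Run one inference on a dummy tensor with the model's input shape and dtype.
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)
```

Note that the first inference includes the NPU warm-up (graph compilation), so measure performance from the second run onward.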

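On your third question: as far as I know, yes. Running an ONNX model directly with the onnxruntime package from the BSP executes it on the Cortex-A CPU cores by default; please check the ONNX Runtime chapter of UG10166 for whether your release ships an NPU execution provider. A minimal CPU-only sketch, with "model.onnx" as a placeholder for your model:

```python
import numpy as np
import onnxruntime as ort

# Request the CPU provider explicitly; it is also the default fallback.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

inp = session.get_inputs()[0]
# Build a dummy input; dynamic dimensions are pinned to 1 for this test.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)  # assumes a float32 input

outputs = session.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```
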
Regards

Daniel

0 Kudos
Reply