i.MX8M Plus: DeepViewRT Installation, ONNX Conversion, and CPU vs NPU Inference


140 Views
sunghyun96
Contributor I

Hello,

I would like to perform inference using DeepViewRT on the i.MX8M Plus board.

1. How can I install DeepViewRT on the i.MX8M Plus and run inference on it?

2. I have an ONNX model; how can I convert it into an RTM model for DeepViewRT?

3. If I run inference directly with an ONNX model on the i.MX8M Plus, is it limited to CPU execution only?

Thank you in advance for your support.

0 Kudos
1 Reply

39 Views
danielchen
NXP TechSupport

Hi Sunghyun96,

 

Please check UG10166, the i.MX Machine Learning User's Guide.

The DeepViewRT inference engine has been removed from recent i.MX BSP releases, so the install and RTM-conversion flow in your first two questions no longer applies on current software.

(Screenshot attachment: danielchen_0-1760239877158.png)
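On your third question: running an ONNX model directly goes through ONNX Runtime, and on recent i.MX releases that typically means the default CPU execution provider on the Cortex-A cores (check the ONNX Runtime chapter of UG10166 for your release). A minimal sketch, assuming the BSP image ships the onnxruntime Python package; "model.onnx" is a placeholder for your float model:

```python
# Minimal sketch: running an ONNX model on the i.MX 8M Plus CPU.
# Assumptions: the BSP image includes the onnxruntime Python package,
# and "model.onnx" is a placeholder for your float model.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx", providers=["CPUExecutionProvider"]
)
print("active providers:", session.get_providers())

inp = session.get_inputs()[0]
# Replace dynamic dimensions (None or symbolic names) with 1.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
x = np.random.rand(*shape).astype(np.float32)

outputs = session.run(None, {inp.name: x})
print("output shape:", outputs[0].shape)
```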

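For NPU-accelerated inference, the path UG10166 documents for the i.MX 8M Plus is TensorFlow Lite with the VX delegate, so in place of an RTM conversion you would convert the ONNX model to TFLite (for example with a converter such as onnx2tf) and load the delegate at runtime. A sketch comparing NPU and CPU execution, assuming the delegate library is at /usr/lib/libvx_delegate.so and a hypothetical "model.tflite":

```python
# Sketch: same TFLite model on the NPU (VX delegate) and on the CPU.
# Assumptions: tflite_runtime is in the BSP image, the delegate library
# is at /usr/lib/libvx_delegate.so, and "model.tflite" is a placeholder
# float model converted from your ONNX file.
import numpy as np
import tflite_runtime.interpreter as tflite

MODEL = "model.tflite"

npu = tflite.Interpreter(
    model_path=MODEL,
    experimental_delegates=[tflite.load_delegate("/usr/lib/libvx_delegate.so")],
)
cpu = tflite.Interpreter(model_path=MODEL)

for name, itp in (("NPU", npu), ("CPU", cpu)):
    itp.allocate_tensors()
    inp = itp.get_input_details()[0]
    x = np.random.rand(*inp["shape"]).astype(inp["dtype"])
    itp.set_tensor(inp["index"], x)
    itp.invoke()  # the first NPU invoke also compiles the graph (warm-up)
    out = itp.get_tensor(itp.get_output_details()[0]["index"])
    print(name, "output shape:", out.shape)
```

When benchmarking, time the second and later invocations: the first NPU run includes a one-time graph compilation and is much slower than steady state.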

Regards,

Daniel

0 Kudos