i.MX8M Plus: DeepViewRT Installation, ONNX Conversion, and CPU vs NPU Inference


sunghyun96
Contributor I

Hello,

I would like to perform inference using DeepViewRT on the i.MX8M Plus board.

   1. How can I install DeepViewRT on the i.MX8M Plus and run inference on it?

   2. I have an ONNX model — how can I convert it into an RTM model for DeepViewRT?

   3. If I run inference directly with an ONNX model on the i.MX8M Plus, is it limited to CPU execution only?

Thank you in advance for your support.

3 Replies

sunghyun96
Contributor I

Thank you for your reply.
On Linux 6.12.20-lts with the i.MX 8M Plus (EVK), I’m planning to run NPU inference using ONNX Runtime.
If there’s a recommended runtime to replace DeepViewRT (e.g., ORT vs. TFLite + VX), please let me know as well.
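For reference, this is roughly how I intend to pick the execution provider. Whether an NPU-capable provider shows up at all depends on the onnxruntime build that ships in the BSP image, so the sketch just queries the available providers and otherwise falls back to CPU (model.onnx is a placeholder):

```python
import numpy as np
import onnxruntime as ort

# A plain CPU-only build lists just CPUExecutionProvider; an NPU-capable
# provider only appears if the image ships an eIQ/VeriSilicon-enabled build.
providers = ort.get_available_providers()
print("Available providers:", providers)

# Passing the full list lets onnxruntime use the first provider it can
# actually load, falling back to the CPU provider.
sess = ort.InferenceSession("model.onnx", providers=providers)
print("Active providers:", sess.get_providers())

# One dummy inference to check the pipeline end to end (assumes a single
# float32 input; dynamic dimensions are filled with 1).
inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
outputs = sess.run(None, {inp.name: np.zeros(shape, dtype=np.float32)})
print("Output shapes:", [o.shape for o in outputs])
```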


danielchen
NXP TechSupport

Hi @sunghyun96 

The recommended replacement for DeepViewRT on the i.MX 8M Plus is TensorFlow Lite (TFLite) + the VX Delegate.

The VX Delegate is the official replacement for DeepViewRT; it uses OpenVX under the hood to offload supported operations to the NPU.

TFLite models must be quantized (INT8) and converted using the eIQ Toolkit to be compatible with the VX Delegate.
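The eIQ Toolkit handles that conversion through its GUI; for reference, the equivalent full-integer quantization step with plain TensorFlow tooling looks roughly like this. It assumes the ONNX model has already been exported to a TensorFlow SavedModel first (for example with the onnx2tf or onnx-tf tools), and the directory name, input shape, and calibration loop are placeholders you would replace with your own.

```python
import numpy as np
import tensorflow as tf

# Placeholder: a SavedModel exported from the original ONNX model
# (e.g. via onnx2tf / onnx-tf). The 1x224x224x3 float input is also a
# placeholder; use your model's real input shape.
SAVED_MODEL_DIR = "saved_model_dir"

def representative_dataset():
    # Replace with ~100 real calibration samples from your dataset.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer (INT8) quantization so the VX Delegate can offload it.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```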

 

Since you are using ONNX Runtime, I would suggest converting your ONNX model to TFLite (INT8) using the eIQ Toolkit or TensorFlow tools, then running it with TFLite + the VX Delegate.
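To then run the quantized model on the NPU from Python, a minimal sketch looks like the following. The /usr/lib/libvx_delegate.so path is where NXP BSP images typically install the delegate library, and model_int8.tflite is a placeholder; adjust both for your image.

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Typical location of the VX Delegate on NXP Yocto/BSP images;
# adjust if your image installs it elsewhere.
VX_DELEGATE = "/usr/lib/libvx_delegate.so"

interpreter = tflite.Interpreter(
    model_path="model_int8.tflite",  # quantized (INT8) model
    experimental_delegates=[tflite.load_delegate(VX_DELEGATE)],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a dummy input of the right shape/dtype just to exercise the pipeline.
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)
```

Note that the first inference is noticeably slower than the following ones, because the delegate compiles the graph for the NPU on the first run.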

 

Regards

Daniel


danielchen
NXP TechSupport

Hi Sunghyun96:

 

Please check UG10166: i.MX Machine Learning User's Guide.

The DeepViewRT inference engine was removed.

(attached screenshot: danielchen_0-1760239877158.png)

 

 

Regards

Daniel
