i.MX8M Plus: DeepViewRT Installation, ONNX Conversion, and CPU vs NPU Inference


302 Views
sunghyun96
Contributor I

Hello,

I would like to perform inference using DeepViewRT on the i.MX8M Plus board.

   1. How can I install DeepViewRT on the i.MX8M Plus and run inference on it?

   2. I have an ONNX model — how can I convert it into an RTM model for DeepViewRT?

   3. If I run inference directly with an ONNX model on the i.MX8M Plus, is it limited to CPU execution only?

Thank you in advance for your support.

0 Kudos
Reply
3 Replies

129 Views
sunghyun96
Contributor I

Thank you for your reply.
On Linux 6.12.20-lts with the i.MX 8M Plus (EVK), I’m planning to run NPU inference using ONNX Runtime.
If there’s a recommended runtime to replace DeepViewRT (e.g., ORT vs. TFLite + VX), please let me know as well.
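For reference, a minimal check of which execution providers the BSP's ONNX Runtime build exposes; the NPU provider name in the sketch is an assumption to verify against the printed list, and if only CPUExecutionProvider is available, inference runs on the CPU:

```python
# Minimal sketch: list the execution providers offered by the ONNX Runtime
# build on the board, then create a session preferring an NPU provider if
# one is present. "VSINPUExecutionProvider" is an assumption; check the
# printed list for the exact name on your BSP. With only
# CPUExecutionProvider available, inference stays on the CPU.
import onnxruntime as ort

available = ort.get_available_providers()
print("Available providers:", available)

preferred = [p for p in ("VSINPUExecutionProvider", "CPUExecutionProvider")
             if p in available]
session = ort.InferenceSession("my_model.onnx", providers=preferred)
print("Session is using:", session.get_providers())
```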

0 Kudos
Reply

125 Views
danielchen
NXP TechSupport

Hi @sunghyun96 

The recommended replacement for DeepViewRT on the i.MX 8M Plus is TensorFlow Lite (TFLite) + the VX Delegate.

The VX Delegate is the official replacement for DeepViewRT; it uses OpenVX under the hood to offload supported operations to the NPU.

TFLite models must be quantized (INT8) and converted using the eIQ Toolkit to be compatible with the VX Delegate.
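Below is a minimal sketch of full-integer post-training quantization with the standard TensorFlow converter (the eIQ Toolkit wraps a comparable flow); the SavedModel path and the calibration generator are placeholders you would replace with your own:

```python
# Minimal sketch: full-integer (INT8) post-training quantization with the
# standard TensorFlow converter. "my_model_savedmodel" and the
# representative_data() generator are placeholders.
import numpy as np
import tensorflow as tf

def representative_data():
    # Yield a few batches of real calibration inputs; random data here is
    # only a placeholder so the script runs end to end.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("my_model_savedmodel")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full-integer quantization so operations can be offloaded to the NPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("my_model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```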

 

Since you are using ONNX Runtime, I would suggest converting your ONNX model to TFLite (INT8) using the eIQ Toolkit or TensorFlow tools, then running it with TFLite + the VX Delegate.
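For reference, a minimal sketch of running the quantized model through the VX Delegate from Python; the delegate library path used here is the usual location in NXP Yocto BSP images and is an assumption to verify on your build:

```python
# Minimal sketch: run an INT8 TFLite model on the i.MX 8M Plus NPU via the
# VX Delegate. The delegate library path is an assumption (typical location
# in NXP Yocto BSP images); adjust it if your image installs it elsewhere.
import numpy as np
import tflite_runtime.interpreter as tflite

VX_DELEGATE = "/usr/lib/libvx_delegate.so"  # assumption: default BSP path

interpreter = tflite.Interpreter(
    model_path="my_model_int8.tflite",
    experimental_delegates=[tflite.load_delegate(VX_DELEGATE)],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Placeholder input; replace with preprocessed data matching the model.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
print(result.shape)
```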

 

Regards

Daniel

0 Kudos
Reply

201 Views
danielchen
NXP TechSupport

Hi Sunghyun96:

 

Please check UG10166: i.MX Machine Learning User's Guide.

The DeepViewRT inference engine has been removed.

[Screenshot attachment: danielchen_0-1760239877158.png]

Regards

Daniel

0 Kudos
Reply