Hello,
I would like to perform inference using DeepViewRT on the i.MX8M Plus board.
1. How can I install DeepViewRT on the i.MX8M Plus and run inference on it?
2. I have an ONNX model — how can I convert it into an RTM model for DeepViewRT?
3. If I run inference directly with an ONNX model on the i.MX8M Plus, is it limited to CPU execution only?
Thank you in advance for your support.
Thank you for your reply.
On Linux 6.12.20-lts with the i.MX 8M Plus (EVK), I’m planning to run NPU inference using ONNX Runtime.
If there’s a recommended runtime to replace DeepViewRT (e.g., ORT vs. TFLite + VX), please let me know as well.
Hi @sunghyun96
The recommended replacement for DeepViewRT on the i.MX 8M Plus is TensorFlow Lite (TFLite) + the VX Delegate.
The VX Delegate is the official replacement for DeepViewRT; it uses OpenVX under the hood to offload supported operations to the NPU. A minimal example of running a model this way is sketched below.
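Here is a minimal Python sketch of loading a quantized TFLite model with the VX Delegate. It assumes the Yocto BSP ships the delegate library at /usr/lib/libvx_delegate.so and that a quantized model file named model_int8.tflite already exists; both the path and the file name are assumptions, so adjust them for your image.

```python
# Minimal sketch: run a quantized TFLite model on the NPU via the VX Delegate.
# Assumes the i.MX Yocto image provides tflite_runtime and the delegate library
# at /usr/lib/libvx_delegate.so; adjust the paths for your BSP.
import numpy as np
import tflite_runtime.interpreter as tflite

# Load the VX Delegate so supported operations are offloaded to the NPU.
vx_delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")

interpreter = tflite.Interpreter(
    model_path="model_int8.tflite",      # hypothetical quantized model file
    experimental_delegates=[vx_delegate],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed dummy data matching the model's input shape and dtype (INT8 for a quantized model).
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

output = interpreter.get_tensor(output_details[0]["index"])
print("Output shape:", output.shape)
```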
TFLite models must be quantized (INT8) and converted using the eIQ Toolkit to be compatible with the VX Delegate.
Since you are using ONNX Runtime, I would suggest converting your ONNX model to TFLite (INT8) using the eIQ Toolkit or the TensorFlow tools, then running it with TFLite + the VX Delegate, as sketched below.
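As a rough illustration of the TensorFlow-tools route, here is a sketch of full-integer (INT8) conversion with tf.lite.TFLiteConverter. It assumes the ONNX model has already been converted to a TensorFlow SavedModel (for example with a tool such as onnx2tf) and that representative_data() yields samples matching the model's real input shape and preprocessing; both are assumptions, and the eIQ Toolkit can perform the equivalent steps through its GUI.

```python
# Sketch: full-integer (INT8) TFLite conversion from a TensorFlow SavedModel.
# Assumes "saved_model_dir" was produced from the ONNX model beforehand
# (e.g. with onnx2tf) and that representative_data() yields typical inputs.
import numpy as np
import tensorflow as tf

def representative_data():
    # Hypothetical calibration data: replace with real preprocessed samples.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force a fully INT8 graph so the VX Delegate can offload as much as possible to the NPU.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```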
Regards
Daniel
Hi Sunghyun96:
Please check UG10166: i.MX Machine Learning User's Guide.
The DeepViewRT inference engine was removed.
Regards
Daniel