Hello,
I would like to perform inference using DeepViewRT on the i.MX8M Plus board.
1. How can I install DeepViewRT on the i.MX8M Plus and run inference on it?
2. I have an ONNX model. How can I convert it into an RTM model for DeepViewRT?
3. If I run inference directly with an ONNX model on the i.MX8M Plus, is it limited to CPU execution?
Thank you in advance for your support.
Hi Sunghyun96,
Please check UG10166: i.MX Machine Learning User's Guide.
The DeepViewRT inference engine has been removed from recent i.MX machine learning releases, so it is no longer a supported option on the i.MX8M Plus.
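Regarding your ONNX question: with DeepViewRT gone there is no need to convert to RTM; you can run the ONNX model directly with one of the engines the guide still covers, such as ONNX Runtime. Below is a minimal CPU inference sketch, assuming the onnxruntime Python package is present in your BSP image and that "model.onnx" is a placeholder for your own model file:

```python
# Minimal ONNX Runtime CPU inference sketch ("model.onnx" is a placeholder).
import numpy as np
import onnxruntime as ort

# CPUExecutionProvider runs on the Cortex-A cores; no NPU offload here.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the model's first input to build a matching dummy tensor.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # resolve dynamic dims to 1
dummy = np.random.rand(*shape).astype(np.float32)

# Run inference; passing None returns all model outputs.
outputs = session.run(None, {inp.name: dummy})
print(inp.name, shape, "->", [o.shape for o in outputs])
```

On your third question: run this way, inference executes on the CPU. For NPU acceleration on the i.MX8M Plus, the user's guide documents the supported delegate/acceleration paths (for example via TensorFlow Lite), so please check the current guide for what applies to your BSP version.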
Regards
Daniel