Solved

ONNX Runtime-based inference on i.MX 8M Plus (no execution provider other than cpu?)

cadietrich78
Contributor I

Dear NXP community,

 

I’m trying ONNX Runtime for NPU inference, following the instructions in the “i.MX Machine Learning User's Guide” (Chapter 6), but it seems that the only execution provider available is “cpu”. The hardware is the i.MX 8M Plus EVK, running Yocto lf-6.6.52-2.2.1, and the instructions are:

 

(dataset)

$ wget https://github.com/onnx/models/raw/refs/heads/main/validated/vision/classification/mobilenet/model/m...

$ tar -xzvf mobilenetv2-7.tar.gz

(call)

$ /usr/bin/onnxruntime-1.17.1/onnx_test_runner -j 1 -c 1 -r 1 -e cpu ./mobilenetv2-7/

 

I’ve tried acl, armnn, vsi_npu, nnapi, dnnl, rocm, migraphx, xnnpack, qnn, snpe, and coreml as execution providers (all options listed by onnx_test_runner, just to be safe), but the only one that seems to work is “cpu”. Is that the case?
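
For reference, a quick way to ask the runtime itself which providers it was built with (assuming the BSP image also ships the ONNX Runtime Python bindings) is:

$ python3 -c "import onnxruntime as ort; print(ort.get_available_providers())"

If only the CPU execution provider is compiled into the package, this should print just ['CPUExecutionProvider'].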

 

Thanks in advance,

Carlos

Accepted Solution
Chavira
NXP TechSupport

Hi @cadietrich78!

Thank you for reaching out to NXP Support!

You're absolutely right: as mentioned in our i.MX Machine Learning User's Guide, ONNX models can currently be executed only on the CPU when using our BSP.

 

Best regards,
Chavira

cadietrich78
Contributor I
Thanks, Chavira, for the prompt answer. If I may, since NPU acceleration is not available through ONNX Runtime, what would be the best option for a quick inference test using C++ and YOLO-based models?