Dear NXP community,
I’m trying to use ONNX Runtime for NPU inference following the instructions in the “i.MX Machine Learning User's Guide” (Chapter 6), but the only execution provider that appears to be available is “cpu”. The hardware is the i.MX 8M Plus EVK running Yocto lf-6.6.52-2.2.1, and the instructions are:
(dataset)
$ tar -xzvf mobilenetv2-7.tar.gz
(call)
$ /usr/bin/onnxruntime-1.17.1/onnx_test_runner -j 1 -c 1 -r 1 -e cpu ./mobilenetv2-7/
I’ve tried acl, armnn, vsi_npu, nnapi, dnnl, rocm, migraphx, xnnpack, qnn, snpe, and coreml as execution providers (all the options listed by onnx_test_runner, just to be safe), but the only one that seems to work is “cpu”. Is that the case?
Thanks in advance,
Carlos
Hi @cadietrich78!
Thank you for reaching out to NXP Support!
You're absolutely right. As mentioned in our i.MX Machine Learning User's Guide, ONNX models can currently be executed only on the CPU when using our BSP.
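If you want to double-check on the target, a quick query (a minimal sketch, assuming your image includes the BSP's python3-onnxruntime package) shows which execution providers the installed runtime was actually built with:
$ python3 -c "import onnxruntime as ort; print(ort.get_available_providers())"
On the i.MX 8M Plus BSP this should print only ['CPUExecutionProvider']. Note that the provider names shown in the onnx_test_runner help text are the ones the tool knows about, not necessarily the ones compiled into the runtime on your image.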
Best regards,
Chavira