Dear NXP community,
I’m trying ONNX Runtime for NPU inference following the instructions in the “i.MX Machine Learning User's Guide” (Chapter 6), but the only execution provider that appears to be available is “cpu”. The hardware is the i.MX 8M Plus EVK, running Yocto lf-6.6.52-2.2.1, and the instructions are:
(dataset)
$ wget https://github.com/onnx/models/raw/refs/heads/main/validated/vision/classification/mobilenet/model/m...
$ tar -xzvf mobilenetv2-7.tar.gz
(call)
$ /usr/bin/onnxruntime-1.17.1/onnx_test_runner -j 1 -c 1 -r 1 -e cpu ./mobilenetv2-7/
I’ve tried acl, armnn, vsi_npu, nnapi, dnnl, rocm, migraphx, xnnpack, qnn, snpe, and coreml as execution providers (all of the options listed by onnx_test_runner, just to be safe), but the only one that actually runs is “cpu”. Is CPU really the only execution provider supported in this BSP, or is there something else I need to enable to get the test running on the NPU?
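In case it helps narrow this down, I believe the Python bindings (assuming they are installed on the image) can report which providers the onnxruntime build was actually compiled with:
(provider check)
$ python3 -c "import onnxruntime as ort; print(ort.get_available_providers())"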
Thanks in advance,
Carlos