ONNX Runtime-based inference on i.MX 8M Plus (no execution provider other than cpu?)

cadietrich78
Contributor I

Dear NXP community,

 

I’m trying ONNX Runtime for NPU inference, following the instructions in the “i.MX Machine Learning User's Guide” (Chapter 6), but the only execution provider that appears to be available is “cpu”. The hardware is the i.MX 8M Plus EVK running Yocto lf-6.6.52-2.2.1, and the instructions are:

 

(dataset)

$ wget https://github.com/onnx/models/raw/refs/heads/main/validated/vision/classification/mobilenet/model/m...

$ tar -xzvf mobilenetv2-7.tar.gz

(call)

$ /usr/bin/onnxruntime-1.17.1/onnx_test_runner -j 1 -c 1 -r 1 -e cpu ./mobilenetv2-7/

 

I’ve tried acl, armnn, vsi_npu, nnapi, dnnl, rocm, migraphx, xnnpack, qnn, snpe, and coreml as execution providers (all of the options listed by onnx_test_runner, just to be safe), but the only one that seems to work is “cpu”. Is that expected?
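For reference, the providers a given onnxruntime build exposes can also be queried programmatically via Ort::GetAvailableProviders(). The sketch below is only illustrative (the compile/link flags are my assumption and may differ on the BSP image); on a CPU-only build it should print nothing but CPUExecutionProvider.

// list_providers.cpp -- sketch: print the execution providers compiled into
// the installed ONNX Runtime build.
// Assumed build line (paths/flags may differ on the Yocto image):
//   $ ${CXX} list_providers.cpp -lonnxruntime -o list_providers
#include <iostream>
#include <string>
#include <vector>

#include <onnxruntime_cxx_api.h>

int main() {
    // Returns the provider names this libonnxruntime was compiled with,
    // e.g. {"CPUExecutionProvider"} for a CPU-only build.
    std::vector<std::string> providers = Ort::GetAvailableProviders();
    for (const std::string& p : providers) {
        std::cout << p << "\n";
    }
    return 0;
}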

 

Thanks in advance,

Carlos

1 Solution
Chavira
NXP TechSupport

Hi @cadietrich78!

Thank you for reaching out to NXP Support!

You're absolutely right: as mentioned in our i.MX Machine Learning User's Guide, ONNX models can currently be executed only on the CPU when using our BSP.
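In practice that just means creating an inference session without registering any other execution provider; ONNX Runtime then falls back to its built-in CPU provider. A minimal C++ sketch of that is below; the model path simply reuses the MobileNet file from the test set above and is only an example.

// cpu_session.cpp -- sketch: open an ONNX model with the default (CPU)
// execution provider only; no AppendExecutionProvider_* call is made.
#include <iostream>

#include <onnxruntime_cxx_api.h>

int main() {
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "cpu-only");

    Ort::SessionOptions opts;
    opts.SetIntraOpNumThreads(4);  // number of CPU threads; adjust as needed

    // With no other execution provider registered, ONNX Runtime uses the
    // built-in CPU execution provider.
    Ort::Session session(env, "./mobilenetv2-7/mobilenetv2-7.onnx", opts);

    std::cout << "Session created: " << session.GetInputCount() << " input(s), "
              << session.GetOutputCount() << " output(s)" << std::endl;
    return 0;
}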

 

Best regards,
Chavira

cadietrich78
Contributor I
Thanks, Chavira, for the prompt answer. If I may: since ONNX Runtime cannot target the NPU, what would be the best option for a quick inference test using C++ and YOLO-based models?