ONNX Runtime inference fails on i.MX 8M Plus


mbrundler
Contributor II

Hi,

I built an i.MX 8M Plus target image as well as the eIQ Machine Learning SDK, using Yocto with the imx-5.10.52-2.1.0.xml manifest.

I took the sample C++ application provided by ONNX Runtime (C_Api_Sample.cpp) and slightly modified it to run inference on a simple model (in .onnx format) that uses only standard layers (2D convolution, batch normalization, ReLU and max pooling). Our modified source is attached.
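
To give an idea of the structure, the modified sample boils down to roughly the following (a simplified sketch of the C API flow; the model path, tensor names and the 1x3x224x224 shape are placeholders rather than the actual values from our attached source, the header path may differ in the BSP, and error checks are omitted):

// Minimal sketch of what the modified sample does (ONNX Runtime C API).
// "model.onnx", the tensor names and the shape below are placeholders.
#include <onnxruntime_c_api.h>
#include <cstdio>
#include <vector>

int main() {
  const OrtApi* ort = OrtGetApiBase()->GetApi(ORT_API_VERSION);

  OrtEnv* env = nullptr;
  ort->CreateEnv(ORT_LOGGING_LEVEL_WARNING, "test", &env);

  OrtSessionOptions* opts = nullptr;
  ort->CreateSessionOptions(&opts);
  // The execution provider is selected here, before CreateSession
  // (see the provider snippet further down). Appending nothing = default CPU.

  OrtSession* session = nullptr;
  ort->CreateSession(env, "model.onnx", opts, &session);

  // Dummy input tensor (placeholder shape and values).
  std::vector<int64_t> shape = {1, 3, 224, 224};
  std::vector<float> input(1 * 3 * 224 * 224, 0.5f);

  OrtMemoryInfo* mem = nullptr;
  ort->CreateCpuMemoryInfo(OrtArenaAllocator, OrtMemTypeDefault, &mem);

  OrtValue* in_tensor = nullptr;
  ort->CreateTensorWithDataAsOrtValue(mem, input.data(), input.size() * sizeof(float),
                                      shape.data(), shape.size(),
                                      ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT, &in_tensor);

  const char* in_names[]  = {"input"};    // placeholder tensor names
  const char* out_names[] = {"output"};
  OrtValue* out_tensor = nullptr;
  ort->Run(session, nullptr, in_names, (const OrtValue* const*)&in_tensor, 1,
           out_names, 1, &out_tensor);

  float* out_data = nullptr;
  ort->GetTensorMutableData(out_tensor, (void**)&out_data);
  printf("first output value: %f\n", out_data[0]);

  ort->ReleaseValue(out_tensor);
  ort->ReleaseValue(in_tensor);
  ort->ReleaseMemoryInfo(mem);
  ort->ReleaseSession(session);
  ort->ReleaseSessionOptions(opts);
  ort->ReleaseEnv(env);
  return 0;
}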

Executing it either in the default mode (CPU) or with the ArmNN execution provider produces the expected output tensor (i.e. the differences from the reference output tensor are only a matter of numerical precision).

However, executing with the ACL, VsiNpu or Nnapi execution providers produces a wrong output tensor.
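
(By "wrong" I mean far beyond rounding error. We judge the output with a check along these lines, where reference holds the values from the known-good CPU run; the tolerance here is illustrative, not an exact figure:)

// Illustrative comparison against the reference output tensor.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

bool matches_reference(const std::vector<float>& out,
                       const std::vector<float>& reference,
                       float tol = 1e-3f) {
  if (out.size() != reference.size()) return false;
  float max_diff = 0.0f;
  for (size_t i = 0; i < out.size(); ++i)
    max_diff = std::max(max_diff, std::fabs(out[i] - reference[i]));
  printf("max abs diff vs reference: %g\n", max_diff);
  // CPU and ArmNN stay within tolerance; ACL, VsiNpu and Nnapi fail by a wide margin.
  return max_diff <= tol;
}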

This is surprising, since changing the execution provider only affects a single line of code (the call to OrtSessionOptionsAppendExecutionProvider_xxx and its associated #include).
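
Concretely, the per-provider change amounts to something like the following (wrapped in a small helper here for illustration; the header names and the integer arguments are assumptions and may differ with the ONNX Runtime / eIQ version in the BSP):

// Sketch of the one place the runs differ: which provider is appended to the
// session options before CreateSession.
#include <onnxruntime_c_api.h>
#include <string>
#include "armnn_provider_factory.h"    // assumed header names; in the BSP they
#include "acl_provider_factory.h"      // may live under onnxruntime/core/providers/...
#include "nnapi_provider_factory.h"
#include "vsi_npu_provider_factory.h"

void append_provider(OrtSessionOptions* opts, const std::string& ep) {
  if (ep == "armnn") {         // expected output
    OrtSessionOptionsAppendExecutionProvider_ArmNN(opts, 1 /* use_arena */);
  } else if (ep == "acl") {    // wrong output
    OrtSessionOptionsAppendExecutionProvider_ACL(opts, 1 /* use_arena */);
  } else if (ep == "nnapi") {  // wrong output
    OrtSessionOptionsAppendExecutionProvider_Nnapi(opts, 0 /* nnapi_flags */);
  } else if (ep == "vsinpu") { // wrong output (NXP-specific provider)
    OrtSessionOptionsAppendExecutionProvider_VsiNpu(opts, 0 /* device_id */);
  }
  // ep == "cpu": append nothing, the default CPU provider is used.
}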

Note that the original C_Api_Sample.cpp with its SqueezeNet model behaves correctly in all modes.

Any ideas?

 


Zhiming_Liu
NXP TechSupport

Hi @mbrundler 

 

Can you provide your error log?

 

Best Regards

Zhiming
