The attached tflite model fails to run on the i.MX8M Plus, while the same model (taken from the AI Benchmark APK) runs perfectly on an Android 10 phone with a Qualcomm chipset.
Procedure:
1. Use the latest NXP release for the i.MX8M Plus.
2. Download Google's prebuilt TensorFlow Lite benchmark tool for aarch64 from https://www.tensorflow.org/lite/performance/measurement.
3. Run /home/root/linux_aarch64_benchmark_model --graph=ai_benchmark/pynet_quant.tflite --use_nnapi=true
STARTING!
Log parameter values verbosely: [0]
Graph: [ai_benchmark/pynet_quant.tflite]
Use NNAPI: [1]
NNAPI accelerators available: [vsi-npu]
Loaded model ai_benchmark/pynet_quant.tflite
INFO: Created TensorFlow Lite delegate for NNAPI.
Explicitly applied NNAPI delegate, and the model graph will be completely executed by the delegate.
The input model file size (MB): 10.4231
Initialized session in 242.487ms.
Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.
E [vsi_nn_op_eltwise_setup:156]Input size mismatch.
E [setup_node:448]Setup node[274] ADD fail
ERROR: NN API returned error ANEURALNETWORKS_BAD_DATA at line 4021 while running computation.
ERROR: Node number 168 (TfLiteNnapiDelegate) failed to invoke.
count=1 curr=718301
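As a first isolation step (a debugging suggestion, not something from the log above), the same model can be benchmarked with the NNAPI delegate disabled. If the CPU path completes, the graph itself is valid and the failure is specific to the vsi-npu NNAPI implementation's handling of the ADD op reported at node 274. The flags below are standard benchmark_model options; the binary and model paths are the ones from the log.

```shell
# CPU-only run: bypass NNAPI to confirm the model itself is well-formed.
/home/root/linux_aarch64_benchmark_model \
  --graph=ai_benchmark/pynet_quant.tflite \
  --use_nnapi=false \
  --num_threads=4 \
  --enable_op_profiling=true   # per-op timings help locate the offending ADD
```

If this run succeeds, the shape-mismatch error points at the vsi-npu driver rather than at the model.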
Please try the Android 11.0.0_1.0.0 release (Linux 5.4.47 kernel). It supports NNAPI 1.2.
@jimmychan I don't know what you are talking about.
I am testing on a Yocto build, not Android.
If you had tried it yourself and read my log from the i.MX8M Plus machine, you would know that linux_aarch64_benchmark_model is for Linux aarch64, not Android.
I ran into the same problem. Have you solved it?