I'm following the guide here:
https://www.nxp.com/docs/en/user-guide/IMX_ANDROID_TENSORFLOWLITE_USERS_GUIDE.pdf
When compiling benchmark_model and running the acceleration demo, I am not able to achieve the acceleration the document describes.
We have the imx8mpevk board running the Android 13-2.0.0 release.
> uname -a
Linux localhost 6.1.25-android14-11-maybe-dirty #1 SMP PREEMPT Thu Jan 1 00:00:00 UTC 1970 aarch64 Toybox
> ./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite --use_nnapi=1
STARTING!
Log parameter values verbosely: [0]
Graph: [mobilenet_v1_1.0_224_quant.tflite]
Use NNAPI: [1]
dlopen failed: library "libneuralnetworks.so" not found
nnapi error: unable to open library libneuralnetworks.so
Loaded model mobilenet_v1_1.0_224_quant.tflite
INFO: Initialized TensorFlow Lite runtime.
NNAPI acceleration is unsupported on this platform.
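The dlopen failure suggests the NNAPI runtime library simply isn't visible to the binary. For reference, a minimal check I can run in an adb shell to see whether the library is present at all (the directory list is an assumption; adjust for your image):

```shell
#!/bin/sh
# Search common Android library directories for the NNAPI runtime.
# The set of directories below is an assumption, not exhaustive.
found=0
for d in /system/lib64 /vendor/lib64 /system/lib /vendor/lib; do
  if [ -e "$d/libneuralnetworks.so" ]; then
    echo "found: $d/libneuralnetworks.so"
    found=1
  fi
done
if [ "$found" -eq 0 ]; then
  echo "libneuralnetworks.so not found in searched dirs"
fi
```

If the library exists on the device but benchmark_model still can't dlopen it, that would point at how the binary was built (e.g. a non-Android toolchain) rather than at the image itself.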
Is the documentation out of date on how to compile benchmark_model with acceleration support, or is there something we might be missing?
I also tried --use_gpu, but model execution was slower than on the CPU.