I'm following the guide here:
https://www.nxp.com/docs/en/user-guide/IMX_ANDROID_TENSORFLOWLITE_USERS_GUIDE.pdf
After compiling benchmark_model and running the acceleration demo, I am not able to achieve acceleration as the document describes.
We have the imx8mpevk board running the Android 13-2.0.0 release.
> uname -a
Linux localhost 6.1.25-android14-11-maybe-dirty #1 SMP PREEMPT Thu Jan 1 00:00:00 UTC 1970 aarch64 Toybox
> ./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite --use_nnapi=1
STARTING!
Log parameter values verbosely: [0]
Graph: [mobilenet_v1_1.0_224_quant.tflite]
Use NNAPI: [1]
dlopen failed: library "libneuralnetworks.so" not found
nnapi error: unable to open library libneuralnetworks.so
Loaded model mobilenet_v1_1.0_224_quant.tflite
INFO: Initialized TensorFlow Lite runtime.
NNAPI acceleration is unsupported on this platform.
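For context on the failure above: the "dlopen failed" message indicates that TFLite's NNAPI delegate tries to open libneuralnetworks.so at runtime and, when the library cannot be found, reports NNAPI as unsupported and falls back to the CPU. A minimal sketch of that kind of runtime check (a hypothetical helper for illustration, not code from benchmark_model):

```python
import ctypes

def nnapi_runtime_available(libname="libneuralnetworks.so"):
    """Return True if the NNAPI runtime library can be dlopen()ed.

    Mimics the runtime probe that produces the error in the log above:
    if the shared library is absent, acceleration is simply unavailable.
    """
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        # Same situation benchmark_model hits on the board: no library,
        # so "NNAPI acceleration is unsupported on this platform."
        return False

if __name__ == "__main__":
    if nnapi_runtime_available():
        print("NNAPI runtime found")
    else:
        print("libneuralnetworks.so not found; falling back to CPU")
```

Given this, it may be worth checking with `adb shell ls` whether and where libneuralnetworks.so actually exists on the device image before suspecting the benchmark build itself; on newer Android releases the NNAPI runtime may live under an APEX path rather than /system/lib64 (exact location depends on the Android version).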
Is the documentation out of date for the proper way to compile benchmark_model to support acceleration? Or is there something we might be missing?
I also tried --use_gpu, but model execution was slower than on the CPU.
FYI, I was able to work around this issue by updating the client library to TensorFlow Lite version 2.10.1. I chose 2.10 because that is the version the Linux distribution example uses.
With version 2.10.1, the benchmark utility no longer has the problem of finding the libneuralnetworks.so library.
Questions remain:
1) Is this just a case of outdated documentation?
2) If not, is there some technical reason that version 2.4 is still referenced in the TensorFlow Lite user's guide?
3) If newer versions of TFLite are OK to use, is there a recommended version such as 2.10, or would any newer version be appropriate?
Thanks.
Hello,
Thanks for the update. We have built the image according to the guide and will run the benchmark model to reproduce your findings. In the meantime, I will contact the author of the documentation and check whether it is in fact out of date.
Regards