I use Python to load and run my TFLite model, converted from a TensorFlow (Keras) ResNet18 model:
interpreter = tflite.Interpreter(model_path=model_name)
It works with the float32 model and prints:
INFO: Created TensorFlow Lite delegate for NNAPI.
Applied NNAPI delegate.
561.8170 ms ± 0.1711 ms (std)
but when I use the uint8 quantized model I get:
INFO: Created TensorFlow Lite delegate for NNAPI.
Failed to apply NNAPI delegate.
275.3198 ms ± 0.1635 ms (std)
I have another ResNet18 model that was converted to TFLite earlier. It works with NNAPI and takes 34 ms.
My question is: how should I convert models from TensorFlow so that NNAPI works in TFLite on i.MX8?
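Not a definitive answer, but a common cause of "Failed to apply NNAPI delegate" is a model that is only partially quantized, so some ops remain in float and the delegate rejects the graph. A sketch of a full-integer conversion is below; the function names, paths, and the random calibration data are my assumptions, not from the original post, and you should substitute real preprocessed images in the representative dataset:

```python
# Hypothetical sketch: full-integer quantization of a Keras/SavedModel
# ResNet18 so that every op is integer and NNAPI can accept the graph.
import numpy as np


def representative_dataset(num_samples=100, input_shape=(1, 224, 224, 3)):
    """Yield calibration samples matching the model's input shape.

    Random data is used here only as a placeholder; use real
    preprocessed images to get accurate quantization ranges.
    """
    for _ in range(num_samples):
        yield [np.random.rand(*input_shape).astype(np.float32)]


def convert_to_uint8_tflite(saved_model_dir, output_path="resnet18_uint8.tflite"):
    # TensorFlow is imported lazily so the calibration helper above
    # can be reused without a TF installation.
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Restrict the converter to integer-only ops: conversion then fails
    # loudly if any op cannot be quantized, instead of silently leaving
    # float ops that make the NNAPI delegate bail out at runtime.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    tflite_model = converter.convert()
    with open(output_path, "wb") as f:
        f.write(tflite_model)
    return output_path
```

After converting, it may help to print `interpreter.get_input_details()` on both your working 34 ms model and the failing one and compare the dtypes and quantization parameters; a mismatch there usually points at the conversion step that differs.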
Did you solve that? I'm having the same problem.