Hello!
I'm trying to run real-time face detection on the i.MX 8M Plus. I trained my .tflite model starting from the pre-trained ssd_mobilenet_v2_320x320_coco checkpoint, then converted and quantized it with the TensorFlow Lite converter using inference_input_type=uint8 and inference_output_type=float32.
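Roughly, the conversion looks like this (a minimal sketch: a toy stand-in graph and random calibration data are used here in place of the real ssd_mobilenet_v2 SavedModel and calibration images):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in graph; in the real workflow this is the exported
# ssd_mobilenet_v2_320x320_coco SavedModel.
@tf.function(input_signature=[tf.TensorSpec([1, 8, 8, 3], tf.float32)])
def toy_model(x):
    w = tf.constant(np.random.rand(3, 3, 3, 4).astype(np.float32))
    return tf.nn.relu(tf.nn.conv2d(x, w, strides=1, padding="SAME"))

# Calibration data for full-integer quantization; a real model
# needs representative input images here, not random noise.
def representative_data_gen():
    for _ in range(10):
        yield [np.random.rand(1, 8, 8, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [toy_model.get_concrete_function()])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8     # uint8 input tensor
converter.inference_output_type = tf.float32  # float32 output tensor
tflite_model = converter.convert()

# Sanity-check the I/O types of the quantized model.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["dtype"])
print(interpreter.get_output_details()[0]["dtype"])
```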
When I run the model, I get these warnings:
WARNING: Operator PACK (v2) refused by NNAPI delegate: Unsupported operation type.
WARNING: Operator PACK (v2) refused by NNAPI delegate: Unsupported operation type.
WARNING: Operator PACK (v2) refused by NNAPI delegate: Unsupported operation type.
WARNING: Operator PACK (v2) refused by NNAPI delegate: Unsupported operation type.
WARNING: Operator CUSTOM (v1) refused by NNAPI delegate: Unsupported operation type.
Because of this, my model does not run fully on the NPU and is slower than I wanted.
I have tried other conversion settings, but none of them helped.
How can I get rid of these warnings and have my model run entirely on the NPU?
I attach my model for you to examine.
Hello Creative,
you can enable verbose NNAPI logging with
    adb shell setprop debug.nn.vlog 1
and then run
    adb logcat | grep -i best
to see how your model is handled by NNAPI. Check https://developer.android.com/ndk/guides/neuralnetworks for more NNAPI-related information.
Regards