Tensorflow Lite operator refused by NNAPI Delegate


955 Views
Creative
Contributor III

Hello!

I'm trying to run real-time face detection on an i.MX 8M Plus. I trained my .tflite model from the pre-trained ssd_mobilenet_v2_320x320_coco model, then converted and quantized it with the TensorFlow Lite converter using inference_input_type=uint8 and inference_output_type=float32.
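The conversion step can be sketched as follows. This is a minimal, hedged example: a toy model stands in for the ssd_mobilenet_v2_320x320_coco checkpoint, and the converter attributes are spelled `inference_input_type` / `inference_output_type` in the TF 2.x API:

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for the real SSD detection model.
class TinyModel(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.random.normal([8 * 8 * 3, 2]))

    @tf.function(input_signature=[tf.TensorSpec([1, 8, 8, 3], tf.float32)])
    def __call__(self, x):
        return tf.reshape(x, [1, -1]) @ self.w

model = TinyModel()

def representative_dataset():
    # Calibration samples drive the quantization ranges;
    # real calibration should use samples from the training data.
    for _ in range(8):
        yield [np.random.rand(1, 8, 8, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model.__call__.get_concrete_function()], model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8     # uint8 input, as in the post
converter.inference_output_type = tf.float32  # float32 output, as in the post
tflite_model = converter.convert()

# Verify the I/O dtypes of the quantized flatbuffer.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
print(interpreter.get_input_details()[0]["dtype"])
print(interpreter.get_output_details()[0]["dtype"])
```

With this recipe the model internals are fully int8-quantized, while Quantize/Dequantize ops are appended at the boundaries to honor the requested I/O types.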

When I run the model I get these warnings:

WARNING: Operator PACK (v2) refused by NNAPI delegate: Unsupported operation type.
WARNING: Operator PACK (v2) refused by NNAPI delegate: Unsupported operation type.
WARNING: Operator PACK (v2) refused by NNAPI delegate: Unsupported operation type.
WARNING: Operator PACK (v2) refused by NNAPI delegate: Unsupported operation type.
WARNING: Operator CUSTOM (v1) refused by NNAPI delegate: Unsupported operation type.

Because of this, my model does not run fully on the NPU and is slower than I wanted.

I have tried other conversion types, but none of them helped.

How can I get rid of these warnings and have my model run faster, fully on the NPU?

I attach my model for you to examine.

0 Kudos
1 Reply

944 Views
Bio_TICFSL
NXP TechSupport

Hello Creative,

  1. Those warnings come from the underlying NNAPI driver. When you delegate your model to NNAPI, the NNAPI runtime asks all available drivers to decide which driver / hardware is the right one to dispatch each op to.
  2. You can try:
adb shell setprop debug.nn.vlog 1

and then:
adb logcat | grep -i best

to see how your model is handled by NNAPI. See https://developer.android.com/ndk/guides/neuralnetworks for more NNAPI-related information.

  3. Use either one of them, not both.
  4. You may want to read the related guides first: https://www.tensorflow.org/lite/performance/model_optimization
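Alongside those guides, it can help to list exactly which ops the converted model contains, so you can see where the rejected PACK and CUSTOM ops sit in the graph before deploying. A minimal sketch using `tf.lite.experimental.Analyzer` (available in TF 2.8+), with a toy model in place of the attached one:

```python
import tensorflow as tf

# Toy stand-in for the attached detection model.
class TinyModel(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.random.normal([8 * 8 * 3, 2]))

    @tf.function(input_signature=[tf.TensorSpec([1, 8, 8, 3], tf.float32)])
    def __call__(self, x):
        return tf.reshape(x, [1, -1]) @ self.w

model = TinyModel()
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model.__call__.get_concrete_function()], model)
tflite_model = converter.convert()

# Prints one line per op in the flatbuffer (e.g. RESHAPE, FULLY_CONNECTED)
# plus tensor details. Run the same call on the real .tflite file
# (model_path=...) to see its PACK and CUSTOM ops listed.
tf.lite.experimental.Analyzer.analyze(model_content=tflite_model)
```

Ops that the analyzer lists but the delegate refuses will fall back to the CPU; partitioning the graph so those ops sit at the end (as SSD post-processing usually does) keeps the NPU-accelerated portion contiguous.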

 

Regards

0 Kudos