How to add custom operators in TensorFlow Lite?

Solved


5,744 Views
Ramson
Contributor IV

Hi, we are trying to deploy the open-source object detection model ( https://www.tensorflow.org/lite/examples/object_detection/overview ) on the i.MX RT1176 EVK board. We imported the tensorflow_lite_label_image example from SDK v2.9.0, converted the object detection model's .tflite file to a C array using xxd, and replaced the model data and model length in the model_data.h file.
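For reference, the xxd output looks roughly like the snippet below; the array and length names come from the input file name and were renamed to match model_data.h (the bytes and length shown here are placeholders):

unsigned char model_data[] = {
  0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33,  /* "TFL3" flatbuffer identifier */
  /* ... remaining bytes of the .tflite flatbuffer ... */
};
unsigned int model_data_len = 4276352;  /* placeholder; xxd emits the real byte count */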

The object detection model contains the custom operator TFLite_Detection_PostProcess, as shown in the image below.

[Image Ramson_0-1626930907325.png: model graph showing the TFLite_Detection_PostProcess custom operator]

How do we add the custom operator to the tflite::MutableOpResolver?

Since it is not registered, we are getting the following errors:

ERROR: Encountered unresolved custom op: TFLite_Detection_PostProcess.

ERROR: Node number 63 (TFLite_Detection_PostProcess) failed to prepare.

Failed to allocate tensors!

Failed initializing model

Please help. Thanks in advance.

Regards,

Ramson


3 Replies

5,352 Views
MarcinChelminsk
Contributor IV

@david_piskula @Ramson 

Suggested solution from me (a consolidated sketch follows this list):

  • In source/model/model_mobilenet_ops.cpp, add:

#include "tensorflow/lite/kernels/custom_ops_register.h"

    and register the custom operator right after the existing resolver.AddBuiltin() calls:

resolver.AddCustom("TFLite_Detection_PostProcess",
                   tflite::ops::custom::Register_TFLite_Detection_PostProcess());

  • In eiq/tensorflow-lite/tensorflow/lite/kernels/custom_ops_register.h, add the following inside the tflite::ops::custom namespace (the wrapper is marked inline because it is defined in a header):

TfLiteRegistration* Register_DETECTION_POSTPROCESS();

inline TfLiteRegistration* Register_TFLite_Detection_PostProcess() {
  return Register_DETECTION_POSTPROCESS();
}
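Putting the two edits together, model_mobilenet_ops.cpp ends up looking roughly like this (the function name and the builtin list are illustrative, not copied from the SDK example):

#include "tensorflow/lite/kernels/builtin_op_kernels.h"
#include "tensorflow/lite/kernels/custom_ops_register.h"
#include "tensorflow/lite/mutable_op_resolver.h"

/* Hypothetical registration function; the real name in the SDK example may differ. */
void MODEL_RegisterOps(tflite::MutableOpResolver& resolver)
{
    /* Existing builtin registrations from the example (illustrative subset). */
    resolver.AddBuiltin(tflite::BuiltinOperator_CONV_2D,
                        tflite::ops::builtin::Register_CONV_2D());
    resolver.AddBuiltin(tflite::BuiltinOperator_DEPTHWISE_CONV_2D,
                        tflite::ops::builtin::Register_DEPTHWISE_CONV_2D());
    /* ... */

    /* New: the custom post-processing op used by the SSD detection model. */
    resolver.AddCustom("TFLite_Detection_PostProcess",
                       tflite::ops::custom::Register_TFLite_Detection_PostProcess());
}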

Same setup as in the first post, i.e. i.MXRT1170-EVK, SDK 2.9.0, tensorflow_lite_label_image, and the model from https://www.tensorflow.org/lite/examples/object_detection/overview

5,621 Views
david_piskula
NXP Employee

Hello Ramson,

My first suggestion would be to move to SDK 2.10, as it contains all of the latest updates. With 2.10 we moved from TF Lite to TF Lite Micro, which is better optimized for MCUs. The inference engine still supports TF Lite models; it is just the kernels and the library that are specifically optimized for Arm MCUs.

Next, switch to the AllOpsResolver first to make sure the operation is actually supported by TF Lite (Micro). If that works, you can open the ops .cpp file in the source/model folder, register all the required ops, and remove the unnecessary ones.
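For example, something along these lines should work (a minimal sketch using the upstream TF Lite Micro API; tensor_arena, kTensorArenaSize, and error_reporter are the usual setup variables, not code taken from the SDK example):

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

/* AllOpsResolver registers every op TF Lite Micro ships with, so an
 * "unresolved custom op" error here means the op is genuinely missing. */
tflite::AllOpsResolver resolver;
const tflite::Model* model = tflite::GetModel(model_data);
tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                     kTensorArenaSize, &error_reporter);
interpreter.AllocateTensors();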

If that fails, the only option left would be to implement the operation yourself or port an existing implementation into the TensorFlow Lite library.

Regards,

David

5,543 Views
Ramson
Contributor IV

Thanks for your suggestion, @david_piskula. Since the operator is not present in the library, I ported the existing implementation of the operator, as you suggested. Thanks again!
