How to add custom operators in tensorflow Lite?


Ramson
Contributor IV

Hi, we are trying to deploy the open-source object detection model (https://www.tensorflow.org/lite/examples/object_detection/overview) on the i.MX RT1176 EVK kit. We imported the tensorflow_lite_label_image example from SDK v2.9.0, converted the object detection model's .tflite file to a C array using xxd, and replaced the model data and model length in the model_data.h file.

The object detection model contains the custom operator TFLite_Detection_PostProcess, as shown below in the image.

Ramson_0-1626930907325.png

How do I add the custom operator to the tflite::MutableOpResolver?

Since it is not registered, we are getting the following errors:

ERROR: Encountered unresolved custom op: TFLite_Detection_PostProcess.

ERROR: Node number 63 (TFLite_Detection_PostProcess) failed to prepare.

Failed to allocate tensors!

Failed initializing model

Please help. Thanks in advance.

Regards,

Ramson

1 Solution
david_piskula
NXP Employee

Hello Ramson,

My first suggestion would be to move to SDK 2.10, as it contains all of the latest updates. With 2.10 we moved from TF Lite to TF Lite Micro, which is better optimized for MCUs. The inference engine still supports TF Lite models; it's just the computations and the library that are specifically optimized for ARM MCUs.

Next, switch to the AllOpsResolver first to make sure the operation is actually supported by TF Lite (Micro). If that works, you can open the ops .cpp file in the source/model folder, register all the required ops, and remove the unnecessary ones.

If that fails, the only option left would be to implement or port an existing implementation of the operation into the tensorflow lite library.

Regards,

David


3 Replies
MarcinChelminsk
Contributor IV

@david_piskula @Ramson 

My suggested solution:

  • In source/model/model_mobilenet_ops.cpp, add the include:

```cpp
#include "tensorflow/lite/kernels/custom_ops_register.h"
```

and register the custom operator straight after the existing resolver.AddBuiltin() calls:

```cpp
resolver.AddCustom("TFLite_Detection_PostProcess",
                   tflite::ops::custom::Register_TFLite_Detection_PostProcess());
```

  • In eiq/tensorflow-lite/tensorflow/lite/kernels/custom_ops_register.h, declare the existing post-processing kernel and add a wrapper that exposes it under the name the model expects:

```cpp
TfLiteRegistration* Register_DETECTION_POSTPROCESS();
TfLiteRegistration* Register_TFLite_Detection_PostProcess() {
  return Register_DETECTION_POSTPROCESS();
}
```

Same setup as in the first post, i.e. i.MXRT1170-EVK, SDK 2.9.0, tensorflow_lite_label_image, model from https://www.tensorflow.org/lite/examples/object_detection/overview

Ramson
Contributor IV

Thanks for your suggestion, @david_piskula. Since the operator is not present in the library, I ported an existing implementation of it, as you suggested. Thanks again.
