eIQ Glow - error in compilation of TensorFlow Lite model


a_mk
Contributor I

I am trying to compile a MobileNet model (TensorFlow Lite, integer quantized) using the command below:

 

C:\nxp\Glow\bin\model-compiler.exe -model=vww_96_int8.tflite -emit-bundle=vww_model -backend=CPU -target=arm -mcpu=cortex-m7 -float-abi=hard -use-cmsis

 

However, I am getting an error which reads:

Error.cpp:123] exitOnError(Error) got an unexpected ErrorValue:
Error message: TensorFlowLite: Operator 'MEAN' modifies the output type registered in the model!

How can I get around this?

2 Replies

Bio_TICFSL
NXP TechSupport

Hello,

The error message suggests you will need custom implementations for the following operations:

TensorListFromTensor, TensorListReserve, or others. Note that:

  • You can try converting your model with the TFLITE_BUILTINS and SELECT_TF_OPS flags to reduce the number of ops that need custom implementations (only a subset of ops can be handled by this method; others may still require a custom implementation).
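The suggested flags are set on the converter's `target_spec.supported_ops`. A minimal sketch of such a conversion, using a tiny stand-in Keras model (your actual MobileNet/VWW model would replace it; the output filename is also illustrative):

```python
import tensorflow as tf

# Stand-in model; GlobalAveragePooling2D lowers to a MEAN op in TFLite,
# which is the op the original error complains about.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(96, 96, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Allow built-in TFLite ops plus selected TensorFlow ops:
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()  # returns the flatbuffer as bytes

with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)
```

For a fully integer-quantized model you would additionally set `converter.optimizations` and provide a `representative_dataset`; the sketch above only shows where the op-set flags go.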

 

Regards,

 


a_mk
Contributor I

I have tried converting the model with the TFLITE_BUILTINS and SELECT_TF_OPS flags, but the error persists. I have also tried setting converter.allow_custom_ops = True.

As a side note, I am able to run inference on the TFLite model using the Python API tf.lite.Interpreter, as shown here: https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python
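For reference, inference along those lines looks like the sketch below. A tiny stand-in model is converted inline so the example is self-contained; in practice you would pass model_path="vww_96_int8.tflite" instead of model_content:

```python
import numpy as np
import tensorflow as tf

# Stand-in model, converted inline only so this snippet runs on its own.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the model's expected shape and dtype.
input_data = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()

output = interpreter.get_tensor(output_details[0]["index"])
```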
