eIQ Glow - error in compilation of TensorFlow Lite model



728 views
a_mk
Contributor I

I am trying to compile a MobileNet model (TensorFlow Lite, integer quantized) using the command below:

 

C:\nxp\Glow\bin\model-compiler.exe -model=vww_96_int8.tflite -emit-bundle=vww_model -backend=CPU -target=arm -mcpu=cortex-m7 -float-abi=hard -use-cmsis

 

However, I am getting an error that reads:

Error.cpp:123] exitOnError(Error) got an unexpected ErrorValue:
Error message: TensorFlowLite: Operator 'MEAN' modifies the output type registered in the model!

How can I get around this?

Tags (2)
0 Kudos
Reply
2 Replies

641 views
Bio_TICFSL
NXP TechSupport

Hello,

The error message suggests you will need custom implementations for the following operations: TensorListFromTensor, TensorListReserve, or others. That said:

  • You can try converting your model with the TFLITE_BUILTINS and SELECT_TF_OPS flags to reduce the number of ops needing custom implementations (only a subset of ops can be covered this way; others may still require a custom implementation).
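The flag combination above can be sketched with the TFLite converter's Python API. The toy Keras model here is a hypothetical stand-in for the actual VWW MobileNet (a GlobalAveragePooling2D layer lowers to the MEAN op mentioned in the error); the integer-quantization settings of the real model are omitted for brevity:

```python
import tensorflow as tf

# Hypothetical stand-in model; GlobalAveragePooling2D lowers to MEAN in TFLite.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Allow builtin TFLite ops, falling back to select TensorFlow ops
# for anything the builtin set cannot express.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()

with open("vww_96_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

Note that ops converted via SELECT_TF_OPS require the TensorFlow op resolver at runtime, so whether the Glow model-compiler accepts the resulting file still depends on its own op support.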

 

Regards

 

0 Kudos
Reply

619 views
a_mk
Contributor I

I have tried converting the model using the TFLITE_BUILTINS and SELECT_TF_OPS flags. The error still appears. I have also tried setting converter.allow_custom_ops = True.

As a side note, I am able to run inference on the TFLite model using the Python API tf.lite.Interpreter, as shown here: https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python
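For reference, the inference path from the linked guide looks like the sketch below. A hypothetical tiny Keras model is converted in-memory so the example is self-contained; the interpreter calls are the same regardless of the actual network:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in model converted in-memory, so no model file is needed.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(96, 96, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Load the model and run one inference, following the TensorFlow guide.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

x = np.random.rand(1, 96, 96, 3).astype(np.float32)
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
y = interpreter.get_tensor(out["index"])
```

That the Python interpreter runs the model fine suggests the issue is specific to how Glow's model-compiler validates the MEAN op's registered output type, not to the model itself.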

0 Kudos
Reply