TensorFlow SavedModel to TFLite conversion that supports the i.MX 8M Plus NPU

subbareddyai
Contributor II

Hi,

I would like to convert a TensorFlow SavedModel to a TFLite model that runs on the i.MX 8M Plus NPU.

I followed the steps below, with no success.

python models/research/object_detection/exporter_main_v2.py \
--input_type image_tensor \
--pipeline_config_path training_dir/pipeline.config \
--trained_checkpoint_dir training_dir/checkpoint \
--output_directory exported-model
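
One thing worth double-checking at this step: for SSD models, the TF2 Object Detection API provides a dedicated export script, export_tflite_graph_tf2.py, for TFLite-bound exports, because the SavedModel produced by exporter_main_v2.py contains ops the TFLite converter typically cannot handle. A sketch of the alternative export, assuming the same training directory layout as above:

python models/research/object_detection/export_tflite_graph_tf2.py \
--pipeline_config_path training_dir/pipeline.config \
--trained_checkpoint_dir training_dir/checkpoint \
--output_directory exported-model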

I also made sure it is a fixed-shape model:

model {
  ssd {
    image_resizer {
      fixed_shape_resizer {
        height: 320
        width: 320
      }
    }
  }
}
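
To confirm the export actually produced a fixed input shape, a minimal check on the SavedModel signature (assuming the default serving signature key) can be run before conversion:

import tensorflow as tf

# Load the exported SavedModel and inspect its serving signature.
model = tf.saved_model.load("exported-model/saved_model")
signature = model.signatures["serving_default"]
# Expect a fully defined shape such as (1, 320, 320, 3); any None dimension
# means the fixed_shape_resizer setting did not take effect.
print(signature.structured_input_signature)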

 

I also ensured TFLite-compatible ops:

ssd {
  feature_extractor {
    type: "ssd_mobilenet_v2_fpn_keras"
    use_depthwise: true
  }
  box_predictor {
    convolutional_box_predictor {
      use_depthwise: true
    }
  }
}

TFLite conversion script:

import tensorflow as tf
import pathlib

saved_model_dir = "exported-model/saved_model"

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Provide a representative dataset for INT8 calibration
def representative_data_gen():
    data_dir = pathlib.Path("dataset/val")
    for img_path in data_dir.glob("*.jpg"):
        img = tf.keras.preprocessing.image.load_img(img_path, target_size=(320, 320))
        img = tf.keras.preprocessing.image.img_to_array(img)
        img = img[tf.newaxis, ...] / 255.0
        yield [img.astype("float32")]

converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()

with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
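
After conversion it is worth verifying that the model interfaces really ended up as uint8, since the NPU delegate expects fully quantized tensors; a quick check with the TFLite interpreter:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_int8.tflite")
interpreter.allocate_tensors()
# Both dtypes should report uint8 if the converter settings took effect.
print(interpreter.get_input_details()[0]["dtype"])
print(interpreter.get_output_details()[0]["dtype"])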

 

Command to run inference with model_int8.tflite:

$ USE_GPU_INFERENCE=0 \
python3 label_image.py -m model_int8.tflite \
-e /usr/lib/liblitert_vx_delegate.so 
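
The same delegate can also be loaded explicitly from Python, which makes delegate failures easier to see than in a script that silently falls back to CPU; a sketch with tflite_runtime (the delegate path is copied from the command above and may differ between BSP releases):

import tflite_runtime.interpreter as tflite

# Delegate path assumed from the command above; older BSPs ship the
# VX delegate as /usr/lib/libvx_delegate.so instead.
delegate = tflite.load_delegate("/usr/lib/liblitert_vx_delegate.so")
interpreter = tflite.Interpreter(model_path="model_int8.tflite",
                                 experimental_delegates=[delegate])
interpreter.allocate_tensors()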

 

Please let me know whether these steps are correct.

All of the above steps came from ChatGPT.

4 Replies

pengyong_zhang
NXP Employee

Hi @subbareddyai 

I will be on vacation for a week and may not reply during this period. If you are in a hurry, you can create a new ticket and my other colleagues will support you. Thanks for your understanding.

B.R

subbareddyai
Contributor II

Hi,

Thanks for the reply.

I will try to create a new ticket on this; meanwhile, after your vacation, please also support here.

 

Thanks and Regards,

GV Subba Reddy

 

pengyong_zhang
NXP Employee

subbareddyai
Contributor II

I tried converting using the eIQ Toolkit and the TFLite converter. The conversion with quantization to INT8 succeeds (refer to the attached image for the quantization settings I used), but when I run inference with the TFLite model, it reports the error below:

Failed to load delegate or model with delegate. Trying without delegate. Error: Didn't find op for builtin opcode 'EXP' version '2'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model?
Registration failed.

Traceback (most recent call last):
  File "inference_quant_int8.py", line 50, in <module>
    interpreter = Interpreter(model_path=MODEL_PATH, experimental_delegates=[delegate])
  File "/home/root/miniforge3/envs/tflite/lib/python3.8/site-packages/tflite_runtime/interpreter.py", line 455, in __init__
    _interpreter_wrapper.CreateWrapperFromFile(
ValueError: Didn't find op for builtin opcode 'EXP' version '2'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model?
Registration failed.
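
That error usually means a version mismatch rather than a broken model: the TensorFlow used for conversion emitted the EXP builtin at op version 2, while the tflite_runtime on the board only knows version 1. The usual ways out are upgrading the runtime on the target or pinning the conversion-side TensorFlow closer to the BSP's runtime. A sketch for comparing the two versions (which exact versions pair up depends on the BSP release, so treat this as a starting point):

# On the conversion host:
import tensorflow as tf
print(tf.__version__)

# On the i.MX 8M Plus target:
import tflite_runtime
print(tflite_runtime.__version__)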

 
