How can I get the full integer quantization tflite model? (for object detection)


756 Views
dpcivl
Contributor III

I'm working on an object detection project with the i.MX 8M Plus EVK board.

I'm struggling with the problem that the board's NPU can't compute float32 data.
(A float32 model can't use the VX delegate, only XNNPACK.)

I tried to convert a pb file to a tflite file, but it didn't work well; the accuracy was very bad. (The pb file comes from a Kaggle pre-trained model.)

Furthermore, when I made a tflite model with full integer quantization, its class information was broken.

I also tried the eIQ Toolkit, and the resulting model had two output tensors, which I don't know how to handle.

I want a fully integer-quantized model so that I can use the VX delegate for real-time object detection.

How can I get it?
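
(For reference, this is how I intend to run the quantized model on the board; a minimal sketch, assuming the tflite_runtime package from the NXP BSP and the usual VX delegate path /usr/lib/libvx_delegate.so, which may differ on other images. The model file name is a placeholder.)

# Minimal sketch: run a fully integer-quantized .tflite model on the NPU
# via the VX delegate. Assumes tflite_runtime from the NXP BSP and the
# usual delegate path /usr/lib/libvx_delegate.so (may differ per image).
import numpy as np
import tflite_runtime.interpreter as tflite

delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")
interpreter = tflite.Interpreter(
    model_path="detect_int8.tflite",   # placeholder file name
    experimental_delegates=[delegate],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A full-integer model expects uint8/int8 input, e.g. a 300x300 RGB frame.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

outputs = [interpreter.get_tensor(d["index"]) for d in output_details]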

2 Replies

733 Views
Zhiming_Liu
NXP TechSupport

For the pb -> tflite conversion, I recommend first converting the pb file to ONNX using a Python script.
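
For example, with the tf2onnx package (a minimal sketch, assuming a frozen graph; the tensor names and file paths below are placeholders you must replace with your model's real ones):

import tensorflow as tf
import tf2onnx

# Load the frozen graph (.pb). The path is a placeholder.
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Convert to ONNX. "image_tensor:0" / "detections:0" are placeholder
# tensor names; use your model's actual input/output names.
model_proto, _ = tf2onnx.convert.from_graph_def(
    graph_def,
    input_names=["image_tensor:0"],
    output_names=["detections:0"],
    opset=13,
    output_path="model.onnx",
)

# Equivalent command line:
# python -m tf2onnx.convert --graphdef frozen_model.pb \
#     --inputs image_tensor:0 --outputs detections:0 --output model.onnx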

Then open the ONNX file in eIQ, convert it to a tflite model, and set uint8/int8 quantization (for the NPU, you should use int8/uint8, not 32-bit float).

You need to import a calibration dataset (at least 300+ images) when you set the quantization parameters.
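
If you prefer a script over the eIQ GUI, the same full-integer quantization can also be done with TensorFlow's TFLiteConverter; a minimal sketch, assuming a SavedModel directory and a folder of calibration images (paths, input size, and preprocessing are illustrative and must match your model):

import glob
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Feed a few hundred preprocessed calibration images (300+ recommended).
    # Inputs stay float32 here; the converter uses them only for calibration.
    for path in glob.glob("calib_images/*.jpg")[:300]:
        img = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
        img = tf.image.resize(img, (300, 300))
        img = tf.cast(img, tf.float32) / 255.0   # model's expected scaling
        yield [np.expand_dims(img.numpy(), axis=0)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8    # NPU-friendly I/O types
converter.inference_output_type = tf.uint8
open("detect_int8.tflite", "wb").write(converter.convert())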


730 Views
dpcivl
Contributor III

Thank you for your solution.

But I'm still lost about the two output tensors (1x1917x91 and 1x1917x1x4).

I'm used to models with four output tensors, as in many examples (boxes, classes, scores, num_detections).

I don't know how to deal with this problem.

Is there any documentation on how to handle the two output tensors of the MobileNet-SSD v2 model?
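
From what I have read, these look like the raw SSD head outputs (per-anchor class scores and box encodings) produced when the TFLite_Detection_PostProcess op is absent, so they would have to be decoded manually. A minimal sketch of my understanding, assuming the standard TF Object Detection API box coder (scale factors 10, 10, 5, 5) and that the 1917 anchor boxes are available separately; all names are illustrative:

import numpy as np

# Minimal sketch: decode raw SSD outputs into detections.
#   scores_raw: (1, 1917, 91)   per-anchor class logits
#   boxes_raw:  (1, 1917, 1, 4) box encodings (ty, tx, th, tw)
#   anchors:    (1917, 4)       anchor boxes (ycenter, xcenter, h, w),
#                               assumed exported separately from the model
# If the model is fully quantized, dequantize each output first:
#   float_val = (int_val - zero_point) * scale
def decode(scores_raw, boxes_raw, anchors, score_thresh=0.5):
    scores = 1.0 / (1.0 + np.exp(-scores_raw[0]))   # sigmoid -> (1917, 91)
    ty, tx, th, tw = boxes_raw[0, :, 0, :].T        # each (1917,)
    ya, xa, ha, wa = anchors.T                      # each (1917,)

    # Invert the TF OD API box coder (scale factors 10, 10, 5, 5).
    ycenter = ty / 10.0 * ha + ya
    xcenter = tx / 10.0 * wa + xa
    h = np.exp(th / 5.0) * ha
    w = np.exp(tw / 5.0) * wa
    boxes = np.stack([ycenter - h / 2, xcenter - w / 2,
                      ycenter + h / 2, xcenter + w / 2], axis=-1)

    # Column 0 is the background class; keep the best foreground class.
    conf = scores[:, 1:].max(axis=-1)
    cls = scores[:, 1:].argmax(axis=-1) + 1
    keep = conf >= score_thresh
    # Non-maximum suppression still has to be applied to the result.
    return boxes[keep], cls[keep], conf[keep]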
