Issue with Fully Integer Quantization - DataStore Not Supported

hamedtaheri
Contributor I

Dear i.MX Support Team,

I am encountering an issue when attempting to quantize my model using the eIQ converter (Windows version). Despite providing 100 calibration images in the imgs folder, the converter appears to ignore them and falls back to a white-noise dataset; the output states that datastore datasets are not supported. Below is the command I used and the corresponding output:

 

C:\nxp\eIQ_Toolkit_v1.11.4>eiq-converter --plugin eiq-converter-tflite --source workspace\models\deeplab_v3\best_model.h5 --dest workspace\models\deeplab_v3\best_model.tflite --default_shape 1,256,256,3 --quantize --quantize_format uint8 --quant_normalization signed --samples imgs


eiq-converter 0.0.0
Using Converter Plugin : eiq-converter-tflite
We do not support datastore datasets currently
Using white noise dataset generation
2024-06-05 16:20:32.735326: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE SSE2 SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
C:\nxp\eIQ_Toolkit_v1.11.4\python\Lib\site-packages\tensorflow\lite\python\convert.py:947: UserWarning: Statistics for quantized inputs were expected, but not specified; continuing anyway.
warnings.warn(
2024-06-05 16:20:35.228063: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:378] Ignored output_format.
2024-06-05 16:20:35.228221: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:381] Ignored drop_control_dependency.
2024-06-05 16:20:35.229380: I tensorflow/cc/saved_model/reader.cc:83] Reading SavedModel from: C:\Users\Hamed\AppData\Local\Temp\tmpnh78xhxx
2024-06-05 16:20:35.234016: I tensorflow/cc/saved_model/reader.cc:51] Reading meta graph with tags { serve }
2024-06-05 16:20:35.234215: I tensorflow/cc/saved_model/reader.cc:146] Reading SavedModel debug info (if present) from: C:\Users\Hamed\AppData\Local\Temp\tmpnh78xhxx
2024-06-05 16:20:35.242521: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:382] MLIR V1 optimization pass is not enabled
2024-06-05 16:20:35.244744: I tensorflow/cc/saved_model/loader.cc:233] Restoring SavedModel bundle.
2024-06-05 16:20:35.302547: I tensorflow/cc/saved_model/loader.cc:217] Running initialization op on SavedModel bundle at path: C:\Users\Hamed\AppData\Local\Temp\tmpnh78xhxx
2024-06-05 16:20:35.327293: I tensorflow/cc/saved_model/loader.cc:316] SavedModel load for tags { serve }; Status: success: OK. Took 97907 microseconds.
2024-06-05 16:20:35.357246: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2024-06-05 16:20:35.429021: I tensorflow/compiler/mlir/lite/flatbuffer_export.cc:2245] Estimated count of arithmetic ops: 4.502 G ops, equivalently 2.251 G MACs
fully_quantize: 0, inference_type: 6, input_inference_type: FLOAT32, output_inference_type: FLOAT32
2024-06-05 16:20:36.773066: I tensorflow/compiler/mlir/lite/flatbuffer_export.cc:2245] Estimated count of arithmetic ops: 4.502 G ops, equivalently 2.251 G MACs
SUCCESS: Converted
Source: workspace\models\deeplab_v3\best_model.h5
Destination: workspace\models\deeplab_v3\best_model.tflite

Given that datastore datasets are not supported, how can I perform fully integer quantization using my calibration data? Is there an alternative method or workaround to ensure that my provided images are utilized for calibration?
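For reference, one workaround I am considering is to call the TensorFlow Lite converter directly from Python with a representative dataset built from my images, bypassing the CLI. This is only a sketch based on the public TFLite post-training quantization API; the paths, the 256x256 input size, and the signed [-1, 1] normalization are taken from my command above and would need to match the training pipeline:

# Sketch only: full-integer post-training quantization via the TFLite
# Python API, using the images in imgs as the representative dataset.
# Paths, input shape (1,256,256,3) and the signed [-1, 1] normalization
# are assumptions taken from the command above; adjust to match training.
import glob

import tensorflow as tf

def representative_dataset():
    for path in glob.glob("imgs/*"):
        img = tf.io.decode_image(tf.io.read_file(path), channels=3,
                                 expand_animations=False)
        img = tf.image.resize(img, (256, 256))
        img = tf.cast(img, tf.float32) / 127.5 - 1.0  # signed normalization
        yield [tf.expand_dims(img, axis=0).numpy()]

model = tf.keras.models.load_model(r"workspace\models\deeplab_v3\best_model.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8   # matches --quantize_format uint8
converter.inference_output_type = tf.uint8

with open(r"workspace\models\deeplab_v3\best_model.tflite", "wb") as f:
    f.write(converter.convert())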

Thanks

Zhiming_Liu
NXP TechSupport

Hi @hamedtaheri 

You can use the Model Tool GUI in the eIQ Portal to perform quantization with your dataset.

hamedtaheri
Contributor I

Hi @Zhiming_Liu 

Thank you for the information. I successfully converted my file.

The only issue is that my model, which works perfectly in the .h5 format, does not segment anything after conversion to a fully integer TFLite model.
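For what it is worth, here is how I am sanity-checking the quantized model on the PC, in case the empty masks come from a scaling mismatch. This is a sketch assuming TensorFlow is installed; it reads the input/output scale and zero point from the interpreter, quantizes a normalized input accordingly, and dequantizes the output before the argmax:

# Debugging sketch: verify the quantized model's (scale, zero_point) values
# and feed a correctly quantized input. The preprocessing of float_img must
# match the normalization used for the representative dataset (assumption).
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path=r"workspace\models\deeplab_v3\best_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
print("input (scale, zero_point): ", inp["quantization"])
print("output (scale, zero_point):", out["quantization"])

float_img = np.zeros((1, 256, 256, 3), np.float32)  # placeholder, normalized image
scale, zero_point = inp["quantization"]
quantized = np.clip(float_img / scale + zero_point, 0, 255).astype(inp["dtype"])

interpreter.set_tensor(inp["index"], quantized)
interpreter.invoke()
raw = interpreter.get_tensor(out["index"])
o_scale, o_zero = out["quantization"]
logits = (raw.astype(np.float32) - o_zero) * o_scale  # dequantize before argmax
mask = np.argmax(logits, axis=-1)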

I also tried converting it to FP16, but surprisingly, on my i.MX 8M Plus the inference time increased with FP16 instead of decreasing.

Could you please let me know whether the i.MX 8M Plus supports FP16?

Thanks,

Zhiming_Liu
NXP TechSupport

Hi @hamedtaheri 

Could you please let me know whether the i.MX 8M Plus supports FP16?

--> Yes, FP16 is supported, but int8/uint8 quantization gives the best performance.
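Note that FP16 is not accelerated the way int8 is on this device, so an FP16 model can easily run slower. To get representative numbers, also make sure the quantized model is actually dispatched to the NPU. A minimal sketch, assuming the VX delegate library sits at its usual location in the Yocto BSP image:

# Sketch: run the quantized model on the i.MX 8M Plus NPU through the
# VX delegate. The library path is the usual one in the Yocto BSP images
# (assumption; adjust if your image installs it elsewhere).
import tensorflow as tf  # tflite_runtime.interpreter works the same way on the board

delegate = tf.lite.experimental.load_delegate("/usr/lib/libvx_delegate.so")
interpreter = tf.lite.Interpreter(
    model_path="best_model.tflite",
    experimental_delegates=[delegate],
)
interpreter.allocate_tensors()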

 

Best Regards

Zhiming