eiq glow quantization


eiq glow quantization

598 Views
flee-elemind
Contributor I

We are using the eIQ toolkit to compile some simple machine learning models. For space and inference-time reasons we create a simple TFLite model without any quantization, then quantize it to integer operations using the Glow model-profiler and compiler.

However, the Glow compiler docs state that everything is quantized as int8, while the generated bundle accepts an unsigned uint8_t input. Does this mean the model must be quantized to uint8 to use the range of the incoming data properly?

Is there a specific recommended quantization scheme in the model profiler (int8 symmetric, uint8 symmetric, etc.)?

Is there a specific recommended way of matching the input datatype (floats) to the model's expected input (integers)?

 


Labels (1)
0 Kudos
Reply
2 Replies

522 Views
Bio_TICFSL
NXP TechSupport

Hello,

Yes, the model needs to be quantized, and it uses uint8 symmetric quantization.

Regards

0 Kudos
Reply

551 Views
flee-elemind
Contributor I

fleeelemind_0-1691700918560.png

Attached is the generated .dot file for the beginning of our model.

You can see that the placeholder input accepts floats and immediately quantizes them with parameters [S:0.065829024 O:2] before feeding the rest of the model. Do I need to quantize the inputs into the placeholder myself, or will that node take care of it for me?

In that case, how do I pass floating-point numbers to the inference function? Just memcpy them in and let the graph handle the conversion?

When I try it this way:

memcpy(bundleInpAddr, test_input, sizeof(test_input));

where test_input is a snippet of data, the inference output does not seem to change (it always returns the same probabilities) regardless of the input data.

Please advise

0 Kudos
Reply