eiq glow quantization


599 Views
flee-elemind
Contributor I

We are using the eIQ toolkit to compile some simple machine learning models. For space and inference-time reasons, we create a simple TFLite model without any quantization, then quantize it to integer operations using the Glow model-profiler and compiler.

However, it is notable that the Glow compiler docs claim to quantize everything as int8, while the generated bundle accepts an unsigned uint8_t input. Does this mean the model must be quantized to uint8 to use the range of the incoming data properly?

Is there a specific recommended quantization scheme in the model profiler (int8 symmetric, uint8 symmetric, etc.)?

Is there a specific recommended way of matching the input datatype (floats) to the model's expected inputs (integers)?

 


Labels (1)
0 Kudos
Reply
2 Replies

523 Views
Bio_TICFSL
NXP TechSupport

Hello,

Yes, the model needs to be quantized, and it uses uint8 symmetric quantization.

Regards

0 Kudos
Reply

552 Views
flee-elemind
Contributor I

fleeelemind_0-1691700918560.png

Attached is the generated .dot file for the beginning of our model.

You can see that the placeholder input accepts floats and immediately quantizes them with parameters [S:0.065829024 O:2] to feed into the rest of the model. Do I need to quantize the inputs before the placeholder, or will that node take care of it for me?

In that case, how do I input floating point numbers to the inference function? Just memcpy them in and let it take care of the conversion?

When I copy the data in like this:

memcpy(bundleInpAddr, test_input, sizeof(test_input));

where test_input is a snippet of data, the inference output never changes (it always returns the same probabilities) regardless of the input data.

Please advise

0 件の賞賛
返信