ONNX quantised Model

mahanad
Contributor I

Hi,

How can I reduce the inference time for an ONNX model? It is currently taking roughly 6 seconds per run.

I tried to quantise the model using the eIQ Toolkit, but when I try to load the quantised model it fails with the following error:

terminate called after throwing an instance of 'Ort::Exception'
  what(): Fatal error: QLinearAdd is not a registered function/op
Aborted

 

[Attachment: onnx.png]

Thanks in advance.
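For reference, QLinearAdd is not in the standard ONNX operator set; it belongs to the com.microsoft contrib-op domain, so an onnxruntime build compiled without contrib ops cannot load a model that uses it. One possible workaround is to re-quantise in QDQ format, which sticks to standard ONNX operators. Below is a minimal sketch using onnxruntime's Python quantisation tooling rather than the eIQ Toolkit; the file paths, input name, and input shape are hypothetical, and the random calibration data is only a stand-in for real samples:

import numpy as np
from onnxruntime.quantization import (CalibrationDataReader, QuantFormat,
                                      QuantType, quantize_static)

class RandomCalibrationReader(CalibrationDataReader):
    # Stand-in calibration feed; real input samples give far better accuracy.
    def __init__(self, input_name, shape, num_samples=8):
        self._samples = ({input_name: np.random.rand(*shape).astype(np.float32)}
                         for _ in range(num_samples))

    def get_next(self):
        return next(self._samples, None)

quantize_static(
    "model_fp32.onnx",      # hypothetical float model path
    "model_int8_qdq.onnx",  # hypothetical quantised output path
    RandomCalibrationReader("input", (1, 3, 224, 224)),  # assumed name/shape
    quant_format=QuantFormat.QDQ,  # QDQ emits only standard ONNX ops,
                                   # avoiding com.microsoft ops like QLinearAdd
    activation_type=QuantType.QUInt8,
    weight_type=QuantType.QInt8,
)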

2 Replies

mahanad
Contributor I

Can the VX delegate be used with ONNX? According to the documentation, it is only for TFLite (I might be wrong).
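For ONNX models, acceleration in ONNX Runtime goes through execution providers rather than TFLite delegates. As a quick sketch, you can check which providers a given onnxruntime build actually ships (the model path here is hypothetical):

import onnxruntime as ort

# On a CPU-only build this prints ['CPUExecutionProvider'];
# accelerated builds list additional providers first.
print(ort.get_available_providers())

# Create a session that falls back through whatever is available.
sess = ort.InferenceSession("model_int8_qdq.onnx",
                            providers=ort.get_available_providers())

If only CPUExecutionProvider shows up, the build has no hardware acceleration for ONNX, which could explain part of the long inference time.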


Zhiming_Liu
NXP TechSupport

You need to use the VX delegate in your inference code.

Please see this guide:

https://www.nxp.com.cn/docs/en/user-guide/IMX-MACHINE-LEARNING-UG.pdf
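Note that the VX delegate is loaded as a TensorFlow Lite external delegate, so it applies to .tflite models. A minimal sketch of wiring it up, assuming a quantised TFLite model (the model file name is hypothetical; the delegate library path follows the guide but can vary between BSP releases):

import numpy as np
import tflite_runtime.interpreter as tflite

# Load the VX delegate as an external delegate.
delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")

interpreter = tflite.Interpreter(model_path="model_int8.tflite",  # hypothetical
                                 experimental_delegates=[delegate])
interpreter.allocate_tensors()

# Run one inference with a zero tensor just to exercise the delegate.
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()

out = interpreter.get_output_details()[0]
print(interpreter.get_tensor(out["index"]).shape)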
