Error when converting Pytorch model to DeepViewRT model

Anonymous123
Contributor I

Hi community,

I'm seeking your help with the following issue.

I installed eIQ Toolkit 1.7.3 on Windows 10 and I want to convert my PyTorch model to a DeepViewRT (.rtm) model, so that I can eventually deploy the .rtm model on the i.MX 8M Plus processor.

Following this guide, I converted the PyTorch model to an ONNX model, and then, as described in the guide, I tried to convert the ONNX model to a quantized ONNX model.
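For reference, the export step looked roughly like the minimal sketch below; the tiny placeholder network, input shape, batch size, and file name are stand-ins for my actual model:

# Rough sketch of the PyTorch -> ONNX export step.
# The network, input shape, batch size, and file name are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(                     # stand-in for the real model
    nn.Conv2d(3, 8, kernel_size=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

dummy_input = torch.randn(8, 3, 224, 224)  # batch size 8 used here as an example
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)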

However, the conversion process terminated with an unhelpful error message: "object of type 'NoneType' has no len()".

I used the Model Tool from the eIQ Portal to perform the conversion.

 

 Thanks in advance and best regards.


Bio_TICFSL
NXP TechSupport

Hello,

 

It looks like the issue is related to your model's size. You can download ONNX Runtime and evaluate the model with it:

https://github.com/microsoft/onnxruntime.git
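
A minimal sketch of such a check with ONNX Runtime follows; the file name and input shape are assumptions carried over from the export step, not values from your model:

# Load the exported ONNX model, print its declared input shape (which shows
# whether the batch dimension is fixed), and run one inference as a sanity check.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)

dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = sess.run(None, {inp.name: dummy})
print(outputs[0].shape)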

 

Regards


Anonymous123
Contributor I

Hi @Bio_TICFSL,

Actually, when I changed the batch size to 1 when exporting from PyTorch to ONNX, the conversion completed successfully.
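
Concretely, the export that worked looks roughly like this sketch; the placeholder network, input shape, and file name stand in for my actual model:

# Export with the batch dimension of the dummy input fixed to 1, which is the
# change that made the ONNX -> quantized ONNX step in the eIQ Model Tool succeed.
# The network, input shape, and file name are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)  # batch size fixed to 1
torch.onnx.export(
    model,
    dummy_input,
    "model_bs1.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)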

However, I encountered this error only when converting from ONNX to quantized ONNX. For example, I managed to convert the same model along a different path (PyTorch to ONNX to TensorFlow SavedModel to quantized TensorFlow Lite) with larger batch sizes.

So how can I get rid of this error? According to the guide I mentioned, I have to convert the ONNX model to a quantized ONNX model before converting to RTM, and I can't get past this error.

Regards
