How to run a PyTorch model on i.MXRT1060 EVK using eIQ?

saurabhdesai269
Contributor I

Hello,

I am trying to run a PyTorch model on the i.MXRT1060 processor. I converted the model from PyTorch to .onnx, then to .pb, then to .tflite, and finally to a .h file to run it on the processor. However, it throws an error when I try to run the model on the board using the tensorflow_lite_label_image example from the updated eIQ examples in the i.MXRT1060 SDK. I have attached the error. Any solution/suggestion is appreciated. Thank you in advance.
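
For reference, the chain looks roughly like this (a minimal sketch rather than my exact script; the tiny model, tensor names, and paths are placeholders, and it assumes the onnx and onnx-tf packages plus TF 1.13+ are installed):

import torch
import onnx
import tensorflow as tf
from onnx_tf.backend import prepare

# 1. PyTorch -> ONNX (placeholder model; substitute the real one)
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())
model.eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"])

# 2. ONNX -> TensorFlow frozen graph (.pb) via onnx-tf
prepare(onnx.load("model.onnx")).export_graph("model.pb")

# 3. .pb -> .tflite (tensor names depend on the export; inspect the
#    graph if these don't match)
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "model.pb", input_arrays=["input"], output_arrays=["output"])
open("model.tflite", "wb").write(converter.convert())

# 4. .tflite -> .h: dump the flatbuffer as a C array, e.g. with
#    "xxd -i model.tflite > model.h" on the command line.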

6 Replies

david_piskula
NXP Employee

Hello Saurabh,

It might be worth mentioning that when I ran into a similar problem, I managed to solve it by using TF v1.13.2 for the conversion. Definitely try v1.11 first, as Anthony suggested, but if you run into issues with it, you can also try a higher version to see if that helps.

David

saurabhdesai269
Contributor I

Hey David,

I tried v1.13.2, but I get an error while converting from .pb to .tflite:

"Check failed: coords_array.data_type == ArrayDataType::kInt32 Only int32 indices are supported". Can you help me with this? The conversion works fine with v1.14.

Thanks

Saurabh Desai

david_piskula
NXP Employee

Hi Saurabh,

I found a few mentions of this issue on the web but no concrete solution. It seems to be caused by this check: tensorflow/resolve_constant_gather.cc at ae26958577cdd426ee9f7a5668619aea626f0a22 · tensorflow/tenso...

One user who also tried going PyTorch > ONNX > TFLite reported the same issue, and the advice he received was to explicitly cast his view parameters to int:

x.view(-1, self.num_flat_features(x))
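
With the explicit cast, it would look something like the line below (just a sketch of the suggested workaround; num_flat_features is that user's own helper, and I haven't verified that it fixes the conversion):

x = x.view(-1, int(self.num_flat_features(x)))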

However, he never responded, so I'm not sure whether it actually helped him.

Another thread suggested that quantization might be the root cause.

I'm afraid that 1.14 likely brought some new features that are necessary for your conversion to work, which means you'll probably have to wait for the next eIQ release, unless the integer conversion happens to help.

Just a final thought: how are you converting to tflite? Are you using only the "TFLITE_BUILTINS" flag, or "SELECT_TF_OPS" as well? "SELECT_TF_OPS" can pull in operations not supported by the eIQ tflite inference engine.
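
For reference, restricting the converter to builtin ops looks roughly like this with the TF 1.14 API (file and tensor names below are placeholders):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "model.pb", input_arrays=["input"], output_arrays=["output"])

# With only TFLITE_BUILTINS, conversion fails up front if the graph
# needs an op that has no TF Lite builtin equivalent, instead of
# pulling in select TF ops the eIQ inference engine can't execute.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]

open("model.tflite", "wb").write(converter.convert())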

David

anthony_huereca
NXP Employee

Hi Saurabh, 

  The "Didn't find op for builtin opcode" error occurs when the Tensorflow Lite inference engine cannot find that specific operand that the model is using, and doesn't then know how to map that operand to the inference engine to run. 

Some operators have been added in recent versions of TensorFlow that the current release of eIQ does not support, since it is based on TensorFlow Lite v1.11. Perhaps you are using TensorFlow 1.14 or 2.0? We'll have an updated eIQ release supporting TensorFlow 1.14 by the end of the year, which may fix this issue. In the meantime, you can try using TensorFlow v1.11 to generate the .pb file and see if that fixes the error.
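
One way to sanity-check this before flashing the board is to load the .tflite file into the desktop TensorFlow Lite interpreter from the same TensorFlow version the eIQ engine is based on; an unsupported opcode fails there too. A rough sketch (in TF 1.13+ the class is tf.lite.Interpreter, in v1.11 it lives under tf.contrib.lite, and "model.tflite" is a placeholder path):

import numpy as np
import tensorflow as tf

# Building the interpreter / allocating tensors raises if the model
# uses an opcode this TF Lite version doesn't know.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

# Run one dummy inference to confirm the whole graph executes.
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
print(interpreter.get_output_details()[0]["shape"])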

-Anthony 

saurabhdesai269
Contributor I

Hello Anthony,

Yes, I am using TensorFlow v1.14 to generate the .pb file. I tried installing v1.11, but versions below v1.13 are no longer available on pip. Is there any other way I could get an older version of TensorFlow?

Thanks,

Saurabh

david_piskula
NXP Employee

Hi Saurabh,

You can try building from source:

GitHub - tensorflow/tensorflow at r1.11 

Release TensorFlow 1.11.0 · tensorflow/tensorflow · GitHub 

Build from source on Windows  |  TensorFlow 

However, since the conversion fails with 1.13.2, I don't know whether 1.11.0 will be able to handle it either.
