eIQ-exported TFLite model: output format prevents inference from finishing correctly


1,362 views
HenryYeh
NXP Employee

We would like to deploy the TF Lite model on our edge device (i.MX 8M Plus EVK). We followed the guide for model training, but we could not get a successful result.

I used the Model Tool to check the input/output details,

and also compared them with the Google example:

 

2024-03-14 11_42_28-IMG_3263.JPG _- Photos.jpg

  

2024-03-14 11_42_47-IMG_3264.JPG ‎- Photos.jpg

  

2024-03-14 11_43_29-Chat _ iMx8 Study Regular Meeting. _ NXP _ henry.yeh_1@nxp.com _ Microsoft Teams.jpg

 

I also saw that the model exported from eIQ was converted with 'MLIR', while Google's was converted with TOCO. I am not sure whether that has any impact, or how to convert between the two.
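For context, TensorFlow 2.x converts models with the MLIR-based converter by default, which is likely why the eIQ export is labeled 'MLIR'; the legacy TOCO path was only reachable through an experimental flag in older TF releases. A minimal sketch (the tiny Keras model here is a hypothetical stand-in, not the eIQ-trained model):

```python
import tensorflow as tf

# Hypothetical stand-in model; in practice this would be the trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# TF 2.x uses the MLIR-based converter by default. Older releases allowed
# falling back to the legacy TOCO converter with:
#   converter.experimental_new_converter = False
tflite_model = converter.convert()  # serialized TFLite flatbuffer (bytes)
```

The converter choice changes how ops are lowered, which can affect the names and ordering of the exported input/output tensors, so it is worth ruling out when two models show different output fields.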

 

0 Kudos
Reply
3 Replies

1,272 views
HenryYeh
NXP Employee

I installed eIQ (1.8) and used the default model (path: workspace\models\mobilenet_ssd_v3\),
then used the Model Tool to check the input/output details:

2024-03-21 16_58_07-mobilenet_ssd_v3.jpg

But when I tried the model trained in eIQ (.tflite), it showed different output fields.

0 Kudos
Reply

1,257 views
brian14
NXP TechSupport

Hi @HenryYeh

Thank you for your reply.

In this case, you will need to export the model after the training process.

brian14_0-1711139699561.png

0 Kudos
Reply

1,320 views
brian14
NXP TechSupport

Hi @HenryYeh

Could you please point to the Google example that you used?

 

0 Kudos
Reply