eIQ-exported TFLite model: output format is not correct, so inference cannot finish


1,221 Views
HenryYeh
NXP Employee

We would like to deploy the TFLite model on our edge device (i.MX8M Plus EVK). We followed the guide for model training, but we cannot get the inference result successfully.

I've used the eIQ Model Tool to check the model's input/output details.

I also compared it with the Google example:

2024-03-14 11_42_28-IMG_3263.JPG _- Photos.jpg

2024-03-14 11_42_47-IMG_3264.JPG ‎- Photos.jpg

2024-03-14 11_43_29-Chat _ iMx8 Study Regular Meeting. _ NXP _ henry.yeh_1@nxp.com _ Microsoft Teams.jpg

I saw that the model exported from eIQ was converted with 'MLIR', while Google's was converted with TOCO. I'm not sure whether that has any impact, or how to convert between the two.
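The input/output check done in the eIQ Model Tool can also be done programmatically with the TFLite Python interpreter, which helps when comparing two models. Below is a minimal sketch assuming TensorFlow is installed; the model path in the commented call is hypothetical.

```python
# Sketch: inspect a TFLite model's input/output tensor details.

def summarize_details(details):
    """Format the list of tensor-detail dicts returned by
    Interpreter.get_input_details() / get_output_details()."""
    return [
        f"{d['name']}: shape={[int(x) for x in d['shape']]}, dtype={d['dtype'].__name__}"
        for d in details
    ]

def inspect_tflite(model_path):
    # Imported here so summarize_details stays usable without TensorFlow.
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    print("Inputs:")
    for line in summarize_details(interpreter.get_input_details()):
        print(" ", line)
    print("Outputs:")
    for line in summarize_details(interpreter.get_output_details()):
        print(" ", line)

# inspect_tflite("model.tflite")  # hypothetical path to the exported model
```

Running this on both the eIQ-exported model and the Google example should make any difference in output tensor count, shapes, or dtypes visible immediately.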

 

3 Replies

1,131 Views
HenryYeh
NXP Employee

I installed eIQ (1.8) and used the default model (path: workspace\models\mobilenet_ssd_v3\),
then used the Model Tool to check the input/output details:

2024-03-21 16_58_07-mobilenet_ssd_v3.jpg

However, when I tried the model trained in eIQ (.tflite), it shows different output fields.
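For reference, Google's SSD detection models exported with the standard TFLite detection post-process step typically emit four output tensors (boxes, classes, scores, detection count), and inference code usually expects that layout. The sketch below decodes those four tensors with NumPy; the tensor order, shapes, and threshold are assumptions based on that typical layout, not taken from the eIQ-exported model.

```python
import numpy as np

def decode_detections(boxes, classes, scores, count, score_threshold=0.5):
    """Decode the four output tensors of a typical TFLite SSD detection model:
      boxes:   [1, N, 4]  (ymin, xmin, ymax, xmax), normalized 0..1
      classes: [1, N]     class indices (as floats)
      scores:  [1, N]     confidences
      count:   [1]        number of valid detections
    Returns a list of (class_id, score, box) above the threshold."""
    results = []
    for i in range(int(count[0])):
        score = float(scores[0][i])
        if score >= score_threshold:
            results.append((int(classes[0][i]), score, boxes[0][i].tolist()))
    return results

# Example with synthetic tensors shaped like the real outputs:
boxes = np.array([[[0.1, 0.1, 0.5, 0.5], [0.2, 0.2, 0.9, 0.9]]])
classes = np.array([[0.0, 17.0]])
scores = np.array([[0.9, 0.3]])
count = np.array([2.0])
print(decode_detections(boxes, classes, scores, count))
# Only the first detection passes the 0.5 threshold.
```

If the eIQ-exported model's output fields don't match this four-tensor layout, the inference code on the i.MX8M Plus will need to be adapted to whatever signature the Model Tool actually reports.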


1,116 Views
brian14
NXP TechSupport

Hi @HenryYeh

Thank you for your reply.

In this case, you will need to export the model after the training process.

brian14_0-1711139699561.png


1,179 Views
brian14
NXP TechSupport

Hi @HenryYeh

Could you please point me to the Google example that you used?

 
