eIQ-exported TFLite model: output format differs, inference cannot finish correctly



HenryYeh
NXP Employee

We would like to deploy a TFLite model on our edge device (i.MX8M Plus EVK). We followed the guide for model training, but we cannot get a successful result at inference time.

I used the Model Tool to check the input/output details.

I also compared them with the Google example:

 

[Screenshot: IMG_3263.JPG]

  

[Screenshot: IMG_3264.JPG]

  

[Screenshot: iMX8 study meeting chat, Microsoft Teams]

 

I also saw that the model exported from eIQ was converted with 'MLIR', while Google's was converted with TOCO. I am not sure whether that has any impact, or how to convert between the two.
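For context on the output-format mismatch being discussed: Google's example SSD detection models end with the TFLite_Detection_PostProcess op, which produces four output tensors — boxes [1, N, 4], class indices [1, N], scores [1, N], and a detection count [1]. A model exported without that op will expose different output fields. The sketch below is illustrative only (the function name and threshold are made up, not part of eIQ or TFLite); it shows how the standard four-tensor layout is typically parsed with numpy:

```python
import numpy as np

def parse_ssd_outputs(boxes, classes, scores, count, score_threshold=0.5):
    """Parse the four output tensors of a TFLite SSD detection model.

    boxes:   float32 [1, N, 4] -- [ymin, xmin, ymax, xmax], normalized 0..1
    classes: float32 [1, N]    -- class indices (cast to int for label lookup)
    scores:  float32 [1, N]    -- confidence per detection
    count:   float32 [1]       -- number of valid detections
    """
    n = int(count[0])
    detections = []
    for i in range(n):
        if scores[0, i] < score_threshold:
            continue  # drop low-confidence detections
        detections.append({
            "box": boxes[0, i].tolist(),
            "class_id": int(classes[0, i]),
            "score": float(scores[0, i]),
        })
    return detections

# Synthetic tensors shaped like the detection post-process outputs:
boxes = np.array([[[0.1, 0.1, 0.5, 0.5], [0.2, 0.2, 0.6, 0.6]]], dtype=np.float32)
classes = np.array([[0.0, 15.0]], dtype=np.float32)
scores = np.array([[0.9, 0.3]], dtype=np.float32)
count = np.array([2.0], dtype=np.float32)
print(parse_ssd_outputs(boxes, classes, scores, count))
```

If the eIQ-exported model's output details (as shown in the Model Tool) do not match this four-tensor signature, the Google example's inference code will not be able to read its results, regardless of whether MLIR or TOCO performed the conversion.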

 

3 Replies

HenryYeh
NXP Employee

I installed eIQ Toolkit (1.8) and used the default model (path: workspace\models\mobilenet_ssd_v3\), then used the Model Tool to check the input/output details.

[Screenshot: mobilenet_ssd_v3]

However, when I try the model trained in eIQ (.tflite), it shows different output fields.


brian14
NXP TechSupport

Hi @HenryYeh

Thank you for your reply.

In this case, you will need to export the model after the training process.

[Screenshot: brian14_0-1711139699561.png]


brian14
NXP TechSupport

Hi @HenryYeh

Could you please point to the Google example that you used?

 
