Help Needed: Running Custom TensorFlow Lite Model using efficientnet_lite on NPU



823 Views
adarshkv
Contributor I

Hello NXP Community,

I’m working on deploying a custom TensorFlow Lite model on the i.MX8M Plus following the guidelines provided in the TensorFlow Lite Model Maker documentation. While the model works fine on the CPU, it fails to run on the NPU despite enabling libvx_delegate.so.
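For context, the delegate is typically attached on the board roughly as follows (a minimal sketch, assuming the usual eIQ delegate path `/usr/lib/libvx_delegate.so` and the `tflite_runtime` package from the BSP; the model path is a placeholder):

```python
import os

# Typical location of the VX (NPU) delegate on i.MX8M Plus eIQ images;
# adjust for your BSP (assumption).
VX_DELEGATE = "/usr/lib/libvx_delegate.so"

def existing_delegate_paths(*paths):
    """Keep only delegate paths that actually exist, so a missing
    delegate silently falls back to CPU instead of raising at load time."""
    return [p for p in paths if os.path.exists(p)]

def make_interpreter(model_path, delegate_path=VX_DELEGATE):
    """Build a TFLite interpreter, attaching the NPU delegate when present."""
    import tflite_runtime.interpreter as tflite  # ships with the eIQ BSP

    delegates = [tflite.load_delegate(p)
                 for p in existing_delegate_paths(delegate_path)]
    interpreter = tflite.Interpreter(model_path=model_path,
                                     experimental_delegates=delegates)
    interpreter.allocate_tensors()
    return interpreter
```

With the delegate attached, operators the NPU cannot handle fall back to the CPU, and the runtime log usually reports how many nodes were delegated, which helps pinpoint unsupported layers.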

Request:

  • Model Compatibility: How can I ensure my .tflite model is compatible with the i.MX8M Plus NPU?
  • Configuration: Are there specific settings needed for NPU execution?
  • Troubleshooting: What are common steps for resolving issues where a model runs on CPU but not on NPU?
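On the troubleshooting point, one common first step (a suggestion, not from this thread) is TFLite's `benchmark_model` tool, which eIQ images generally include; it reports which nodes were delegated. The binary location and model name below are assumptions:

```shell
# Baseline run on the CPU
./benchmark_model --graph=model.tflite --num_threads=4

# Same model through the VX (NPU) delegate; the log shows how many
# nodes were delegated and which ops fell back to the CPU
./benchmark_model --graph=model.tflite \
    --external_delegate_path=/usr/lib/libvx_delegate.so
```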

I’ve attached the .tflite model and detection code for reference.

Thank you for your help!

Best regards,
Adarsh K V

0 Kudos
Reply
1 Reply

784 Views
Bio_TICFSL
NXP TechSupport

Hello,

It looks like the model is not compatible with the TensorFlow Lite version included in eIQ for the i.MX8M Plus. You need to create the model with TensorFlow Lite 2.1.
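As a general note beyond the version match, the i.MX8M Plus NPU works best with a fully integer-quantized model, so regenerating the model with post-training quantization is often part of the fix. A sketch with the standard TFLite converter API (the SavedModel directory, input shape, and calibration data are placeholders; substitute real preprocessed samples):

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    """Yield a few calibration inputs so the converter can pick
    quantization ranges; replace with real preprocessed samples."""
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # placeholder
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force every op to int8 so nothing silently stays in float
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```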

Regards

0 Kudos
Reply