Help Needed: Running Custom TensorFlow Lite Model using efficientnet_lite on NPU


733 views
adarshkv
Contributor I

Hello NXP Community,

I’m working on deploying a custom TensorFlow Lite model on the i.MX8M Plus following the guidelines provided in the TensorFlow Lite Model Maker documentation. While the model works fine on the CPU, it fails to run on the NPU despite enabling libvx_delegate.so.

Request:

  • Model Compatibility: How can I ensure my .tflite model is compatible with the i.MX8M Plus NPU?
  • Configuration: Are there specific settings needed for NPU execution?
  • Troubleshooting: What are common steps for resolving issues where a model runs on CPU but not on NPU?

I’ve attached the .tflite model and detection code for reference.
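For context, the attached detection code loads the model through the VX delegate roughly like the sketch below (the model path, delegate path, and helper names here are illustrative placeholders, not the actual attached files):

```python
# Sketch only: running a .tflite model on the i.MX8M Plus NPU via the
# VX delegate. Paths and helper names are placeholders, not taken from
# the attached code.
try:
    # tflite_runtime ships in the NXP eIQ / Yocto BSP images
    import tflite_runtime.interpreter as tflite
    HAVE_TFLITE = True
except ImportError:  # e.g. on a development host without the runtime
    tflite = None
    HAVE_TFLITE = False


def make_interpreter(model_path, delegate_path="/usr/lib/libvx_delegate.so"):
    """Create an interpreter that offloads supported ops to the NPU."""
    delegate = tflite.load_delegate(delegate_path)
    return tflite.Interpreter(model_path=model_path,
                              experimental_delegates=[delegate])


def run_once(interpreter, frame):
    """Run a single inference; `frame` must match the model's input spec."""
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"], frame.astype(inp["dtype"]))
    interpreter.invoke()
    out = interpreter.get_output_details()[0]
    return interpreter.get_tensor(out["index"])
```

Without `experimental_delegates`, the same interpreter falls back to the CPU, which matches the behavior I'm seeing.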

Thank you for your help!

Best regards,
Adarsh K V

0 Kudos
1 Reply

694 views
Bio_TICFSL
NXP TechSupport

Hello,

It looks like the model is not compatible with the TensorFlow Lite version that ships with eIQ on the i.MX8M Plus. You need to create the model with TensorFlow Lite 2.1.
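Beyond the runtime version, a common reason a model runs on CPU but not on the NPU (an assumption here, not stated in the reply above) is that the NPU expects a fully integer-quantized model. A hedged sketch of full-integer post-training quantization with the TensorFlow converter, where the SavedModel path and representative dataset are placeholders:

```python
# Sketch: full-integer post-training quantization, which NPU delegates
# typically require. The SavedModel directory and representative dataset
# are placeholders, not taken from the original post.
try:
    import tensorflow as tf
    HAVE_TF = True
except ImportError:  # allow importing this sketch without TensorFlow
    tf = None
    HAVE_TF = False


def quantize_int8(saved_model_dir, representative_dataset):
    """Convert a SavedModel to a fully int8-quantized .tflite model."""
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    # Calibration data so the converter can pick quantization ranges
    converter.representative_dataset = representative_dataset
    # Force integer-only kernels so no float ops fall back to the CPU
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    return converter.convert()
```

If any op cannot be expressed as an int8 builtin, conversion fails or the delegate partitions the graph and runs the unsupported ops on the CPU, which is worth checking in the delegate's log output.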

Regards

0 Kudos