Help Needed: Running Custom TensorFlow Lite Model using efficientnet_lite on NPU

adarshkv
Contributor I

Hello NXP Community,

I’m working on deploying a custom TensorFlow Lite model (efficientnet_lite) on the i.MX 8M Plus, following the guidelines in the TensorFlow Lite Model Maker documentation. The model runs fine on the CPU, but it fails to run on the NPU even with libvx_delegate.so enabled.

Request:

  • Model Compatibility: How can I ensure my .tflite model is compatible with the i.MX8M Plus NPU?
  • Configuration: Are there specific settings needed for NPU execution?
  • Troubleshooting: What are common steps for resolving issues where a model runs on CPU but not on NPU?

I’ve attached the .tflite model and detection code for reference.
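For reference, the interpreter is created along these lines (a minimal sketch; the delegate path /usr/lib/libvx_delegate.so is the default location in the eIQ BSP, and the model filename and dummy input are placeholders):

    import numpy as np
    import tflite_runtime.interpreter as tflite

    # Load the NPU (VX) delegate shipped with the eIQ BSP
    vx_delegate = tflite.load_delegate('/usr/lib/libvx_delegate.so')

    # Attach the delegate when creating the interpreter
    interpreter = tflite.Interpreter(
        model_path='model.tflite',  # placeholder filename
        experimental_delegates=[vx_delegate])
    interpreter.allocate_tensors()

    # Run one inference with dummy data to check that the graph executes
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    dummy = np.zeros(inp['shape'], dtype=inp['dtype'])
    interpreter.set_tensor(inp['index'], dummy)
    interpreter.invoke()
    print(interpreter.get_tensor(out['index']).shape)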

Thank you for your help!

Best regards,
Adarsh K V

Bio_TICFSL
NXP TechSupport

Hello,

It looks like the model is not compatible with the TensorFlow Lite version that ships in the eIQ package for the i.MX 8M Plus; you need to create the model with TensorFlow Lite 2.1.
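Also check the quantization: the i.MX 8M Plus NPU executes integer operators, so the model should be fully integer quantized, otherwise unsupported float ops fall back to the CPU. A minimal post-training quantization sketch using the TF 2.x converter API (the saved-model path, input shape, and representative dataset below are placeholders to adapt to your own data):

    import numpy as np
    import tensorflow as tf

    def representative_dataset():
        # Placeholder: yield a few samples shaped like the real model input
        for _ in range(100):
            yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')  # placeholder path
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    # Restrict conversion to int8 builtins so every op can map onto the NPU
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    with open('model_int8.tflite', 'wb') as f:
        f.write(converter.convert())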

Regards
