Hello NXP Community,
I’m working on deploying a custom TensorFlow Lite model on the i.MX8M Plus, following the guidelines in the TensorFlow Lite Model Maker documentation. The model runs fine on the CPU, but it fails to run on the NPU even with the VX delegate (libvx_delegate.so) enabled.
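For context, here is roughly how I load the delegate and run inference. This is a minimal sketch using tflite_runtime; the delegate path /usr/lib/libvx_delegate.so and the model filename are from my setup:

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Load the VX delegate shipped with the i.MX BSP
# (path assumed: /usr/lib/libvx_delegate.so on my image).
vx_delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")

# Attach the delegate so supported ops are offloaded to the NPU.
interpreter = tflite.Interpreter(
    model_path="model.tflite",
    experimental_delegates=[vx_delegate],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run once with a dummy input matching the model's shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))
```

The same code runs correctly on the CPU once I drop the experimental_delegates argument.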
Request:
- Model Compatibility: How can I ensure my .tflite model is compatible with the i.MX8M Plus NPU? (A sketch of my current export settings follows this list.)
- Configuration: Are there specific settings needed for NPU execution?
- Troubleshooting: What are common steps for resolving issues where a model runs on CPU but not on NPU?
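For the compatibility question above, here is a minimal sketch of how I currently export the model with full-integer quantization in Model Maker, since my understanding is that the NPU works best with fully int8 models. The dataset paths, label map, and training settings are placeholders from my project:

```python
from tflite_model_maker import object_detector
from tflite_model_maker.config import QuantizationConfig

# Dataset paths and label map are placeholders from my project layout.
train_data = object_detector.DataLoader.from_pascal_voc(
    images_dir="images",
    annotations_dir="annotations",
    label_map={1: "object"},
)

# Train a small EfficientDet-Lite0 model (settings abbreviated).
model = object_detector.create(
    train_data,
    model_spec=object_detector.EfficientDetLite0Spec(),
    epochs=50,
)

# Full-integer quantization with a representative dataset, which I
# understand the NPU needs for int8 execution.
quant_config = QuantizationConfig.for_int8(representative_data=train_data)
model.export(
    export_dir=".",
    tflite_filename="model_int8.tflite",
    quantization_config=quant_config,
)
```

Is for_int8 with a representative dataset the right setting here, or are additional converter options required for the NPU?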
I’ve attached the .tflite model and detection code for reference.
Thank you for your help!
Best regards,
Adarsh K V