Hello NXP Community,
I’m working on deploying a custom TensorFlow Lite model on the i.MX 8M Plus, following the guidelines in the TensorFlow Lite Model Maker documentation. The model runs fine on the CPU, but it fails to run on the NPU even though the VX delegate (libvx_delegate.so) is enabled.
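For context, the way I enable the delegate follows the pattern below. This is a simplified sketch of my detection code, not the full attachment; the delegate path is the usual location in NXP BSP images, and the helper falls back to the CPU when the delegate cannot be loaded, so I can tell a delegate-load failure apart from the model being rejected by the NPU.

```python
# Sketch: run a .tflite model through the VX (NPU) delegate, falling back
# to the CPU if the delegate is unavailable. The delegate path below is
# the typical location in NXP BSP images; adjust it for your rootfs.
VX_DELEGATE = "/usr/lib/libvx_delegate.so"

def make_interpreter(model_path, delegate_path=VX_DELEGATE):
    """Return (interpreter, used_npu): try the NPU first, then the CPU."""
    # Import lazily so the helper works with either the slim tflite_runtime
    # package (common on BSP images) or a full TensorFlow installation.
    try:
        from tflite_runtime.interpreter import Interpreter, load_delegate
    except ImportError:
        from tensorflow.lite import Interpreter
        from tensorflow.lite.experimental import load_delegate
    try:
        delegate = load_delegate(delegate_path)
        interpreter = Interpreter(model_path=model_path,
                                  experimental_delegates=[delegate])
        return interpreter, True
    except (ValueError, OSError):
        # Delegate missing or failed to initialize: run on the CPU instead.
        return Interpreter(model_path=model_path), False
```

After `interpreter, used_npu = make_interpreter("model.tflite")` and `interpreter.allocate_tensors()`, a `used_npu` of False means the delegate itself did not load, which is a different failure mode from the NPU rejecting the model's ops.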
Request: Could you help me identify why the model fails on the NPU and how to make it compatible? I’ve attached the .tflite model and detection code for reference.
Thank you for your help!
Best regards,
Adarsh K V
Hello,
It looks like the model is not compatible with the TensorFlow Lite version included in the eIQ software for the i.MX 8M Plus. You need to create the model with TensorFlow Lite v2.1.
Regards