Hi there,
I have exported a fully quantized int8 YOLOv8n object detection model from Ultralytics, converted it for the NPU using the Neutron converter from the latest eIQ Toolkit (version 1.17), and tried to execute it on i.MX95 hardware.
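For reference, this is roughly how I did the export. It is only a minimal sketch assuming the standard Ultralytics Python API; the file names are illustrative, and the Neutron conversion itself was then done with the neutron-converter shipped with eIQ Toolkit 1.17, following the toolkit documentation:

```python
# Minimal sketch of the export step (file names are illustrative).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# Full-integer int8 quantization to TFLite; Ultralytics uses a calibration
# dataset for this (coco8.yaml here is just an example).
model.export(format="tflite", int8=True, imgsz=640, data="coco8.yaml")
# The resulting *_full_integer_quant.tflite was then passed through the
# neutron-converter from eIQ Toolkit 1.17 targeting the i.MX95 NPU.
```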
I have tried both the converted and the non-converted model with the NPU delegate, but it seems that only the Neutron graph present in the converted model actually executes on the NPU.
When I compare the raw outputs of both models, the converted model produces multiple false positives with scores above 95%. I am using the same script for both models, but only the converted model shows this issue. I have validated this with multiple approaches and get the same result every time.
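For completeness, here is a minimal sketch of how I run both models and compare their raw outputs. It assumes the tflite_runtime package from the BSP and the Neutron external delegate path used there; the .so path and model file names are assumptions from my setup and may need adjusting:

```python
# Minimal comparison sketch (delegate path and file names are assumptions).
import numpy as np
import tflite_runtime.interpreter as tflite

DELEGATE_PATH = "/usr/lib/libneutron_delegate.so"  # assumed delegate location in the BSP

def run_model(model_path, image, use_npu=True):
    delegates = [tflite.load_delegate(DELEGATE_PATH)] if use_npu else []
    interp = tflite.Interpreter(model_path=model_path,
                                experimental_delegates=delegates)
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    scale, zero_point = inp["quantization"]
    # Quantize the float image into the model's int8 input range
    q = np.clip(np.round(image / scale + zero_point), -128, 127).astype(np.int8)
    interp.set_tensor(inp["index"], q)
    interp.invoke()
    out = interp.get_output_details()[0]
    raw = interp.get_tensor(out["index"]).astype(np.float32)
    oscale, ozero = out["quantization"]
    # Dequantize so both models can be compared in float
    return (raw - ozero) * oscale

img = np.random.rand(1, 640, 640, 3).astype(np.float32)  # placeholder input
ref = run_model("yolov8n_full_integer_quant.tflite", img)
cvt = run_model("yolov8n_full_integer_quant_neutron.tflite", img)
print("max abs diff between raw outputs:", np.abs(ref - cvt).max())
```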
When I inspected both models with the Netron app, I found major architectural changes in the converted model.
Here are the points I want to ask:
1. Are the latest object detection architectures such as YOLOv8 and YOLOv11 supported by the Neutron converter in eIQ Toolkit version 1.17? If yes, please let me know which steps you follow.
2. Have you tested YOLOv8 and YOLOv11 on the NPU of the i.MX95? If yes, could you please share the model for verification, along with the post-processing steps?
3. If we want to execute the operations outside the Neutron graph on the Neutron NPU of the i.MX95, what is the process?
4. If we want to execute the models mentioned above on the GPU of the i.MX95, what is the process?
I have also gone through the Machine Learning User's Guide but could not find related details. If you need any further information from my side to debug this, feel free to ask.
Thanks,
Vatsal