We were running the YOLOv8 TensorFlow Lite model on a Verdin iMX8M Plus.
We observed that setting or unsetting USE_GPU_INFERENCE has no impact on the inference time.
So we would like to know: what ensures that the NPU is actually used for inference?
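For what it's worth, USE_GPU_INFERENCE only selects between GPU and NPU once a hardware delegate is actually attached to the interpreter; if no delegate is loaded, inference falls back to the CPU and the variable has no effect, which would match the symptom described. A minimal sketch of loading the VX delegate explicitly, assuming an NXP eIQ TensorFlow Lite runtime and the usual delegate path on NXP-based BSPs (the model filename is just a placeholder):

```python
# Sketch: force hardware-accelerated inference via the VX delegate.
# Assumptions: tflite_runtime from NXP eIQ is installed, and the
# delegate library ships at the path below (adjust for your image).
from tflite_runtime.interpreter import Interpreter, load_delegate

DELEGATE_PATH = "/usr/lib/libvx_delegate.so"  # VX delegate from the BSP

# Load the delegate; if this fails, inference would silently run on the CPU.
delegate = load_delegate(DELEGATE_PATH)

interpreter = Interpreter(
    model_path="yolov8n_int8.tflite",       # placeholder model name
    experimental_delegates=[delegate],       # attach the NPU/GPU delegate
)
interpreter.allocate_tensors()
```

With the delegate attached, USE_GPU_INFERENCE=0 should route execution to the NPU; a quantized (int8) model is generally needed for the NPU to accelerate the graph.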
What BSP version do you use?