Hi,
I have been testing the TensorFlow Lite Object Detection demo on several Android BSP releases for the i.MX8MQ EVK, but I cannot enable the GPU delegate. Inference works correctly on the CPU, but when I switch to the GPU delegate, initialization fails and inference never starts.
| SoC | NXP i.MX8MQ |
| Android BSP | android-16.0.0_1.0.0 / android-15.0.0_2.0.0 / android-14.0.0_1.2.0 |
| Demo | TensorFlow Lite Object Detection (from TensorFlow) |
| GPU | Vivante GC7000Lite |
| NPU | Not used |
Here are the relevant logcat messages from a run in GPU mode:
2025-09-11 20:05:58.595 libEGL E call to OpenGL ES API with no current context (logged once per thread)
2025-09-11 20:05:55.994 Surface E IGraphicBufferProducer::setBufferCount(0) returned Invalid argument
2025-09-11 20:05:56.086 BaseTaskApi W Closing an already closed native lib

The delegate is selected in the code as follows:
if (CompatibilityList().isDelegateSupportedOnThisDevice) {
    baseOptionsBuilder.useGpu()
    activeDelegate = "GPU"
} else {
    // this error message is also printed on the device
    objectDetectorListener?.onError("GPU is not supported on this device")
}
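One thing worth noting: `CompatibilityList().isDelegateSupportedOnThisDevice` only reports whether GPU support is advertised for the device; the delegate can still fail at creation time, as it does here. A defensive variant (a sketch only, assuming the TFLite Task Library's `ObjectDetector` API; the function name `buildDetector` is illustrative) catches that failure and falls back to CPU:

```kotlin
import android.content.Context
import org.tensorflow.lite.gpu.CompatibilityList
import org.tensorflow.lite.task.core.BaseOptions
import org.tensorflow.lite.task.vision.detector.ObjectDetector

// Sketch: try the GPU delegate first, fall back to CPU if detector
// creation throws. CompatibilityList only checks advertised support,
// not whether the delegate actually initializes on this GPU driver.
fun buildDetector(context: Context, modelPath: String): Pair<ObjectDetector, String> {
    if (CompatibilityList().isDelegateSupportedOnThisDevice) {
        try {
            val gpuOptions = ObjectDetector.ObjectDetectorOptions.builder()
                .setBaseOptions(BaseOptions.builder().useGpu().build())
                .build()
            val detector = ObjectDetector.createFromFileAndOptions(context, modelPath, gpuOptions)
            return detector to "GPU"
        } catch (e: Exception) {
            // GPU advertised but delegate failed to initialize; fall through to CPU.
        }
    }
    val cpuOptions = ObjectDetector.ObjectDetectorOptions.builder()
        .setBaseOptions(BaseOptions.builder().build())
        .build()
    return ObjectDetector.createFromFileAndOptions(context, modelPath, cpuOptions) to "CPU"
}
```

This does not make the GPU delegate work, but it keeps the app from failing hard when initialization is rejected.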
I have checked:
Solved! Go to solution.
Hi hank:
The stock TFLite GPU delegate is not supported on the i.MX8MQ Android BSPs. This is expected, due to GPU backend compatibility and NXP's chosen acceleration strategy: use the VX (TIM-VX) delegate on i.MX8M, and the Neutron delegate on i.MX95.
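As a quick sanity check that the VX delegate works with your model, NXP's eIQ documentation describes loading it as a TFLite external delegate with the benchmark tool. The paths below are typical of the Linux BSP images and may differ on your Android image; the model file name is just a placeholder:

```shell
# Run the TFLite benchmark tool with the TIM-VX (VX) external delegate.
# Library and binary locations vary by BSP release; adjust as needed.
./benchmark_model \
  --graph=mobilenet_ssd.tflite \
  --external_delegate_path=/usr/lib/libvx_delegate.so
```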