Hi,
I have been testing the TensorFlow Lite Object Detection demo on several Android BSP releases for the i.MX8MQ EVK, but the GPU delegate cannot be enabled.
Inference works correctly on the CPU, but when I switch to the GPU delegate, initialization fails and inference never starts.
Test Environment
| Item | Version / Detail |
| --- | --- |
| SoC | NXP i.MX8MQ |
| Android BSP | android-16.0.0_1.0.0 / android-15.0.0_2.0.0 / android-14.0.0_1.2.0 |
| Demo | TensorFlow Lite Object Detection (from TensorFlow) |
| GPU | Vivante GC7000Lite |
| NPU | Not used |
Problem Details
The app runs fine with CPU inference, but enabling the GPU delegate fails during initialization.
Here are the relevant logcat messages in GPU mode:
```
2025-09-11 20:05:58.595 libEGL E call to OpenGL ES API with no current context (logged once per thread)
2025-09-11 20:05:55.994 Surface E IGraphicBufferProducer::setBufferCount(0) returned Invalid argument
2025-09-11 20:05:56.086 BaseTaskApi W Closing an already closed native lib
```
In the code:

```kotlin
if (CompatibilityList().isDelegateSupportedOnThisDevice) {
    baseOptionsBuilder.useGpu()
    activeDelegate = "GPU"
} else {
    // This branch is taken on the i.MX8MQ: the error below is printed
    objectDetectorListener?.onError("GPU is not supported on this device")
}
```
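For reference, I also tried creating the interpreter directly instead of going through the Task Library's `baseOptionsBuilder`. This is only a sketch using the standard TFLite Interpreter API (`CompatibilityList`, `GpuDelegate`); `modelBuffer` and the thread count are placeholders, and it runs only on a device:

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.CompatibilityList
import org.tensorflow.lite.gpu.GpuDelegate
import java.nio.MappedByteBuffer

// Try the GPU delegate first, falling back to CPU threads if the device
// is reported as unsupported or if interpreter creation fails anyway.
fun createInterpreter(modelBuffer: MappedByteBuffer): Interpreter {
    val compatList = CompatibilityList()
    val options = Interpreter.Options()
    if (compatList.isDelegateSupportedOnThisDevice) {
        // Use the delegate options recommended for this device
        options.addDelegate(GpuDelegate(compatList.bestOptionsForThisDevice))
    } else {
        options.setNumThreads(4) // CPU fallback
    }
    return try {
        Interpreter(modelBuffer, options)
    } catch (e: Exception) {
        // GPU init can still fail at interpreter creation (e.g. no EGL
        // context, as in the logcat above); fall back to CPU-only options
        Interpreter(modelBuffer, Interpreter.Options().setNumThreads(4))
    }
}
```

On my device `isDelegateSupportedOnThisDevice` already returns false, so the GPU path is never reached.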
I have also checked that /vendor/lib64/ contains the expected libraries, such as libOpenCL.so and libtim-vx.so.
Is the direct GPU delegate simply unsupported on the i.MX8MQ, even in the Android BSP? (Similar to this Android 13 thread, where the VX delegate was recommended for Android 14+.)