shows that both armnn and tflite successfully work with the GPU. When I followed the links, I noticed that pyeIQ's
armnn/inference.py intentionally sets the backends to CpuRef and CpuAcc, but NOT GpuAcc.
tflite/inference.py doesn't use gpu_delegates, while the default backend is the CPU.
Is that article fake, or am I misunderstanding something?
TensorFlow Lite does not have Python bindings for delegates, unlike the C++ API (CPU, GPU/NPU). By default, it uses the NNAPI delegate (when you run a demo, you can see this in the following log message: INFO: Created TensorFlow Lite delegate for NNAPI). The NNAPI delegate automatically delegates the inference to the GPU/NPU.
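For reference, newer TensorFlow Lite releases do expose delegate loading from Python, though whether a given BSP ships it is another matter. A minimal sketch, assuming a tflite_runtime build that provides load_delegate; the delegate library path is an assumption and varies by BSP release:

```python
# Minimal sketch: attaching an external delegate from Python, assuming a
# TFLite version that exposes load_delegate. The library path below is an
# assumption; check your BSP documentation for the actual name/location.
import tflite_runtime.interpreter as tflite

# Load the hardware delegate; omit experimental_delegates to stay on CPU
delegate = tflite.load_delegate("/usr/lib/libvx_delegate.so")
interpreter = tflite.Interpreter(
    model_path="model.tflite",
    experimental_delegates=[delegate],
)
interpreter.allocate_tensors()
```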
About Arm NN, it does work with the GPU as described in the table (the table shows whether a backend is supported, not necessarily the default - we will try to make this clearer in the next version), but you do need to change the code from Cpu to VsiNpu in order to run inference on the GPU/NPU (a sketch of this change follows below).
PyeIQ focuses on MPlus, so we decided that our default in this case would be the CPU, ONLY because this particular model (fire detection: float32) is not quantized (uint8), and when that is the case the CPU works better. If you have a quantized model, please change to VsiNpu, which will run way faster :)
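Here is a minimal PyArmNN sketch of that change, assuming the standard parser -> Optimize -> LoadNetwork flow (the exact variable names in pyeIQ's armnn/inference.py may differ); the only real change is putting VsiNpu first in the preferred backend list:

```python
# Sketch: preferring VsiNpu (GPU/NPU) over the CPU backends in PyArmNN.
# Assumes the usual parser -> Optimize -> LoadNetwork flow; pyeIQ's
# actual code may name things differently.
import pyarmnn as ann

parser = ann.ITfLiteParser()
network = parser.CreateNetworkFromBinaryFile("model.tflite")

runtime = ann.IRuntime(ann.CreationOptions())

# VsiNpu first, so the GPU/NPU is tried before falling back to the CPU
preferred_backends = [ann.BackendId("VsiNpu"),
                      ann.BackendId("CpuAcc"),
                      ann.BackendId("CpuRef")]

opt_network, _ = ann.Optimize(network, preferred_backends,
                              runtime.GetDeviceSpec(),
                              ann.OptimizerOptions())
net_id, _ = runtime.LoadNetwork(opt_network)
```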
Thank you for the fast response!
I'm digging into both armnn and tflite in relation to the GPU, so
could you please clarify a few details:
1. Does the BSP contain the default tflite branch without gpu_delegates?
2. Am I correct that armnn doesn't support the GpuAcc backend? What is VsiNpu?
I think the demos should have a simple cpu/gpu/npu flag to change the backend where applicable (something like the sketch after this message).
The table with CPU/GPU support is confusing when the code actually has no such GPU option (armnn/inference.py).
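Something along these lines (a hypothetical sketch; the backend names are taken from this thread, with VsiNpu covering GPU/NPU):

```python
# Hypothetical sketch of the suggested flag; backend names follow this
# thread (CpuRef/CpuAcc for CPU, VsiNpu for GPU/NPU).
import argparse
import pyarmnn as ann

cli = argparse.ArgumentParser()
cli.add_argument("--target", choices=["cpu", "gpu", "npu"], default="cpu",
                 help="where to run inference")
args = cli.parse_args()

if args.target == "cpu":
    backends = [ann.BackendId("CpuAcc"), ann.BackendId("CpuRef")]
else:
    # On i.MX, both gpu and npu are reached through the VsiNpu backend
    backends = [ann.BackendId("VsiNpu"), ann.BackendId("CpuAcc")]
```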
Answering your questions:
As Diego mentioned, the article and sample examples are not fake; they are a demo package implemented by NXP.
Did you try the sample examples yourself, or can you help me understand the reason for the misunderstanding?
The switch_video is a special case, as it is classified as an application rather than a demo. Part of it is written in C++, which is why you can choose between CPU and GPU/NPU. This application was developed for real-time comparison of performance when you run inference on the CPU versus using hardware acceleration with the GPU/NPU.