Fake info in PyeIQ article?
The article
PyeIQ - A Python Framework for eIQ on i.MX Processors
states that both armnn and tflite work successfully with the GPU. But when I followed the links, I noticed that pyeIQ's
armnn/inference.py intentionally sets the backends to CpuRef and CpuAcc, but NOT GpuAcc.
Furthermore,
tflite/inference.py doesn't use gpu_delegates, and the default backend is the CPU.
Is the article wrong, or am I misunderstanding something?
Hi Aleksandr,
Unlike the C++ API (where you can pick CPU or GPU/NPU), the TensorFlow Lite Python bindings do not expose delegate selection. By default, they use the NNAPI delegate (when you run a demo, you can see it in the following log message: INFO: Created TensorFlow Lite delegate for NNAPI). The NNAPI delegate automatically delegates inference to the GPU/NPU.
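For reference, the Python flow looks roughly like this (a minimal sketch; the model path is hypothetical). Note there is no delegate argument anywhere; on our BSP builds, the NNAPI delegation happens inside the runtime:

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter

# On an NXP BSP build the NNAPI delegate is picked up automatically;
# the runtime logs: "INFO: Created TensorFlow Lite delegate for NNAPI."
interpreter = Interpreter(model_path="model.tflite")  # hypothetical path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed dummy data matching the model's input shape and type.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]['index']))
```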
About Arm NN: it does work with the GPU as described in the table (the table indicates whether a backend is supported or not, not necessarily the default; we will try to make this clearer in the next version), but you do need to change the code from Cpu to VsiNpu in order to run inference on the GPU/NPU.
PyeIQ focuses on the MPlus, so we decided that our default, in this case, would be the CPU, ONLY because this particular model (fire detection: float32) is not quantized (uint8), and when that is the case the CPU works better. If you have a quantized model, please change to VsiNpu, which will run much faster :smileyhappy:
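If it helps, the change in armnn/inference.py is essentially swapping the backend preference list. A rough sketch with the pyarmnn bindings (the model path is hypothetical; VsiNpu is the backend id in our BSP's Arm NN):

```python
import pyarmnn as ann

# Parse a TFLite model into an Arm NN network (hypothetical path).
parser = ann.ITfLiteParser()
network = parser.CreateNetworkFromBinaryFile("model.tflite")

runtime = ann.IRuntime(ann.CreationOptions())

# pyeIQ default (CPU only):
#   preferred = [ann.BackendId('CpuAcc'), ann.BackendId('CpuRef')]
# For GPU/NPU acceleration, prefer VsiNpu and keep CpuRef as a
# fallback for any layers the accelerator does not support.
preferred = [ann.BackendId('VsiNpu'), ann.BackendId('CpuRef')]

opt_network, _ = ann.Optimize(network, preferred,
                              runtime.GetDeviceSpec(), ann.OptimizerOptions())
net_id, _ = runtime.LoadNetwork(opt_network)
```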
Thanks,
Diego
Thank you for a fast response!
I'm digging into both armnn and tflite in relation to the GPU, so
could you please clarify a few details:
1. Does the BSP contain the default tflite branch, without gpu_delegates?
2. Am I correct that armnn doesn't support the GpuAcc backend? What is VsiNpu?
I think the demos should have a simple cpu/gpu/npu flag to change the backend where applicable.
The table with CPU/GPU support is confusing when the code actually has no such GPU option (armnn/inference.py).
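For example, something along these lines is what I have in mind (just a sketch; the flag name and mapping are mine, not from pyeIQ):

```python
import argparse
import pyarmnn as ann

# Hypothetical flag-to-backend mapping; on this BSP 'gpu' and 'npu'
# would both go through the VsiNpu backend mentioned above.
BACKENDS = {
    'cpu': ['CpuAcc', 'CpuRef'],
    'gpu': ['VsiNpu', 'CpuRef'],
    'npu': ['VsiNpu', 'CpuRef'],
}

parser = argparse.ArgumentParser()
parser.add_argument('--backend', choices=BACKENDS, default='cpu',
                    help='where to run inference')
args = parser.parse_args()

preferred = [ann.BackendId(b) for b in BACKENDS[args.backend]]
print('Preferred backends:', BACKENDS[args.backend])
```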
Hi Aleksandr,
Answering your questions:
1. No, it is not BSP related. The TensorFlow Lite bindings for Python do not offer a way to choose a delegate; they use the NNAPI delegate (which delegates to the GPU/NPU if available, otherwise it uses the CPU). Actually, there is only one delegate you can choose, which is the TPU, but that is not a hardware solution provided and supported by NXP.
2. The Arm NN in our BSP does not support GpuAcc; it supports VsiNpu instead. VsiNpu is the backend provided for hardware acceleration. It is similar to the TensorFlow Lite NNAPI delegate, which means it delegates inference to the GPU/NPU.
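If you want to double-check which backends your image actually registers, you can query the runtime's device spec with pyarmnn. A quick sketch (printing the device spec should list the supported backends; on our BSP you should see VsiNpu rather than GpuAcc):

```python
import pyarmnn as ann

# Create a runtime and inspect which backends it registered.
runtime = ann.IRuntime(ann.CreationOptions())

# Expected output is something like:
#   IDeviceSpec { supportedBackends: [CpuAcc, CpuRef, VsiNpu] }
print(runtime.GetDeviceSpec())
```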
Sorry for the angry heading; I was very frustrated by inconsistent information and many failed builds while trying to use the GPU. (I did it the wrong way, my bad.)
Thanks again!
Hi Aleksandr,
As Diego mentioned, this article and the sample examples are not fake; they are part of a demo package implemented by NXP.
Did you try the sample examples yourself, or can you help me understand the reason for the misunderstanding?
-Manish
Thank you for the response,
I've found a demo that I missed, Switch Detection Video - pyeiq, which can switch backends. I will check it out.
Hi Aleksandr,
The switch_video is a special case, in that it is classified as an application rather than a demo. Part of it is written in C++, which is why you can choose between CPU and GPU/NPU. This application was developed for real-time comparison of performance when you run inference on the CPU versus using hardware acceleration with the GPU/NPU.
Thanks,
Diego