I noticed that the gl2ext.h header from the Vivante GPU userspace package contains additional symbols such as glTexDirectTiledMapVIV, indicating support for tiled frames.
I do know that the tile format used by the i.MX8M Hantro decoder is compatible with the one the Vivante GPU uses. Tiled frames produced by the decoder could therefore be consumed by the GPU directly. In theory, this would yield a performance gain, since the detiling stage would no longer be necessary.
However, the documentation does not mention any tiled frame support. Also, the additional definitions in gl2ext.h indicate support for tiled I420 frames, not tiled NV12 ones. This is a problem, because as far as I can tell, the Hantro decoder can only produce tiled NV12 frames, not tiled I420 ones.
Is it still possible to combine VPU and GPU like this? Is there documentation about tiling support in the VIV direct textures available anywhere? Could the GPU also handle tiled NV12 frames? Or is it perhaps possible to make the Hantro decoder produce tiled I420 frames?
Also, should I even use direct textures for this? Or should I rather take the DMABUF + EGLImage route (together with the EGL_EXT_image_dma_buf_import_modifiers extension)? Is the latter even supported by the Vivante drivers?
Finally, the whole reason I am asking is performance. What degree of performance gain could I expect from using tiled frames directly: minor, or significant? If it is only minor, it may not be worth the effort.