With the i.MX8M, we need to decode and play back 60 fps 4K content to a 4K display using the 3D GPU. 4K @ 60 fps equates to ~497 Megapixels/sec, and the 3D GPU is theoretically capable of 1.6 Gigapixels/sec. With more than 3x the required fill rate, can we expect the GPU to sample and render this content fullscreen without issue?
The GC7000lite GPU appears to support the Vivante extension glTexDirectVIV, which allows sampling directly from YUV video textures. From what I have read, this extension lets the GPU sample the texture in a linear format, which would save the DDR bandwidth cost of tiling each video frame before sampling from it. With this Vivante direct texture extension on the GC7000lite, are the YUV pixels converted to RGB in the texture sampler (essentially for free), or is this done in the fragment shader? If the color space conversion is done in the fragment shader, then even at 497 Megapixels/sec it seems the platform would not have sufficient performance to maintain full frame rate, since a YUV to RGB color space conversion typically takes at least 3 GPU shader instructions.
We are aware that the i.MX8M's DCSS has hardware support for scanning the video out directly and blending it with the GUI using the 2nd and 3rd DCSS planes. That approach is not compatible with our use case, though; we do require running the video through the 3D GPU.