We are trying to optimize HD video playback.
Rendering through eglSwapBuffers() is slow: if we create a 1920x1080 texture and render it using the GLES API, it takes more than 100 ms until the frame-done callback fires. To make matters worse, this rendering goes through the 3D GPU, and Weston is also using the 3D GPU for composition.
We found that Cinemo uses a strategy of downscaling and color-space converting the video frames on the 2D graphics unit, and then rendering them using the wl_viv API.
I am able to downscale and color-space convert Full HD frames at 30 FPS, but for rendering it is not clear how to use the buffer obtained through the wl_viv API.
Ideally, the same physical memory that the 2D graphics unit writes the processed frame into should be shared directly with the Wayland surface. That way no memcpy is needed, and the 3D GPU is never involved in writing to the surface.
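To make the intended flow concrete, here is a sketch of what we are after. The `create_video_buffer_from_2d_output` helper below is hypothetical (a stand-in for whatever wl_viv's create_buffer actually does, since its parameters are undocumented); only the `wl_surface_*` calls are standard Wayland client API:

```c
#include <wayland-client.h>

/* HYPOTHETICAL helper, standing in for wl_viv's undocumented
 * create_buffer: it would wrap the physical memory the 2D unit renders
 * into in a wl_buffer, so the compositor can use it without a memcpy. */
extern struct wl_buffer *create_video_buffer_from_2d_output(
        struct wl_display *display,
        void *virtual_addr,       /* CPU mapping of the frame, if any     */
        unsigned long physical,   /* physical address the 2D unit wrote   */
        int width, int height, int stride, unsigned int format);

/* Standard Wayland presentation of such a buffer: attach it to the
 * surface, mark the damaged region, and commit. No GLES, no 3D GPU. */
static void present_frame(struct wl_surface *surface, struct wl_buffer *buf,
                          int width, int height)
{
    wl_surface_attach(surface, buf, 0, 0);
    wl_surface_damage(surface, 0, 0, width, height);
    wl_surface_commit(surface);
}
```

What we are missing is the part the hypothetical helper papers over: how to construct a wl_buffer from the 2D unit's output using the actual wl_viv interface.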
wayland-viv-client-protocol.h exposes a create_buffer function, but there is no documentation of its parameters and no example showing how to use it.
Can you please point us in the right direction so that we can proceed further?