My project involves sending a raw stream of H264-encoded video over the network and rendering it with OpenGL ES 2.0 on an i.MX6D based board. I started out with software decoding (using libavcodec) and uploading with glTexImage2D. I then made the software decoding path more efficient by uploading with glTexDirectVIV instead. However, I now want to increase efficiency further by using the hardware decoding facility (VPU) on the i.MX6 platform. I have read the documentation PDF for libvpu (http://hands.com/~lkcl/eoma/iMX6/VPU_API_RM_L3.0.35_1.1.0.pdf), as well as some code in another project using the library (https://github.com/irtimmer/limelight-embedded/blob/master/jni/nv_imx_dec/nv_imx_dec.c). However, both the documentation and the aforementioned example only explain how to display the decoded frames using V4L, whereas I need to do it with OpenGL ES 2.0. Can anyone provide me with an example of this? (Preferably using glTexDirectVIV/glTexDirectVIVMap to keep it efficient.)
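For reference, this is roughly what my current software-decode upload path looks like (a sketch, trimmed down: it assumes the GL_VIV_direct_texture extension is exposed by the Vivante driver headers, that the target texture is already bound, and that the I420 planes are laid out back to back in the buffer returned by glTexDirectVIV, which is what I inferred from the sparse docs; `frame` is a libavcodec AVFrame):

```c
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>   /* GL_VIV_I420, glTexDirectVIV on the Vivante driver */
#include <string.h>
#include <libavutil/frame.h>

static void upload_sw_frame(const AVFrame *frame, int w, int h)
{
    GLvoid *direct = NULL;

    /* Ask the driver for a directly writable texture buffer. */
    glTexDirectVIV(GL_TEXTURE_2D, w, h, GL_VIV_I420, &direct);

    unsigned char *dst = direct;
    for (int i = 0; i < h; i++)          /* Y plane */
        memcpy(dst + i * w, frame->data[0] + i * frame->linesize[0], w);
    dst += w * h;
    for (int i = 0; i < h / 2; i++)      /* U plane */
        memcpy(dst + i * (w / 2), frame->data[1] + i * frame->linesize[1], w / 2);
    dst += (w / 2) * (h / 2);
    for (int i = 0; i < h / 2; i++)      /* V plane */
        memcpy(dst + i * (w / 2), frame->data[2] + i * frame->linesize[2], w / 2);

    /* Tell the GPU the texture contents have changed. */
    glTexDirectInvalidateVIV(GL_TEXTURE_2D);
}
```

This works and is noticeably faster than glTexImage2D, but it still copies every frame and still decodes in software.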
Also, the documentation for glTexDirectVIV and glTexDirectVIVMap seems to be quite sparse. (Especially in the case of the latter.) Can someone explain to me how exactly they should be used in a scenario like this?
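To show where I'm stuck: my best guess at a zero-copy path is below, based only on the glTexDirectVIVMap prototype in the Vivante headers (Target, Width, Height, Format, GLvoid **Logical, const GLuint *Physical). Here `fb` would be the vpu_lib FrameBuffer returned by vpu_DecGetOutputInfo, and `framebuffer_virt` the virtual address I recorded when I allocated and registered that framebuffer; those variable names are mine, not from the API, and I'm not sure the format constant is right for VPU output:

```c
/* Hypothetical sketch: map a VPU output framebuffer straight into a
 * bound GLES2 texture, assuming the VPU was configured for linear
 * (non-tiled) NV12 output. */
GLvoid *logical = framebuffer_virt;        /* virtual address of the frame */
GLuint  physical = (GLuint)fb->bufY;       /* physical address of the Y plane */

glTexDirectVIVMap(GL_TEXTURE_2D, pic_width, pic_height,
                  GL_VIV_NV12,             /* correct format for VPU output? */
                  &logical, &physical);
glTexDirectInvalidateVIV(GL_TEXTURE_2D);   /* signal new frame contents */
/* ...then draw a textured quad with an ordinary GLES2 shader. */
```

Is this the intended usage? In particular: does the mapping need to be redone per frame or only once per registered framebuffer, and must the VPU's output be linear rather than tiled for the GPU to sample it?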