I know this method and hope I can use it, but it requires eglfslext.h and some other .so files, which are released in the Linux driver amd-gpu-bin-mx51. Currently I have no idea how to use them under Android, and I don't understand why they don't release an Android version of the GPU SDK. As I have tested, glTexImage2D is not the major problem, since it only takes about 0.01 s, as does eglSwapBuffers. The pure rendering time of an 800x480 image in RGB565 format is about 0.06-0.07 s. I hope this phenomenon is also caused by the problem that the suggested method solves, but I doubt whether the Freescale engineers have done this on Android before.
Since not everyone will have seen that solution, I have copied it here.
---------------------------------------------------------------------------------------
Video to Texture Streaming - i.MX53 processor
The low fps when streaming video to a texture is due to the glTexImage2D function, which is what most programmers use. This function is relatively slow, since it copies the data through intermediate buffers before it actually reaches the GPU, where it is processed and then displayed.
When I was working on an Augmented Reality demo, it ran at about 15 fps with images at a resolution of 320x240; if I wanted to display a higher resolution for a better-looking application, it dropped to 7 fps, which is pretty bad.
While recently researching how to improve the frame rate of my application, I found that we can write our data (the image) directly to the GPU buffer and display it without using glTexImage2D.
The application used for this test (a video can be found at the end of this post) simply grabs an image from the webcam and uses it as a texture on a plane. The webcam captures a live YouTube video stream displayed on my desktop monitor and sends the data for processing at 30 fps (its maximum speed at 800x640). The application has two threads: one for video capture and another for rendering. The render thread now reaches 80 fps for 800x640 images!
Freescale's OpenGL ES API gives you some extra functions that allow you to write directly to the GPU buffer.
Below is a piece of code which does all the magic:
void LoadGLTextures (EGLDisplay egldisplay, IplImage *texture)
{
    /* Set up the eglImage (created once, then reused for every frame) */
    char *imageBuffer = NULL;
    static int start = 0;
    EGLint attribs[] = { EGL_WIDTH, TEXTURE_W,
                         EGL_HEIGHT, TEXTURE_H,
                         EGL_IMAGE_FORMAT_FSL, EGL_FORMAT_BGRA_8888_FSL,
                         EGL_NONE };

    if (!start)
    {
        /* Create the FSL eglImage and attach it to the currently
           bound GL_TEXTURE_2D texture object */
        g_imgHandle = eglCreateImageKHR(egldisplay, EGL_NO_CONTEXT,
                                        EGL_NEW_IMAGE_FSL, NULL, attribs);
        glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, g_imgHandle);
        start = 1;
    }

    /* Query the CPU-visible pointer to the GPU buffer, then copy
       the new frame straight into it */
    eglQueryImageFSL(egldisplay, g_imgHandle, EGL_CLIENTBUFFER_TYPE_FSL,
                     (EGLint *)&imageBuffer);
    memcpy(imageBuffer, texture->imageData, texture->imageSize);
}
As you can see, it is pretty simple: we create an image, store its handle in g_imgHandle, and initialize the texture from that image handle; note that this is only done once.
Once the image and texture are initialized, we use eglQueryImageFSL, which gives us a pointer to the GPU buffer, and the frame data is then written into that buffer with memcpy.
Andre Luiz Vieira da Silva said: