CPU-GPU Interaction

abhilashabraham
Contributor I

I have a custom board (Yocto 1.5) with an i.MX6Q that captures Y16 data at 1040x768 resolution. I capture through the V4L2 interface; the dequeued data is currently processed by the CPU and sent to a remote system over a socket. Now I want to use the GPU to do the processing. I tried the following based on inputs from various threads. I'm a newbie to image processing and the GPU.

What I want is for the dequeued V4L2 buffer to be handed to the GPU for processing, after which I can read back the processed data and send it to the remote system. I'm planning to use the following calls:

glTexDirectVIVMap(GL_TEXTURE_2D, 1040, 768, <not sure what value for Y16>, &buffers[buf.index].start, &(buffers[buf.index].offset));

glTexDirectInvalidateVIV(GL_TEXTURE_2D);
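Roughly, I expect the per-frame mapping to look like the sketch below. The GL_LUMINANCE_ALPHA value for Y16 is only a guess on my part (two 8-bit channels per 16-bit pixel, to be recombined in the fragment shader), and the formats accepted by glTexDirectVIVMap need to be checked against the i.MX Graphics User's Guide:

  // Sketch only: format choice for Y16 is an assumption; if the physical
  // address is not known, the documentation apparently allows passing ~0U
  // (worth verifying for your BSP).
  GLvoid *logical  = buffers[buf.index].start;   // mmap'ed V4L2 buffer
  GLuint  physical = buffers[buf.index].offset;  // physical address, if known

  glBindTexture(GL_TEXTURE_2D, texture_id);
  glTexDirectVIVMap(GL_TEXTURE_2D, 1040, 768, GL_LUMINANCE_ALPHA,
                    &logical, &physical);
  glTexDirectInvalidateVIV(GL_TEXTURE_2D);       // tell the GPU the data changed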

The problem is that I'm hitting a runtime failure during initialisation.

I understand that I need vertex and fragment shaders, which must be compiled and linked into a program that is then made active. I also need to create a texture and bind it to GL_TEXTURE_2D. The following is my initialisation code:

#include <stdlib.h>
#include <GLES2/gl2.h>

#ifndef TRUE
#define TRUE  1
#define FALSE 0
#endif

GLuint LoadShader(GLenum type, const char *shaderSrc);

int Init()
{
  GLuint texture_id = 0;

  // Allocate one texture handle
  glGenTextures(1, &texture_id);

  const char vShaderStr[] =
    "attribute vec4 vPosition;    \n"
    "void main()                  \n"
    "{                            \n"
    "   gl_Position = vPosition;  \n"
    "}                            \n";

  const char fShaderStr[] =
    "precision mediump float;                    \n"
    "void main()                                 \n"
    "{                                           \n"
    "   gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); \n"
    "}                                           \n";

  GLuint vertexShader;
  GLuint fragmentShader;
  GLuint programObject;
  GLint linked;

  // Load the vertex/fragment shaders (type first, source second)
  vertexShader = LoadShader(GL_VERTEX_SHADER, vShaderStr);
  fragmentShader = LoadShader(GL_FRAGMENT_SHADER, fShaderStr);

  // Create the program object
  programObject = glCreateProgram();
  if (programObject == 0)
    return 0;

  glAttachShader(programObject, vertexShader);
  glAttachShader(programObject, fragmentShader);

  // Bind vPosition to attribute 0
  glBindAttribLocation(programObject, 0, "vPosition");

  // Link the program
  glLinkProgram(programObject);

  // Check the link status
  glGetProgramiv(programObject, GL_LINK_STATUS, &linked);
  if (!linked)
  {
    GLint infoLen = 0;
    glGetProgramiv(programObject, GL_INFO_LOG_LENGTH, &infoLen);
    if (infoLen > 1)
    {
      char *infoLog = malloc(sizeof(char) * infoLen);
      glGetProgramInfoLog(programObject, infoLen, NULL, infoLog);
      //esLogMessage("Error linking program:\n%s\n", infoLog);
      free(infoLog);
    }
    glDeleteProgram(programObject);
    return FALSE;
  }

  glUseProgram(programObject);
  glBindTexture(GL_TEXTURE_2D, texture_id);

  return TRUE;
}

GLuint LoadShader(GLenum type, const char *shaderSrc)
{
  GLuint shader;
  GLint compiled;

  // Create the shader object
  shader = glCreateShader(type);
  if (shader == 0)
    return 0;

  // Load the shader source
  glShaderSource(shader, 1, &shaderSrc, NULL);

  // Compile the shader
  glCompileShader(shader);

  // Check the compile status
  glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
  if (!compiled)
  {
    GLint infoLen = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
    if (infoLen > 1)
    {
      char *infoLog = malloc(sizeof(char) * infoLen);
      glGetShaderInfoLog(shader, infoLen, NULL, infoLog);
      //esLogMessage("Error compiling shader:\n%s\n", infoLog);
      free(infoLog);
    }
    glDeleteShader(shader);
    return 0;
  }

  return shader;
}
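For completeness, the per-frame draw I intend to add once initialisation works would be roughly the following. The full-screen triangle strip and the glFinish() call are my own assumptions about how to make the fragment shader touch every output pixel; they are not part of the code above:

  // Sketch: assumes Init() succeeded and an EGL context/surface are current.
  static const GLfloat quad[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f
  };

  void ProcessFrame(void)
  {
    // vPosition was bound to attribute location 0 in Init()
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, quad);
    glEnableVertexAttribArray(0);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);  // fragment shader processes the frame
    glFinish();                             // ensure the GPU has finished writing
  }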

When I run this, glGenTextures (my first step) returns an error of 0x502 (GL_INVALID_OPERATION). On further reading I found that this is due to the absence of EGL initialisation, which requires a display. But I don't have a display, as mentioned above. Please let me know:

1. How should I do the initialisation for my use case?

2. Is a framebuffer required? If yes, how do I configure it for my use case? Should I tweak anything in Yocto? I am thinking of setting DISTRO_FEATURES_remove = "x11 wayland" as suggested in some discussion threads (see the snippet after this list).

3. Should I be using X11 instead of the framebuffer backend?

4. How can I read back GPU-processed data? Is there an API for this? Can glReadPixels help?
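The Yocto change I have in mind for question 2 would be something like the following in local.conf. This is only a sketch; I am not sure it is the right place or the only change needed:

  # local.conf (assumed location): drop X11/Wayland so the FB backend is built
  DISTRO_FEATURES_remove = " x11 wayland"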

3 Replies

Bio_TICFSL
NXP TechSupport

Hi Abhilash,

GL_TEXTURE targets are used in OpenGL ES 1.1, and EGL underlies both OpenGL ES 1.1 and 2.0. The GL API calls ultimately depend on an EGL context (eglBindTexImage and related), and that is the reason you will need to include EGL support for your case.
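If no display can be attached at all, a headless EGL setup along the following lines can still give the GL calls a current context. This is only a sketch: the config/pbuffer attribute lists are assumptions, and on the Vivante framebuffer backend you may need fbGetDisplayByIndex() rather than EGL_DEFAULT_DISPLAY.

  #include <EGL/egl.h>

  static const EGLint cfg_attribs[] = {
    EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_RED_SIZE,   8,
    EGL_GREEN_SIZE, 8,
    EGL_BLUE_SIZE,  8,
    EGL_ALPHA_SIZE, 8,
    EGL_NONE
  };
  static const EGLint ctx_attribs[]  = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
  static const EGLint pbuf_attribs[] = { EGL_WIDTH, 1040, EGL_HEIGHT, 768, EGL_NONE };

  int InitEGL(void)
  {
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    EGLConfig  cfg;
    EGLint     num_cfg = 0;

    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, NULL, NULL))
      return 0;
    if (!eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &num_cfg) || num_cfg == 0)
      return 0;

    // Off-screen pbuffer surface: no native window or display is needed
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbuf_attribs);
    EGLContext ctx  = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attribs);
    if (surf == EGL_NO_SURFACE || ctx == EGL_NO_CONTEXT)
      return 0;

    return eglMakeCurrent(dpy, surf, surf, ctx) == EGL_TRUE;  // GL calls are valid after this
  }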

X11 or Wayland will give you more stability when using the GPU, and it enables EGL as well.

For processing data on the GPU you should check the Vivante GPU SDK tools, which include the GPU vProfiler; it lets you see how vertex, texture, and fragment streams are loaded and processed by the GPU. For kernel 3.10.17 please check: I.MX_6_VIVANTE_VDK_150_TOOLS

Hope this helps


abhilashabraham
Contributor I

Thanks for the information.

The initialisation error was due to some missing elements in the kernel, and I have fixed it.

Can you give me some input on my last question: "How can I read back GPU-processed data? Is there an API?"

My requirement is to send the processed frame to a remote system via a socket (no display). My understanding is that I should swap buffers and read fb0 for the first frame, swap again and read fb1 for the next frame, then swap and read fb0 again, and so on. Is that right? Or should I just call swap buffers and always read from /dev/fb0? I assume I should also mmap /dev/fb* to get at the data.

Is there an API that can do this copy to user space?
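What I have in mind for the framebuffer path is roughly the sketch below. The fbdev node, whether fb0 and fb1 actually alternate, and whether the latest frame starts at offset 0 (panning/double buffering) are all assumptions on my part:

  #include <fcntl.h>
  #include <unistd.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <linux/fb.h>

  /* Copy the most recently displayed frame out of /dev/fb0 into dst. */
  int ReadBackFrame(void *dst, size_t dst_size)
  {
    struct fb_var_screeninfo var;
    struct fb_fix_screeninfo fix;
    int fd = open("/dev/fb0", O_RDWR);

    if (fd < 0)
      return -1;

    if (ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0 ||
        ioctl(fd, FBIOGET_FSCREENINFO, &fix) < 0) {
      close(fd);
      return -1;
    }

    size_t frame_bytes = (size_t)var.yres * fix.line_length;
    void *fb = mmap(NULL, frame_bytes, PROT_READ, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) {
      close(fd);
      return -1;
    }

    memcpy(dst, fb, frame_bytes < dst_size ? frame_bytes : dst_size);

    munmap(fb, frame_bytes);
    close(fd);
    return 0;
  }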


dilipkumar
Contributor III

You can use the standard OpenGL API glReadPixels to copy the OpenGL-rendered data to CPU memory. This is slower than mmap-ing the framebuffer and accessing the data directly, but if performance is not an issue, glReadPixels is the easiest way to go.
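A minimal sketch of that path (the helper name and the RGBA layout are my assumptions; GL_RGBA with GL_UNSIGNED_BYTE is the readback combination OpenGL ES 2.0 guarantees):

  #include <stdlib.h>
  #include <GLES2/gl2.h>

  /* Read the current color buffer back into CPU memory; the caller frees it. */
  GLubyte *ReadFrameRGBA(int width, int height)
  {
    GLubyte *pixels = malloc((size_t)width * height * 4);

    if (pixels == NULL)
      return NULL;

    glFinish();  /* make sure rendering for this frame has completed */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    return pixels;  /* send over the socket, then free() */
  }

For a 1040x768 frame this would be called as ReadFrameRGBA(1040, 768).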
