
CPU-GPU Interaction

Question asked by Abhilash Abraham on Jun 1, 2015
Latest reply on Jul 30, 2015 by Dilip Kumar

I have a custom board (Yocto 1.5) with an i.MX6Q, used for capturing Y16 data at 1040x768 resolution. I capture through the V4L2 interface; the dequeued data is processed (by the CPU) and sent to a remote system over a socket. Now I want to use the GPU to do the processing. I did the following based on inputs from various threads. I'm a newbie to image processing and GPU programming.

 

What I want is for the dequeued buffer (from the V4L2 dequeue) to be handed to the GPU, which does the processing, after which I read the processed data back and send it to the remote system. I'm planning to use the following steps:

 

glTexDirectVIVMap(GL_TEXTURE_2D, 1040, 768, <not sure what value for Y16>, &buffers[buf.index].start, &(buffers[buf.index].offset));

glTexDirectInvalidateVIV(GL_TEXTURE_2D);
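After the GPU pass, I expect the read-back step to look something like the sketch below (my assumption, using the standard GLES2 glReadPixels path; it presumes a current EGL context whose draw surface, or a bound FBO, holds the processed image):

```c
#include <stdlib.h>
#include <GLES2/gl2.h>

/* Copy the rendered frame out of the GPU into a malloc'ed buffer.
 * GLES2 always supports the GL_RGBA / GL_UNSIGNED_BYTE combination. */
unsigned char *read_back_rgba(GLsizei width, GLsizei height)
{
    unsigned char *out = malloc((size_t)width * (size_t)height * 4);
    if (out == NULL)
        return NULL;
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   /* tightly packed rows */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, out);
    return out;   /* caller frees after sending over the socket */
}
```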

 

Now the problem is that I'm hitting a runtime failure during initialisation.

 

I understand we need vertex and fragment shaders, which must be compiled and linked into a program that is then made active. I should also create a texture and bind it to GL_TEXTURE_2D. The following was my initialisation code:

 

#include <stdlib.h>      /* malloc / free */
#include <GLES2/gl2.h>

GLuint LoadShader(GLenum type, const char *shaderSrc);

int Init()
{
  GLuint texture_id = 0;

  // Allocate one texture handle
  glGenTextures(1, &texture_id);

  const char vShaderStr[] =
    "attribute vec4 vPosition;                    \n"
    "void main()                                  \n"
    "{                                            \n"
    "  gl_Position = vPosition;                   \n"
    "}                                            \n";
  const char fShaderStr[] =
    "precision mediump float;                     \n"
    "void main()                                  \n"
    "{                                            \n"
    "  gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);   \n"
    "}                                            \n";

  GLuint vertexShader;
  GLuint fragmentShader;
  GLuint programObject;
  GLint linked;

  // Load the vertex/fragment shaders
  vertexShader = LoadShader(GL_VERTEX_SHADER, vShaderStr);
  fragmentShader = LoadShader(GL_FRAGMENT_SHADER, fShaderStr);

  // Create the program object
  programObject = glCreateProgram();
  if (programObject == 0)
    return 0;

  glAttachShader(programObject, vertexShader);
  glAttachShader(programObject, fragmentShader);

  // Bind vPosition to attribute 0
  glBindAttribLocation(programObject, 0, "vPosition");

  // Link the program
  glLinkProgram(programObject);

  // Check the link status
  glGetProgramiv(programObject, GL_LINK_STATUS, &linked);
  if (!linked)
  {
    GLint infoLen = 0;
    glGetProgramiv(programObject, GL_INFO_LOG_LENGTH, &infoLen);
    if (infoLen > 1)
    {
      char *infoLog = malloc(sizeof(char) * infoLen);
      glGetProgramInfoLog(programObject, infoLen, NULL, infoLog);
      //esLogMessage("Error linking program:\n%s\n", infoLog);
      free(infoLog);
    }
    glDeleteProgram(programObject);
    return 0;
  }

  glUseProgram(programObject);
  glBindTexture(GL_TEXTURE_2D, texture_id);
  return 1;
}

 

GLuint LoadShader(GLenum type, const char *shaderSrc)
{
  GLuint shader;
  GLint compiled;

  // Create the shader object
  shader = glCreateShader(type);
  if (shader == 0)
    return 0;

  // Load the shader source
  glShaderSource(shader, 1, &shaderSrc, NULL);

  // Compile the shader
  glCompileShader(shader);

  // Check the compile status
  glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
  if (!compiled)
  {
    GLint infoLen = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
    if (infoLen > 1)
    {
      char *infoLog = malloc(sizeof(char) * infoLen);
      glGetShaderInfoLog(shader, infoLen, NULL, infoLog);
      //esLogMessage("Error compiling shader:\n%s\n", infoLog);
      free(infoLog);
    }
    glDeleteShader(shader);
    return 0;
  }
  return shader;
}

 

On doing this I found that glGenTextures (my first step) was returning an error of 0x502 (GL_INVALID_OPERATION). On further reading I found that this is due to the absence of EGL initialisation, which requires a display. But I don't have a display, as mentioned above. Please let me know:

 

1. How to do the initialisation for my requirement?

2. Is a framebuffer required? If yes, how do I configure it for my requirement? Should I do any tweaking in Yocto? I am thinking of setting DISTRO_FEATURES_remove = "x11 wayland" as per some discussion threads.

3. Should I be using X11 instead of the framebuffer?

4. How can I read back GPU-processed data? Is there an API? Can glReadPixels help?
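For question 1, my current guess at a display-less initialisation is the sketch below: an EGL context on the default display with an off-screen pbuffer surface, so no window system is needed. I'm not sure this is right for the Vivante framebuffer backend, so treat the attribute choices as assumptions:

```c
#include <EGL/egl.h>

/* Headless EGL setup sketch: default display + pbuffer surface + GLES2
 * context. Returns 1 on success, 0 on failure. */
int InitEGL(void)
{
    EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if (display == EGL_NO_DISPLAY || !eglInitialize(display, NULL, NULL))
        return 0;

    const EGLint cfg_attribs[] = {
        EGL_SURFACE_TYPE,    EGL_PBUFFER_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    EGLConfig config;
    EGLint num_configs = 0;
    if (!eglChooseConfig(display, cfg_attribs, &config, 1, &num_configs)
        || num_configs == 0)
        return 0;

    /* Off-screen surface sized to the capture resolution */
    const EGLint pbuf_attribs[] = { EGL_WIDTH, 1040, EGL_HEIGHT, 768, EGL_NONE };
    EGLSurface surface = eglCreatePbufferSurface(display, config, pbuf_attribs);
    if (surface == EGL_NO_SURFACE)
        return 0;

    const EGLint ctx_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT,
                                          ctx_attribs);
    if (context == EGL_NO_CONTEXT)
        return 0;

    /* After this, GL calls such as glGenTextures should be legal */
    return eglMakeCurrent(display, surface, surface, context) == EGL_TRUE;
}
```

With a current context in place, the `Init()` code above would then run without the 0x502 error, if my understanding is correct.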
