iMX6 GPU development without X11

konstantynproko
Contributor III

Hello,

I'm working on a project where I need to use the GPU to convert a Bayer image from a CMOS camera to RGB or YUV format. I found a very nice implementation of Bayer demosaicing in OpenGL shaders and converted it to OpenGL ES 2.0. I don't have an X11 environment, because the images are destined for i.MX6 VPU H.264 encoding. To be able to render an image I'm using a virtual framebuffer driver that emulates framebuffers with adjustable parameters. I'm also using the latest Freescale libGLESv2, libEGL and libGAL libraries, and I create the display, window and surface with the FB extension: fbGetDisplayByIndex, fbCreateWindow, etc. I can see that the virtual framebuffer driver is accessed through /dev/fb0 and its memory is mapped. I read the resulting image back with a glReadPixels() call.
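
For reference, the FB-extension setup looks roughly like this (a trimmed sketch with error handling omitted; the EGL_API_FB define and the config attributes may differ between BSP releases):

    #define EGL_API_FB              /* select the Vivante framebuffer backend */
    #include <EGL/egl.h>

    static EGLDisplay dpy;
    static EGLSurface surf;
    static EGLContext ctx;

    /* Bring up EGL on /dev/fb0 without X11. */
    int egl_fb_init(int width, int height)
    {
        EGLNativeDisplayType ndisp = fbGetDisplayByIndex(0);   /* /dev/fb0 */
        EGLNativeWindowType  nwin  = fbCreateWindow(ndisp, 0, 0, width, height);

        dpy = eglGetDisplay(ndisp);
        eglInitialize(dpy, NULL, NULL);

        static const EGLint cfg_attr[] = {
            EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
            EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
            EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
            EGL_NONE
        };
        EGLConfig cfg;
        EGLint ncfg = 0;
        eglChooseConfig(dpy, cfg_attr, &cfg, 1, &ncfg);

        surf = eglCreateWindowSurface(dpy, cfg, nwin, NULL);

        static const EGLint ctx_attr[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
        ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attr);
        return eglMakeCurrent(dpy, surf, surf, ctx) == EGL_TRUE;
    }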

Running all of this on the i.MX6 gives me garbage: I can see some rendering derived from the original Bayer image, but it is almost unrecognizable. I've tried different rendering configurations, input types and formats, but nothing helped. I also used the glTexDirectVIV and glTexDirectInvalidateVIV calls to load the data, with equally bad results.
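
For reference, the direct-texture calls I used look roughly like this (a sketch; the prototypes come from GLES2/gl2ext.h with GL_GLEXT_PROTOTYPES defined, or via eglGetProcAddress; 'tex' and the frame buffer are placeholders):

    #define GL_GLEXT_PROTOTYPES
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>
    #include <string.h>

    /* Upload one frame through the Vivante direct-texture extension. */
    void upload_direct(GLuint tex, const void *frame, int width, int height)
    {
        GLvoid *pixels = NULL;

        glBindTexture(GL_TEXTURE_2D, tex);
        /* Ask the driver for a pointer to the texture's own storage... */
        glTexDirectVIV(GL_TEXTURE_2D, width, height, GL_RGBA, &pixels);
        /* ...copy the frame into it... */
        memcpy(pixels, frame, (size_t)width * height * 4);
        /* ...and tell the driver the contents changed. */
        glTexDirectInvalidateVIV(GL_TEXTURE_2D);
    }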

I'm checking ALL errors; none are reported.

I've also compiled the code on my x86 Linux development machine with X11 and run exactly the same rendering on an NVIDIA GPU in an X11 window, minus the Vivante-specific parts. In that case the image looks great, so the shaders and the OpenGL ES 2.0 code work fine. The only difference is that I use GL_LUMINANCE to load the Bayer image in the X11 case and GL_RGB in the i.MX6 case, because I suspect the framebuffer display does not support the LUMINANCE type.
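
Concretely, the only difference between the two builds is the upload call ('w', 'h' and the buffers are placeholders):

    /* x86/X11 build: one byte per texel, raw Bayer values */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, w, h, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, bayer);

    /* i.MX6 build: Bayer bytes repacked into an RGB texture */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, bayer_rgb);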

I've also tried configuring and using the LDB display driver and the native MXC framebuffer; the results were the same.

There is also a great forum thread on this topic, which I followed and whose suggestions I tried, but with no results:

i.MX6: camera image processing with 16 bit per color channel (IPU->GPU->VPU) under Linux

Does anybody have experience with X11-less GPU processing on the i.MX6?

Thank you very much!

Regards,

Konstantyn

11 Replies

konstantynproko
Contributor III

Fixed: it was a problem with mismatched libEGL, libGAL and libGLESv2 libraries versus the galcore kernel driver.


Bio_TICFSL
NXP TechSupport

Hi Konstantyn,

Also, this solution works without X11 and uses a pixel shader to do the conversion:

How to insert GPU shading between camera capture and preview window

Regards

konstantynproko
Contributor III

Hi,

Yes, you are right, I've looked at that forum thread. The problem is that I don't have a native window or X11 on my system, so I render everything in memory:

The Bayer image is received from the CMOS camera and sent to the GPU for conversion to RGBA. I'm using a virtual framebuffer driver into which I map contiguous DMA memory pre-allocated for this purpose. Once the GPU has rendered the image, I set up the IPU to convert RGBA to YUV420 planar, which is what the VPU accepts. From there we have a dual pipeline for JPEG and H.264 encoding, which is streamed off our device over RTSP.
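
A minimal sketch of how such an IPU task can be queued, assuming the /dev/mxc_ipu interface and IPU_QUEUE_TASK ioctl of the Freescale 3.x kernels (field names vary between BSP releases; the physical addresses are placeholders):

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/ipu.h>    /* Freescale kernel header: struct ipu_task */

    /* Queue one RGBA -> YUV420P conversion on the IPU (sketch). */
    int ipu_csc(unsigned int rgba_phys, unsigned int yuv_phys,
                int width, int height)
    {
        struct ipu_task task;
        int ret, fd = open("/dev/mxc_ipu", O_RDWR);

        if (fd < 0)
            return -1;

        memset(&task, 0, sizeof(task));
        task.input.width   = width;
        task.input.height  = height;
        task.input.format  = IPU_PIX_FMT_RGB32;    /* what the GPU rendered */
        task.input.paddr   = rgba_phys;            /* physical address */

        task.output.width  = width;
        task.output.height = height;
        task.output.format = IPU_PIX_FMT_YUV420P;  /* what the VPU accepts */
        task.output.paddr  = yuv_phys;

        ret = ioctl(fd, IPU_QUEUE_TASK, &task);
        close(fd);
        return ret;
    }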

I'm using the framebuffer VIV extensions to create the display and window, and I render directly to the FB.

BTW, this Bayer shader implementation by Morgan McGuire is excellent:

http://graphics.cs.williams.edu/papers/BayerJGT09/

I've adapted it to OpenGL ES 2.0 and slightly modified it to work on the i.MX6 Vivante GPU.

Here is my shader for those who are interested. Also, if somebody needs an implementation of any portion of my pipeline, I would be glad to help. I've spent too much time on it. :)
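
As a rough illustration of the idea (not the actual shader, which follows the paper's higher-quality Malvar-He-Cutler filter), a minimal bilinear version for an RGGB pattern with the raw values in the red channel could look like this in GLSL ES:

    precision mediump float;

    uniform sampler2D source;      /* Bayer data in the red channel */
    uniform vec2 sourceSize;       /* texture size in pixels */
    varying vec2 texCoord;

    void main()
    {
        vec2 px  = 1.0 / sourceSize;
        vec2 pos = floor(texCoord * sourceSize);
        bool evenRow = mod(pos.y, 2.0) < 1.0;
        bool evenCol = mod(pos.x, 2.0) < 1.0;

        /* Center sample plus its 4-connected and diagonal neighbours. */
        float c  = texture2D(source, texCoord).r;
        float n  = texture2D(source, texCoord + vec2( 0.0, -px.y)).r;
        float s  = texture2D(source, texCoord + vec2( 0.0,  px.y)).r;
        float w  = texture2D(source, texCoord + vec2(-px.x,  0.0)).r;
        float e  = texture2D(source, texCoord + vec2( px.x,  0.0)).r;
        float nw = texture2D(source, texCoord + vec2(-px.x, -px.y)).r;
        float ne = texture2D(source, texCoord + vec2( px.x, -px.y)).r;
        float sw = texture2D(source, texCoord + vec2(-px.x,  px.y)).r;
        float se = texture2D(source, texCoord + vec2( px.x,  px.y)).r;

        vec3 rgb;
        if (evenRow && evenCol)            /* red site */
            rgb = vec3(c, (n + s + w + e) * 0.25, (nw + ne + sw + se) * 0.25);
        else if (evenRow)                  /* green site on a red row */
            rgb = vec3((w + e) * 0.5, c, (n + s) * 0.5);
        else if (evenCol)                  /* green site on a blue row */
            rgb = vec3((n + s) * 0.5, c, (w + e) * 0.5);
        else                               /* blue site */
            rgb = vec3((nw + ne + sw + se) * 0.25, (n + s + w + e) * 0.25, c);

        gl_FragColor = vec4(rgb, 1.0);
    }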

Regards,

Konstantyn

johnstone
Contributor I

Konstantyn,

In this post you wrote: "Also, if somebody needs an implementation of any portion of my pipeline, I would be glad to help. I've spent too much time on it."

I'd very much like to see your entire source code for this.

I have to do something similar, but instead of Bayer-processing a CMOS camera image, I have to 'screen-grab' an existing OpenGL image, color-convert it with the IPU, then JPEG-encode it with the VPU, and finally output it to a movie file.

Can you help?

Thanks.

John.


jdepedro
Contributor IV

Hi Konstantyn,

Thanks for sharing the code! Could you indicate how you used it in your pipeline?

I am trying to use the 'glshader' GStreamer element, which has a location property that allows you to specify a GLSL file. Your code is two files, though (vertex shader + fragment shader). Do I need to perform some kind of conversion to join those files into just one? Did you use another GStreamer element?

I would appreciate those details. Thanks.


edison_fernande
Contributor III

Hi Konstantyn,

I have a similar use case and was wondering if you can give me some hints. In my case I have a 2592x1944 bayer frame and I need to convert that to JPEG in the fastest way possible.

I'm using the GPU to convert to RGB. My first approach was to use glTexSubImage2D to upload every frame to the GPU, but with that approach I'm getting a frame rate of about 1 fps, with glTexSubImage2D being the bottleneck. My second attempt used glTexDirectVIV, but as it only supports RGB and YUV formats I have to manually move my Bayer input buffer into an RGBA buffer (every Bayer byte into the R byte of the destination buffer) so the shader works properly, and this copy is also taking too much time. So I was wondering: how did you update the GPU texture, and what frame rate are you getting with your implementation?
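
For reference, the extension also has a mapping variant that ties a physically contiguous buffer straight to the texture, which should avoid the per-frame copy entirely; a sketch (the buffer addresses are placeholders and must come from a physically contiguous allocation):

    /* Zero-copy texture from an existing contiguous buffer (sketch). */
    GLvoid *logical  = cam_buf_virt;   /* virtual address of the frame  */
    GLuint  physical = cam_buf_phys;   /* physical address of the frame */

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexDirectVIVMap(GL_TEXTURE_2D, width, height, GL_RGBA,
                      &logical, &physical);

    /* After each new frame lands in the buffer: */
    glTexDirectInvalidateVIV(GL_TEXTURE_2D);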

I see that you are using the IPU for the RGB-to-YUV conversion, but in my case I think the frame is too big for the IPU.

Thanks for your help.

Regards,

Edison


sergkot
Contributor II

I fixed the code. I replaced

  PATTERN += (kA.xyz * A).xyzx + (kE.xyw * E).xyxz;

with

  PATTERN += (kA.xyz * A).xyzx;
  PATTERN += (kE.xyw * E).xyxz;

It is magic, but it works. However, I've stumbled onto another problem: the image on the display is delayed by ~0.5 sec.


sergkot
Contributor II

In my OpenGL test, demosaicing one 1600x1200 frame takes ~80 ms. In OpenCV, the demosaic takes ~100 ms:

Miscellaneous Image Transformations — OpenCV 2.4.12.0 documentation (CV_BayerBG2BGR)
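
The OpenCV path being compared is essentially one call; a sketch against the 2.4 C API ('bayer_data' is a placeholder for the raw camera buffer):

    #include <opencv2/imgproc/imgproc_c.h>

    /* Demosaic a 1600x1200 8-bit Bayer frame with OpenCV 2.4 (sketch). */
    CvMat *demosaic(void *bayer_data)
    {
        CvMat hdr;
        CvMat *bayer = cvInitMatHeader(&hdr, 1200, 1600, CV_8UC1,
                                       bayer_data, CV_AUTOSTEP);
        CvMat *bgr = cvCreateMat(1200, 1600, CV_8UC3);

        cvCvtColor(bayer, bgr, CV_BayerBG2BGR);   /* code from the docs above */
        return bgr;
    }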


sergkot
Contributor II

Hi, Konstantyn.

Can you share the code you use to drive your shaders?

I have a strange bug: the GPU does not draw the image on the display when the pixel shader code contains this line:

         PATTERN += (kA.xyz * A).xyzx + (kE.xyw * E).xyxz;

The GPU draws an image when I comment out this line, but the image is not right.


konstantynproko
Contributor III

Update: as it turns out, my problem is with basic floating-point arithmetic in the fragment shader. For some reason a floating-point value N multiplied by 1.0 (for simplicity) does not give me N back; the value becomes garbage outside the 0.0 - 1.0 range. That is why, when the shader calculates the other color channels for an RGB pixel value, only the one channel that is not involved in any math operations is valid. This is very strange.

Here is a good test example:

  vec3 MyColor = texture2D(source, center.xy).rgb;
  gl_FragColor.rgb = MyColor;

In this case everything is great and I can see all channels populated with correct normalized colors (the image is grey because GL_LUMINANCE copies the same luminance value into each RGB channel, with alpha = 1).

Here is the same test, only with one additional step:

  vec3 MyColor = texture2D(source, center.xy).rgb;
  vec3 ModifiedColor = MyColor * 1.0;
  gl_FragColor.rgb = ModifiedColor;

In this case all the color channel values are garbage and are normalized to 0.0 by the GPU.

Has anybody had the same experience?

Thanks

Regards,

Konstantyn


konstantynproko
Contributor III

Update: I've enabled the i.MX6 LDB driver and I'm using the MXC framebuffer. I was able to set GL_LUMINANCE as the input format when loading the 2D texture. I'm reading back GL_RGBA pixels with the GL_UNSIGNED_BYTE type. After stripping the alpha channel I get a correct image in a single color channel only: looking at a hex dump of the image I can see the R, G and alpha channels are 0, and the B channel holds a valid image. When I disable the rendering call glDrawArrays() and instead use glClear() with glClearColor() set to various R, G, B values, I see those colors correctly.
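
The readback and check are straightforward (a sketch; 'w' and 'h' are placeholders):

    #include <stdio.h>
    #include <stdlib.h>
    #include <GLES2/gl2.h>

    /* Read the rendered frame back and dump the first pixel's channels. */
    void dump_first_pixel(int w, int h)
    {
        unsigned char *buf = malloc((size_t)w * h * 4);

        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buf);
        /* In the failing case only one color byte is non-zero here. */
        printf("R=%02x G=%02x B=%02x A=%02x\n",
               buf[0], buf[1], buf[2], buf[3]);
        free(buf);
    }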

So here is the new problem: how do I get the GPU to render all the color channels?

Thanks

Regards,

Konstantyn
