iMX6DL Vivante (5.0.11.p8) banding (gradient) issue


rostyslavkhudol
Contributor III

Dear All,

Recently we updated the Vivante driver from 4.6.9 to 5.0.11.p8 by backporting it from the 3.14.52_1.1.0_ga branch (located in the official git repo). The corresponding userspace drivers (OpenGL implementation, etc.) were downloaded and installed from http://downloads.yoctoproject.org/mirror/sources/

We managed to run our software stack (a video streaming application) after the update and it seemed to work well, except that we noticed a banding issue when displaying color gradients (framebuffer dumps attached).

We have a custom device based on the iMX6DL, running a customized Linux kernel 3.10.53 (originally taken from the official git repo) plus Fedora 23. We're not using any video backends such as X11, Qt, Wayland or DirectFB, just a plain framebuffer with 16bpp color depth.

The video-related command line parameters:

video=mxcfb0:dev=hdmi,bpp=16

We managed to reproduce the issue by downloading the latest GPU samples from Freescale (fsl-gpu-sdk-2.3), then compiling and running the one our code was inspired by (S09_VIV_direct_texture). The original image was animated, so we modified the code to display a static gradient picture without any rotation.

It turned out that the issue disappears if we remove this line:

glClear(GL_COLOR_BUFFER_BIT);

which is unacceptable.

Here is the modified sample code (from S09_VIV_direct_texture::Draw() method):

...

// glClearColor(0.0f, 0.5f, 0.5f, 1.0f);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT); // commenting this line makes the issue disappear
glUniformMatrix4fv(m_locTransformMat, 1, GL_FALSE, g_transformMatrix /*m_matTransform.DirectAccess()*/);

...
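
For reference, the gradient test picture itself can be produced in a very plain way. Below is a minimal sketch (an ordinary glTexImage2D upload, not the exact sample code, which goes through the Vivante direct-texture path) of how such a ramp might be generated:

/* Hypothetical sketch: fill a horizontal grey ramp into an RGBA8888 buffer
 * and upload it as a regular GLES2 texture. The real S09_VIV_direct_texture
 * sample maps its buffer through glTexDirectVIV instead. Assumes width > 1. */
#include <stdlib.h>
#include <GLES2/gl2.h>

static GLuint create_gradient_texture(int width, int height)
{
    unsigned char *pixels = malloc((size_t)width * height * 4);
    if (!pixels)
        return 0;

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            unsigned char v = (unsigned char)(255 * x / (width - 1)); /* 0..255 ramp */
            unsigned char *p = pixels + 4 * (y * width + x);
            p[0] = v;   /* R */
            p[1] = v;   /* G */
            p[2] = v;   /* B */
            p[3] = 255; /* A */
        }
    }

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    free(pixels);
    return tex;
}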

To rule out any unwanted influence from our own HW and SW stack, we tested the same sample on an iMX6DL Solo Sabre SD (running the corresponding images/filesystem from fsl-L3.14.28_1.0.0_iMX6qdls_Bundle) and managed to reproduce the issue.

However, with the latest iMX6QP Sabre SD (running the corresponding images/filesystem from L4.1.15-1.0.0-ga_images_MX6QPSABRESD), the issue is not observed.

Both boards had the color depth set to 16bpp.

We tried changing the color depth from 16 to 32 and it seemed to fix the issue. However, that's not acceptable for us at the moment, since we started observing some weird IPU behaviour (after about an hour of rendering, the image starts flickering and we see warnings in the kernel log), and everything worked well with the old Vivante version at 16bpp.
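
For anyone trying the same experiment: one way to inspect and switch the framebuffer depth at runtime is through the standard Linux fbdev ioctls. This is only a minimal sketch (not our actual setup code); /dev/fb0 is assumed and error handling is trimmed:

/* Sketch: query and (optionally) change the framebuffer depth through the
 * standard fbdev interface from <linux/fb.h>. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fb.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);   /* assumed device node */
    if (fd < 0) {
        perror("open /dev/fb0");
        return 1;
    }

    struct fb_var_screeninfo var;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &var) == 0)
        printf("current depth: %u bpp\n", var.bits_per_pixel);

    /* Switching to 32 bpp hides the banding for us, but triggers the IPU
     * flicker mentioned above, so we stay at 16 bpp for now. */
    var.bits_per_pixel = 32;
    if (ioctl(fd, FBIOPUT_VSCREENINFO, &var) != 0)
        perror("FBIOPUT_VSCREENINFO");

    close(fd);
    return 0;
}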

Any help will be appreciated!

5 Replies

claymontgomery
Contributor IV

That banding is exactly what I would expect to see with a 16-bit frame buffer. I'm not sure why you say that using a 32-bit frame buffer is unacceptable, because that is by far the more common use case for OpenGL.

I realize that the older (4.6.9) driver would allow apps to run with the FB set to either 16 or 32 bits, but I suspect that setting was overridden and it always rendered at 32 bits regardless. The new version has the ability to actually render to a 16-bit FB properly. I think that is a new feature, but I have not seen it documented anywhere.
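
A quick way to confirm which depth the driver is really rendering to is to query the current draw surface from GL once a context is current. This is just a minimal sketch using standard GLES2 queries, nothing Vivante-specific:

/* Sketch: print the color depth of the surface GL is actually rendering to.
 * Call once after eglMakeCurrent(); an RGB565 surface reports 5/6/5/0. */
#include <stdio.h>
#include <GLES2/gl2.h>

static void print_surface_depth(void)
{
    GLint r = 0, g = 0, b = 0, a = 0;
    glGetIntegerv(GL_RED_BITS,   &r);
    glGetIntegerv(GL_GREEN_BITS, &g);
    glGetIntegerv(GL_BLUE_BITS,  &b);
    glGetIntegerv(GL_ALPHA_BITS, &a);
    printf("render target: R%d G%d B%d A%d\n", r, g, b, a);
}

If that reports 8/8/8/8 with the old driver and 5/6/5/0 with the new one, it would fit the theory above.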

The IPU is usually not involved in OpenGL rendering and if you are streaming video into OpenGL you should not need the IPU at all.

Regards, Clay

rostyslavkhudol
Contributor III

Hello Clay,

Thanks a lot for your input.

However, I've got a few questions:

1. If this is expected behaviour with a 16-bit frame buffer, then why is there no banding without

glClear(GL_COLOR_BUFFER_BIT);

being called?

2. If the older driver always rendered 32 bits, how could it possibly be displayed on a 16-bit frame buffer?

3. Regarding the IPU involvement: I've seen a few ioctl() calls (from OpenGL) to the framebuffer device which indirectly call the IPU API, e.g.:

MXCFB_SET_PREFETCH

I don't really know whether it's connected with the issue we're discussing, but it does show that the IPU is actually involved sometimes.

Again, thanks a lot for answering.

Regards, Rostyslav

claymontgomery
Contributor IV

Rostyslav,

Can you please explain in more detail how you got the 5.0.11.p8 driver? I looked at the 3.14.52_1.1.0_ga branch in the official git repo and downloaded the user space drivers from gpu-viv-bin-mx6q-3.10.31-1.1.0-beta-hfp.bin.

But the drivers in that package seem to be older, and OpenGL ES 2.0 reports 5.0.11.p1.19959, not p8.
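
For anyone checking the same thing, the version string can be read straight from a running context; a minimal sketch with standard GL queries (nothing Vivante-specific):

/* Sketch: print the strings the driver reports at runtime. On Vivante the
 * 5.0.11.pX build number typically shows up in GL_VERSION. */
#include <stdio.h>
#include <GLES2/gl2.h>

static void print_driver_info(void)
{
    printf("GL_VENDOR  : %s\n", (const char *)glGetString(GL_VENDOR));
    printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("GL_VERSION : %s\n", (const char *)glGetString(GL_VERSION));
}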

Thanks, Clay


rostyslavkhudol
Contributor III

Clay,

I believe I took this one: imx-gpu-viv-5.0.11.p8.3-hfp.bin. At some point they changed the naming convention.

claymontgomery
Contributor IV

Rostyslav,

Look at how your EGL initialization code works. Such code typically queries which FB formats are available from the EGL driver and chooses from what is on offer. In particular, the ConfigAttribs list passed to eglChooseConfig() is critical. I suspect the difference between when you get banding and no banding might be that your EGL init code has chosen a different FB format. Vivante has improved their EGL, so it is probably making more choices available to your EGL init code than it did before. You can't just rely upon what is reported by:

    cat /sys/class/graphics/fb0/bits_per_pixel
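
As an illustration (a minimal sketch, not your actual init code), requesting an explicit RGB565 config and then reading back what EGL really selected looks roughly like this:

/* Sketch: ask EGL for an RGB565-capable window config and verify what was
 * actually chosen. The attribute values are minimums, so the returned
 * config may still have deeper channels. */
#include <stdio.h>
#include <EGL/egl.h>

static EGLConfig choose_565_config(EGLDisplay dpy)
{
    static const EGLint attribs[] = {
        EGL_RED_SIZE,        5,
        EGL_GREEN_SIZE,      6,
        EGL_BLUE_SIZE,       5,
        EGL_ALPHA_SIZE,      0,
        EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };

    EGLConfig cfg = 0;
    EGLint count = 0;
    if (!eglChooseConfig(dpy, attribs, &cfg, 1, &count) || count == 0)
        return 0;

    EGLint r, g, b;
    eglGetConfigAttrib(dpy, cfg, EGL_RED_SIZE,   &r);
    eglGetConfigAttrib(dpy, cfg, EGL_GREEN_SIZE, &g);
    eglGetConfigAttrib(dpy, cfg, EGL_BLUE_SIZE,  &b);
    printf("EGL chose R%d G%d B%d\n", (int)r, (int)g, (int)b);
    return cfg;
}

Comparing that output between the two driver versions should tell you whether the surface format, rather than the fbdev setting, is what changed.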

My point about the IPU is that the reason to use OpenGL ES on the i.MX6 is often to avoid using the IPU to do color space conversion and scaling of video frames. Some calls to the IPU kernel driver will still occur. That is fine.

Regards, Clay
