Hi,
We need to convert an RGB image into NV12 format using OpenGL ES on i.MX8QM / Linux.
The attached GL shader code and application run successfully on other HW platforms (e.g., NVIDIA Jetson).
However, the converted result on i.MX8QM is wrong.
In our demo application, a pure blue image (RGBA8888, 1024x768) is converted to a green image (NV12),
which is obviously wrong.
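For reference, with the BT.601 coefficients used in the fragment shader below, a pure-blue pixel should come out as roughly Y=41, U=240, V=110 in the 8-bit NV12 output. A quick host-side sanity check (illustrative only, not part of the demo app):

#include <stdio.h>

/* Sanity check (illustrative, not demo code): expected NV12 values for a
 * pure-blue input pixel using the same BT.601 coefficients as the shader. */
int main(void)
{
    float r = 0.0f, g = 0.0f, b = 1.0f;
    float y =  0.257f * r + 0.504f * g + 0.098f * b + 0.063f;
    float u = -0.148f * r - 0.291f * g + 0.439f * b + 0.502f;
    float v =  0.439f * r - 0.368f * g - 0.071f * b + 0.502f;
    /* Prints approximately Y=41 U=240 V=110 */
    printf("Y=%d U=%d V=%d\n",
           (int)(y * 255.0f + 0.5f),
           (int)(u * 255.0f + 0.5f),
           (int)(v * 255.0f + 0.5f));
    return 0;
}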
Following is the vertex shader and fragment shader code:
//Vertex shader
char vShaderStr[] =
"#version 300 es \n"
"layout(location = 0) in vec4 a_position; \n"
"layout(location = 1) in vec2 a_texCoord; \n"
"out vec2 v_texCoord; \n"
"void main() \n"
"{ \n"
" gl_Position = a_position; \n"
" v_texCoord = a_texCoord; \n"
"}
//off-screen fragment shader, RGB to YUV
char fFboShaderStr[] =
"#version 300 es\n"
"precision mediump float;\n"
"in vec2 v_texCoord;\n"
"layout(location = 0) out vec4 outColor;\n"
"uniform sampler2D s_TextureMap;\n"
"uniform float u_Offset;\n"
"const vec3 COEF_Y = vec3( 0.257, 0.504, 0.098);\n"
"const vec3 COEF_U = vec3(-0.148, -0.291, 0.439);\n"
"const vec3 COEF_V = vec3( 0.439, -0.368 ,-0.071);\n"
"const float UV_DIVIDE_LINE = 2.0 / 3.0;\n"
"void main()\n"
"{\n"
" vec2 texelOffset = vec2(u_Offset, 0.0);\n"
" if(v_texCoord.y <= UV_DIVIDE_LINE) {\n"
" vec2 texCoord = vec2(v_texCoord.x, v_texCoord.y * 3.0 / 2.0);\n"
" vec4 color0 = texture(s_TextureMap, texCoord);\n"
" vec4 color1 = texture(s_TextureMap, texCoord + texelOffset);\n"
" vec4 color2 = texture(s_TextureMap, texCoord + texelOffset * 2.0);\n"
" vec4 color3 = texture(s_TextureMap, texCoord + texelOffset * 3.0);\n"
"\n"
" float y0 = dot(color0.rgb, COEF_Y) + 0.063;\n"
" float y1 = dot(color1.rgb, COEF_Y) + 0.063;\n"
" float y2 = dot(color2.rgb, COEF_Y) + 0.063;\n"
" float y3 = dot(color3.rgb, COEF_Y) + 0.063;\n"
" outColor = vec4(y0, y1, y2, y3);\n"
" }\n"
" else {\n"
" vec2 texCoord = vec2(v_texCoord.x, (v_texCoord.y - UV_DIVIDE_LINE) * 3.0);\n"
" vec4 color0 = texture(s_TextureMap, texCoord);\n"
" vec4 color1 = texture(s_TextureMap, texCoord + texelOffset);\n"
" vec4 color2 = texture(s_TextureMap, texCoord + texelOffset * 2.0);\n"
" vec4 color3 = texture(s_TextureMap, texCoord + texelOffset * 3.0);\n"
"\n"
" float v0 = dot(color0.rgb, COEF_V) + 0.502;\n"
" float u0 = dot(color1.rgb, COEF_U) + 0.502;\n"
" float v1 = dot(color2.rgb, COEF_V) + 0.502;\n"
" float u1 = dot(color3.rgb, COEF_U) + 0.502;\n"
" outColor = vec4(v0, u0, v1, u1);\n"
" }\n"
"}";
Basically, this shader program is based on a reference sample.
The same shader code and application code produce different conversion results on i.MX8QM and NVIDIA hardware.
Is there anything special about OpenGL ES on i.MX8QM?