Changing IPU Color Space Conversion parameters


Inside the IPU there are two blocks where color space conversion (CSC) can be performed: the IC (Image Converter) and the DP (Display Processor).

On Linux, the CSC parameters are located in the IPU (IC and DP) drivers, in the linux/drivers/mxc/ipu3 folder.

All negative coefficients are represented using two's complement.

Linux Image Converter driver:

The parameters are set in the function _init_csc:

http://git.freescale.com/git/cgit.cgi/imx/linux-2.6-imx.git/tree/drivers/mxc/ipu3/ipu_ic.c?h=imx_3.1...

static void _init_csc(struct ipu_soc *ipu, uint8_t ic_task, ipu_color_space_t in_format,
                      ipu_color_space_t out_format, int csc_index)
{
        /*
         * Y = 0.257 * R + 0.504 * G + 0.098 * B + 16;
         * U = -0.148 * R - 0.291 * G + 0.439 * B + 128;
         * V = 0.439 * R - 0.368 * G - 0.071 * B + 128;
         */
        static const uint32_t rgb2ycbcr_coeff[4][3] = {
                {0x0042, 0x0081, 0x0019},
                {0x01DA, 0x01B6, 0x0070},
                {0x0070, 0x01A2, 0x01EE},
                {0x0040, 0x0200, 0x0200},       /* A0, A1, A2 */
        };

        /* transparent RGB->RGB matrix for combining */
        static const uint32_t rgb2rgb_coeff[4][3] = {
                {0x0080, 0x0000, 0x0000},
                {0x0000, 0x0080, 0x0000},
                {0x0000, 0x0000, 0x0080},
                {0x0000, 0x0000, 0x0000},       /* A0, A1, A2 */
        };

        /*
         * R = (1.164 * (Y - 16)) + (1.596 * (Cr - 128));
         * G = (1.164 * (Y - 16)) - (0.392 * (Cb - 128)) - (0.813 * (Cr - 128));
         * B = (1.164 * (Y - 16)) + (2.017 * (Cb - 128));
         */
        static const uint32_t ycbcr2rgb_coeff[4][3] = {
                {149, 0, 204},
                {149, 462, 408},
                {149, 255, 0},
                {8192 - 446, 266, 8192 - 554},  /* A0, A1, A2 */
        };

        /* ... rest of _init_csc (omitted here) loads the selected table
         * into the IC task parameter memory ... */
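As a quick sanity check of the two's-complement representation mentioned above, the U-row R coefficient -0.148 maps to the 0x01DA entry of rgb2ycbcr_coeff. The snippet below is only an illustration, and it assumes this particular table uses 9-bit coefficient fields with a x256 fixed-point scale (the actual field widths and scales are defined by the IC task parameter memory layout, so check the reference manual for your part):

        #include <stdio.h>
        #include <math.h>

        /*
         * Illustration only: encode a fractional CSC coefficient the way the
         * rgb2ycbcr_coeff table above appears to store it, assuming a 9-bit
         * two's-complement field and a x256 fixed-point scale.
         */
        static unsigned int encode_ic_coeff(double coeff, int scale)
        {
                long fixed = lround(coeff * scale);     /* -0.148 * 256 -> -38 */

                return (unsigned int)(fixed & 0x1FF);   /* 9-bit two's complement */
        }

        int main(void)
        {
                /* Prints 0x1DA, matching the U-row R entry of rgb2ycbcr_coeff. */
                printf("0x%03X\n", encode_ic_coeff(-0.148, 256));
                return 0;
        }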

Linux Display Processor driver:

The parameters are set in the constants rgb2ycbcr_coeff and ycbcr2rgb_coeff:

http://git.freescale.com/git/cgit.cgi/imx/linux-2.6-imx.git/tree/drivers/mxc/ipu3/ipu_disp.c?h=imx_3...

/*
 * Y = R * 1.200 + G * 2.343 + B * .453 + 0.250;
 * U = R * -.672 + G * -1.328 + B * 2.000 + 512.250;
 * V = R * 2.000 + G * -1.672 + B * -.328 + 512.250;
 */
static const int rgb2ycbcr_coeff[5][3] = {
        {0x4D, 0x96, 0x1D},
        {-0x2B, -0x55, 0x80},
        {0x80, -0x6B, -0x15},
        {0x0000, 0x0200, 0x0200},       /* B0, B1, B2 */
        {0x2, 0x2, 0x2},                /* S0, S1, S2 */
};

/*
 * R = (1.164 * (Y - 16)) + (1.596 * (Cr - 128));
 * G = (1.164 * (Y - 16)) - (0.392 * (Cb - 128)) - (0.813 * (Cr - 128));
 * B = (1.164 * (Y - 16)) + (2.017 * (Cb - 128));
 */
static const int ycbcr2rgb_coeff[5][3] = {
        {0x095, 0x000, 0x0CC},
        {0x095, 0x3CE, 0x398},
        {0x095, 0x0FF, 0x000},
        {0x3E42, 0x010A, 0x3DD6},       /* B0, B1, B2 */
        {0x1, 0x1, 0x1},                /* S0, S1, S2 */
};
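The DP tables work the same way, just with different field widths and scale factors. As an illustration only (assuming a 10-bit coefficient field and the /128 scale that the {0x1, 0x1, 0x1} S row of ycbcr2rgb_coeff appears to correspond to), the 0x3CE entry decodes back to roughly -0.392, the Cb term in the G equation:

        #include <stdio.h>

        /*
         * Illustration only: decode a DP CSC coefficient, assuming a 10-bit
         * two's-complement field and a /128 fixed-point scale (per the S row
         * of ycbcr2rgb_coeff above).
         */
        static double decode_dp_coeff(int raw)
        {
                if (raw & 0x200)        /* sign bit of a 10-bit field */
                        raw -= 0x400;   /* sign-extend */

                return raw / 128.0;
        }

        int main(void)
        {
                /* Prints about -0.3906, i.e. the -0.392 Cb term in the G equation. */
                printf("%f\n", decode_dp_coeff(0x3CE));
                return 0;
        }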
Comments

Hello, rogeriopimentel

Both the IC and the DP are capable of CSC, so how do we decide where the CSC should be done?

Thank you!

ZongbiaoLiao

Hi 宗標廖,

On your project, if the only purpose of the CSC is to correctly display the image, you can use the DP to perform the CSC. Otherwise, use the IC.

Rgds,

Rogerio

Hi everyone!

We plan to use the Image Converter (IC) in the IPU to adjust the parameters of a YUV stream from a CMOS camera.
Is the IC limited to converting up to 1024 x 1024 at a time?
If so, do we need to split the image to process a Full HD stream (1920 x 1080)?

I'd appreciate your response.

Kind Regards,

Tetsuo Maeda

Is the IC limited to converting up to 1024 x 1024 at a time?

[Rogerio] The IC accepts input up to 4096 x 4096, but its output is limited to 1024 x 1024.


If so, do we need to split the image to process a Full HD stream (1920 x 1080)?

[Rogerio] Yes, you need to split the image. If the purpose is only to display the image, I would suggest using the Display Processor CSC instead of the IC CSC, so you won't need to split the image.

Best regards,

Rogerio 

Hi Rogerio-san,

Thank you for the clarification.
We have tried splitting the stream into four parts and doing the conversion.
But because of the limit on the transfer rate from the IC to system memory (up to 100 Mpixels/sec), the frame rate of the converted stream was around 20 fps.
When we do the same conversion on a 1280 x 720 stream, we get 30 fps.
Although the reference manual (IMX6DQRM, page 2736) says that the output rate from the IC to system memory is "up to 100 Mpixel/sec (e.g. 1920 x 1080 at 30 fps)", is it actually difficult to output the FHD stream?

I'd appreciate your advice.

Regards,

Tetsuo.

Hi tetsuomaeda,

Could you please tell me more about your application? Do you need this conversion in order to display the video, or to save the video to local storage or send it over Ethernet?

Best regards,

Rogerio

Hi Rogerio-san,

Our application is recording a video stream from a CMOS camera to an SD card.

Flow is as follows.

Camera => CSI => DRAM => (IC => DRAM) x 4 => VPU

At first, we were thinking of the following flow.

Camera => CSI => IC => DRAM => VPU.

But because of the limitation (up to 1024 x 1024) on the output from the IC, we have to split the stream so that each image is smaller than 1024 x 1024.

As a result of the stream splitting, it seems difficult to output a Full HD stream at 30 fps from the IC to DRAM.

If we can avoid the splitting, we might be able to support FHD 30fps recording.

Thanks in advance for your advice.

Regards,

Tetsuo Maeda

Hi tetsuomaeda,

Which i.MX6 are you using (e.g. i.MX6 Quad, Dual, DualLite)?

For i.MX6 versions that have two IPUs, you can use both IPUs at the same time, so the bandwidth will double. For those i.MX6 parts with a PxP, you can use it to perform CSC; it can reach 1080p at 50 fps.

Best regards,

Rogerio

Hi Rogerio-san,

Very sorry for my late response.
Actually, we are using the Quad, so two IPUs would be available!
We had not thought about that.
Is there sample code that uses the two ICs in parallel?

Regards,

Tetsuo Maeda

Hi tetsuomaeda,

You can call the mxc_ipu device (using an ioctl) twice, from Linux threads. The device driver will automatically choose the second IPU when the first one is busy.

See the file csc_exe3.zip attached to this document for a source file using threads.
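In case the attachment is not at hand, the outline below is only a rough sketch of the same idea. It assumes the /dev/mxc_ipu userspace interface of the i.MX6 BSP (struct ipu_task, the IPU_QUEUE_TASK ioctl and the IPU_PIX_FMT_* definitions from linux/ipu.h); buffer allocation, cropping and error handling are omitted or simplified, and the physical addresses are placeholders that a real application would obtain (for example) with the IPU_ALLOC ioctl:

        #include <fcntl.h>
        #include <pthread.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        #include <linux/ipu.h>  /* struct ipu_task, IPU_QUEUE_TASK (i.MX6 BSP) */

        /* Hypothetical geometry: one half of a 1920x1080 frame per task. */
        #define HALF_W  960
        #define HALF_H  1080

        struct csc_job {
                int fd;                 /* file descriptor of /dev/mxc_ipu */
                unsigned long in_paddr; /* physical address of input half-frame */
                unsigned long out_paddr;/* physical address of output half-frame */
        };

        static void *csc_thread(void *arg)
        {
                struct csc_job *job = arg;
                struct ipu_task task;

                memset(&task, 0, sizeof(task));
                task.input.width = HALF_W;
                task.input.height = HALF_H;
                task.input.format = IPU_PIX_FMT_UYVY;
                task.input.paddr = job->in_paddr;
                task.output.width = HALF_W;
                task.output.height = HALF_H;
                task.output.format = IPU_PIX_FMT_RGB24;
                task.output.paddr = job->out_paddr;

                /* With two threads queueing tasks at the same time, the driver
                 * can schedule the second task on the second IPU. */
                if (ioctl(job->fd, IPU_QUEUE_TASK, &task) < 0)
                        perror("IPU_QUEUE_TASK");

                return NULL;
        }

        int main(void)
        {
                struct csc_job jobs[2];
                pthread_t threads[2];
                int fd = open("/dev/mxc_ipu", O_RDWR);
                int i;

                if (fd < 0) {
                        perror("open /dev/mxc_ipu");
                        return 1;
                }

                for (i = 0; i < 2; i++) {
                        jobs[i].fd = fd;
                        /* Placeholders: fill with physically contiguous buffers,
                         * e.g. allocated through the IPU_ALLOC ioctl. */
                        jobs[i].in_paddr = 0;
                        jobs[i].out_paddr = 0;
                        pthread_create(&threads[i], NULL, csc_thread, &jobs[i]);
                }

                for (i = 0; i < 2; i++)
                        pthread_join(threads[i], NULL);

                close(fd);
                return 0;
        }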

Best regards,

Rogerio 

Hello,

We plan to use the i.MX6 (and the i.MX8 too) with 24-bit LVDS, but we need the alpha, so it seems we need an RGBA8888 to RGBA6666 conversion (we don't mind the reduced color quality, we need the alpha).

I saw that in the past others had the same need, but they didn't seem to achieve it:

RGBA/ARGB LVDS output from i.MX6 

Is it possible to do the RGBA8888 to RGBA6666 conversion using the method in this article?

Should we use the IC or the DP conversion? Is this the best way to achieve it, or can we do it through a framebuffer ioctl?

Can you please give some pointers on how we should do the conversion?

Thank you very much,

ran

Hi,

In the latest IPU driver, the display processor can only accept the following formats:

int _ipu_pixfmt_to_map(uint32_t fmt)
{
        switch (fmt) {
        case IPU_PIX_FMT_GENERIC:
        case IPU_PIX_FMT_RGB24:
                return 0;
        case IPU_PIX_FMT_RGB666:
                return 1;
        case IPU_PIX_FMT_YUV444:
                return 2;
        case IPU_PIX_FMT_RGB565:
                return 3;
        case IPU_PIX_FMT_LVDS666:
                return 4;
        case IPU_PIX_FMT_VYUY:
                return 6;
        case IPU_PIX_FMT_UYVY:
                return 8;
        case IPU_PIX_FMT_YUYV:
                return 10;
        case IPU_PIX_FMT_YVYU:
                return 12;
        case IPU_PIX_FMT_GBR24:
        case IPU_PIX_FMT_VYU444:
                return 13;
        case IPU_PIX_FMT_BGR24:
                return 14;
        }

        return -1;
}

See the code above in drivers/mxc/ipu3/ipu_disp.c in the linux-imx (i.MX Linux kernel) tree.

To make the LVDS also output the alpha value, I would recommend using the DP (Display Processor) instead of the IC (Image Converter).

You can try setting the frame buffer to RGBA8888 (RGB32) and adding a new format, RGBA6666, by creating new display microcode.

For an example of how to create new display microcode, you can check the following document:

Patch to Support BT656 and BT1120 Output For i.MX6 BSP 

Best regards,

Rogerio

Hello Rogerio,

Thank you very much for the information!!

We plan to use a Qt application, with 24-bit LVDS output.

I am not sure how to set the framebuffer with Qt. I said RGBA8888 before, but the processing could also be reduced to a lower resolution if that makes things easier for us (would it be better to use a format other than RGBA8888 with Qt to make the conversion to RGBA6666 easier?).

We don't use any input on the i.MX6 and just do the graphics on the i.MX:

Qt application graphics on i.MX6 -> LVDS output (with alpha) -> blender (the blender is not in the i.MX)

So we actually do the blending with the video outside the i.MX6.

I'll study your answer in more depth now.

Thank you very much,

ran

"is it better that we use other format with qt than RGBA8888 to make the transfer to RGBA6666 easier ?"

I think it's better to keep the framebuffer in RGBA8888 (also known as RGB32) because this is an already known, existing format. The display processor microcode will take these 32 bits from the framebuffer and "re-format" them to RGBA6666. With the DP microcode you can create any waveform you need.

Just remember to set the framebuffer to RGB32 with local alpha blending. Local alpha uses one alpha value for each pixel, while global alpha uses a single value for the entire screen.
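For reference, here is a rough sketch of what enabling 32 bpp plus per-pixel (local) alpha can look like from userspace. It assumes the MXCFB_SET_LOC_ALPHA ioctl and struct mxcfb_loc_alpha from the i.MX BSP header linux/mxcfb.h, and uses /dev/fb0 only as a placeholder node; check the header and framebuffer layout of your BSP, since the fields and the fb node may differ:

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <unistd.h>
        #include <linux/fb.h>
        #include <linux/mxcfb.h> /* struct mxcfb_loc_alpha, MXCFB_SET_LOC_ALPHA (i.MX BSP) */

        int main(void)
        {
                struct fb_var_screeninfo var;
                struct mxcfb_loc_alpha la;
                int fd = open("/dev/fb0", O_RDWR);  /* placeholder fb node */

                if (fd < 0) {
                        perror("open /dev/fb0");
                        return 1;
                }

                /* Switch the framebuffer to 32 bpp (RGB32 / RGBA8888). */
                if (ioctl(fd, FBIOGET_VSCREENINFO, &var) == 0) {
                        var.bits_per_pixel = 32;
                        if (ioctl(fd, FBIOPUT_VSCREENINFO, &var) < 0)
                                perror("FBIOPUT_VSCREENINFO");
                }

                /* Enable local (per-pixel) alpha, taking the alpha value from
                 * the pixel data itself rather than a separate alpha buffer. */
                memset(&la, 0, sizeof(la));
                la.enable = 1;
                la.alpha_in_pixel = 1;
                if (ioctl(fd, MXCFB_SET_LOC_ALPHA, &la) < 0)
                        perror("MXCFB_SET_LOC_ALPHA");

                close(fd);
                return 0;
        }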

Best regards,

Rogerio

 

Hello Rogerio,

If I may please ask the following:

1. In the above list (from the code snippet) of accepted formats, I don't find any RGB32 formats (it seems that all formats for the DP are limited to 24 bits).

But shouldn't the DP accept and reformat to a 32-bit format containing alpha (such formats are used with HDMI devices, for example)?

2. Kernel parameters usually appear as follows:

mxcfb0:dev=ldb,LDB-XGA,if=RGB666

Does that mean that for a new RGBA6666 microcode it should be something like the following (after adding IPU_PIX_FMT_RGBA6666)?

mxcfb0:dev=ldb,LDB-XGA,if=RGBA6666

I am not sure why, in the patch for BT656, there is dev=BT656. Is BT656 a device or a format?

Thank you,

ran

Hello Rogerio,

I took a look at the patch for BT656, and it seems to be a very complex patch.

Currently it seems that we would be using the i.MX6 and i.MX8 in our next product without this feature available, and adding this format ourselves puts a lot of risk on the project.

Is there a chance that NXP will add patch support for ARGB6666? (It seems that this feature is very much needed by several people, so I think it is quite important.)

Thank you very much,

ran

Hi,

I have never seen a display that uses alpha information; all blending processing is done before the image is sent to the display.

The case from the thread RGBA/ARGB LVDS output from i.MX6 uses an FPGA in place of a display; this is why they wanted the alpha to be sent with the image.

Do you mind letting me know your use case? Maybe we can find another solution.

Will the i.MX directly drive the LCD that needs alpha? Could you let me know the LCD model?

About NXP adding support: we have a team that customers can contract to develop software.

NXP Professional Services|NXP 

Best regards,

Rogerio

Right, I understand now.

Our use case is:

dsp (RGB) ----------+
                    |
                    FPGA (blending) --->>------- DISPLAY
                    |
imx (LVDS) ---------+

We use an FPGA which receives the LVDS output from the i.MX as an input, and a DSP is also connected to the FPGA. The FPGA does the blending and drives the display.

On the i.MX we plan to use a Qt application.

Any suggestions are appreciated,

Thank you very much for the help,

ran

I would suggest (if possible on your project) connecting the DSP to the i.MX and letting the i.MX do the image blending.

dsp (RGB) -----> i.MX (blends the dsp image + Qt graphics) ------> Display

If it is not possible to use the configuration above, I would recommend programming the display microcode on the i.MX as I mentioned earlier.

Best regards,

Rogerio

Thank you very much Rogerio!

I understand. We are familiar with the use case of "doing it inside the i.MX", but our system engineer is afraid that the latency will grow, and we can't allow that.
I also sent a request for this SW development to the team you mentioned before. Still, I think this kind of feature should be a public feature, because there is such a need from several customers, as we saw in other posts in the community.

Thanks again,

Ran
