iMX6 IPU Color Space Conversion with Alpha


phopkins
Contributor III

I'm having an issue converting color space: NV12 to ABGR(32).  I'm taking an NV12 frame and converting it to ABGR to render into a bitmap for Android.  However, I cannot seem to set the Alpha level for the conversion.  When the ABGR buffer is populated, all the alpha bytes are 0.  I want them to be 0xFF or 255, so I can see my bitmap!  Any idea on how to set the Alpha level when converting from NV12 to ABGR?  If I manually set the alpha, I can see my image, but it's an expensive process that takes too long for motion video and must be done in hardware.

Here's some code that I'm using to convert (NOTE: these are snippets of working code for reference, not for critiquing!):

// Input image size and format
task.input.width  = width;
task.input.height = height;
task.input.format = IPU_PIX_FMT_NV12;

// Output image size and format
task.output.width  = width;
task.output.height = height;
task.output.format = IPU_PIX_FMT_ABGR32;

// Perform color space conversion
ioctl(fd_ipu, IPU_QUEUE_TASK, &task);

// Manual alpha set -- expensive process!!
int* src = outbuf;
int* dst = out;
int len = osize / 4;
while (len--)
{
    *dst++ = *src++ | 0xFF000000;
}
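As a side note on the manual loop: if the IPU output buffer can be modified in place, the extra copy can be dropped and only the alpha byte of each pixel touched. A minimal sketch, assuming a little-endian core (as on the i.MX6's Cortex-A9) where byte 3 of each 32-bit pixel is the one selected by the 0xFF000000 mask above; the helper name is mine:

```c
#include <stdint.h>
#include <stddef.h>

/* In-place alternative to the copy loop above (helper name is mine).
 * Instead of copying the whole frame while OR-ing in the alpha mask,
 * this writes only the alpha byte of each 32-bit pixel.  Byte index 3
 * corresponds to the 0xFF000000 mask on a little-endian core. */
static void force_opaque_inplace(uint32_t *pixels, size_t npixels)
{
    uint8_t *bytes = (uint8_t *)pixels;
    for (size_t i = 0; i < npixels; i++)
        bytes[4 * i + 3] = 0xFF;   /* alpha byte only */
}
```

This halves the memory traffic relative to the read-OR-write copy, but it is still an O(n) pass on the CPU per frame, so it only mitigates the cost rather than removing it; a hardware path is still the real fix.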

1 Solution
rogerio_silva
NXP Employee

Hi Philip,

I confirmed it's not possible to set a value on the A channel using the IPU. It will always change the RGB values instead of changing only the A channel.

frantisekhacker
Contributor I

Hello Philip,

if it is not crucial to use the IPU, use the VPU.

The VPU can fill the A channel properly (in comparable time per frame to the IPU).

I use the G2D API for that work (but G2D can only do CSC into RGB(A) formats, not into YUV!).

So for other CSC into YUV formats I use the IPU, as you can see:

Example (both CSC through the IPU and through the VPU; the path taken depends on a bool value):

#include <linux/ipu.h>
#include "g2d.h"

... // open functions for "/dev/mxc_ipu" also for g2d handle (and vpu_Init) are omitted

    struct ipu_task                 ipu_resizer_task;
    struct g2d_surface              vpu_resizer_task_input;
    struct g2d_surface              vpu_resizer_task_output;

    memset( &ipu_resizer_task,        0, sizeof(ipu_resizer_task) );
    memset( &vpu_resizer_task_input,  0, sizeof(vpu_resizer_task_input) );
    memset( &vpu_resizer_task_output, 0, sizeof(vpu_resizer_task_output) );

    // config the G2D
    g2d_enable( p_vpu_g2d_handle, G2D_BLEND );
    g2d_enable( p_vpu_g2d_handle, G2D_GLOBAL_ALPHA );     // G2D_GLOBAL_ALPHA valid only if G2D_BLEND enabled

    // source credentials
    ipu_resizer_task.input.paddr                = //source frame phy addr;
    ipu_resizer_task.input.deinterlace.enable   = 0;
    ipu_resizer_task.input.deinterlace.motion   = 0;

    vpu_resizer_task_input.planes[0]            = // source frame phy addr;
    vpu_resizer_task_input.blendfunc            = G2D_SRC_ALPHA;
    vpu_resizer_task_input.global_alpha         = 0xff;                         // use this value as the default ALPHA; this is the reason we use the VPU instead of the IPU
    vpu_resizer_task_input.clrcolor             = 0x00;                         // special blending dimension, not used
    vpu_resizer_task_input.rot                  = G2D_ROTATION_0;

// width, height stride settings on source and dest frame...
    ipu_resizer_task.input.format              = IPU_PIX_FMT_....;
    ipu_resizer_task.input.width               = p_src_video_frame->width;
    ipu_resizer_task.input.height              = p_src_video_frame->height;
    ipu_resizer_task.output.format             = IPU_PIX_FMT_RGBA32;
    ipu_resizer_task.output.width              = p_dst_video_frame->width;
    ipu_resizer_task.output.height             = p_dst_video_frame->height;

    vpu_resizer_task_input.format              = G2D_....;
    vpu_resizer_task_input.width               = p_src_video_frame->width;
    vpu_resizer_task_input.height              = p_src_video_frame->height;
    vpu_resizer_task_input.stride              = p_src_video_frame->width;

    vpu_resizer_task_output.format             = G2D_RGBA8888;
    vpu_resizer_task_output.width              = p_dst_video_frame->width;
    vpu_resizer_task_output.height             = p_dst_video_frame->height;
    vpu_resizer_task_output.stride             = p_dst_video_frame->width;
    vpu_resizer_task_output.planes[0]          = // dest frame phy addr;

    if ( say_if_vpu_is_needed )
    {
        // G2D (VPU) can handle only RGB(A) destination formats (not YUV)
        // G2D (VPU) can fill the ALPHA channel in conversions from YUV to RGBA formats
        // (but unfortunately it also impacts the R,G,B channels by the ALPHA value)
        g2d_blit( p_vpu_g2d_handle, &vpu_resizer_task_input, &vpu_resizer_task_output );
        g2d_finish( p_vpu_g2d_handle );
    }
    else
    {
        // IPU cannot fill the ALPHA channel in conversions into RGBA formats (leaves it as 0x00)
        retcode = ioctl( ipu_fd, IPU_QUEUE_TASK, &ipu_resizer_task );
    }

Best regards,
F.Hacker
rogerio_silva
NXP Employee

Hi Philip,

You can set the "task.overlay.alpha.gvalue = i".

Here is a test code you can take as example:

https://github.com/rogeriorps/ipu-examples/blob/master/mx6/alphablending/example1/alpha_ex1.c

It fills the screen with solid colors and changes the global alpha every 0.1 seconds.
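For reference, the fields being driven look roughly like this. This is only an illustrative sketch: the struct below is a trimmed stand-in for the real struct ipu_task in <linux/ipu.h>, reproduced solely so the fragment compiles on its own, and the alpha mode field accompanying gvalue is my assumption about the setup; on target, include <linux/ipu.h> and use the real type and constants.

```c
#include <stdint.h>

/* Trimmed stand-in for the overlay-related fields of struct ipu_task
 * from <linux/ipu.h>; reproduced here only so this fragment compiles
 * stand-alone.  Do NOT use this struct on target. */
struct ipu_alpha_sketch   { uint8_t mode; uint8_t gvalue; };
struct ipu_overlay_sketch { struct ipu_alpha_sketch alpha; };
struct ipu_task_sketch    { struct ipu_overlay_sketch overlay; };

/* Global alpha as described above: mode selects global (as opposed to
 * per-pixel) alpha, and gvalue is the 0-255 level applied to the whole
 * overlay during blending. */
static void set_global_alpha(struct ipu_task_sketch *t, uint8_t level)
{
    t->overlay.alpha.mode   = 0;      /* global-alpha mode (assumed encoding) */
    t->overlay.alpha.gvalue = level;  /* e.g. 0xFF for fully opaque */
}
```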

phopkins
Contributor III

I've read into and played with the overlay functionality in the IPU. It seems to only involve overlaying an image on top of the input buffer. I don't need to overlay anything; I just need the output alpha to be viewable. Perhaps there is more to the overlay functionality than I currently understand. The linked example does not deal with an ABGR image buffer.

rogerio_silva
NXP Employee

So what you want to do is "disable" the alpha for all pixels? Doing this, all pixels in your image will be 100% on, right?

phopkins
Contributor III

Yes, that's correct. I want my RGB image to have 4 bytes per pixel, with the alpha byte "disabled", i.e. set to 255.

rogerio_silva
NXP Employee

Could you send me an ABGR sample picture? I'd like to make a test with it.

phopkins
Contributor III

You can use the example that you linked me. If you dig around, you can find the CSC example from the same example set. There is an NV12 raw picture. Try to convert it to RGBA; you will see all the alpha bytes are zero. You can try any 32-bit RGB color space conversion.

rogerio_silva
NXP Employee

Yes... Let me see if I can find a solution...

rogerio_silva
NXP Employee

Hi Philip,

I ran some tests and wasn't able to change only the alpha channel of ABGR. Whenever I changed the alpha using the IPU, the result was always the alpha being applied to RGB, leaving A = 0 for all pixels.

I'll ask internally if there is a way to do it.

rogerio_silva
NXP Employee

Hi Philip,

I confirmed it's not possible to set a value on the A channel using the IPU. It will always change the RGB values instead of changing only the A channel.

phopkins
Contributor III

Alright. We have a workaround for now. I know we can play an AVI that is MJPEG and it decodes using about 1% CPU, so that tells me there may be another method to decode and view JPEGs. Thank you for your efforts and research!

phopkins
Contributor III

I can send you one, but I think we may be missing the big picture here.  Let me describe what is going on:

Summary:

The objective here is to receive a JPG stream (MJPEG) over the network, decode it using the VPU, and display it on a view in Android. Effectively, the user will see a real-time video stream within our Android application. We must use a view for Android, not a direct FB write. We also must use an MJPEG stream.

Steps:

1. We receive a JPG over the network.  This jpg, if saved to file, will open in any picture viewer.

2. We use the VPU to decode this JPEG. The color space goes from JPG -> NV12.

3. Since the output buffer of the decoder is NV12, we cannot render it directly in an Android view.

4. The most efficient way to render a frame in Android within Java is: onDraw + canvas.drawBitmap (int[], offset, stride, ....).  The format of the int[] must be ARGB, 4 bytes per pixel with an alpha.

5. The way to convert NV12 to ARGB is to use the CSC on the IPU.

6. I set up the input/output to go from NV12 -> RGBA.

7. The CSC task is run.

8. I take the output of the CSC and pass it back to Java

9. Java takes the int[] and renders it using the functions shown in Step 4.

The issue here is that the output shown in Step 8 has all of its alpha values set to 0. So when Step 9 renders the frame, it is invisible, since the alpha is 0. The NV12 color space does not contain alpha values, but ARGB does. So during the conversion there must be a default alpha value that is used, since there was no alpha to begin with. I need to figure out where this "default" value lives; at least, I hope there is a default alpha value we can set up in a register somewhere.
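To make the expected output of Steps 5-8 concrete, here is a hedged software sketch of what a correct per-pixel conversion would produce (function names, integer scaling, and the full-range BT.601 coefficients are my assumptions, not what the IPU necessarily implements): one NV12 pixel converted to the packed ARGB int that canvas.drawBitmap(int[], ...) expects, with the alpha forced to 0xFF. Far too slow to replace the IPU for full frames, but it shows exactly what the "default alpha" should come out as.

```c
#include <stdint.h>

/* Clamp an intermediate value into the 0-255 byte range. */
static uint8_t clamp_u8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v); }

/* Convert one NV12 (Y plus interleaved U/V) pixel to a packed ARGB int
 * using full-range BT.601 coefficients scaled by 1000 for integer math,
 * forcing the alpha byte to 0xFF (fully opaque). */
static uint32_t nv12_to_argb(uint8_t y, uint8_t u, uint8_t v)
{
    int c = y, d = u - 128, e = v - 128;
    uint8_t r = clamp_u8(c + (1402 * e) / 1000);
    uint8_t g = clamp_u8(c - (344 * d + 714 * e) / 1000);
    uint8_t b = clamp_u8(c + (1772 * d) / 1000);
    return 0xFF000000u | ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
}
```

For example, nv12_to_argb(128, 128, 128) yields 0xFF808080: an opaque mid-gray, where the IPU path currently produces 0x00808080 and therefore an invisible pixel.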
