MX31 IPU Graphics/Video Combining problem


IronMike
Contributor I
Hi All,

For our new product it is necessary to blend an image on top of a video frame.

I'm using Windows CE 5.0 and its post-processing driver (pp.dll) to accomplish this.

The video frame input is stored in system memory (YUV 4:4:4).
The overlay image is also stored in system memory (RGB 24bpp, alpha value 128, no color keying).
The output image should also be stored in system memory (RGB 24bpp).

So my use case is:    MEM -> PP -> MEM

I've enabled color space conversion step 1 (YUV -> RGB); I assume this converts the video input image to the same RGB format as the overlay image.
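For reference, if the Y2R_A1 equation corresponds to the usual BT.601 limited-range coefficients (an assumption on my part; the BSP header would confirm the exact values), the expected per-pixel conversion is roughly:

// Reference BT.601 limited-range YCbCr -> RGB conversion (integer approximation).
// Whether ppCSCY2R_A1 uses exactly these coefficients is an assumption.
static unsigned char Clamp8(int v)
{
    return (unsigned char)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

static void YuvToRgb(unsigned char y, unsigned char cb, unsigned char cr,
                     unsigned char *r, unsigned char *g, unsigned char *b)
{
    int c = (int)y  - 16;
    int d = (int)cb - 128;
    int e = (int)cr - 128;

    *r = Clamp8((298 * c           + 409 * e + 128) >> 8);
    *g = Clamp8((298 * c - 100 * d - 208 * e + 128) >> 8);
    *b = Clamp8((298 * c + 516 * d           + 128) >> 8);
}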

For some reason I get a distorted output image as can be seen in out.bmp. I don't have a clue what is wrong.

If I make the overlay completely opaque, the result is identical to the overlay, which is correct.
So somehow it is the input image that gets messed up.

To troubleshoot, I've created a chessboard pattern as the input video. The resulting combined image (out.bmp) and the overlay image (overlay.bmp) can be seen at the following links:

http://users.pandora.be/dreamspace/out.bmp
http://users.pandora.be/dreamspace/overlay.bmp
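For completeness, a minimal sketch of how such a chessboard frame can be generated for the planar YUV 4:4:4 layout implied by the buffer offsets further below (a full-size Y plane followed by full-size U and V planes); the cell size and luma values here are illustrative:

#include <string.h>

// Sketch: fill a planar YUV 4:4:4 buffer (Y, then U, then V, each width*height
// bytes) with a chessboard pattern in luma and neutral chroma.
void FillChessboardYUV444(unsigned char *pBuf, int width, int height)
{
    unsigned char *pY = pBuf;                        // Y plane
    unsigned char *pU = pBuf + width * height;       // U plane
    unsigned char *pV = pBuf + 2 * width * height;   // V plane
    const int cell = 32;                             // chessboard cell size in pixels

    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            int light = ((x / cell) + (y / cell)) & 1;
            pY[y * width + x] = (unsigned char)(light ? 235 : 16);
        }
    }

    memset(pU, 128, width * height);                 // neutral chroma
    memset(pV, 128, width * height);
}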


// This is my code to initialize the combining features of the post processor
----------------------------------------------------------------------------------------------------------------------

    // Open the post-processor driver
    m_hPPDll = CreateFile(TEXT("POP1:"),        // "special" file name
                            GENERIC_READ|GENERIC_WRITE,                    // desired access
                            FILE_SHARE_READ|FILE_SHARE_WRITE,   // sharing mode
                            NULL,                               // security attributes (=NULL)
                            OPEN_EXISTING,                      // creation disposition
                            FILE_ATTRIBUTE_NORMAL,              // flags and attributes
                            NULL);                              // template file (ignored)

    if(m_hPPDll == INVALID_HANDLE_VALUE)   
    {
        return false;
    }

    ppConfigData configData;

    memset(&configData,0,sizeof(configData));

    // enable overlay
    configData.bCombining = true;

    // configure overlay image coming from GPU
    configData.alpha = 128;
    configData.colorKey = 0xaaaaaaaa; // unused color value
    configData.CSCEquation = ppCSCY2R_A1;
    configData.CSCEquation2 = ppCSCNoOp;

    // Input image
    configData.inputSize.width    = m_Width;
    configData.inputSize.height = m_Height;
    configData.inputStride        = configData.inputSize.width;
    configData.inputFormat        = ppFormat_YUV444;
    configData.inputDataWidth    = ppDataWidth_8BPP;

    // Overlay
    configData.inputCombDataWidth    = ppDataWidth_24BPP;
    configData.inputCombFormat        = ppFormat_RGB;
    configData.inputCombSize.width    = m_Width;
    configData.inputCombSize.height = m_Height;
    configData.inputCombStride = configData.inputCombSize.width*3;
    configData.inputCombRGBPixelFormat.component0_offset = 0;
    configData.inputCombRGBPixelFormat.component1_offset = 8;
    configData.inputCombRGBPixelFormat.component2_offset = 16;
    configData.inputCombRGBPixelFormat.component0_width = 8;
    configData.inputCombRGBPixelFormat.component1_width = 8;
    configData.inputCombRGBPixelFormat.component2_width = 8;

    // Output image format
    configData.outputFormat            = ppFormat_RGB;
    configData.outputSize.width        = m_Width;
    configData.outputSize.height    = m_Height;
    configData.outputStride            = configData.outputSize.width*3;
    configData.outputDataWidth        = ppDataWidth_24BPP;

    configData.outputRGBPixelFormat.component0_offset = 0;
    configData.outputRGBPixelFormat.component1_offset = 8;
    configData.outputRGBPixelFormat.component2_offset = 16;
    configData.outputRGBPixelFormat.component0_width = 8;
    configData.outputRGBPixelFormat.component1_width = 8;
    configData.outputRGBPixelFormat.component2_width = 8;

    // issue the IOCTL to configure the PP
    if (!DeviceIoControl(m_hPPDll,     // file handle to the driver
        PP_IOCTL_CONFIGURE,       // I/O control code
        &configData,              // in buffer
        sizeof(ppConfigData),    // in buffer size
        NULL,                     // out buffer
        0,                        // out buffer size
        0,                        // number of bytes returned
        NULL))                    // ignored (=NULL)
    {
        return false;
    }

    return true;

----------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------
This is the code to tell the post processor to process a new frame
----------------------------------------------------------------------------------------------------------------------------------

bool  AddBuffers(TRAF_UINT8 * pYUVImage, TRAF_UINT8 * pOverlay, TRAF_UINT8 * pOut)
{
    ppBuffers pBuffers;

    memset(&pBuffers,0,sizeof(ppBuffers));

    pBuffers.inputBuf = (UINT32*)pYUVImage;
    pBuffers.inputUBufOffset = m_Width * m_Height;
    pBuffers.inputVBufOffset = pBuffers.inputUBufOffset + m_Width * m_Height;
    pBuffers.inBufLen = m_Width*m_Height*3;

    pBuffers.inputCombBuf = (UINT32*)pOverlay;
    pBuffers.inCombBufLen = m_Width*m_Height*3;
   
    pBuffers.outputBuf = (UINT32*)pOut;
    pBuffers.outBufLen = m_Width*m_Height*3;

    if (!DeviceIoControl(m_hPPDll,     // file handle to the driver
        PP_IOCTL_ENQUEUE_BUFFERS,  // I/O control code
        &pBuffers,                     // in buffer
        sizeof(ppBuffers),         // in buffer size
        NULL,                      // out buffer
        0,                         // out buffer size
        0,                         // number of bytes returned
        NULL))                     // ignored (=NULL)
    {
        return false;
    }

    return true;
}
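For completeness, a minimal usage sketch showing how the two pieces above fit together. InitPostProcessor() is a made-up name standing in for the configuration code above, the function is assumed to live in the same class (so m_Width/m_Height are available), and the plain new[] allocations are only for illustration; in practice the PP most likely needs physically contiguous buffers that stay valid until the driver has finished the frame.

// Usage sketch only: InitPostProcessor() stands for the configuration code above.
bool ProcessOneFrame()
{
    if (!InitPostProcessor())
        return false;

    TRAF_UINT8 *pYUVImage = new TRAF_UINT8[m_Width * m_Height * 3];  // planar YUV 4:4:4 input
    TRAF_UINT8 *pOverlay  = new TRAF_UINT8[m_Width * m_Height * 3];  // RGB 24bpp overlay
    TRAF_UINT8 *pOut      = new TRAF_UINT8[m_Width * m_Height * 3];  // RGB 24bpp output

    FillChessboardYUV444(pYUVImage, m_Width, m_Height);  // test-pattern sketch from above
    // ... fill pOverlay with the overlay bitmap ...

    bool ok = AddBuffers(pYUVImage, pOverlay, pOut);

    // In the real code the buffers stay alive until the PP has finished the frame;
    // freeing them immediately here is only to keep the sketch short.
    delete[] pYUVImage;
    delete[] pOverlay;
    delete[] pOut;

    return ok;
}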


I hope someone has experience with this stuff.

Regards,

Mike
1 Solution
IronMike
Contributor I
Okay,

I figured out what is causing the conversion problems...

The synchronous display driver "ipu_sdc.dll" used by Windows CE is built on top of the post-processor.

So the post-processor is already active once that driver is loaded. I assume this conflicts with what my application is trying to do with the post-processor.


Fortunately, I found an alternative:

The viewfinder channel of the pre-processor is also able to apply an overlay to an image.
The camera image is fed to both the encoding and viewfinder channels of the pre-processor.

The encoding channel provides a clean YUV420 video stream in memory.
The viewfinder channel also provides a YUV420 video stream in memory, but combined with the overlay (no display involved!).
This second stream is then sent to the on-board Hantro MPEG encoder.

1) To accomplish this, I had to adjust the camera.dll driver so that the combining channel can be activated and configured.
2) To use both streams simultaneously I had to make sure I could instantiate two camera pins ("PIN1:" and "PIN2:"). The camera driver shipped with CE cannot do this, so I had to add an extra PIN registry entry with index "2" and adjust the cameradriver.cpp file so that the second PIN instance driver is loaded at startup (via the ActivateDevice function); see the sketch below.
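A minimal sketch of the startup change for point 2; the registry key name below is hypothetical, the real key is whatever name the extra PIN2 entry was given in the BSP registry:

#include <windows.h>

// Sketch: load the second camera PIN stream driver at startup via the
// Windows CE Device Manager. The registry path is a placeholder.
static HANDLE g_hPin2Device = NULL;

bool LoadSecondPinInstance(void)
{
    g_hPin2Device = ActivateDevice(TEXT("Drivers\\Capture\\CameraPin2"), 0);
    return (g_hPin2Device != NULL && g_hPin2Device != INVALID_HANDLE_VALUE);
}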

Cool!!!

Unfortunately, I have one little problem left:

http://users.pandora.be/dreamspace/MPEG_output.bmp

After the two color conversion steps, the resulting image contains alternating dark columns.

My formats: overlay input RGB16, main input image YUV420, output YUV420.

My equations:
CSC1 Equation = Y2RA1
CSC2 Equation = R2YA1

I assumed CSC1 is for converting the input image to the same format as the overlay, so in my case RGB.
I assumed CSC2 is for converting the combined image to the output format, in my case from the internal RGB to YUV420.
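Expressed with the post-processor config fields from my first post (the pre-processor configuration inside camera.dll may use different names, and ppFormat_YUV420 / ppCSCR2Y_A1 are assumed spellings), that reading of the two steps would look roughly like:

// Illustration only: how I read the two CSC steps. Field and enum names are
// borrowed from the post-processor's ppConfigData; assumed names are noted.
void ConfigureCscChain(ppConfigData &configData)
{
    configData.inputFormat     = ppFormat_YUV420;   // main input: YUV420 camera frame (assumed enum name)
    configData.inputCombFormat = ppFormat_RGB;      // combining input: RGB16 overlay
    configData.outputFormat    = ppFormat_YUV420;   // output: YUV420 for the MPEG encoder

    // CSC1: convert the main input into the internal RGB domain so it can be
    // blended with the RGB overlay.
    configData.CSCEquation  = ppCSCY2R_A1;

    // CSC2: convert the blended RGB result back to YUV for the output (assumed enum name).
    configData.CSCEquation2 = ppCSCR2Y_A1;
}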

Are these assumptions correct?

Mike
Replies
jerkins750i
Contributor I
Does anybody know the common CSC values used for YUV420 to RGB24 conversion?

thanks!!