Deinterlacing fields, resizing AND applying overlay - using the VDIC, IC etc...

fcs
Contributor IV

I am looking at sections 45.3.1.5.2 and 45.3.1.5.4 of iMX53RM.pdf, relating to the VDIC and IC.

Presently I have a demo app running which previews live PAL video at full screen, with an overlay, and h264 encodes this to a USB stick. It is exactly what we want except for one thing: the two PAL fields appear one above the other, each squashed to half its proper height, both on the screen and in the encoded video. So we need to deinterlace into a progressive stream.

The demo app that came with our capture card has an example of full screen live video preview but with the two fields deinterlaced.

I am a little concerned that the hardware may not be capable of what we want: Can we set up the VDIC to do the deinterlacing, and the IC to do resizing and combining? Ideally we would also like the IC to be able to write the result to both the display and an area in RAM (is that possible, and whereabouts in RAM would that be?)

However, at this point I am not sure what ioctl calls I should be making to accomplish the above; I don't have much idea how the ioctl calls relate to these hardware components. I presume it is mostly done by activating DMA channels using drivers/dma/ipu/ipu_idmac.c

At the moment my demo uses the gstreamer plugins with a small executable I run to set up the alpha colour key settings to combine the graphics and live video.

I use the following command line to get gstreamer running:

gst-launch-0.10 -e mfw_v4lsrc sensor-width=720 sensor-height=576 capture-width=720 capture-height=576 preview=true preview-width=1024 preview-height=768 fps-n=25 bg=true ! mfw_vpuencoder ! avimux name=mux ! filesink location=/mnt/usb1/PALfps25g.avi

Note “bg=true” is required for the overlay to work.

Then I run the executable to effect the overlay; essentially it does this:

    struct mxcfb_color_key alpha;
    int parsed;

    /* Key colour can be given as the third command line argument; default to 128. */
    if ( argc >= 4 && sscanf( argv[ 3 ], "%d", &parsed ) )
        alpha.color_key = parsed;
    else
        alpha.color_key = 128;

    printf( "Setting key colour to %d\n", alpha.color_key );

    alpha.enable = 1;
    if ( ioctl( fd_fb, MXCFB_SET_CLR_KEY, &alpha ) < 0 )
        fprintf( stderr, "Set alpha color key failed\n" );

    close( fd_fb );

where fd_fb is the file descriptor for /dev/fb0
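(For reference, the i.MX framebuffer driver also has a global alpha ioctl alongside the colour key one; as far as I remember it looks roughly like the sketch below, though my executable only sets the colour key.)

    struct mxcfb_gbl_alpha gbl;    /* from linux/mxcfb.h */

    gbl.enable = 1;                /* 1 = use global alpha, 0 = per-pixel (local) alpha */
    gbl.alpha  = 255;              /* 0..255 blend value applied to the whole framebuffer */

    if ( ioctl( fd_fb, MXCFB_SET_GBL_ALPHA, &gbl ) < 0 )
        fprintf( stderr, "Set global alpha failed\n" );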

But I am not sure what to do to get deinterlacing happening. Is it something like this:

    /* Query the current overlay window format, then request the size and
       field (interlace) setting we want. */
    memset( &fmt, 0, sizeof( fmt ) );
    fmt.type = V4L2_BUF_TYPE_VIDEO_OVERLAY;

    printf( "Querying buffer type %d\n", fmt.type );
    if ( ioctl( fd_output_v4l, VIDIOC_G_FMT, &fmt ) < 0 )
    {
        perror( "get format failed" );
        return -2;
    }

    fmt.fmt.win.w.width  = width;
    fmt.fmt.win.w.height = height;
    fmt.fmt.win.field    = field;

    printf( "Setting format %dx%d, field %d\n",
            fmt.fmt.win.w.width, fmt.fmt.win.w.height, fmt.fmt.win.field );

    if ( ioctl( fd_output_v4l, VIDIOC_S_FMT, &fmt ) < 0 )
    {
        perror( "set format failed" );
        return -1;
    }

The above doesn't have any effect on the live video. I am not sure whether I should be doing VIDIOC_STREAMOFF and VIDIOC_STREAMON before and after, but that seems to cause a hang.
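For reference, in standard V4L2 a preview overlay is normally toggled with VIDIOC_OVERLAY rather than STREAMON/STREAMOFF. I don't know whether the mxc driver expects that around a format change, but the sequence I would try is something like this (just a sketch, not working code):

    int on;

    /* Stop the overlay before changing the window format... */
    on = 0;
    if ( ioctl( fd_output_v4l, VIDIOC_OVERLAY, &on ) < 0 )
        perror( "overlay off failed" );

    /* ...apply the size/field settings as above... */
    if ( ioctl( fd_output_v4l, VIDIOC_S_FMT, &fmt ) < 0 )
        perror( "set format failed" );

    /* ...then restart the overlay with the new settings. */
    on = 1;
    if ( ioctl( fd_output_v4l, VIDIOC_OVERLAY, &on ) < 0 )
        perror( "overlay on failed" );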

7 Replies
yossishukron
Contributor III

In the original capture example that comes with our Linux release, using the ADV7180 TV decoder, we do not support de-interlacing.

We have a patch that adds de-interlacing support to the capture example; you can use the same method to add de-interlacing support.

This is a link to the patch discussion:

https://community.freescale.com/docs/DOC-93633

fcs
Contributor IV

The VDI seems to deinterlace in such a way that screen elements only one pixel high (e.g. parts of a small font) are lost because of the motion setting (whether 0, 1 or 2). Hence I am doing the deinterlacing by changing the drivers to use the CPMEM parameters ILO and Stride, as described in the reference manual.

yossishukron
Contributor III

Can you point me to the place in the reference manual that describes de-interlacing using CPMEM and stride? The only de-interlacing mechanism I know about is the VDIC.

fcs
Contributor IV

Tables 45-11 and 45-12:

Interlace Offset (ILO), 20 bits, W1[77:58]:
2nd double buffer destination address for RGB and Y:U:V (Y pointer) formats. Interlaced data stored in memory can be read as consecutive progressive only if FW*BPP is a multiple of 8. The actual physical address value is divided by 8 (i.e. this parameter includes bits [22:3] of the actual address). This value is signed.

I believe this is the memory offset added to the starting address of the destination buffer, for one of the fields (which one depends on whether the interlacing is bottom-top or top-bottom, I think), when a frame is being sent through this channel. Each subsequent line can then be written at twice the normal line stride, with Stride Line set to twice the number of bytes in each scan line.

 

Stride Line (SL), 14 bits, W1[115:102]:
Address vertical scaling factor in bytes for memory access. Also the maximum number of bytes in a row, according to memory limitations.

00000000000000 = 00001 bytes
00000000000001 = 00002 bytes
...
11111111111111 = 16384 bytes

You can see an example of this in driver include file ipu_param_mem.h: _ipu_ch_param_set_interlaced_scan
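As a rough illustration of the numbers involved (my own example, assuming a 720x576 UYVY destination buffer at 2 bytes per pixel, not taken from the driver):

    /* Weave the two fields into one progressive buffer: write each field at
       double the line stride, with the second field starting one scan line
       into the buffer. */
    unsigned width      = 720;
    unsigned bpp        = 2;                 /* bytes per pixel for UYVY */
    unsigned line_bytes = width * bpp;       /* 1440 bytes per scan line */

    unsigned sl  = 2 * line_bytes;           /* SL (stride line) = 2880 bytes */
    int      ilo = line_bytes / 8;           /* ILO holds the field offset divided by 8, so 180 */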

Also have a look at my other posts.

yossishukron
Contributor III

I understand, but what you are actually doing here is just copying to build one progressive frame from the two interlaced fields; you are not running any de-interlacing algorithm, and it would not work very well for a fast-moving picture.

This is how we did it in the old IPU (before IPUv3), when we did not have the VDIC.

fcs
Contributor IV

We have video containing text with one-pixel-high lines in it. The VDIC doesn't deinterlace this video adequately - bits of the text disappear because whole lines are blurred or thrown out. We can tolerate ragged-edged deinterlacing artefacts in fast-moving video. We also achieve much lower latency this way.

My current problem is that when I encode video using the gstreamer element mfw_vpuencoder, CPU usage is very high even for very simple tasks, e.g.

gst-launch-0.10 -vvv videotestsrc pattern=12 ! video/x-raw-yuv,framerate=25/1,width=720,height=576 ! ffmpegcolorspace ! mfw_vpuencoder codec-type=std_h263 ! queue ! avimux name=mux ! filesink location=/tmp/x.mp4

uses 68% of CPU on our i.MX53, which is excessive. Encoding live video takes about 74% but can only manage about 5 frames per second. I have deduced that mfw_vpuencoder is the problem, as when I remove it from the pipeline CPU usage falls to about 10% or less. I thought VPU encoding shouldn't be using much CPU at all? Does mfw_gst_vpu_encoder.c need to be rewritten so it is more efficient?

yossishukron
Contributor III

I understand about your video.

Regarding the CPU load when you use the VPU, I suggest isolating the VPU encoding and checking again whether you still have a CPU load issue. You can save the data you want to send to the VPU to a file, then send this data to the VPU offline and see what the CPU load is.

Also, to send the data to the VPU you can use both GStreamer and the VPU unit_test, and see if there is any difference.
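For example, something along these lines (untested, with a hypothetical raw file /tmp/test.yuv and the blocksize matched to one 720x576 I420 frame) would feed the saved data straight into the encoder without the capture path:

gst-launch-0.10 filesrc location=/tmp/test.yuv blocksize=622080 ! "video/x-raw-yuv, format=(fourcc)I420, width=720, height=576, framerate=25/1" ! mfw_vpuencoder codec-type=std_h263 ! avimux name=mux ! filesink location=/tmp/out.avi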