In some cases it is desirable to have progressive content available directly from a TV-IN interface through the V4L2 capture device. In the BSP, HW-accelerated de-interlacing is only supported in the V4L2 output stream. Below is a patch, created against a rather old BSP version, that adds support for de-interlaced V4L2 capture. The patch might need to be adapted to newer BSPs; however, the logic and functionality are there and should shorten the development time.
This patch adds another input device to the V4L2 framework that can be selected to perform the de-interlacing on the way to memory. The selection is done by passing the index “2” as an argument to the VIDIOC_S_INPUT V4L2 ioctl. Also attached is a modified version of the tvin unit test that gives an example of how to use the new driver. An example sequence for running the test is as follows:
modprobe mxc_v4l2_capture
./mxc_v4l2_tvin_vdi.out -ow 720 -oh 480 -ol 10 -ot 20 -f YU12
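For reference, the input selection that the test application performs corresponds roughly to the minimal sketch below (the device node and the lack of full error handling are illustrative; index 2 is the de-interlacing capture input added by the patch):
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
int main(void)
{
	int fd = open("/dev/video0", O_RDWR);   /* capture device node is an assumption */
	int input = 2;                           /* index 2 selects the de-interlacing capture input */
	if (fd < 0 || ioctl(fd, VIDIOC_S_INPUT, &input) < 0)
		perror("VIDIOC_S_INPUT");
	return 0;
}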
Some key things to note:
This driver does not support resize or color space conversion on the way to memory.
The requested format and size should match what can be provided directly by the sensor.
The driver was tested on a Sabre AI Rev A board running Linux 12.02.
This code is not an official delivery and as such no guarantee of support for this code is provided by Freescale.
Hi,
I am working on the IPU; my goal is to capture data from the ADV7180, resize it, and then feed it to the VPU to encode JPEG pictures.
I used your patch to solve the de-interlacing problem.
Now I want to resize the captured data from 720x480 to 640x480; I think MEM_VDI_PRP_VF_MEM supports resizing.
Can you give me some ideas about what to modify in your patch file?
Thank you.
I think the main thing you need to do is to set the enc.mem_prp_vf_mem.out_width and enc.mem_prp_vf_mem.out_height variables appropriately. Currently, these are fixed in vdi_enc_setup() to be the same dimensions as for the input:
enc.mem_prp_vf_mem.out_width = cam->v2f.fmt.pix.width;
enc.mem_prp_vf_mem.out_height = cam->v2f.fmt.pix.height;
You should be able to change this to use a value passed by the application and the resize should be performed for you.
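A minimal sketch of that change in vdi_enc_setup(), assuming a hypothetical pair of fields (vdi_out_width / vdi_out_height) that the capture driver would fill in with the application-requested output size:
/* Sketch only: cam->vdi_out_width/height are hypothetical fields carrying the
 * application-requested output size; the IC then performs the downscale. */
enc.mem_prp_vf_mem.out_width = cam->vdi_out_width;   /* e.g. 640 */
enc.mem_prp_vf_mem.out_height = cam->vdi_out_height; /* e.g. 480 */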
Philip_FSL,
Thank you very much. Now I can resize the capture memory.
In your file ipu_vdi_enc.c, queue_work() is used in the callback of the IPU_IRQ_CSI0_OUT_EOF interrupt.
Is it OK to call vdi_work_func() directly there?
Thank you very much.
This should probably be OK, although it is generally best to do as little as possible in the context of the interrupt routine.
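For context, the usual kernel pattern the patch follows is to defer the heavy lifting out of the interrupt handler; a minimal sketch (the handler and field names below are illustrative, not necessarily the exact ones in the patch) is:
#include <linux/interrupt.h>
#include <linux/workqueue.h>
/* Sketch only: the EOF interrupt callback just queues the work item, and the
 * buffer re-pointing in vdi_work_func() then runs later in process context,
 * keeping the time spent in interrupt context minimal. */
static irqreturn_t csi0_eof_irq_handler(int irq, void *dev_id)
{
	cam_data *cam = dev_id;                  /* cam_data as used by the capture driver */
	queue_work(cam->vdi_wq, &cam->vdi_work); /* vdi_wq / vdi_work are illustrative names */
	return IRQ_HANDLED;
}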
Hi, Philip
Thank you very much.
I have another question now.
Currently mxc_v4l2_tvin captures data from the ADV7180 with the H/V sync information embedded in the data.
But I want it to use the external H/V sync signals from the ADV7180.
I want the ADV7180 to work like a CMOS sensor.
Do you know how to do this?
Thank you.
Hi. I would recommend that you move this discussion to the discussions area as we are now going beyond the scope of the initial post.
Hi Gao,
Can you please start a new discussion on the ADV7180 question to help keep discussions limited to single topics?
Thank you,
I got the patch working on our i.MX53-based board. Recording de-interlaced video with GStreamer works, but one thing is missing: the preview.
Starting a pipeline with preview enabled does not work:
gst-launch -v mfw_v4lsrc capture-mode=2 preview=true ! fakesink
I can see no preview on the display and no data arrives at the fakesink. After a few seconds gst-launch aborts with the error message:
ERROR: v4l2 capture: mxc_v4l_dqueue timeout enc_counter 0
So what would I have to do to add preview support to the de-interlacing task chain?
I assume the goal is to encode and preview in parallel.
In that case, there are probably two options available.
1) Tee the stream from the v4lsrc and route one branch to the v4lsink and the other to the encoder.
2) Merge the preview driver with the de-interlace driver that this patch creates into a combined preview + de-interlace driver, and then modify the V4L capture driver to route the stream to both the preview and the encoder.
The second option is likely to be more complicated and to introduce synchronization issues that are not easy to address within the V4L framework, so I would recommend taking the first route if possible.
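For the first option, a pipeline along these lines is worth trying (element names and the muxer/sink tail are illustrative and depend on the plugins in your BSP; mfw_v4lsink is assumed to be the Freescale V4L2 display sink):
gst-launch mfw_v4lsrc capture-mode=2 ! tee name=t t. ! queue ! mfw_v4lsink t. ! queue ! vpuenc codec=avc ! matroskamux ! filesink location=out.mkv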
We found the VDI de-interlacing algorithm wasn't good for video containing small-font text, as it tends to blur or remove whole lines. Instead, using de-interlacing in the CSI -> MEM channel works well for our needs (see the ILO and Stride settings in the CPMEM); however, this requires substantial changes to the drivers.
Hello,
BSP: L3.0.35_4.0.0
CVBS PAL/NTSC video is input to the ADV7180 video decoder, and the ADV7180 is connected to the i.MX6Q CSI over BT.656.
De-interlacing is enabled, but the quality is not good. Please check the attached picture.
I have patched the kernel following the instructions in i.MX6Q + ADV7180.
https://community.freescale.com/docs/DOC-93633
It seems that some other thread is occupying IPU task 1.
I have used this patch on the i.MX53 platform before, and it ran normally.
But when I run mxc_v4l2_tvin, I get the error below:
root@freescale ~$ ./mxc_v4l2_tvin -ow 720 -oh 480
FIXME-------ioctl_g_fmt_cap
NOW IS PALTV decoder chip is adv7180_decodeFIXME-------ioctl_g_fmt_cap
r
NOW IS PALNOW IS PAL
In mxc_v4l2_s_std ff
rawdata0: 0x00, rawdata1: 0x00
FIXME-------ioctl_g_fmt_cap
NOW IS PALVIDIOC_G_FMT failed
driver=mxc_vout, card=DISP3 FG, bus=, version=0x00000000, capabilities=0x04000002
fmt RGB565: fourcc = 0x50424752
fmt BGR24: fourcc = 0x33524742
fmt RGB24: fourcc = 0x33424752
fmt RGB32: fourcc = 0x34424752
fmt BGR32: fourcc = 0x34524742
fmt NV12: fourcc = 0x3231564e
fmt UYVY: fourcc = 0x59565955
fmt YUYV: fourcc = 0x56595559
fmt YUV422 planar: fourcc = 0x50323234
fmt YUV444: fourcc = 0x34343459
fmt YUV420: fourcc = 0x32315559
fmt YVU420: fourcc = 0x32315659
fmt TILED NV12P: fourcc = 0x50564e54
fmt TILED NV12F: fourcc = 0x46564e54
fmt YUV444 planar: fourcc = 0x50343434
UYVY
imx-ipuv3 imx-ipuv3.0: IPU Warning - IPU_INT_STAT_10 = 0x00000001
imx-ipuv3 imx-ipuv3.0: IDMAC21's EBA0 is not 8-byte aligned
imx-ipuv3 imx-ipuv3.0: IDMAC21's EBA1 is not 8-byte aligned
start time = 473 s, 145035 usFIXME-------ioctl_g_fmt_cap
NOW IS PALNOW IS PAL
imx-ipuv3 imx-ipuv3.0: IPU Warning - IPU_INT_STAT_10 = 0x00000001
FIXME-------ioctl_g_fmt_cap
NOW IS PALNOW IS PAL
mc_pfuze 1-0008: recv failed!:-5,bf
COULD NOT SET GP VOLTAGE!!!!
FIXME-------ioctl_g_fmt_cap
imx-ipuv3 imx-ipuv3.0: ERR[0xbfcd7200-no:0x220]ipu task_id:1 busy!
imx-ipuv3 imx-ipuv3.0: ERR:[0xbfcd7200] no-0x220 state: ipu busy
imx-ipuv3 imx-ipuv3.0: ERR: [0xbfcd7200] no-0x220,state 8: ipu busy
NOW IS PALNOW IS PAL
imx-ipuv3 imx-ipuv3.0: ERR: no-0x220,ipu_queue_task err:-125
mxc_v4l2_output mxc_v4l2_output.0: display work fail ret = -125
FIXME-------ioctl_g_fmt_cap
imx-ipuv3 imx-ipuv3.0: ERR[0xbfcd7400-no:0x230]ipu task_id:1 busy!
imx-ipuv3 imx-ipuv3.0: ERR:[0xbfcd7400] no-0x230 state: ipu busy
imx-ipuv3 imx-ipuv3.0: ERR: [0xbfcd7400] no-0x230,state 8: ipu busy
NOW IS PALNOW IS PAL
imx-ipuv3 imx-ipuv3.0: ERR: no-0x230,ipu_queue_task err:-125
mxc_v4l2_output mxc_v4l2_output.0: display work fail ret = -125
FIXME-------ioctl_g_fmt_cap
imx-ipuv3 imx-ipuv3.0: ERR[0xba25e600-no:0x240]ipu task_id:1 busy!
imx-ipuv3 imx-ipuv3.0: ERR:[0xba25e600] no-0x240 state: ipu busy
imx-ipuv3 imx-ipuv3.0: ERR: [0xba25e600] no-0x240,state 8: ipu busy
NOW IS PALNOW IS PAL
imx-ipuv3 imx-ipuv3.0: ERR: no-0x240,ipu_queue_task err:-125
mxc_v4l2_output mxc_v4l2_output.0: display work fail ret = -125
VIDIOC_QBUF failed
imx-ipuv3 imx-ipuv3.0: handler already installed on irq 0
imx-ipuv3 imx-ipuv3.0: CSI irq 0 in use
Hi Philip_FSL
Does the above add-csi-deinterlace-capture.patch.zip patch support the de-interlaced PAL format?
I have applied the patch to our BSP, and NTSC works fine with de-interlaced output. But when I try PAL, the captured image has a small portion missing at the top. I can control the scrolling, and the image is correct with respect to color, but I still cannot get the top portion of the captured image to come out properly; the bottom and sides of the same captured image are fine.
Please advise on how to resolve this issue.
Thanks in advance,
Ajith
Hi all
I am working on capturing video on i.MX6 and have used your de-interlacing patch; now I have several questions. Are the MEM_VDI_PRP_VF_MEM, MEM_VDI_PRP_VF_MEM_P and MEM_VDI_PRP_VF_MEM_N channels the ones fed to the VDIC? My understanding is that the path is CSI->MEM, and then the video data is fed from MEM->VDIC->IC->MEM. There are three fields of data, f(n-1), f(n) and f(n+1), which respectively flow through the three channels above. Is my understanding correct? Also, my video quality is currently not very good; is it a de-interlacing problem?
The camera sensor is a TVP5151, the data format is BT.656 (YCbCr 4:2:2 interlaced), and the display is a 7-inch 24-bit LCD.
Thank you!
Hi all,
The above patch is now more than one year old, and "is not an official delivery and as such no guarantee of support for this code is provided by Freescale". Is there an "official delivery" of video de-interlacer code for the i.MX6?
We have tested the patch with PAL camera input and are not fully satisfied with the result (as described in i.MX6 PAL deinterlacer artifact).
Moreover, while the patch running in a GStreamer pipeline on each of the two CSIs seems stable without any other traffic, it does not hold up under additional traffic (80 Mb/s incoming Ethernet + 50 Mb/s outgoing + 100 Mb/s outgoing SATA): it usually runs for less than one hour before ending up with a busy DMA channel that cannot be freed; the process cannot be killed and only a reset makes the system recover. The root cause is usually an overflow detected on the VDI, such as:
imx-ipuv3 imx-ipuv3.1: IPU Warning - IPU_INT_STAT_5 = 0x00000500
imx-ipuv3 imx-ipuv3.1: IPU Warning - IPU_INT_STAT_9 = 0x00000001
ERROR: v4l2 capture: mxc_v4l_dqueue timeout enc_counter 0
... or:
imx-ipuv3 imx-ipuv3.1: IPU Warning - IPU_INT_STAT_5 = 0x00000400
imx-ipuv3 imx-ipuv3.0: IPU Warning - IPU_INT_STAT_5 = 0x00000400
ERROR: v4l2 capture: mxc_v4l_dqueue timeout enc_counter 0
Last point, we have tested the software deinterlacer of gstreamer (deinterlace plugin) with a pipeline such as:
gst-launch -v --gst-debug=2 -e mfw_v4lsrc device=/dev/video0 fps-n=25 ! deinterlace ! vpuenc codec=avc framerate-nu=25 bitrate=4194304 ! rtph264pay ! udpsink sync=false -v host=10.100.57.5 port=5000
It doesn't work, even when specifying other method parameters (tomsmocomp / greedyh / greedyl / linear / scalerbob) for the deinterlace plugin.
As a result, we still have no acceptable deinterlacing solution for our i.MX6 platform and are quite disappointed.
Thanks in advance.
Hi all,
You’ll find below a fix for the VDI code that solved our problem with PAL video input. In fact, the problem isn’t specific to PAL: the VDI fields were not correctly referenced, which explains the artifacts.
Could anyone (at Freescale or other forum members) review this code, test it in their configuration and share the results on the forum?
Our feeling is that this fix now makes the VDI usable, but performance is still the main issue: the IDMA can’t recover from a video frame overflow, and the probability of one occurring becomes high when you combine different I/O traffic (Ethernet and SATA in our case). In our case, we usually get a crash on the i.MX6Q in less than one hour in the following configuration:
And then, the average CPU load is around 25%.
Moreover, we have updated the “tvin” application, adding three new options:
1. -vi <capture_num> to allow specifying your capture device (/dev/video0 by default)
2. -vo <output_num> to allow specifying your output device (/dev/video16 by default)
3. -dv to disable VDI (always activated otherwise)
Example:
mxc_v4l2_tvin -ow 720 -oh 576 -vi 0 -vo 16 -m 0
mxc_v4l2_tvin -ow 720 -oh 576 -vi 1 -vo 17 -dv
Fix for the VDI code
--- a/drivers/media/video/mxc/capture/ipu_vdi_enc.c
+++ b/drivers/media/video/mxc/capture/ipu_vdi_enc.c
@@ -31,6 +31,9 @@
#define CAMERA_TRACE(x)
#endif
+#define PREVIOUS_INDEX ((cam->ping_pong_vdi + 2) %3)
+#define CURRENT_INDEX ( cam->ping_pong_vdi )
+#define NEXT_INDEX ((cam->ping_pong_vdi + 1) %3)
/*
* Function definitions
@@ -47,8 +50,7 @@ static void vdi_work_func(struct work_struct *work)
err = ipu_update_channel_buffer(cam->ipu, MEM_VDI_PRP_VF_MEM_P,
IPU_INPUT_BUFFER, 0,
- cam->vdi_enc_bufs[(cam->ping_pong_vdi + 2) %3] +
- cam->v2f.fmt.pix.bytesperline);
+ cam->vdi_enc_bufs[PREVIOUS_INDEX]);
if (err != 0) {
ipu_clear_buffer_ready(cam->ipu, MEM_VDI_PRP_VF_MEM_P,
@@ -57,8 +59,7 @@ static void vdi_work_func(struct work_struct *work)
err = ipu_update_channel_buffer(cam->ipu, MEM_VDI_PRP_VF_MEM_P,
IPU_INPUT_BUFFER,
0,
- cam->vdi_enc_bufs[(cam->ping_pong_vdi + 2) %3] +
- cam->v2f.fmt.pix.bytesperline);
+ cam->vdi_enc_bufs[PREVIOUS_INDEX]);
if (err != 0) {
pr_err("ERROR: v4l2 capture: fail to update vdi P buffer");
@@ -68,7 +69,7 @@ static void vdi_work_func(struct work_struct *work)
err = ipu_update_channel_buffer(cam->ipu, MEM_VDI_PRP_VF_MEM,
IPU_INPUT_BUFFER, 0,
- cam->vdi_enc_bufs[cam->ping_pong_vdi]);
+ cam->vdi_enc_bufs[CURRENT_INDEX]);
if (err != 0) {
ipu_clear_buffer_ready(cam->ipu, MEM_VDI_PRP_VF_MEM,
@@ -77,7 +78,7 @@ static void vdi_work_func(struct work_struct *work)
err = ipu_update_channel_buffer(cam->ipu, MEM_VDI_PRP_VF_MEM,
IPU_INPUT_BUFFER,
0,
- cam->vdi_enc_bufs[cam->ping_pong_vdi]);
+ cam->vdi_enc_bufs[CURRENT_INDEX]);
if (err != 0) {
pr_err("ERROR: v4l2 capture: fail to update vdi buffer");
@@ -87,8 +88,7 @@ static void vdi_work_func(struct work_struct *work)
err = ipu_update_channel_buffer(cam->ipu, MEM_VDI_PRP_VF_MEM_N,
IPU_INPUT_BUFFER, 0,
- cam->vdi_enc_bufs[cam->ping_pong_vdi]+
- cam->v2f.fmt.pix.bytesperline);
+ cam->vdi_enc_bufs[NEXT_INDEX]);
if (err != 0) {
ipu_clear_buffer_ready(cam->ipu, MEM_VDI_PRP_VF_MEM_N,
@@ -97,8 +97,7 @@ static void vdi_work_func(struct work_struct *work)
err = ipu_update_channel_buffer(cam->ipu, MEM_VDI_PRP_VF_MEM_N,
IPU_INPUT_BUFFER,
0,
- cam->vdi_enc_bufs[cam->ping_pong_vdi] +
- cam->v2f.fmt.pix.bytesperline);
+ cam->vdi_enc_bufs[NEXT_INDEX]);
if (err != 0) {
pr_err("ERROR: v4l2 capture: fail to update vdi N buffer");
@@ -109,11 +108,11 @@ static void vdi_work_func(struct work_struct *work)
}
cam->vdi_enc_first_frame = 0;
- cam->ping_pong_vdi = (cam->ping_pong_vdi + 1)%3;
+ cam->ping_pong_vdi = NEXT_INDEX;
//Point the CSI output to the next buffer
err = ipu_update_channel_buffer(cam->ipu, CSI_MEM,
IPU_OUTPUT_BUFFER, 0,
- cam->vdi_enc_bufs[cam->ping_pong_vdi]);
+ cam->vdi_enc_bufs[CURRENT_INDEX]);
if (err != 0) {
ipu_clear_buffer_ready(cam->ipu, CSI_MEM,
@@ -121,7 +120,7 @@ static void vdi_work_func(struct work_struct *work)
0);
err = ipu_update_channel_buffer(cam->ipu, CSI_MEM,
IPU_OUTPUT_BUFFER, 0,
- cam->vdi_enc_bufs[cam->ping_pong_vdi]);
+ cam->vdi_enc_bufs[CURRENT_INDEX]);
if (err != 0) {
pr_err("ERROR: v4l2 capture: fail to update CSI buffer");
@@ -275,7 +274,7 @@ static int vdi_enc_setup(cam_data *cam)
cam->v2f.fmt.pix.bytesperline /
bytes_per_pixel(enc.mem_prp_vf_mem.
out_pixel_fmt), IPU_ROTATE_NONE,
- cam->vdi_enc_bufs[cam->ping_pong_vdi], 0, 0,
+ cam->vdi_enc_bufs[CURRENT_INDEX], 0, 0,
cam->offset.u_offset,
cam->offset.v_offset);
@@ -306,7 +305,7 @@ static int vdi_enc_setup(cam_data *cam)
bytes_per_pixel(enc.mem_prp_vf_mem.
out_pixel_fmt),
IPU_ROTATE_NONE,
- cam->vdi_enc_bufs[(cam->ping_pong_vdi + 1) %3],
+ cam->vdi_enc_bufs[NEXT_INDEX],
0, 0,
cam->offset.u_offset,
cam->offset.v_offset);
@@ -335,8 +334,7 @@ static int vdi_enc_setup(cam_data *cam)
bytes_per_pixel(enc.mem_prp_vf_mem.
out_pixel_fmt),
IPU_ROTATE_NONE,
- cam->vdi_enc_bufs[cam->ping_pong_vdi] +
- cam->v2f.fmt.pix.bytesperline,
+ cam->vdi_enc_bufs[CURRENT_INDEX],
0, 0,
cam->offset.u_offset,
cam->offset.v_offset);
@@ -380,8 +378,7 @@ static int vdi_enc_setup(cam_data *cam)
bytes_per_pixel(enc.mem_prp_vf_mem.
out_pixel_fmt),
IPU_ROTATE_NONE,
- cam->vdi_enc_bufs[(cam->ping_pong_vdi + 1) %3] +
- cam->v2f.fmt.pix.bytesperline,
+ cam->vdi_enc_bufs[NEXT_INDEX],
0, 0,
cam->offset.u_offset,
cam->offset.v_offset);
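For reference, the PREVIOUS_INDEX / CURRENT_INDEX / NEXT_INDEX macros added at the top of ipu_vdi_enc.c walk a three-entry ring of field buffers; the standalone illustration below (not driver code) shows how the three slots rotate on each frame:
#include <stdio.h>
int main(void)
{
	/* Illustration only: how the PREVIOUS/CURRENT/NEXT macros rotate
	 * through the 3-entry ring of field buffers. */
	unsigned int ping_pong_vdi = 0;
	for (int frame = 0; frame < 6; frame++) {
		unsigned int prev = (ping_pong_vdi + 2) % 3;  /* PREVIOUS_INDEX */
		unsigned int cur  =  ping_pong_vdi;           /* CURRENT_INDEX  */
		unsigned int next = (ping_pong_vdi + 1) % 3;  /* NEXT_INDEX     */
		printf("frame %d: P=buf[%u] current=buf[%u] N=buf[%u]\n",
		       frame, prev, cur, next);
		ping_pong_vdi = next;                         /* advance the ring */
	}
	return 0;
}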
Updated “tvin” application
/*
* Copyright 2007-2012 Freescale Semiconductor, Inc. All rights reserved.
*/
/*
* The code contained herein is licensed under the GNU General Public
* License. You may obtain a copy of the GNU General Public License
* Version 2 or later at the following locations:
*
* http://www.opensource.org/licenses/gpl-license.html
* http://www.gnu.org/copyleft/gpl.html
*/
/*
* @file mxc_v4l2_tvin.c
*
* @brief Mxc TVIN For Linux 2 driver test application
*
*/
#ifdef __cplusplus
extern "C"{
#endif
/*=======================================================================
INCLUDE FILES
=======================================================================*/
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/time.h>
#include <unistd.h>
#include <asm/types.h>
#include <linux/videodev2.h>
#include <sys/mman.h>
#include <math.h>
#include <string.h>
#include <malloc.h>
#include "linux/mxcfb.h"
#include "linux/mxc_v4l2.h"
#define TFAIL -1
#define TPASS 0
char v4l_capture_dev[100] = "/dev/video";
char v4l_output_dev[100] = "/dev/video";
int fd_capture_v4l = 0;
int fd_output_v4l = 0;
int g_cap_mode = 0;
int g_input = 2;
int g_fmt = V4L2_PIX_FMT_UYVY;
int g_rotate = 0;
int g_vflip = 0;
int g_hflip = 0;
int g_vdi_enable = 0;
int g_vdi_motion = 0;
int g_tb = 0;
int g_output = 3;
int g_output_num_buffers = 4;
int g_capture_num_buffers = 3;
int g_in_width = 0;
int g_in_height = 0;
int g_display_width = 0;
int g_display_height = 0;
int g_display_top = 0;
int g_display_left = 0;
int g_frame_size;
int g_frame_period = 33333;
v4l2_std_id g_current_std = V4L2_STD_NTSC;
struct testbuffer
{
unsigned char *start;
size_t offset;
unsigned int length;
};
struct testbuffer output_buffers[4];
struct testbuffer capture_buffers[3];
int start_capturing(void)
{
unsigned int i;
struct v4l2_buffer buf;
enum v4l2_buf_type type;
for (i = 0; i < g_capture_num_buffers; i++)
{
memset(&buf, 0, sizeof (buf));
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_MMAP;
buf.index = i;
if (ioctl(fd_capture_v4l, VIDIOC_QUERYBUF, &buf) < 0)
{
printf("VIDIOC_QUERYBUF error\n");
return TFAIL;
}
capture_buffers[i].length = buf.length;
capture_buffers[i].offset = (size_t) buf.m.offset;
capture_buffers[i].start = mmap (NULL, capture_buffers[i].length,
PROT_READ | PROT_WRITE, MAP_SHARED,
fd_capture_v4l, capture_buffers[i].offset);
memset(capture_buffers[i].start, 0xFF, capture_buffers[i].length);
}
for (i = 0; i < g_capture_num_buffers; i++)
{
memset(&buf, 0, sizeof (buf));
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_MMAP;
buf.index = i;
buf.m.offset = capture_buffers[i].offset;
if (ioctl (fd_capture_v4l, VIDIOC_QBUF, &buf) < 0) {
printf("VIDIOC_QBUF error\n");
return TFAIL;
}
}
type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
if (ioctl (fd_capture_v4l, VIDIOC_STREAMON, &type) < 0) {
printf("VIDIOC_STREAMON error\n");
return TFAIL;
}
return 0;
}
int prepare_output(void)
{
int i;
struct v4l2_buffer output_buf;
for (i = 0; i < g_output_num_buffers; i++)
{
memset(&output_buf, 0, sizeof(output_buf));
output_buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
output_buf.memory = V4L2_MEMORY_MMAP;
output_buf.index = i;
if (ioctl(fd_output_v4l, VIDIOC_QUERYBUF, &output_buf) < 0)
{
printf("VIDIOC_QUERYBUF error\n");
return TFAIL;
}
output_buffers[i].length = output_buf.length;
output_buffers[i].offset = (size_t) output_buf.m.offset;
output_buffers[i].start = mmap (NULL, output_buffers[i].length,
PROT_READ | PROT_WRITE, MAP_SHARED,
fd_output_v4l, output_buffers[i].offset);
if (output_buffers[i].start == NULL) {
printf("v4l2 tvin test: output mmap failed\n");
return TFAIL;
}
}
return 0;
}
int v4l_capture_setup(void)
{
struct v4l2_capability cap;
struct v4l2_cropcap cropcap;
struct v4l2_crop crop;
struct v4l2_format fmt;
struct v4l2_requestbuffers req;
struct v4l2_dbg_chip_ident chip;
struct v4l2_streamparm parm;
v4l2_std_id id;
unsigned int min;
if (ioctl (fd_capture_v4l, VIDIOC_QUERYCAP, &cap) < 0) {
if (EINVAL == errno) {
fprintf (stderr, "%s is no V4L2 device\n",
v4l_capture_dev);
return TFAIL;
} else {
fprintf (stderr, "%s isn not V4L device,unknow error\n",
v4l_capture_dev);
return TFAIL;
}
}
if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)) {
fprintf (stderr, "%s is no video capture device\n",
v4l_capture_dev);
return TFAIL;
}
if (!(cap.capabilities & V4L2_CAP_STREAMING)) {
fprintf (stderr, "%s does not support streaming i/o\n",
v4l_capture_dev);
return TFAIL;
}
if (ioctl(fd_capture_v4l, VIDIOC_DBG_G_CHIP_IDENT, &chip))
{
printf("VIDIOC_DBG_G_CHIP_IDENT failed.\n");
close(fd_capture_v4l);
return TFAIL;
}
printf("TV decoder chip is %s\n", chip.match.name);
if (ioctl(fd_capture_v4l, VIDIOC_S_INPUT, &g_input) < 0)
{
printf("VIDIOC_S_INPUT failed\n");
close(fd_capture_v4l);
return TFAIL;
}
if (ioctl(fd_capture_v4l, VIDIOC_G_STD, &id) < 0)
{
printf("VIDIOC_G_STD failed\n");
close(fd_capture_v4l);
return TFAIL;
}
g_current_std = id;
if (ioctl(fd_capture_v4l, VIDIOC_S_STD, &id) < 0)
{
printf("VIDIOC_S_STD failed\n");
close(fd_capture_v4l);
return TFAIL;
}
/* Select video input, video standard and tune here. */
memset(&cropcap, 0, sizeof(cropcap));
cropcap.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
if (ioctl (fd_capture_v4l, VIDIOC_CROPCAP, &cropcap) < 0) {
crop.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
crop.c = cropcap.defrect; /* reset to default */
if (ioctl (fd_capture_v4l, VIDIOC_S_CROP, &crop) < 0) {
switch (errno) {
case EINVAL:
/* Cropping not supported. */
fprintf (stderr, "%s doesn't support crop\n",
v4l_capture_dev);
break;
default:
/* Errors ignored. */
break;
}
}
} else {
/* Errors ignored. */
}
parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
parm.parm.capture.timeperframe.numerator = 1;
parm.parm.capture.timeperframe.denominator = 0;
parm.parm.capture.capturemode = 0;
if (ioctl(fd_capture_v4l, VIDIOC_S_PARM, &parm) < 0)
{
printf("VIDIOC_S_PARM failed\n");
close(fd_capture_v4l);
return TFAIL;
}
memset(&fmt, 0, sizeof(fmt));
fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
fmt.fmt.pix.width = 0;
fmt.fmt.pix.height = 0;
fmt.fmt.pix.pixelformat = g_fmt;
fmt.fmt.pix.field = V4L2_FIELD_INTERLACED;
if (ioctl (fd_capture_v4l, VIDIOC_S_FMT, &fmt) < 0){
fprintf (stderr, "%s iformat not supported \n",
v4l_capture_dev);
return TFAIL;
}
/* Note VIDIOC_S_FMT may change width and height. */
/* Buggy driver paranoia. */
min = fmt.fmt.pix.width * 2;
if (fmt.fmt.pix.bytesperline < min)
fmt.fmt.pix.bytesperline = min;
min = fmt.fmt.pix.bytesperline * fmt.fmt.pix.height;
if (fmt.fmt.pix.sizeimage < min)
fmt.fmt.pix.sizeimage = min;
if (ioctl(fd_capture_v4l, VIDIOC_G_FMT, &fmt) < 0)
{
printf("VIDIOC_G_FMT failed\n");
close(fd_capture_v4l);
return TFAIL;
}
g_in_width = fmt.fmt.pix.width;
g_in_height = fmt.fmt.pix.height;
memset(&req, 0, sizeof (req));
req.count = g_capture_num_buffers;
req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
req.memory = V4L2_MEMORY_MMAP;
if (ioctl (fd_capture_v4l, VIDIOC_REQBUFS, &req) < 0) {
if (EINVAL == errno) {
fprintf (stderr, "%s does not support "
"memory mapping\n", v4l_capture_dev);
return TFAIL;
} else {
fprintf (stderr, "%s does not support "
"memory mapping, unknow error\n", v4l_capture_dev);
return TFAIL;
}
}
if (req.count < 2) {
fprintf (stderr, "Insufficient buffer memory on %s\n",
v4l_capture_dev);
return TFAIL;
}
return 0;
}
int v4l_output_setup(void)
{
struct v4l2_control ctrl;
struct v4l2_format fmt;
struct v4l2_framebuffer fb;
struct v4l2_cropcap cropcap;
struct v4l2_crop crop;
struct v4l2_capability cap;
struct v4l2_fmtdesc fmtdesc;
struct v4l2_requestbuffers buf_req;
if (!ioctl(fd_output_v4l, VIDIOC_QUERYCAP, &cap)) {
printf("driver=%s, card=%s, bus=%s, "
"version=0x%08x, "
"capabilities=0x%08x\n",
cap.driver, cap.card, cap.bus_info,
cap.version,
cap.capabilities);
}
fmtdesc.index = 0;
fmtdesc.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
while (!ioctl(fd_output_v4l, VIDIOC_ENUM_FMT, &fmtdesc)) {
printf("fmt %s: fourcc = 0x%08x\n",
fmtdesc.description,
fmtdesc.pixelformat);
fmtdesc.index++;
}
memset(&cropcap, 0, sizeof(cropcap));
cropcap.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
if (ioctl(fd_output_v4l, VIDIOC_CROPCAP, &cropcap) < 0)
{
printf("get crop capability failed\n");
close(fd_output_v4l);
return TFAIL;
}
crop.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
crop.c.top = g_display_top;
crop.c.left = g_display_left;
crop.c.width = g_display_width;
crop.c.height = g_display_height;
if (ioctl(fd_output_v4l, VIDIOC_S_CROP, &crop) < 0)
{
printf("set crop failed\n");
close(fd_output_v4l);
return TFAIL;
}
// Set rotation
ctrl.id = V4L2_CID_ROTATE;
ctrl.value = g_rotate;
if (ioctl(fd_output_v4l, VIDIOC_S_CTRL, &ctrl) < 0)
{
printf("set ctrl rotate failed\n");
close(fd_output_v4l);
return TFAIL;
}
ctrl.id = V4L2_CID_VFLIP;
ctrl.value = g_vflip;
if (ioctl(fd_output_v4l, VIDIOC_S_CTRL, &ctrl) < 0)
{
printf("set ctrl vflip failed\n");
close(fd_output_v4l);
return TFAIL;
}
ctrl.id = V4L2_CID_HFLIP;
ctrl.value = g_hflip;
if (ioctl(fd_output_v4l, VIDIOC_S_CTRL, &ctrl) < 0)
{
printf("set ctrl hflip failed\n");
close(fd_output_v4l);
return TFAIL;
}
if (g_vdi_enable) {
ctrl.id = V4L2_CID_MXC_MOTION;
ctrl.value = g_vdi_motion;
if (ioctl(fd_output_v4l, VIDIOC_S_CTRL, &ctrl) < 0)
{
printf("set ctrl motion failed\n");
close(fd_output_v4l);
return TFAIL;
}
}
fb.flags = V4L2_FBUF_FLAG_OVERLAY;
ioctl(fd_output_v4l, VIDIOC_S_FBUF, &fb);
memset(&fmt, 0, sizeof(fmt));
fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
fmt.fmt.pix.width= g_in_width;
fmt.fmt.pix.height= g_in_height;
fmt.fmt.pix.pixelformat = g_fmt;
fmt.fmt.pix.bytesperline = g_in_width;
fmt.fmt.pix.priv = 0;
fmt.fmt.pix.sizeimage = 0;
if (g_tb)
fmt.fmt.pix.field = V4L2_FIELD_INTERLACED_TB;
else
fmt.fmt.pix.field = V4L2_FIELD_INTERLACED_BT;
//Avoid deinterlace in display
fmt.fmt.pix.field = V4L2_FIELD_NONE;
if (ioctl(fd_output_v4l, VIDIOC_S_FMT, &fmt) < 0)
{
printf("set format failed\n");
return TFAIL;
}
if (ioctl(fd_output_v4l, VIDIOC_G_FMT, &fmt) < 0)
{
printf("get format failed\n");
return TFAIL;
}
g_frame_size = fmt.fmt.pix.sizeimage;
memset(&buf_req, 0, sizeof(buf_req));
buf_req.count = g_output_num_buffers;
buf_req.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
buf_req.memory = V4L2_MEMORY_MMAP;
if (ioctl(fd_output_v4l, VIDIOC_REQBUFS, &buf_req) < 0)
{
printf("request buffers failed\n");
return TFAIL;
}
return 0;
}
int
mxc_v4l_tvin_test(void)
{
struct v4l2_buffer capture_buf, output_buf;
v4l2_std_id id;
int i, j;
enum v4l2_buf_type type;
int total_time;
struct timeval tv_start, tv_current;
if (prepare_output() < 0)
{
printf("prepare_output failed\n");
return TFAIL;
}
if (start_capturing() < 0)
{
printf("start_capturing failed\n");
return TFAIL;
}
gettimeofday(&tv_start, 0);
printf("start time = %d s, %d us\n", (unsigned int) tv_start.tv_sec,
(unsigned int) tv_start.tv_usec);
for (i = 0; ; i++) {
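/* Re-check the TV standard on every frame; if it changed (e.g. NTSC <-> PAL),
   tear down and re-initialize both the capture and output paths. */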
begin:
if (ioctl(fd_capture_v4l, VIDIOC_G_STD, &id)) {
printf("VIDIOC_G_STD failed.\n");
return TFAIL;
}
if (id == g_current_std)
goto next;
else if (id == V4L2_STD_PAL || id == V4L2_STD_NTSC) {
type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
ioctl(fd_output_v4l, VIDIOC_STREAMOFF, &type);
type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
ioctl(fd_capture_v4l, VIDIOC_STREAMOFF, &type);
for (j = 0; j < g_output_num_buffers; j++)
{
munmap(output_buffers[j].start, output_buffers[j].length);
}
for (j = 0; j < g_capture_num_buffers; j++)
{
munmap(capture_buffers[j].start, capture_buffers[j].length);
}
if (v4l_capture_setup() < 0) {
printf("Setup v4l capture failed.\n");
return TFAIL;
}
if (v4l_output_setup() < 0) {
printf("Setup v4l output failed.\n");
return TFAIL;
}
if (prepare_output() < 0)
{
printf("prepare_output failed\n");
return TFAIL;
}
if (start_capturing() < 0)
{
printf("start_capturing failed\n");
return TFAIL;
}
i = 0;
printf("TV standard changed\n");
} else {
sleep(1);
/* Try again */
if (ioctl(fd_capture_v4l, VIDIOC_G_STD, &id)) {
printf("VIDIOC_G_STD failed.\n");
return TFAIL;
}
if (id != V4L2_STD_ALL)
goto begin;
printf("Cannot detect TV standard\n");
return 0;
}
next:
memset(&capture_buf, 0, sizeof(capture_buf));
capture_buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
capture_buf.memory = V4L2_MEMORY_MMAP;
if (ioctl(fd_capture_v4l, VIDIOC_DQBUF, &capture_buf) < 0) {
printf("VIDIOC_DQBUF failed.\n");
return TFAIL;
}
memset(&output_buf, 0, sizeof(output_buf));
output_buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
output_buf.memory = V4L2_MEMORY_MMAP;
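/* For the first g_output_num_buffers frames, take output buffers in index
   order; after that, dequeue a displayed buffer and reuse it. */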
if (i < g_output_num_buffers) {
output_buf.index = i;
if (ioctl(fd_output_v4l, VIDIOC_QUERYBUF, &output_buf) < 0)
{
printf("VIDIOC_QUERYBUF failed\n");
return TFAIL;
}
} else {
output_buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
output_buf.memory = V4L2_MEMORY_MMAP;
if (ioctl(fd_output_v4l, VIDIOC_DQBUF, &output_buf) < 0)
{
printf("VIDIOC_DQBUF failed\n");
return TFAIL;
}
}
memcpy(output_buffers[output_buf.index].start, capture_buffers[capture_buf.index].start, g_frame_size);
if (ioctl(fd_capture_v4l, VIDIOC_QBUF, &capture_buf) < 0) {
printf("VIDIOC_QBUF failed\n");
return TFAIL;
}
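/* Stamp each output buffer with the start time plus i nominal frame periods
   (g_frame_period is in microseconds). */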
output_buf.timestamp.tv_sec = tv_start.tv_sec;
output_buf.timestamp.tv_usec = tv_start.tv_usec + (g_frame_period * i);
if (ioctl(fd_output_v4l, VIDIOC_QBUF, &output_buf) < 0)
{
printf("VIDIOC_QBUF failed\n");
return TFAIL;
}
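/* Start the output stream once the second buffer has been queued. */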
if (i == 1) {
type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
if (ioctl(fd_output_v4l, VIDIOC_STREAMON, &type) < 0) {
printf("Could not start stream\n");
return TFAIL;
}
}
}
gettimeofday(&tv_current, 0);
total_time = (tv_current.tv_sec - tv_start.tv_sec) * 1000000L;
total_time += tv_current.tv_usec - tv_start.tv_usec;
printf("total time for %u frames = %u us = %lld fps\n", i, total_time, (i * 1000000ULL) / total_time);
return 0;
}
int process_cmdline(int argc, char **argv)
{
int i;
int vi_set = 0;
int vo_set = 0;
for (i = 1; i < argc; i++) {
if (strcmp(argv[i], "-ow") == 0) {
g_display_width = atoi(argv[++i]);
}
else if (strcmp(argv[i], "-oh") == 0) {
g_display_height = atoi(argv[++i]);
}
else if (strcmp(argv[i], "-ot") == 0) {
g_display_top = atoi(argv[++i]);
}
else if (strcmp(argv[i], "-ol") == 0) {
g_display_left = atoi(argv[++i]);
}
else if (strcmp(argv[i], "-r") == 0) {
g_rotate = atoi(argv[++i]);
}
else if (strcmp(argv[i], "-f") == 0) {
i++;
g_fmt = v4l2_fourcc(argv[i][0], argv[i][1],argv[i][2],argv[i][3]);
if ((g_fmt != V4L2_PIX_FMT_NV12) &&
(g_fmt != V4L2_PIX_FMT_UYVY) &&
(g_fmt != V4L2_PIX_FMT_YUYV) &&
(g_fmt != V4L2_PIX_FMT_YUV420)) {
printf("Default format is used: UYVY\n");
}
}
else if (strcmp(argv[i], "-vi") == 0) {
strcat(v4l_capture_dev, argv[++i]);
vi_set = 1;
}
else if (strcmp(argv[i], "-vo") == 0) {
strcat(v4l_output_dev, argv[++i]);
vo_set = 1;
}
else if (strcmp(argv[i], "-dv") == 0) {
g_input = 1; // Back to "CSI MEM" instead of "CSI VDI MEM"
}
else if (strcmp(argv[i], "-m") == 0) {
g_vdi_enable = 1;
g_vdi_motion = atoi(argv[++i]);
}
else if (strcmp(argv[i], "-tb") == 0) {
g_tb = 1;
}
else if (strcmp(argv[i], "-help") == 0) {
printf("MXC Video4Linux TVin Test\n\n"
"Syntax: mxc_v4l2_tvin.out\n"
" -ow <capture display width>\n"
" -oh <capture display height>\n"
" -ot <display top>\n"
" -ol <display left>\n"
" -r <rotation> -c <capture counter>\n"
" -m <motion> 0:medium 1:low 2:high, 0-default\n"
" -vi <capture_num> uses capture /dev/video<capture_num>\n"
" -vo <output_num> uses output /dev/video<output_num>\n"
" -dv disable VDI\n"
" -tb top field first, bottom field first-default\n"
" -f <format, only YU12, YUYV, UYVY and NV12 are supported> \n");
return TFAIL;
}
}
if (!vi_set) {
strcat(v4l_capture_dev, "0"); // Use /dev/video0 as default capture device
}
if (!vo_set) {
strcat(v4l_output_dev, "16"); // Use /dev/video16 as default output device
}
if ((g_display_width == 0) || (g_display_height == 0)) {
printf("Zero display width or height\n");
return TFAIL;
}
return 0;
}
int main(int argc, char **argv)
{
char fb_device[100] = "/dev/fb0";
int fd_fb = 0, i;
struct mxcfb_gbl_alpha alpha;
enum v4l2_buf_type type;
if (process_cmdline(argc, argv) < 0) {
return TFAIL;
}
if ((fd_capture_v4l = open(v4l_capture_dev, O_RDWR, 0)) < 0)
{
printf("Unable to open %s\n", v4l_capture_dev);
return TFAIL;
}
if ((fd_output_v4l = open(v4l_output_dev, O_RDWR, 0)) < 0)
{
printf("Unable to open %s\n", v4l_output_dev);
return TFAIL;
}
if (v4l_capture_setup() < 0) {
printf("Setup v4l capture failed.\n");
return TFAIL;
}
if (v4l_output_setup() < 0) {
printf("Setup v4l output failed.\n");
close(fd_capture_v4l);
return TFAIL;
}
if ((fd_fb = open(fb_device, O_RDWR )) < 0) {
printf("Unable to open frame buffer\n");
close(fd_capture_v4l);
close(fd_output_v4l);
return TFAIL;
}
/* Overlay setting */
alpha.alpha = 0;
alpha.enable = 1;
if (ioctl(fd_fb, MXCFB_SET_GBL_ALPHA, &alpha) < 0) {
printf("Set global alpha failed\n");
close(fd_fb);
close(fd_capture_v4l);
close(fd_output_v4l);
return TFAIL;
}
mxc_v4l_tvin_test();
type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
ioctl(fd_output_v4l, VIDIOC_STREAMOFF, &type);
type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
ioctl(fd_capture_v4l, VIDIOC_STREAMOFF, &type);
for (i = 0; i < g_output_num_buffers; i++)
{
munmap(output_buffers[i].start, output_buffers[i].length);
}
for (i = 0; i < g_capture_num_buffers; i++)
{
munmap(capture_buffers[i].start, capture_buffers[i].length);
}
close(fd_capture_v4l);
close(fd_output_v4l);
close(fd_fb);
return 0;
}
Hi,
I have finished the BT.1120 input function on the parallel CSI port, and this function is now working,
but the video quality was very bad with interlaced video content.
I tried to enable the de-interlace function with the add-csi-deinterlace-capture.patch that you provided,
but I can't get this function to work.
When I enable CONFIG_MXC_IPU_VDI_ENC=y, the system crashes with the following log.
Does this patch also support the i.MX6?
====>_ipu_smfc_init
====>_ipu_smfc_init success
====> _ipu_csi_init
====> CSI_DATA_DEST_IDMAC
imx-ipuv3 imx-ipuv3.0: IDMAC21's EBA0 is not 8-byte aligned
imx-ipuv3 imx-ipuv3.0: IDMAC21's EBA1 is not 8-byte aligned
csi=0, ipu=-1064052716, offset=0x746bb000
Unable to handle kernel paging request at virtual address 746bb000 pgd = d674c000
Could you please give me some suggestions for this problem?
Thank you.
Can this patch work on Android 4.3?
After applying the patch, the image became blurry. Is this right? How can it be improved?
01_original = Freescale Android JB4.3 1.1.0
02_deinterlace.png
After applying the de-interlace patch
Hello.
I tested HDMI input using this patch on the i.MX6DL.
At 720x480i60 it works well,
but at 1920x1080i60 it cannot capture an image; the following message is printed continuously.
--------------------------------------------------------------------------------------------------------------------------
imx-ipuv3 imx-ipuv3.0: warning: disable ipu dma channel 8 during its busy state
imx-ipuv3 imx-ipuv3.0: warning: disable ipu dma channel 8 during its busy state
imx-ipuv3 imx-ipuv3.0: warning: disable ipu dma channel 8 during its busy state
imx-ipuv3 imx-ipuv3.0: warning: disable ipu dma channel 8 during its busy state
imx-ipuv3 imx-ipuv3.0: warning: disable ipu dma channel 8 during its busy state
imx-ipuv3 imx-ipuv3.0: warning: disable ipu dma channel 8 during its busy state
imx-ipuv3 imx-ipuv3.0: warning: disable ipu dma channel 8 during its busy state
--------------------------------------------------------------------------------------------------------------------------
According to the reference manual, the VDIC supports up to 1080i60.
But in the above message, IDMAC DMA channel 8 refers to the previous field, and it is busy.
Does anyone know the reason?
Thank you.
Hyungjun Yoon
All, this document is being closed to any new comments/replies. If you have posted a question here, or if you have a question about this topic, please create a discussion so that our support team can see it and respond to it.
Sorry for any inconvenience.