
Implementing VF4 data flow for video preview

Question asked by Ivo Strasil on Dec 7, 2015

We are trying to implement tear-free, low-latency video preview for a 1080p25 camera and a 1080p50 HDMI output. The IMX6SDLRM states that this should be possible using the VF4 data flow; is that correct?

 

Hardware setup:

  • Variscite VAR-SOM-MX6 Solo + custom carrier board with ADV7181C video digitizer
  • Sony FCB-EV7300 FullHD video camera, used in 1080p25 mode
  • FullHD LCD panel connected via HDMI


Desired function:

We would like to be able to show the video preview and record the video at the same time. We were able to achieve that using the GStreamer SDK; unfortunately, the preview function does not work in GStreamer with mfw_v4lsrc, so we had to use mfw_isink, which sadly introduced quite a bit of video latency.

 

We would like to show the preview with as little latency as possible and without any tearing. The IMX6SDLRM contains Table 9-4 ("Time-Shared Data Flows Through The IPU"), which lists the different data flows.

We believe that a data flow of type VF4 would suit us best. We do not need any other video flows, the input and output resolutions are identical, and the display frame rate is exactly twice the camera frame rate. Unfortunately, there is no further information about VF4 in the reference manual, so we have to guess a bit...

 

Current status:

Right now we are able to show the video preview using the mxc_v4l2_overlay unit test. We had some issues with color space conversion (CSC) in the image converter (IC): it split the image into four parts because the IC output is limited to a maximum size of 1024x1024 pixels. We therefore modified the kernel a bit so that the CSI writes data directly to memory without any involvement of the IC, and moved the CSC to the display processor (DP), which is able to convert YUV to RGB (we receive YUV 4:2:2 from the camera and send RGB565 to the display).

 

Technically speaking, we use the CSI_MEM channel to move data from the camera to memory and MEM_FG_SYNC to display the data from memory. One (double) buffer is shared by both channels, and the memory/bus traffic confirms that the data is written only once.

 

It seems to work quite well and the latency is very low, but we observe vertical tearing. How can we avoid that?

 

Idea:

The vertical tearing is caused by a timing mismatch between the camera and the display. Since we can neither change the camera's behavior nor replace the camera (it is what our customer specified), we have to change the display timing. We believe it should be possible to trigger the display synchronization from the camera VSYNC; is that correct?

 

Unfortunately, we are quite confused about how to achieve that. What exactly generates the synchronization for the display: is it the waveform generator in the DI? If so, how can we change its settings to obtain the described behavior? Is there any document providing more information on this topic?

 

Thanks a lot for any idea and/or hint.
