
Camera Capture through IPU and VPU with minimum GStreamer usage

Question asked by Ilko Dossev on Jan 27, 2014
Latest reply on Feb 24, 2014 by Ilko Dossev

People,

 

On a Linux system GStreamer does a sufficient job of capturing a couple of video streams from cameras and delivering them to a file or network sink, but for more and bigger streams it involves too much memcpy-ing for good performance.

Native MX6 cameras can stream directly through the IPU and VPU to the display, but this pipeline is not available for PCI-E multi-camera adapters.

 

The architecture I am currently working with is the Intersil TW6869 PCI-E chip, which is able to deliver up to 8 streams over DMA, in RGB16 or UYVY format, at 30 FPS.

I am able to pass the DMA frame buffer for each camera directly from my capture driver to the IPU for conversion to either the NV12 or the I420 planar format. The IPU returns the result in DMA buffers of its own, which I allocate at the start of streaming. But passing these buffers afterwards to user space for processing by GStreamer turns out to be very performance-degrading; my estimate is that at least 4 memcpy operations take place on buffers that are 3/4 the size of a 640*480*16bpp frame.
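For reference, the conversion step from my capture driver looks roughly like the sketch below. It assumes the ipu_queue_task() helper exported by the Freescale BSP's mxc_ipu driver and struct ipu_task from <linux/ipu.h>; the addresses, dimensions and error handling are placeholders for whatever the capture driver already manages.

#include <linux/types.h>
#include <linux/string.h>
#include <linux/ipu.h>

/* Convert one UYVY frame from the TW6869 DMA engine into a
 * pre-allocated NV12 buffer, using the IPU's IC task queue. */
static int convert_uyvy_to_nv12(dma_addr_t src_paddr, dma_addr_t dst_paddr)
{
	struct ipu_task task;

	memset(&task, 0, sizeof(task));

	/* Source: raw camera frame as delivered over PCI-E DMA. */
	task.input.width   = 640;
	task.input.height  = 480;
	task.input.format  = IPU_PIX_FMT_UYVY;
	task.input.paddr   = src_paddr;

	/* Destination: NV12 buffer allocated at the start of streaming. */
	task.output.width  = 640;
	task.output.height = 480;
	task.output.format = IPU_PIX_FMT_NV12;
	task.output.paddr  = dst_paddr;

	/* Blocks until the IPU completes (or rejects) the task. */
	return ipu_queue_task(&task);
}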

 

My idea is to feed these NV12 (or I420) buffers directly to the VPU for encoding into a compressed format such as H.264, suitable for streaming over network connections.
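As I understand it, the user-space encode path through libvpu.so (vpu_lib.h from the imx-vpu package) roughly follows the sequence vpu_Init, vpu_EncOpen, vpu_EncGetInitialInfo, vpu_EncRegisterFrameBuffer, then per frame vpu_EncStartOneFrame / vpu_EncGetOutputInfo, and finally vpu_EncClose / vpu_UnInit. Below is a simplified per-frame sketch only; encoder setup, frame-buffer registration, rate control and most error handling are omitted, and the structure fields should be checked against the BSP headers for the release in use.

#include <string.h>
#include <vpu_lib.h>
#include <vpu_io.h>

/* Encode one NV12 frame (already produced by the IPU) into the
 * bitstream buffer previously passed to vpu_EncOpen(). */
int encode_one_frame(EncHandle handle, FrameBuffer *src_nv12,
		     PhysicalAddress bs_paddr, int bs_size,
		     EncOutputInfo *out)
{
	EncParam param;

	memset(&param, 0, sizeof(param));
	param.sourceFrame         = src_nv12;  /* NV12 frame from the IPU   */
	param.picStreamBufferAddr = bs_paddr;  /* bitstream output buffer   */
	param.picStreamBufferSize = bs_size;

	if (vpu_EncStartOneFrame(handle, &param) != RETCODE_SUCCESS)
		return -1;

	while (vpu_IsBusy())
		vpu_WaitForInt(100);           /* wait for the encode IRQ   */

	/* out->bitstreamBuffer / out->bitstreamSize then describe the
	 * H.264 data ready to be streamed out. */
	return vpu_EncGetOutputInfo(handle, out) == RETCODE_SUCCESS ? 0 : -1;
}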

 

However, the interface to the VPU driver "mxc_vpu.ko" is much more complicated than the interface to the "mxc_ipu.ko" driver: a user-mode library, "libvpu.so", sits on top of the VPU kernel driver, and above it sits the FSL GStreamer video encoder plugin. I am certain that such an architecture is feasible (at least in theory), but:

1. Is this IPU-to-VPU data & control connectivity practically usable on MX6 Quad or Dual platforms? And,

2. If so, what would be the most efficient way to gain control of the data streams in the camera capture driver code?

 

Thanks for your opinions and advice,

Ilko Dossev

Qualnetics
