
Using the GPU/IPU for hardware accelerated videomixing?

Question asked by Dilip Kumar on Sep 4, 2013
Latest reply on Nov 23, 2016 by Akshay Budhiraja

Hi, I'm trying to stream two cameras (a MIPI camera and a USB camera) at the same time to a single display so as to create a stereoscopic video, and I also need to be able to save the output to a file when necessary. I've been able to do this successfully with the following GStreamer pipeline:

 

gst-launch-0.10 -v \
  videomixer name=mix ! mfw_v4lsink sync=false \
  v4l2src device=/dev/video0 ! video/x-raw-yuv,width=640,height=480,framerate=30/1 ! ffmpegcolorspace ! videobox fill=2 border-alpha=0 top=0 left=0 bottom=0 right=0 ! mix. \
  mfw_v4lsrc device=/dev/video1 capture-mode=0 fps-n=30 fps-d=1 ! ffmpegcolorspace ! videobox fill=1 border-alpha=0 top=0 left=-640 bottom=0 right=0 ! mix.

 

As you can see, the pipeline places the two camera streams in two videoboxes and combines them into a single stream with a videomixer element, and I'm able to view the output on the display. I can even encode the combined stream with vpuenc and save it to a file of my choice. The problem is that a lot of frames are dropped when the videobox and videomixer elements are in the pipeline. Previously I had obtained the same on-screen output by creating two overlays with mfw_isink, but that way I could only view the streams on the display, not encode them to a file or stream them over the network. That pipeline was as follows:
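For completeness, the record-to-file variant I mean looks roughly like this. This is only a sketch: the vpuenc codec property values and the avimux/filesink tail are assumptions that may need adjusting for your gst-fsl-plugins version.

```shell
# Sketch only: mix the two boxed streams as above, then encode with the VPU
# and mux to a file. vpuenc codec selection may differ across plugin versions,
# so treat this as an outline rather than a tested command line.
gst-launch-0.10 -v \
  videomixer name=mix ! queue ! vpuenc codec=avc ! avimux ! filesink location=stereo.avi \
  v4l2src device=/dev/video0 ! video/x-raw-yuv,width=640,height=480,framerate=30/1 ! ffmpegcolorspace ! videobox fill=2 border-alpha=0 top=0 left=0 bottom=0 right=0 ! mix. \
  mfw_v4lsrc device=/dev/video1 capture-mode=0 fps-n=30 fps-d=1 ! ffmpegcolorspace ! videobox fill=1 border-alpha=0 top=0 left=-640 bottom=0 right=0 ! mix.
```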

 

gst-launch-0.10 -v \
  v4l2src device=/dev/video0 ! video/x-raw-yuv,width=640,height=480,framerate=30/1 ! mfw_isink axis-left=0 axis-top=0 disp-width=800 disp-height=900 \
  mfw_v4lsrc device=/dev/video1 capture-mode=0 fps-n=30 fps-d=1 ! mfw_isink axis-top=0 axis-left=800 disp-width=800 disp-height=900

 

Now to my actual problems:

 

  1. There is a huge drop in frame rate with the first pipeline (videobox plus videomixer). I suspect this is because all of the video processing is done by the CPU, whereas in the second pipeline it is all done by the IPU. Is there any way to use hardware acceleration via the IPU/GPU to create the overlays and then display the result with mfw_v4lsink or encode it to a file with vpuenc?
  2. The two camera streams are also out of sync with the first pipeline, even though both are focused on the same moving object. The second pipeline streams the two cameras flawlessly and in sync, so I suspect videobox and videomixer are responsible here as well.
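One thing that might be worth trying against the sync problem, sketched below as a guess rather than a fix I have verified: a queue after each source gives every capture branch its own thread, so one slow camera cannot stall the other before the mixer. This does not move any work onto the IPU/GPU.

```shell
# Sketch: queue elements decouple the two capture threads, which may help the
# mixer wait on both branches instead of one blocking the other.
# Note: this is still a CPU-only path; no IPU/GPU offload is involved.
gst-launch-0.10 -v \
  videomixer name=mix ! mfw_v4lsink sync=false \
  v4l2src device=/dev/video0 ! video/x-raw-yuv,width=640,height=480,framerate=30/1 ! queue ! ffmpegcolorspace ! videobox fill=2 border-alpha=0 top=0 left=0 bottom=0 right=0 ! mix. \
  mfw_v4lsrc device=/dev/video1 capture-mode=0 fps-n=30 fps-d=1 ! queue ! ffmpegcolorspace ! videobox fill=1 border-alpha=0 top=0 left=-640 bottom=0 right=0 ! mix.
```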

 

Notes:

  1. The display I'm using has a resolution of 1600x900 at 60 Hz and is connected via an HDMI-to-DVI adapter.
  2. I'm using a SABRE Lite rev. D board with an i.MX6 Quad processor running Linux kernel version 3.0.35.
  3. For testing I run both cameras at VGA resolution and 30 fps, even though both are capable of streaming up to 1080p at 30 fps; the frame rate drops hugely at higher resolutions.
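For anyone wanting to quantify the drop rather than eyeball it, something like fpsdisplaysink (from gst-plugins-bad; I'm assuming it is available in your 0.10 install) prints the measured frame rate, so each source can be benchmarked in isolation before videobox/videomixer are added:

```shell
# Sketch: fpsdisplaysink reports measured fps on the console, making it easy
# to compare a bare capture branch against the full mixing pipeline.
gst-launch-0.10 -v \
  v4l2src device=/dev/video0 ! video/x-raw-yuv,width=640,height=480,framerate=30/1 ! fpsdisplaysink
```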

 

Any comments or suggestions would be very much helpful. Thank you.

 

P.S.:

 

It would be really great if Freescale released an update to the gst-fsl-plugins package with IPU-based videobox and videomixer elements, just like the IPU-based video sink (mfw_isink). Francisco Alberto Carrillo Dominguez Leonardo Sandoval Gonzalez Lily Zhang
