i.MX6 Video Latency

allanmatthew
Contributor IV

I'm working with the i.MX6Q on a Boundary Devices BD-SL-i.MX6 and I'm seeing some interesting results using GStreamer with mfw_v4lsrc/mfw_v4lsink and the hardware encoder.  I'm running Ubuntu 12.04 on Boundary's 3.0.35 kernel.

I'm testing latency by pointing the Boundary Devices OV camera modules (parallel and MIPI, same results) at a stopwatch and placing the output display next to the stopwatch.  I then take a photo of both, and I'm calling the delta between the stopwatch value shown on the display and the actual stopwatch value my latency.

When outputting directly from the camera to an HDMI screen connected to the BD-SL-i.MX6 using the command

gst-launch-0.10 mfw_v4lsrc capture-mode=4 ! mfw_v4lsink disp-width=1280 disp-height=720 sync=false -v

I get a latency of 117ms (which seems very high to me).

I want to stream this over a network, so I'm using vpuenc, the H.264 RTP payloader and a UDP sink:

gst-launch-0.10 mfw_v4lsrc capture-mode=4 ! vpuenc codec=6 bitrate=$BITRATE ! rtph264pay ! udpsink host=$HOST port=5000 sync=false -v

And on the receiver side (also a BD-SL-i.MX6):

gst-launch-0.10 udpsrc port=5000 caps=(caps from source) ! rtph264depay ! vpudec low-latency=true ! ffmpegcolorspace ! mfw_v4lsink sync=false -v

This transmits the H.264 video nicely, but when I measure the latency I get the exact same 117ms as in the no-network case!  What is going on here?!  I would imagine the network hop (Ethernet -> router -> WiFi) and the encode/decode should introduce at least a couple of milliseconds, and probably quite a few more.
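
For what it's worth, the raw round-trip time of the link can be checked independently of GStreamer (using the same $HOST as in the sender pipeline above):

ping -c 20 $HOST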

Is there some buffering or delay in the first command (camera->HDMI out) that I'm unaware of?  I'm hoping to reduce this latency as much as possible, down to the limit of what the encode/transmit/decode is capable of.

Thanks!

imx_learner
Contributor I

Hi Allan,

Your post is very close to what I am currently looking for. I want to stream video from a drone to a ground station, and I am thinking of using the i.MX6 processors. Would you be kind enough to share the details of the screen/LCD and the camera you are using? How are you decoding the received data on the receiver side?  Also, if I establish an ad-hoc link between the i.MX board (server side) and a normal desktop PC (receiver side), do you think the induced latency will be higher?

LeonardoSandova
Specialist I

Let's look at the timestamps printed by the source:

What are the deltas between consecutive buffers' timestamps when launching 'mfw_v4lsrc ! fakesink -v'?

The fastest you can get camera data is one frame every 33.3 ms (1/30 fps), so your 117 ms delta corresponds to roughly 3.5 frames of latency, which is of course pretty bad. I am not sure whether the system is underperforming or your latency measurement is off.

Regarding the UDP pipelines, are you launching both pipes on the same board (localhost, 127.0.0.1)? If so, there is not much (or any?) network latency. On the other hand, the vpu* elements are filters, so they introduce their own delays...

BTW, if you want more insight into the latency of each system call, run the pipelines under strace -T.
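
For example, something like this (just a sketch: /tmp/pipeline.strace is an arbitrary output file, -f follows the threads GStreamer spawns, and the final grep is only a rough filter that drops syscalls which completed in under 10 ms):

strace -T -f -o /tmp/pipeline.strace gst-launch-0.10 mfw_v4lsrc ! fakesink

grep -v '<0\.00' /tmp/pipeline.strace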

Leo

allanmatthew
Contributor IV

Leo-

How do I monitor the buffer timestamp deltas?  I'm not sure how or where to output the timestamps in the GStreamer pipeline.

I'm not using the loopback for the UDP pipeline, I'm running from an i.MX6 connected via USB WiFi to a router which is connected via ethernet to another i.MX6.

Thanks,

-Allan

LeonardoSandova
Specialist I

Instead of using a video sink, use fakesink, and add the -v flag. For example:

$ gst-launch mfw_v4lsrc fps-n=30 ! fakesink -v 

MFW_GST_V4LSRC_PLUGIN 3.0.10 build on Jan 24 2014 02:34:10.

Setting pipeline to PAUSED ...

/GstPipeline:pipeline0/MFWGstV4LSrc:mfwgstv4lsrc0.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)I420, width=(int)640, height=(int)480, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1

Pipeline is live and does not need PREROLL ...

Setting pipeline to PLAYING ...

New clock: GstSystemClock

/GstPipeline:pipeline0/GstFakeSink:fakesink0.GstPad:sink: caps = video/x-raw-yuv, format=(fourcc)I420, width=(int)640, height=(int)480, framerate=(fraction)30/1, pixel-aspect-ratio=(fraction)1/1

/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = "preroll   ******* "

/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = "event   ******* (fakesink0:sink) E (type: 102, GstEventNewsegment, update=(boolean)false, rate=(double)1, applied-rate=(double)1, format=(GstFormat)GST_FORMAT_BYTES, start=(gint64)0, stop=(gint64)-1, position=(gint64)0;) 0x2de928"

/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = "chain   ******* (fakesink0:sink) (460800 bytes, timestamp: 0:00:00.033265333, duration: 0:00:00.033333333, offset: 1075314688, offset_end: -1, flags: 4128 discont ) 0x2e8000"

/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = "chain   ******* (fakesink0:sink) (460800 bytes, timestamp: 0:00:00.067261000, duration: 0:00:00.033333333, offset: 1075838976, offset_end: -1, flags: 4096 ) 0x2e80b8"

$ gst-launch mfw_v4lsrc fps-n=15 ! fakesink -v

MFW_GST_V4LSRC_PLUGIN 3.0.10 build on Jan 24 2014 02:34:10.

Setting pipeline to PAUSED ...

/GstPipeline:pipeline0/MFWGstV4LSrc:mfwgstv4lsrc0.GstPad:src: caps = video/x-raw-yuv, format=(fourcc)I420, width=(int)640, height=(int)480, framerate=(fraction)15/1, pixel-aspect-ratio=(fraction)1/1

Pipeline is live and does not need PREROLL ...

Setting pipeline to PLAYING ...

New clock: GstSystemClock

/GstPipeline:pipeline0/GstFakeSink:fakesink0.GstPad:sink: caps = video/x-raw-yuv, format=(fourcc)I420, width=(int)640, height=(int)480, framerate=(fraction)15/1, pixel-aspect-ratio=(fraction)1/1

/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = "preroll   ******* "

/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = "event   ******* (fakesink0:sink) E (type: 102, GstEventNewsegment, update=(boolean)false, rate=(double)1, applied-rate=(double)1, format=(GstFormat)GST_FORMAT_BYTES, start=(gint64)0, stop=(gint64)-1, position=(gint64)0;) 0x337928"

/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = "chain   ******* (fakesink0:sink) (460800 bytes, timestamp: 0:00:00.066862000, duration: 0:00:00.066666666, offset: 1075314688, offset_end: -1, flags: 4128 discont ) 0x342000"

/GstPipeline:pipeline0/GstFakeSink:fakesink0: last-message = "chain   ******* (fakesink0:sink) (460800 bytes, timestamp: 0:00:00.134798666, duration: 0:00:00.066666666, offset: 1075838976, offset_end: -1, flags: 4096 ) 0x3420b8"

allanmatthew
Contributor IV

Thanks Leo-

I'm seeing exactly what you're seeing: the 0.033 s and 0.066 s durations.  So it seems the delay is in mfw_v4lsink?

-Allan

LeonardoSandova
Specialist I

Hi Allan,

I think the latency you are observing is normal. You are measuring the delay between capture time and display time, which is of course non-zero; ideally it should be constant, so that delays do not accumulate. I have not tried it, but if you want to see how much time is spent in each system call, run the pipeline under strace -T as mentioned above.

allanmatthew
Contributor IV

Leo-

117ms still seems like a crazy amount of latency, especially since I'm just trying to display camera data directly on the screen.  mfw_v4lsrc seems to be OK in that its only latency is one frame (33.3ms), but where does the additional 83.7ms come from?  Does mfw_v4lsink have a two-frame buffer plus some additional overhead?
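
Is there a way to confirm that from the element itself? The best I can think of is dumping its properties and grepping for anything queue- or buffer-related (assuming it even exposes such a property):

gst-inspect-0.10 mfw_v4lsink | grep -i -E 'queue|buffer|frame'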

-Allan

EricNelson
Senior Contributor II

Hi Allan,

"Crazy" sometimes depends on your needs.


I suspect that the primary application of mfw_v4lsink is video playback, and 117ms may be really small for that use-case.

Have you tried using mfw_isink? I don't believe it queues more than a single frame.
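
Something along these lines should be a drop-in test of your local camera-to-display case (a sketch only; I'm assuming mfw_isink accepts the usual sync property and am otherwise leaving it at its defaults):

gst-launch-0.10 mfw_v4lsrc capture-mode=4 ! mfw_isink sync=false -v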

allanmatthew
Contributor IV

Eric-

Crazy may have been a bit strong :smileyhappy:

Excellent recommendation on using mfw_isink.  I'm down to 59ms of latency!  Unfortunately there is some strange framebuffer behavior... the video appears to be overlaid on the CLI and looks very dark and bluish.  I'm guessing I'm missing some memory configuration somewhere?

Thanks,

-Allan

EricNelson
Senior Contributor II

Allan Matthew wrote:

...

I'm guessing I'm missing some memory configuration somewhere?

Right. The default sets things up for alpha-blending. There are some ioctls to control color-keying and alpha-blending that you might need to integrate.

allanmatthew
Contributor IV

Sorry for my naiveté, but how do I disable alpha-blending with GStreamer?  I'm not finding anything obvious in gst-inspect mfw_isink.

LeonardoSandova
Specialist I

As always, good catch Eric. Check this link for your alpha question: https://community.freescale.com/docs/DOC-1518

allanmatthew
Contributor IV

That did it, thanks guys!