Hi all,
we are using the i.MX 8M processor to decode an H.264 stream from an Ethernet camera. The stream is decoded by the gstreamer plugin vpudec. The latency we trace for the gstreamer vpudec plugin is approximately 250 ms.
The Yocto BSP version on the embedded system is 2.5 (sumo). vpudec is at version 4.4.5, and imx-vpuwrap is at version 4.4.5-r0 (according to bitbake -s).
There are two posts (https://community.nxp.com/thread/310455) (https://community.nxp.com/thread/304322) where the VPU decoding delay was successfully eliminated on the i.MX 6 processor.
Is this also possible for the i.MX 8M processor? It was mentioned there that disabling the reorderEnable option in the VPU decoder has a significant effect for low-latency decoding. Is this done in the vpu wrapper?
What are the steps needed to enable low-latency H.264 decoding in gstreamer with the vpudec hardware decoder plugin on the i.MX 8M processor?
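For reference (not something confirmed in this thread): whether the installed vpudec exposes a reorder- or latency-related property can be checked directly on the target with gst-inspect-1.0. The grep pattern below is only a guess at likely property names:

```shell
# Check which tunable properties the vpudec element exposes (requires the
# i.MX gstreamer plugins on the target; prints a notice elsewhere).
if gst-inspect-1.0 vpudec >/dev/null 2>&1; then
  # look for anything reorder-, latency- or frame-related among the properties
  gst-inspect-1.0 vpudec | grep -i -E 'reorder|latency|frame' || true
else
  echo "vpudec plugin not available on this machine"
fi
```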
Best Regards
Dennis
Hi kliewer,
Did you manage to reduce the latency?
If yes, in what way?
I would be grateful for any guidance.
Regards
For now, while we are trying to reduce the latency, only gstreamer with one camera is started, and weston is loaded at boot time as well. The kernel version is 4.14.98 and the gstreamer version is 1.14.4.
So far we have used the following gstreamer pipelines:
1) gst-launch-1.0 udpsrc port=5002 ! application/x-rtp ! rtph264depay ! vpudec ! waylandsink async=false enable-last-sample=false qos=false sync=false
2) gst-launch-1.0 udpsrc port=5002 ! application/x-rtp ! rtph264depay ! h264parse ! vpudec frame-drop=false frame-plus=0 ! waylandsink async=false enable-last-sample=false qos=false sync=false
3) gst-launch-1.0 udpsrc port=5002 ! application/x-rtp ! rtpjitterbuffer latency=100 ! queue max-size-buffers=0 ! rtph264depay ! vpudec ! waylandsink async=false enable-last-sample=false qos=false sync=false
4) gst-launch-1.0 udpsrc port=5002 ! "application/x-rtp, media=(string)video" ! rtpjitterbuffer ! rtph264depay ! vpudec ! waylandsink sync=false
For the four gstreamer pipelines above, the latency was approximately 270 ms.
To measure the latency of vpudec and the gstreamer pipeline, we built the Yocto BSP with this additional entry in local.conf:
PACKAGECONFIG_append_pn-gstreamer1.0 = " gst-tracer-hooks debug"
Afterwards it was possible to use tracing to measure the latency of the gstreamer pipeline.
The vpudec latency was measured using the fakesink element:
GST_TRACERS="latency(flags=pipeline+element+reported)" GST_DEBUG="GST_TRACER:7" gst-launch-1.0 udpsrc port=5002 ! application/x-rtp ! rtph264depay ! vpudec ! fakesink
The output was:
0:00:00.381469695 3578 0x1b583400 TRACE GST_TRACER :0:: latency, src=(string)udpsrc0_src, sink=(string)fakesink0_sink, time=(guint64)272432531, ts=(guint64)381415575;
To determine the difference, the same command was executed without vpudec:
GST_TRACERS="latency(flags=pipeline+element+reported)" GST_DEBUG="GST_TRACER:7" gst-launch-1.0 udpsrc port=5002 ! application/x-rtp ! rtph264depay ! fakesink
Where the output was:
0:00:00.102640444 3583 0x1434c370 TRACE GST_TRACER :0:: latency, src=(string)udpsrc0_src, sink=(string)fakesink0_sink, time=(guint64)825720, ts=(guint64)102583684;
Here "time=" is the latency of the pipeline in nanoseconds.
From this it can be determined that without vpudec the pipeline latency is under 1 ms, and it rises to about 270 ms as soon as vpudec is used.
We also varied the camera parameters for bitrate, frames per second and resolution to see whether they affect the latency. For all parameter combinations the latency stayed the same, at about 270 ms.
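The conversion from the tracer output above can be sketched like this (a hedged helper, not part of gstreamer itself; the sample line is copied from the measurement above):

```shell
# Extract the "time=" field (nanoseconds) from a GST_TRACER latency line
# and convert it to milliseconds.
line='0:00:00.381469695 3578 0x1b583400 TRACE GST_TRACER :0:: latency, src=(string)udpsrc0_src, sink=(string)fakesink0_sink, time=(guint64)272432531, ts=(guint64)381415575;'
ns=$(printf '%s\n' "$line" | sed -n 's/.*time=(guint64)\([0-9][0-9]*\).*/\1/p')
ms=$((ns / 1000000))
echo "pipeline latency: ${ms} ms"   # -> pipeline latency: 272 ms
```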
We will also be testing the BSP version 5.4.24.
Regards
Dennis
What is your server? As far as I know, an encoding issue on the server side can cause VPU decoding latency on the host side, so I suggest confirming your server side first.
I cannot reproduce this issue. Does your CPU do anything else besides decoding? And would you mind testing the latest version (5.4.24), which fixes many bugs from the old BSP versions?
Hi There !
OK, so what kind of latency did you see?
Thanks,
/Otto