Video Playback Performance Evaluation on i.MX6DQ Board


In this article, we run experiments to evaluate the video playback capability of the i.MX6DQ under different VPU clocks.

1. Preparation

Board: i.MX6DQ SD

Bitstream: the 1080p "sunflower" clip at 40 Mbps, generally considered the toughest H.264 test clip. The original clip is concatenated 20 times to generate a new raw stream (the sunflower clip repeated 20 times), which is then encapsulated in an MP4 container. This minimizes the influence of GStreamer's startup overhead when comparing against the VPU unit test.

Kernels: kernels built with different VPU clock settings: 270 MHz, 298 MHz, 329 MHz, 352 MHz, 382 MHz.

Test setting: 1080p content decoded and displayed on a 1080p device (no resize).

2. Test commands for VPU unit test and GStreamer

Tiled-format video playback is faster than NV12, so in the experiments below we use the tiled format.

Unit test command (we set the frame rate to 70 with -a 70, higher than the 1080p60 HDMI refresh rate):

    /unit_tests/mxc_vpu_test.out -D "-i /media/65a78bbd-1608-4d49-bca8-4e009cafac5e/sunflower_2B_2ref_WP_40Mbps.264 -f 2 -y 1 -a 70"

GStreamer command (free run, to measure the highest possible playback speed):

    gst-launch filesrc location=/media/65a78bbd-1608-4d49-bca8-4e009cafac5e/sunflower_2B_2ref_WP_40Mbps.mp4 typefind=true ! aiurdemux ! vpudec framedrop=false ! queue max-size-buffers=3 ! mfw_v4lsink sync=false

3. Video playback framerate measurement

During the test, we run "echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor" to keep the CPU at its highest frequency, so that it can respond to any interrupt quickly.

For each test point (each VPU clock), we run 5 rounds. The maximum and minimum playback values are discarded, and the remaining 3 values are averaged to obtain the final playback framerate.
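The trimmed averaging described above can be sketched in a few lines of Python (a minimal illustration; the function name is ours, not from any test script):

```python
def trimmed_playback_avg(fps_samples):
    """Average 5 playback-fps samples after dropping one max and one min."""
    assert len(fps_samples) == 5
    kept = sorted(fps_samples)[1:-1]  # drop the single lowest and highest value
    return sum(kept) / len(kept)

# Playback fps of the five 270 MHz unit-test rounds:
print(round(trimmed_playback_avg([57.3, 57.04, 57.3, 56.15, 55.4]), 2))  # 56.83
```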

| VPU clock | Test | Dec #1 | Playback #1 | Dec #2 | Playback #2 | Dec #3 | Playback #3 | Dec #4 | Playback #4 | Dec #5 | Playback #5 | Min Playback | Max Playback | Avg Playback |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 270 MHz | unit test | 57.8 | 57.3 | 57.81 | 57.04 | 57.78 | 57.3 | 57.87 | 56.15 | 57.91 | 55.4 | 55.4 | 57.3 | 56.83 |
| 270 MHz | GST | – | 53.76 | – | 54.163 | – | 54.136 | – | 54.273 | – | 53.659 | 53.659 | 54.273 | 54.01967 |
| 298 MHz | unit test | 60.97 | 58.37 | 60.98 | 58.55 | 60.97 | 57.8 | 60.94 | 58.07 | 60.98 | 58.65 | 57.8 | 58.65 | 58.33 |
| 298 MHz | GST | – | 56.755 | – | 49.144 | – | 53.271 | – | 56.159 | – | 56.665 | 49.144 | 56.755 | 55.365 |
| 329 MHz | unit test | 63.8 | 59.52 | 63.92 | 52.63 | 63.8 | 58.1 | 63.82 | 58.26 | 63.78 | 59.34 | 52.63 | 59.52 | 58.56667 |
| 329 MHz | GST | – | 57.815 | – | 55.857 | – | 56.862 | – | 58.637 | – | 56.703 | 55.857 | 58.637 | 57.12667 |
| 352 MHz | unit test | 65.79 | 59.63 | 65.78 | 59.68 | 65.78 | 59.65 | 66.16 | 49.21 | 65.93 | 57.67 | 49.21 | 59.68 | 58.98333 |
| 352 MHz | GST | – | 58.668 | – | 59.103 | – | 56.419 | – | 58.08 | – | 58.312 | 56.419 | 59.103 | 58.35333 |
| 382 MHz | unit test | 64.34 | 56.58 | 67.8 | 58.73 | 67.75 | 59.68 | 67.81 | 59.36 | 67.77 | 59.76 | 56.58 | 59.76 | 59.25667 |
| 382 MHz | GST | – | 59.753 | – | 58.893 | – | 58.972 | – | 58.273 | – | 59.238 | 58.273 | 59.753 | 59.03433 |

Note: the Dec column is the VPU decoding fps, while the Playback column is the overall playback fps. (Decode fps was not recorded for the GStreamer runs.)


Some explanation:

Why does the GStreamer performance keep improving while the unit test result is flatter? GStreamer uses a VPU wrapper that makes the VPU API more intuitive to call. At first, the overall GST playback performance is constrained by the VPU (VPU decode at 57.8 fps). As the VPU decode rate rises above 60 fps with increasing VPU clock, the constraint becomes the 60 fps display refresh rate.
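This two-regime behavior can be summarized by a simple model: playback fps is capped by whichever is lower, the VPU decode rate or the display refresh rate. A rough sketch (ignoring the roughly 1 fps display overhead, so an upper bound rather than a prediction):

```python
DISPLAY_REFRESH_FPS = 60.0  # 1080p60 HDMI refresh rate

def playback_cap(vpu_decode_fps):
    """Upper bound on playback fps: min(decode rate, display refresh rate)."""
    return min(vpu_decode_fps, DISPLAY_REFRESH_FPS)

print(playback_cap(57.8))  # 270 MHz: VPU-bound, caps at 57.8
print(playback_cap(65.8))  # 352 MHz: display-bound, caps at 60.0
```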

The video display overhead of GStreamer is only about 1 fps, similar to the unit test.

Based on the test results, we can see that at 352 MHz, overall 1080p video playback on a 1080p display reaches ~60 fps.

Alternatively, by time-sharing the VPU between two pipelines on two displays, we can achieve 2 x 1080p @ 30 fps video playback.

However, this experiment is only valid for 1080p video playback on a 1080p display. For interlaced clips, or for displays whose size differs from 1080p, overall playback performance is limited by post-processing such as de-interlacing and resizing.

Comments

Hello,

Thanks for presenting this information. It's very beneficial.

I'm debugging a related problem and have a few questions.

The problem:

I'm playing multiple network streams (4, 8, or 16) to a 1080p screen using GStreamer. The screen supports DVI only, so I'm using an HDMI-to-DVI cable. Of course I'm scaling the streams down with the mfw_isink element to fit them on the screen. All streams start fine without any noticeable errors, but after a few hours the video freezes without any error messages.

The pipeline I'm using is:

                                 -> vpudec  ->
    appsrc -> output-selector                    input-selector -> mfw_isink
                                 -> jpegdec ->

My questions:

1. The system will be playing video 24/7 and I don't care about power consumption. Is it better to always use the command "echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor" ?

2. Why do you use "queue max-size-buffers=3" between the decoder and the sink? How did you decide on a size of 3?

3. Do you have any tools for monitoring the VPU and IPU performance?

4. What is more likely to be the bottleneck, the display or the decoder ?

Thanks

Hi Tarek,

1. freezing issue: can you monitor the RAM memory usage/free (top -b | grep Mem) while your pipelines are running?

2. queue: regarding 'max-size-buffers', I believe the reason to reduce it is that vpudec needs a certain number of buffers circulating in the pipeline at all times; with the default value, vpudec will eventually run out of buffers.

Hi Leo,

1. I don't see any memory leaks. I've used top, Valgrind, and mtrace, so I have no doubts.

2. I'm not quite sure I understand that! The default value is 20 buffers, and we need to reduce it to 3 so the VPU does not run out of buffers? I would have thought we need to increase the number, right?

jackmao should have a better understanding of why we need to decrease the queue's buffer count. AFAIK, vpudec needs at least 6 buffers in circulation, and we cannot change that through the vpudec properties, so the only parameter left is the queue's buffer count. In fact, I inspected the queue element and it seems the default is 200!

max-size-buffers    :     Max. number of buffers in the queue (0=disable)

                        flags: readable, writable

                        Unsigned Integer. Range: 0 - 4294967295 Default: 200 Current: 200

BTW, have you tried using the mfw_ipucsc element for down-scaling?

Leo

Hi Leo,

I'm using mfw_isink, which I think uses the same IPU as mfw_ipucsc. Do you think there is an additional benefit to using it?

I will try it anyway so my pipeline will be like this:

appsrc -> vpudec -> mfw_ipucsc -> mfw_isink. Is that correct?

Hi Tarek,

mfw_ipucsc is a color-space converter and resolution scaler, so you can remove it from the pipeline if you do not need those features. On the other hand, if you add it without any capsfilter in front, (I believe) it does not do anything.

Leo

Hi Leo,

I do need down-scaling, but my question is:

What is the difference between scaling down using mfw_ipucsc and mfw_isink? Is there any performance gain?

Oh I see. I am not sure. jackmao, can you comment on this?

Leo

There should be no big difference in down-scaling, because they both use the IPU.

The VPU uses 6 buffers for decoding, and the v4lsink uses two of them; the queue uses 3 buffers. You can't set the queue too large, otherwise there aren't enough buffers left for the pipeline to run.
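The buffer budget in this comment can be written out as simple arithmetic. A sketch only: the total pool size of 11 is a hypothetical figure we derived by adding up the counts mentioned in the thread, not a documented value of vpudec:

```python
# Frame-buffer budget per the comment above.
POOL = 11       # ASSUMED total frame buffers allocated for the pipeline
VPU_DECODE = 6  # buffers vpudec needs in flight while decoding
SINK_HELD = 2   # buffers the v4l sink holds for display

def max_queue_buffers(pool=POOL):
    """Largest safe queue size before the pipeline starves for buffers."""
    return pool - VPU_DECODE - SINK_HELD

print(max_queue_buffers())  # 3, matching queue max-size-buffers=3
```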

Hi Junping,

Do I also need to use a queue before mfw_isink?

Last update: 10-09-2012 08:13 AM