Why encoded video is choppy when saved to file?


dilipkumar
Contributor III

I'm streaming video from a camera using the following pipeline.

     gst-launch-0.10 mfw_v4lsrc ! vpuenc codec=6 ! matroskamux ! filesink location=test4way.mkv sync=false async=false

My requirement is to save the encoded video in H.264 format, but when I view the saved file with gplay (or any player on my PC) the video is choppy: it pauses every few seconds and resumes with a few frames lost during the pause. I don't think the problem is the encoder, because when I encode and stream the video over the network the result is not choppy; the problem only appears when saving to a file locally. Is the i.MX6 Quad not capable of fast enough data transfer? Do I need to change anything in the kernel, such as the DMA configuration? I'm using a SABRE Lite development board (i.MX6Q) from Boundary Devices running kernel version 3.0.35. Any suggestions?
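
One way to narrow this down (a hedged sketch, not verified on this board) is to run the same capture and encode path into a fakesink, which simply discards the data. If that runs smoothly over a long stretch, the write path rather than the encoder is the likely suspect:

    # same capture/encode path, but the muxed output is discarded instead of written to disk
    gst-launch-0.10 mfw_v4lsrc ! vpuenc codec=6 ! matroskamux ! fakesink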

42 Replies

LeonardoSandova
Specialist I

Do you have your filesystem NFS-mounted? If yes, that may be the reason.

Leo

dilipkumar
Contributor III

No, I'm not using NFS. I'm using an SD card formatted with an ext3 filesystem; it contains the rootfs.

LeonardoSandova
Specialist I

Strange. Can you try removing the sync and async properties from the pipeline?

Also, do you see the issue when rendering on the screen (mfw_v4lsink)?

Make sure you have GST_DEBUG=*:2 set in your environment before running any gst-launch command.
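
A minimal sketch of these two checks, using only element names that already appear in this thread (not verified on the board):

    # enable light debug output for all gst-launch runs in this shell
    export GST_DEBUG=*:2

    # 1) the original pipeline without the sync/async properties on filesink
    gst-launch-0.10 mfw_v4lsrc ! vpuenc codec=6 ! matroskamux ! filesink location=test.mkv

    # 2) render the camera directly to the display to rule out the capture path
    gst-launch-0.10 mfw_v4lsrc ! mfw_v4lsink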

Leo

dilipkumar
Contributor III

Let me make my issue clearer. I'll demonstrate it using H.263 (codec=5) instead of H.264, because I can't render the decoded stream when I stream H.264 over UDP (I still don't know why, but that is a different problem).

Sender: SABRE Lite

    gst-launch mfw_v4lsrc ! vpuenc codec=5 quant=1 ! matroskamux streamable=true ! tcpserversink port=5000

Receiver: PC running Ubuntu 12.04 LTS

    gst-launch tcpclientsrc host=<IP address of sender> port=5000 ! matroskademux ! ffdec_h263 ! xvimagesink

Monitoring the network usage on the PC showed that it was receiving data at approximately 5 MiB/s from the sender.

The received and decoded stream on the PC is continuous, without any skipped frames. So I modify the receiver pipeline a little and save the incoming stream to a file on the PC.

    gst-launch tcpclientsrc host=<IP address of sender> port=5000 ! filesink location=~/Videos/test.mkv

When I play the saved video using VLC or Totem, the video is again continuous without any frame loss.

Now I modify the sender pipeline so that the encoded video is saved locally to the microSD card containing the rootfs.

    gst-launch mfw_v4lsrc ! vpuenc codec=5 quant=1 ! matroskamux ! filesink location=test.mkv

When I try to play the saved video using gplay (on the board itself) or VLC (on the PC), the video looks choppy, with pauses and frame loss at irregular intervals. Any explanation for this behavior?

LeonardoSandova
Specialist I

Let's see whether the problem is an SD card issue (filesrc or filesink):

1. Sink the file into RAM: filesink location=/dev/shm/output.mkv. Just remember that RAM is limited (compared to the SD card), so run 'df -h' first to see how much you can store there (a sketch follows this list).

2. Sink the file into eMMC.

3. Try another SD card and run the original pipeline.
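
A sketch of test 1, assuming /dev/shm is mounted as tmpfs (as the /proc/mounts output later in this thread shows):

    # check how much tmpfs space is available before recording
    df -h /dev/shm

    # write the muxed output into RAM instead of onto the SD card
    gst-launch-0.10 mfw_v4lsrc ! vpuenc codec=6 ! matroskamux ! filesink location=/dev/shm/output.mkv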

Let me know the results.

Leo

dilipkumar
Contributor III

Hi Leo, I did as you suggested and wrote to RAM using filesink location=/dev/shm/output.mkv. When I played back the file, the output video was flawless. I even copied it to the SD card and played it with gplay; the video was still smooth, so SD card read speed is not an issue. I only face this problem while writing to the SD card. I also tried different HDDs and SD cards (a 16 GB microSD SDHC Class 10 UHS-I, a 4 GB microSD Class 6, and a 250 GB SATA 3 Gb/s HDD, all formatted with ext3); the videos saved on all of them were choppy on playback. And what do you mean by eMMC?

Using dd if=/dev/zero of=/root/temp.img count=1 bs=500M conv=fsync, I saw that the sequential write speed to the 16 GB microSD SDHC Class 10 UHS-I card was 3.5-4.5 MB/s over several tries. My original pipeline generates data at about 2 MB/s. Any ideas why this bottleneck arises? Is it a driver issue?
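
Average sequential throughput can also hide latency spikes (for example card-internal erase or garbage-collection stalls), which matter more to a live pipeline than the average rate. A hedged sketch for comparing cached, synced, and direct writes, assuming your dd build supports conv=fsync and oflag=direct (file names and sizes are illustrative):

    # page-cache buffered write (optimistic number)
    dd if=/dev/zero of=/root/cached.img bs=1M count=200

    # flushes the data to the card before dd exits (closer to the sustained rate)
    dd if=/dev/zero of=/root/synced.img bs=1M count=200 conv=fsync

    # bypasses the page cache entirely, exposing per-write latency
    dd if=/dev/zero of=/root/direct.img bs=1M count=200 oflag=direct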

Yuri
NXP Employee

Please try buffering:

gst-launch mfw_v4lsrc ! queue ! vpuenc codec=6 ! vpudec ! queue ! matroskamux ! filesink location=test.mkv sync=false

or

gst-launch mfw_v4lsrc ! vpuenc codec=6 ! vpudec ! queue max-size-bytes=0 max-size-time=0 ! matroskamux ! filesink location=test.mkv sync=false

Also, using a FAT filesystem may improve writing performance.

dilipkumar
Contributor III

Hi Yuri, as you suggested, I tried buffering with queue and now I get a different problem. I tried this pipeline:

gst-launch mfw_v4lsrc ! queue max-size-bytes=0 max-size-time=0 ! vpuenc codec=6 ! queue ! matroskamux ! filesink location=test.mkv sync=false

The pipeline runs for about 30 to 40 seconds and then stops, displaying the following debug messages:

0:00:32.144329670  41300x2b800 WARN         mfw_v4lsrc mfw_gst_v4lsrc.c:1210:mfw_gst_v4lsrc_buffer_new: no buffer available in pool
0:00:32.144526336  41300x2b800 WARN            basesrc gstbasesrc.c:2582:gst_base_src_loop:<mfwgstv4lsrc0> error: Internal data flow error.
0:00:32.144591670  41300x2b800 WARN            basesrc gstbasesrc.c:2582:gst_base_src_loop:<mfwgstv4lsrc0> error: streaming task paused, reason error (-5)

ERROR: from element /GstPipeline:pipeline0/MFWGstV4LSrc:mfwgstv4lsrc0: Internal data flow error.

Additional debug info:

gstbasesrc.c(2582): gst_base_src_loop (): /GstPipeline:pipeline0/MFWGstV4LSrc:mfwgstv4lsrc0:

streaming task paused, reason error (-5)

Without the queue, the pause and frame loss appeared in the saved video at around 30 to 40 seconds, so my assumption is that the pipeline with the queue crashes at exactly the point where that used to happen. Do I need to tweak the queue parameters? Should I try queue2 or multiqueue?

Another thing I noticed earlier is that using queues affects the stability of the running pipeline. Some of the other pipelines I tested crashed after 30 minutes or so with a segmentation fault. I even tried increasing the stack size limit, but with no effect. I'll start a separate discussion about that error if necessary, although one already exists. See: Segmentation fault using queue in pipeline gstreamer

I also tried saving the encoded video to a 16 GB microSD SDHC Class 10 UHS-I card formatted with FAT16 and FAT32 filesystems, but the videos still appeared choppy; they don't behave much differently from ext3.
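
For reference, a hedged variation that anticipates the pipeline which eventually works later in this thread: placing a single large queue after the muxer, rather than in front of the encoder, means the limited pool of raw v4l capture buffers is never held back (which may be what triggers the "no buffer available in pool" error above), while the much smaller encoded stream is buffered in RAM ahead of the slow filesink. The byte limit below is illustrative:

    gst-launch-0.10 mfw_v4lsrc ! vpuenc codec=6 ! matroskamux ! \
        queue max-size-buffers=0 max-size-time=0 max-size-bytes=536870912 ! \
        filesink location=test.mkv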

Yuri
NXP Employee

Please try using the GStreamer queue element without parameters (default values).

Next, have you tested a Class 10 SD card with a FAT filesystem, assuming the boot device is not the same SD card? You can also try a USB flash drive.

First, it makes sense to check write performance with the Linux dd command.

As an example:

$ dd if=/dev/zero of=/dev/sdb bs=1M count=100 

The write speed should be at least three times the data rate the pipeline produces.
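
One caution about the example above: writing to the raw device node (/dev/sdb) overwrites whatever is stored on that disk. A non-destructive variant writes a file on the mounted filesystem instead (the /media/hdd mount point is taken from later in this thread):

    dd if=/dev/zero of=/media/hdd/ddtest.img bs=1M count=100 conv=fsync
    rm /media/hdd/ddtest.img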

dilipkumar
Contributor III

I've tried queue without parameters, but the pipeline still crashes with an internal data flow error. I've stopped using SD cards for saving the encoded files because of the write speed limitation across the different card classes, so I'm now using a 500 GB SATA 3 Gb/s HDD to save the encoded files. The problem still exists. Running the command

dd if=/dev/zero of=/dev/sdb bs=1M count=100

reports the HDD write speed to be around 90-100 MB/s on average. That should be more than enough for storing the encoded data generated by the GStreamer pipeline, which reaches a maximum of about 5 MB/s with vpuenc codec=6 quant=1.
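
To see whether the drive actually stalls while the pipeline is recording, per-device I/O statistics can be watched from a second shell. A hedged sketch, assuming the sysstat package (iostat) is installed on the board:

    # extended per-device statistics, refreshed every 2 seconds, while gst-launch runs elsewhere
    iostat -xm 2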

Yuri
NXP Employee

It looks like the situation has improved under the recent Yocto BSP (Linux 3.10.17).
Please try it.

i.MX 6 Series Software and Development Tool R|Freescale

dilipkumar
Contributor III

Thanks for bringing it to my attention, Yuri.

I'm sorry for replying so late. I haven't tried the latest BSP yet, but I will soon. I actually ended up using the queue element after all the hullabaloo, but I still think it is just a workaround and not the proper solution. I wanted to improve the overall system write performance, but couldn't find a way to do it, so I used a queue element with a very large buffer; in fact, it was so big it could engulf most of my RAM. This is the pipeline I used:

gst-launch mfw_v4lsrc capture-mode=$RESOLUTION ! vpuenc codec=6 quant=10 ! matroskamux ! queue max-size-time=0 max-size-buffers=0 max-size-bytes=1610612736 ! filesink location=recording.mkv

As you can see, in the above pipeline I allow up to 1.5 GB of RAM to be used for buffering while the data is being saved to the file. I also monitored the RAM usage with "ps -l" while gst-launch was running, and observed the following when saving to a UHS-I compliant microSD card and to a SATA HDD:

Resolution @ 30 fps (encoded bitstream rate @ quant=10) | RAM usage when saving to SD card | RAM usage when saving to HDD
640x480   (2.0 MB/s)                                    | 14 MB                            | 4.8 MB
1280x720  (5.8 MB/s)                                    | 27 MB                            | 5 MB
1920x1080 (7.5 MB/s)                                    | 1.36 GB and increasing           | 7 MB

In all of these cases, the output video was perfect and jitter-free in every media player I tried. When saving to any medium, the RAM usage slowly increased from around 1 MB to the values listed above, but only for 1080p@30 saved to a microSD card did it keep growing until the card ran out of space and the pipeline crashed. I used an 8 GB card for testing, so after about 17 minutes I had a 7.8 GB jitter-free H.264 video file, and the RAM usage at that point was 1.36 GB; it would have gone higher with a bigger SD card. I hope this answer will be helpful for anyone trying to encode and save videos using the i.MX6 VPU. Thank you.
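
For anyone reproducing this, a simple way to watch how far the queue falls behind the storage device is to compare free RAM against the growth of the output file while recording. A hedged sketch using standard tools (file name as in the pipeline above):

    # refresh every 5 seconds: memory usage, then the current size of the recording
    watch -n 5 'free; ls -lh recording.mkv'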

Yuri
NXP Employee

Great!

Also, I remember the following limitation:

When playing video for a long time, allocation of contiguous memory may fail (memory fragmentation).

To play video when system memory is low, run the command:

echo 1 >/proc/sys/vm/lowmem_reserve_ratio

It protects the DMA zone and avoids memory allocation errors.


~Yuri.

LeonardoSandova
Specialist I

Hi Dilip, I've run out of ideas. I will try to replicate the problem and let you know my results. The eMMC is flash memory (non-volatile) available on your board, so you can put your system there; read the user guide for more info. I believe this memory is faster than SD technology (not sure if this is 100% true), but you can try it. To install anything to this memory, you need the MFG tool.

Leo

dilipkumar
Contributor III

Hi Leo, the board I'm working on right now doesn't have an eMMC, so I can't test writing to it.

LeonardoSandova
Specialist I

Dilip, which kernel are you using? Patched by you, or the base version?

Leo

dilipkumar
Contributor III

Hi Leo, sorry for the late reply. I'm using kernel version 3.0.35. I haven't applied any patches other than the default ones applied when building with LTIB. I'm using the latest BSP source from the i.MX 6 Q/D/DL/S L3.0.35_4.1.0 GA release.

EricNelson
Senior Contributor II

Can you forward the contents of /proc/mounts?

dilipkumar
Contributor III

The contents of /proc/mounts:

rootfs / rootfs rw 0 0

/dev/root / ext3 rw,relatime,errors=continue,user_xattr,barrier=0,data=writeback 0 0

proc /proc proc rw,relatime 0 0

sys /sys sysfs rw,relatime 0 0

tmpfs /dev tmpfs rw,relatime,mode=755 0 0

devpts /dev/pts devpts rw,relatime,mode=600 0 0

shm /dev/shm tmpfs rw,relatime 0 0

rwfs /mnt/rwfs tmpfs rw,relatime,size=512k 0 0

/dev/sda1 /media/hdd ext4 rw,relatime,user_xattr,barrier=1,data=ordered 0 0

Write speed to my HDD, formatted with ext4, to which I'm trying to save the encoded file:

dd if=/dev/zero of=/media/hdd/temp.img count=1 bs=500M conv=fsync

1+0 records in

1+0 records out

524288000 bytes (500.0MB) copied, 6.164990 seconds, 81.1MB/s

Write speed to my SD card (unknown class and specification), formatted with ext3, which contains the rootfs:

dd if=/dev/zero of=/root/temp.img count=1 bs=500M conv=fsync

1+0 records in

1+0 records out

524288000 bytes (500.0MB) copied, 117.263259 seconds, 4.3MB/s

EricNelson
Senior Contributor II

Hello Dilip,

It's odd that you're getting such bad performance from the SD card, but I think the fact that you've also tried a SATA drive rules out disk speed as the culprit.

I just tested your exact command-line on a Dora release of Yocto and our latest kernel, and I'm not seeing what you're seeing.

root@nitrogen6x:~# cat /proc/mounts
rootfs / rootfs rw 0 0
/dev/root / ext3 rw,relatime,errors=continue,user_xattr,barrier=0,data=writeback 0 0
devtmpfs /dev devtmpfs rw,relatime,size=450392k,nr_inodes=112598,mode=755 0 0
proc /proc proc rw,relatime 0 0
tmpfs /mnt/.psplash tmpfs rw,relatime,size=40k 0 0
sysfs /sys sysfs rw,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /var/volatile tmpfs rw,relatime 0 0
/dev/mmcblk0p1 /media/mmcblk0p1 vfat rw,relatime,gid=6,fmask=0007,dmask=0007,allow_utime=0020,codepage=cp437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620 0 0
root@nitrogen6x:~#
root@nitrogen6x:~# cat /proc/version
Linux version 3.0.35-02844-g6a3d3ca (ericn@ericsam) (gcc version 4.6.2 20110630 (prerelease) (Freescale MAD -- Linaro 2011.07 -- Built at 2011/08/10 09:20) ) #12 SMP PREEMPT Fri Nov 8 13:48:53 MST 2013
root@nitrogen6x:~# 

I used the same command-line as you listed above:

root@nitrogen6x:~# gst-launch mfw_v4lsrc ! vpuenc codec=5 quant=1 ! matroskamux ! filesink location=test.mkv

The resulting video is available here:

Can you try using this image?

     http://boundarydevices.com/yocto-dora-release-mx6/
