Freescale's hardware accelerated h264 codec


rostislav
Contributor III

I installed the Freescale codecs following the i.MX Linux Multimedia Framework User's Guide.


While trying to use an H.264 codec we get the following error:


0:11.053 PExternalT...0x2f2be420 GStreamer mfw_gst_h264_getbuffer: >>DECODER: Error -4 in allocating the Framebuffer[0]

Could anybody point me to a document that may help interpret codec errors? What does "Error -4" mean?

I posted this question in my old thread, but since that thread is already marked as answered I doubt it will be noticed. That is why I have reposted the question in a new thread.

57 Replies

rjongbloed
Contributor II

Let's start again. Simple question: why does the following not work?

#!/bin/sh

gst-launch \
        udpsrc port=5000 ! \
        application/x-rtp, media=video, payload=113, clock-rate=90000, encoding-name=H264 ! \
        rtph264depay ! \
        queue max-size-buffers=1000 ! \
        mfw_vpudecoder ! \
        queue max-size-buffers=1000 ! \
        mfw_v4lsink sync=false &

sleep 3

gst-launch \
        mfw_v4lsrc capture-width=1024 capture-height=600 ! \
        mfw_ipucsc ! \
        "video/x-raw-yuv, format=(fourcc)I420, width=1024, height=600, framerate=(fraction)90000/3000" ! \
        queue max-size-buffers=1000 ! \
        mfw_vpuencoder codec-type=2 ! \
        queue max-size-buffers=1000 ! \
        rtph264pay config-interval=5 pt=113 mtu=1444 ! \
        udpsink port=5000 &

I get the output:

lucid@lucid-desktop:~/projects$ ./gst_test.sh
MFW_GST_V4LSINK_PLUGIN 2.0.3-1-179-e630aa8d build on Dec 26 2011 16:01:32.
Setting pipeline to PAUSED ...
[INFO]  Product Info: i.MX53
VPU Version: firmware 1.4.41; libvpu: 5.3.2
MFW_GST_VPU_DECODER_PLUGIN 2.0.3-1-179-e630aa8d build on Dec 26 2011 15:59:16.
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
lucid@lucid-desktop:~/projects$ MFW_GST_V4LSRC_PLUGIN 2.0.3-1-179-e630aa8d build on Dec 26 2011 16:00:55.
IPU_CSC_CORE_LIBRARY_VERSION_INFOR_01.00.
MFW_GST_IPU_CSC_PLUGIN 2.0.3-1-179-e630aa8d build on Dec 26 2011 15:58:02.
Setting pipeline to PAUSED ...
[INFO]  Product Info: i.MX53
VPU Version: firmware 1.4.41; libvpu: 5.3.2
MFW_GST_VPU_ENCODER_PLUGIN 2.0.3-1-179-e630aa8d build on Dec 26 2011 15:58:19.
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
mfw_gst_ipu_csc_get_unit_size:size=921600
mfw_gst_ipu_csc_get_unit_size:size=921600
>>V4L_SINK: Actually buffer status:
        hardware buffer : 9
        software buffer : 0
full screen size:1024x600
[V4L Update Display]: left=0, top=0, width=1024, height=600
[WARN]  VPU mutex couldn't be locked before timeout expired
[WARN]  VPU mutex couldn't be locked before timeout expired

and no video. Please help us!

LeonardoSandova
Specialist I

Robert, can you try this suggestion?

Re: VPU errors when dual-direction video streaming

I have run these two pipelines and they worked as expected (note: I used a web cam, but I think using the mfw_v4lsrc camera should be the same):

#PLAYBACK

gst-launch udpsrc port=5000 ! application/x-rtp, media=video, payload=113, clock-rate=90000, encoding-name=H264 !   rtph264depay ! queue max-size-buffers=1000 ! mfw_vpudecoder loopback=true ! queue max-size-buffers=1000 ! mfw_v4lsink sync=false &

#STREAMING

gst-launch v4l2src ! mfw_ipucsc ! 'video/x-raw-yuv, format=(fourcc)I420, width=1024, height=600, framerate=(fraction)90000/3000' ! queue max-size-buffers=1000 ! mfw_vpuencoder codec-type=2 ! queue max-size-buffers=1000 !   rtph264pay  pt=113 mtu=1444 ! udpsink port=5000 &

rjongbloed
Contributor II

I assume "v4lsrc" is a typo and you meant "mfw_v4lsrc" ? If I use "v4lsrc", I get:

IPU_CSC_CORE_LIBRARY_VERSION_INFOR_01.00.
MFW_GST_IPU_CSC_PLUGIN 2.0.3-1-179-e630aa8d build on Dec 26 2011 15:58:02.
Setting pipeline to PAUSED ...
[INFO]  Product Info: i.MX53
VPU Version: firmware 1.4.41; libvpu: 5.3.2
MFW_GST_VPU_ENCODER_PLUGIN 2.0.3-1-179-e630aa8d build on Dec 26 2011 15:58:19.
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstV4lSrc:v4lsrc0: Could not get/set settings from/on resource.
Additional debug info:
v4l_calls.c(89): gst_v4l_get_capabilities (): /GstPipeline:pipeline0/GstV4lSrc:v4lsrc0:
error getting capabilities Invalid argument of from device /dev/video0
Setting pipeline to NULL ...
Freeing pipeline ...

The thing that is really pissing me off is that when I put it back to exactly what I had before, I get:

mfw_gst_ipu_csc_get_unit_size:size=921600
** (gst-launch-0.10:26947): WARNING **: mfwgstipucsc0: size 460800 is not a multiple of unit size 921600
ERROR: from element /GstPipeline:pipeline0/MFWGstV4LSrc:mfwgstv4lsrc0: Internal data flow error.

and that was working only a couple of days ago!

I am really getting to hate gstreamer ....

LeonardoSandova
Specialist I

Robert,

I wrote this:

"I have ran these two pipelines and worked as expected (note, I used a web cam but I think using mfw_v4lsrc camera should be the same):"

so I intentionally used v4l2src. GStreamer is great, but it is complex. Check what you did before to fix that issue.

Leo

rjongbloed
Contributor II

I have no idea why it worked before and then stopped working. All I can say is that after a whole day of setting semi-random parameters we managed to get one configuration to work.

First, I had to use the capture-mode parameter to mfw_v4lsrc as well as the capture-width and capture-height parameters, e.g. capture-mode=3 capture-width=720 capture-height=576

Second, I had to reduce the resolution from full screen (1024x600) to 720x576.

As is usual with this sort of problem, I think there are multiple faults. The loopback=true you suggested is essential. Without it, it goes back to having the "VPU mutex couldn't be locked before timeout expired" errors, so I think this was the critical step.

However, I am deeply suspicious of the quality of the camera driver and/or the gstreamer elements, either mfw_v4lsrc or mfw_ipucsc. First, and least important, it is my understanding that gstreamer elements are supposed to negotiate parameters with each other. This does not seem to be completely true: if the explicit parameters are removed we get strange "Internal data flow" errors due to mismatched buffer sizes. However, it seems that if we get the explicit parameters right it does work. We do need the resolution to be adjustable, and asymmetric: we can live with a maximum of 720x576, but sometimes it needs to be smaller and sometimes the resolution in each direction is different. I will do more experiments in this area and let you know.
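
To make "adjustable" concrete, this is the sort of parameterised command I have in mind. It is only a sketch: it assumes the explicit caps keep working, and the capture-mode value would presumably have to match whatever size is requested.

#!/bin/sh
# Sketch only: width/height come from variables so each call (and each
# direction) can use its own resolution. capture-mode=3 matched 720x576
# for us; other sizes may need a different mode.
WIDTH=720
HEIGHT=576

gst-launch \
        mfw_v4lsrc capture-mode=3 capture-width=$WIDTH capture-height=$HEIGHT ! \
        mfw_ipucsc ! \
        "video/x-raw-yuv,format=(fourcc)I420,width=$WIDTH,height=$HEIGHT,framerate=(fraction)90000/3000" ! \
        mfw_v4lsink disp-width=$WIDTH disp-height=$HEIGHT sync=false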

The second, more important, thing is that the camera grabber crashed (SIGSEGV) several times.

gst-launch \
        mfw_v4lsrc capture-mode=3 capture-width=720 capture-height=576 ! \
        mfw_ipucsc ! \
        "video/x-raw-yuv,format=(fourcc)I420,width=720,height=576,framerate=(fraction)90000/3000" ! \
        mfw_v4lsink disp-width=720 disp-height=576 sync=false

MFW_GST_V4LSRC_PLUGIN 2.0.3-1-179-e630aa8d build on Dec 26 2011 16:00:55.
IPU_CSC_CORE_LIBRARY_VERSION_INFOR_01.00.
MFW_GST_IPU_CSC_PLUGIN 2.0.3-1-179-e630aa8d build on Dec 26 2011 15:58:02.
MFW_GST_V4LSINK_PLUGIN 2.0.3-1-179-e630aa8d build on Dec 26 2011 16:01:32.
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Caught SIGSEGV accessing address 0x18
#0  0x2ad1d976 in ?? ()
#1  0x2ad31722 in ?? ()
Spinning.  Please run 'gdb gst-launch 11214' to continue debugging, Ctrl-C to quit, or Ctrl-\ to dump core.
^CCaught interrupt -- handling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 17220705155 ns.
Setting pipeline to PAUSED ...

And once it had crashed, it continued to do so, requiring a reboot before it worked again, and in one case several reboots! This is a much more important issue, and it is harder to give a scenario in which it is guaranteed to occur. It first occurred while transmitting over UDP, but as the above shows, it can happen on a very direct grab to screen as well. The worst kind of bug, I am afraid. But this is the thing we need support on.

LeonardoSandova
Specialist I

Robert,

For any GStreamer trouble, enable the debug flags; as a start I would add --gst-debug=*:2 to the pipeline. Also, if you remove the mfw_ipucsc ! capsfilter elements from the pipeline, do you observe the same issue?
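
For example, something like this (a sketch based on your playback pipeline; redirecting stderr to a file keeps the trace readable):

# Warning-level trace of every debug category, written to a file for inspection:
gst-launch --gst-debug=*:2 \
        udpsrc port=5000 ! \
        "application/x-rtp, media=video, payload=113, clock-rate=90000, encoding-name=H264" ! \
        rtph264depay ! queue max-size-buffers=1000 ! \
        mfw_vpudecoder loopback=true ! queue max-size-buffers=1000 ! \
        mfw_v4lsink sync=false 2> gst-debug.log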

Leo

rjongbloed
Contributor II

The "crash" issue still occurs with the removal of csc/capsfilter.

It is interesting that this is not required, as it seemed to be required in earlier attempts and definitely was on standard Linux-based pipelines. The mysteries continue, but I suppose at least that one does not require an answer.

The crash does, however.

LeonardoSandova
Specialist I

Remove all unnecessary elements so you have a minimal pipeline that still reproduces the crash. If --gst-debug does not help much, you may either enable core dumping or launch the pipeline under the strace app. These are basic debugging techniques which may point to the root cause.
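
For instance (plain Linux tooling, nothing i.MX specific; the core file name and location depend on your kernel settings):

# Allow core dumps in the shell that launches the pipeline, then inspect
# the core with gdb after the crash:
ulimit -c unlimited
gst-launch mfw_v4lsrc capture-mode=3 capture-width=720 capture-height=576 ! mfw_v4lsink sync=false
gdb gst-launch core

# Or record every system call the pipeline makes while it runs:
strace -f -o gst-strace.log gst-launch mfw_v4lsrc capture-mode=3 capture-width=720 capture-height=576 ! mfw_v4lsink sync=false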

Leo

rjongbloed
Contributor II

The logging is not that useful; there is a lot of "noise" in it. But this is a lower priority, as the crash happens maybe one run in ten.

Right now I have a different problem with audio. I should probably create a new thread for this, but to get your attention, I'll put it here first.

The following does not work:

gst-launch udpsrc port=5000 caps="application/x-rtp,media=audio,encoding-name=PCMU" ! gstrtpjitterbuffer ! rtppcmudepay ! mulawdec ! audioconvert ! alsasink &

gst-launch audiotestsrc samplesperbuffer=160 ! mulawenc ! rtppcmupay ! udpsink port=5000 &

It works perfectly on a standard Linux box, and "gst-launch audiotestsrc ! mulawenc ! rtppcmupay ! rtppcmudepay ! mulawdec ! alsasink" also works fine on the iMX. It's only when it is "live" that it stops.
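
One variant I still intend to try (untested, so treat it as a guess) spells out the full PCMU caps, marks the test source as live, and lets the sink free-run:

# Receive side: PCMU is always 8000 Hz, payload type 0, so these caps are safe to state.
gst-launch udpsrc port=5000 \
        caps="application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)PCMU,payload=(int)0" ! \
        gstrtpjitterbuffer ! rtppcmudepay ! mulawdec ! audioconvert ! alsasink sync=false &

# Send side: is-live=true makes audiotestsrc pace itself like a real capture device.
gst-launch audiotestsrc is-live=true samplesperbuffer=160 ! mulawenc ! rtppcmupay ! udpsink port=5000 &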

Thoughts?

LeonardoSandova
Specialist I

Hi Robert, as you mentioned, please create a new thread and attach the log.

Leo

rjongbloed
Contributor II

An additional comment: I did try Juan's pipeline too. It did not work, failing in the same manner as always.

I also do not believe that it matters if we use RTP or just throw out the packets to a UDP socket. A similar pipeline on a PC (using x264/ffmpeg etc) works fine.

What I think people are losing sight of is that we are not doing a streaming application. It is a video phone. That means that audio and video are going in both directions SIMULTANEOUSLY. We have been able to get unidirectional video working for some time.

rjongbloed
Contributor II

Thanks for all the help so far. Things are getting closer, but I am still getting some inexplicable behaviour.

To recap what we are doing: it is a full-screen videophone using SIP. When a call is established we need four media streams: audio and video, receive and transmit. I internally map these to four separate pipelines, each running in its own thread within our application.

This is for two reasons. First, in a SIP call you can have any combination of those four streams at any given moment, and the combination can change at any time, for example when the call is put on hold; with a single combined pipeline you would have to stop it, reconstruct it, and restart it every time. It is easier just to have one pipeline per stream.

The second reason is that constructing a pipeline that handles four different streams simultaneously is tricky. The syntax is arcane and the pipeline refuses to "roll" at the slightest mistake. I have not had much luck in the past getting such complicated pipelines going.


With this setup, whenever we make a call, the video freezes and we get a large number (several per second) of messages of the form:


mxc_ipu_hl_lib.c:956 ipu is busy

And here is the interesting bit. If I make the call with video in one direction only, it works perfectly!


This sounds like some sort of threading contention issue, as if you cannot have an encoder and a decoder running at the same time. That does not make sense, of course; the hardware must be able to do this. But I have no clue what I might be doing that causes the problem. The two pipelines are:


"appsrc stream-type=0 is-live=true name=videoAppSource ! application/x-rtp, media=video, payload=113, clock-rate=90000, encoding-name=H264 ! rtph264depay name=videoDepacketiser ! queue max-size-buffers=1000 ! mfw_vpudecoder name=videoDecoder ! mfw_ipucsc ! mfw_v4lsink sync=false name=videoSinkDevice"

and

"mfw_v4lsrc name=videoSourceDevice ! mfw_ipucsc ! video/x-raw-yuv, format=(fourcc)I420, width=352, height=288, framerate=(fraction)90000/3000 ! mfw_vpuencoder codec-type=2 name=videoEncoder ! queue max-size-buffers=1000 ! rtph264pay config-interval=5 name=videoPacketiser pt=113 mtu=1444 ! appsink name=videoAppSink sync=true async=false max-buffers=10 drop=false"

LeonardoSandova
Specialist I

jackmao, any idea why these two pipelines cannot run concurrently? It is an i.MX53, and it is also worth noting that there are two mfw_ipucsc instances and one mfw_v4lsink instance running at the same time.

rjongbloed, on the decoding pipeline, I think there is no need to have the mfw_ipucsc element, so please remove it.

Leo

jack_mao
NXP Employee

Is the current issue similar to this scenario:

                    camera -> encoding -> decoding -> display ?

We suppose this should be supported, but we can't promise the performance; by the way, nobody has tested this case before.

rjongbloed
Contributor II

Not exactly, it is:

   camera -> encoder  -> network

and in a separate thread and separate pipeline

network -> decoder -> screen

This is nothing really weird. It's a telephone call with video. Simple as that. But clearly you need to be able to encode and decode video simultaneously.

rostislav
Contributor III

Freescale's i.MX53 documents explain that the VPU is able to provide simultaneous encoding and decoding. As far as I remember, the multimedia software supports multitasking as well. I suppose that some configuration is missing in our approach.

timothybean
Contributor IV

Maybe it has something to do with your appsrc/appsink. How do you handle getting the video in and sending it out to the pipeline? Also, just a note, you might want to change encoding-name=H264 to encoding-name=(String)H264... just for good measure.

Maybe move the queue before the mfw_vpuencoder element so it runs in a separate thread from the mfw_v4lsrc....

On the decoding side you have the queue after the decoder... between the decoder and the display.

On the encoding side you have it after the encoder, not between the v4lsrc and the encoder.
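
Roughly what I mean on the encode side (a sketch built from your own elements, not something I have run; the sink is swapped for udpsink just to keep it standalone):

# Queue moved in front of the encoder, so capture/CSC and the VPU encode
# run in separate threads.
gst-launch mfw_v4lsrc ! mfw_ipucsc ! \
        "video/x-raw-yuv, format=(fourcc)I420, width=352, height=288, framerate=(fraction)90000/3000" ! \
        queue max-size-buffers=1000 ! \
        mfw_vpuencoder codec-type=2 ! \
        rtph264pay config-interval=5 pt=113 mtu=1444 ! udpsink port=5000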

Tim

rjongbloed
Contributor II

Just tried encoding-name=(String)H264 and it does not parse. Seemed OK the way it was.

timothybean
Contributor IV

Robert,

I didn't think that the (String) would do much; it is just good practice. I wouldn't think that your appsrc/sink would cause much of a problem. Yes, a queue creates a thread: it will break everything before and after it into separate threads.

Also, do you really think you need the csc after the decoder? Maybe that is causing some issues as well... You can try taking it out.
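
For example (untested), the decode side without the converter would be:

# Decode pipeline with mfw_ipucsc removed; whether the VPU decoder and the
# V4L sink negotiate directly on your setup is something you would have to verify.
gst-launch udpsrc port=5000 ! \
        "application/x-rtp, media=video, payload=113, clock-rate=90000, encoding-name=H264" ! \
        rtph264depay ! queue max-size-buffers=1000 ! \
        mfw_vpudecoder ! mfw_v4lsink sync=false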

Tim

rjongbloed
Contributor II

Unfortunately, it still has the contention issue. Unidirectional video is perfect; bi-directional is broken.

I have tried putting a queue on both sides of the encoder/decoder, with no difference.

I tried removing the colour space converters, but the pipelines then get "could not link" errors.
