i.MX6 framebuffer memory issue

alexanderkhudos
Contributor I

I am creating GStreamer pipelines on the i.MX6 platform using the GStreamer API. There are two example pipelines: one just connects a video source to a video sink, the other adds a deinterlacer:

int deinterlace(void)
{
    /* Tear down the plain pipeline, then build src -> deinterlacer -> sink. */
    if (stopPipe.pipeline) {
        gst_element_set_state(stopPipe.pipeline, GST_STATE_NULL);
        gst_object_unref(stopPipe.pipeline);
        stopPipe.pipeline = NULL;
    }

    recPipe.pipeline  = gst_pipeline_new("gst_RecPipeline");
    recPipe.videoSrc  = gst_element_factory_make(GST_SRC, "recPipe.videoSrc");
    recPipe.vidTrans1 = gst_element_factory_make(GST_VID_TRANS, "recPipe.vidTrans1");
    recPipe.dispSink  = gst_element_factory_make(GST_SINK, "recPipe.dispSink");
    g_object_set(recPipe.vidTrans1, "deinterlace", 3, NULL); // enable deinterlacing

    gst_bin_add_many(GST_BIN(recPipe.pipeline), recPipe.videoSrc, recPipe.vidTrans1, recPipe.dispSink, NULL);
    gst_element_link_many(recPipe.videoSrc, recPipe.vidTrans1, recPipe.dispSink, NULL);
    gst_element_set_state(recPipe.pipeline, GST_STATE_PLAYING);
    return 0;
}

int raw(void)
{
    /* Tear down the deinterlacing pipeline, then build the plain src -> sink one. */
    if (recPipe.pipeline) {
        gst_element_set_state(recPipe.pipeline, GST_STATE_NULL);
        gst_object_unref(recPipe.pipeline);
        recPipe.pipeline = NULL;
    }

    stopPipe.pipeline = gst_pipeline_new("stopPipe");
    stopPipe.videoSrc = gst_element_factory_make(GST_SRC, "stopPipe.videoSrc");
    stopPipe.dispSink = gst_element_factory_make(GST_SINK, "stopPipe.dispSink");

    gst_bin_add_many(GST_BIN(stopPipe.pipeline), stopPipe.videoSrc, stopPipe.dispSink, NULL);
    gst_element_link_many(stopPipe.videoSrc, stopPipe.dispSink, NULL);
    gst_element_set_state(stopPipe.pipeline, GST_STATE_PLAYING);
    return 0;
}

The main loop calls each of these functions (manually or after a timeout). Every time a pipeline is created, more memory is allocated, but it never seems to be released: every create/destroy cycle increases the application's memory consumption by ~8 MB, and eventually the application runs out of memory. The issue does not seem to be specific to GStreamer pipelines; if I write a Qt application that creates and destroys different layouts, the behaviour is similar, i.e. every created/destroyed pair of layouts adds ~8 MB to the application. When the application terminates, all memory returns to normal.
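Roughly, the switching part of the main loop looks like this (a simplified sketch, not my full application; the GLib main loop and the 10-second timeout are just for illustration):

static gboolean switch_pipeline(gpointer data)
{
    static gboolean deinterlaced = FALSE;

    if (deinterlaced)
        raw();          // destroys recPipe, rebuilds stopPipe
    else
        deinterlace();  // destroys stopPipe, rebuilds recPipe

    deinterlaced = !deinterlaced;
    return TRUE;        // keep the timeout firing
}

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);
    raw();  // start with the plain src -> sink pipeline

    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    g_timeout_add_seconds(10, switch_pipeline, NULL);
    g_main_loop_run(loop);
    return 0;
}

Each call of switch_pipeline() is one of the create/destroy cycles that leaks ~8 MB.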

I have traced this on Linux 3.10.17 and on Linux 3.14.28, using both gstreamer-0.10 and gstreamer-1.0.

My question is: what in the i.MX6 (framebuffer?) can cause this behaviour and, more importantly, how can I deal with it?

Yuri
NXP Employee

Hello,

The following thread may clarify the problem (as a general Linux one):

Gstreamer RTMP Streaming Memory Leak In DMA

Some customers managed to avoid the issue by using "static" memory allocation for the whole time it is needed, with no memory request/release cycle. This does not fragment memory.
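As a sketch of that idea (illustration only, not tested code): build both pipelines once at start-up and afterwards only change their states, so the elements are never destroyed and re-created during a switch. Whether this also avoids re-allocating the framebuffer depends on the sink, so please treat it only as a direction to try.

/* Build recPipe and stopPipe once at start-up, then only switch states. */
void show_deinterlaced(void)
{
    gst_element_set_state(stopPipe.pipeline, GST_STATE_NULL);   /* stop, but keep the objects */
    gst_element_set_state(recPipe.pipeline, GST_STATE_PLAYING);
}

void show_raw(void)
{
    gst_element_set_state(recPipe.pipeline, GST_STATE_NULL);
    gst_element_set_state(stopPipe.pipeline, GST_STATE_PLAYING);
}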

Regards,

Yuri.

alexanderkhudos
Contributor I

Hello Yuri,

Neither of your suggestions fixes the memory leak; every switch between different FB outputs still increases the application memory footprint by ~8 MB. Do you know of a solution to this? Have Freescale/NXP developers actually fixed this issue?

Regards,

Alexander

Yuri
NXP Employee

Hello,

  Sorry, an internal thread was linked last time; the issue mentioned there was really concerned with large memory area allocations. Nevertheless, it is possible to try some standard approaches to reduce the impact of memory leaks.

===

Preserving a contiguous memory region

  echo 1 > /proc/sys/vm/lowmem_reserve_ratio

With the parameter set this way, the kernel will prevent applications and file caching from fragmenting "too much" memory. However, it will also limit the amount of memory that applications can allocate.

Dropping Caches

  echo 3 > /proc/sys/vm/drop_caches

This will free all system caches, but it can impact other applications running in the system, requiring them to reload the files they use each time the caches are dropped.
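If You prefer to do this from the application itself, the same can be done from C (illustration only; it needs root, and dirty pages should be written back with sync() first because drop_caches does not free them):

#include <stdio.h>
#include <unistd.h>

/* Illustration only: drop the kernel caches from inside the application. */
static int drop_caches(void)
{
    FILE *f;

    sync();                                        /* write back dirty pages first */
    f = fopen("/proc/sys/vm/drop_caches", "w");
    if (f == NULL)
        return -1;                                 /* not root, or /proc not mounted */
    fputs("3\n", f);                               /* 3 = page cache + dentries + inodes */
    fclose(f);
    return 0;
}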


There are also tools for avoiding file caching for a specific application, such as pagecache-management:

https://code.google.com/p/pagecache-mangagement/

This is an open source project NOT maintained and NOT supported by NXP, but it may nevertheless help here.

===

Regards,

Yuri.

alexanderkhudos
Contributor I

Hello Yuri,

I cannot view the link you posted, "Unauthorized!".

I am using L3.14.28, and cma=384M does not seem to help.

Switching to L3.14.52 will not be easy for me, as I am using another vendor's Yocto BSP. Is this memory leak fixed in L3.14.52? Can you point me to the specific places I should modify?

Alex

Yuri
NXP Employee

Hello,

From the following thread, the memory leakage "issue has been introduced by the FSL patches, unfortunately.":

CMA and system memory allocation Linux i.MX6

For the 3.10 kernel, you may try adding "cma=384M" to the kernel command line.

Also, please use the latest Linux BSP:

https://www.nxp.com/webapp/Download?colCode=L3.14.52_1.1.0_MX6QDLSOLO&appType=license&location=null

Have a great day,
Yuri


alexanderkhudos
Contributor I

So, there is no answer from anyone working at NXP. And I thought the memory leak was a serious issue...

Yuri
NXP Employee

Hello,

  It looks like the request was lost for internal reasons. Sorry.
I will take it up and look into it soon, after the holiday.

Have a great day,
Yuri

