i.MX6Q: VPU memory allocation "fragmentation" error


8,078 views
Tarek
Senior Contributor I

Hi,

I'm working on a Nitrogen board with BSP version L3.0.35_1.1.0_121218_source, codecs version IMX_MMCODEC_3.0.5_Bundle, and an LTIB build. The application is a surveillance system that displays up to 16 cameras on an HDMI screen. There is no problem when I first bring up the application in 16-way mode; all the cameras are displayed fine. But when I try to switch to a different set of 16 cameras by shutting down all streams and opening new connections (new GStreamer pipelines), I get the following error from some streams: the VPU tries to allocate memory but fails. I'm sure there is enough memory for the application's requirements, because the first time it works without problems. My guess is that this is a fragmentation problem caused by dynamically allocating and freeing resources.

GStreamer pipeline: appsrc -> typefinder -> vpudec -> mfw_isink

Is there any VPU garbage collection mechanism to avoid this problem?


console: page allocation failure: order:11, mode:0xd1

[<800477f4>] (unwind_backtrace+0x0/0xf8) from [<800be9a0>] (warn_alloc_failed+0xc8/0x100)

[<800be9a0>] (warn_alloc_failed+0xc8/0x100) from [<800c0ed8>] (__alloc_pages_nodemask+0x4c8/0x6cc)

[<800c0ed8>] (__alloc_pages_nodemask+0x4c8/0x6cc) from [<8004a760>] (__dma_alloc+0xa4/0x300)

[<8004a760>] (__dma_alloc+0xa4/0x300) from [<8004af98>] (dma_alloc_coherent+0x54/0x60)

[<8004af98>] (dma_alloc_coherent+0x54/0x60) from [<803a0964>] (vpu_alloc_dma_buffer+0x2c/0x54)

[<803a0964>] (vpu_alloc_dma_buffer+0x2c/0x54) from [<803a0aac>] (vpu_ioctl+0x120/0x864)

[<803a0aac>] (vpu_ioctl+0x120/0x864) from [<800ff51c>] (do_vfs_ioctl+0x80/0x54c)

[<800ff51c>] (do_vfs_ioctl+0x80/0x54c) from [<800ffa20>] (sys_ioctl+0x38/0x5c)

[<800ffa20>] (sys_ioctl+0x38/0x5c) from [<80040f80>] (ret_fast_syscall+0x0/0x30)

Mem-info:

DMA per-cpu:

CPU    0: hi:   90, btch:  15 usd:  79

CPU    1: hi:   90, btch:  15 usd:  75

CPU    2: hi:   90, btch:  15 usd:  84

CPU    3: hi:   90, btch:  15 usd:  83

Normal per-cpu:

CPU    0: hi:  186, btch:  31 usd:  23

CPU    1: hi:  186, btch:  31 usd: 170

CPU    2: hi:  186, btch:  31 usd: 148

CPU    3: hi:  186, btch:  31 usd:  78

active_anon:14250 inactive_anon:16 isolated_anon:0

active_file:936 inactive_file:8191 isolated_file:0

unevictable:0 dirty:0 writeback:0 unstable:12

free:172332 slab_reclaimable:339 slab_unreclaimable:1799

mapped:2711 shmem:17 pagetables:192 bounce:0

DMA free:78980kB min:780kB low:972kB high:1168kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:186944kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes

lowmem_reserve[]: 0 705 705 705

Normal free:610348kB min:3028kB low:3784kB high:4540kB active_anon:57000kB inactive_anon:64kB active_file:3744kB inactive_file:32764kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:722368kB mlocked:0kB dirty:0kB writeback:0kB mapped:10844kB shmem:68kB slab_reclaimable:1356kB slab_unreclaimable:7196kB kernel_stack:1024kB pagetables:768kB unstable:48kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no

lowmem_reserve[]: 0 0 0 0

DMA: 103*4kB 85*8kB 86*16kB 61*32kB 37*64kB 6*128kB 1*256kB 3*512kB 10*1024kB 9*2048kB 10*4096kB 0*8192kB 0*16384kB 0*32768kB = 78980kB

Normal: 3*4kB 4*8kB 11*16kB 26*32kB 18*64kB 5*128kB 5*256kB 2*512kB 1*1024kB 1*2048kB 1*4096kB 3*8192kB 3*16384kB 16*32768kB = 610332kB

9144 total pagecache pages

0 pages in swap cache

Swap cache stats: add 0, delete 0, find 0/0

Free swap  = 0kB

Total swap = 0kB

262144 pages of RAM

173240 free pages

37873 reserved pages

1116 slab pages

4788 pages shared

0 pages swap cached

Physical memory allocation error!

Physical memory allocation error!

Labels (3)
30 Replies

N_Coesel
Contributor III

I had the same problem with a video player application (kernel 3.0.35) that needs to play many files. The root cause is memory fragmentation. In addition to the patches, I solved the problem by changing mxc_vpu.c (drivers/mxc/vpu) so that once a DMA buffer is allocated it is never released. When a new DMA buffer is requested, the list of existing DMA buffers is checked for a free one; if none is available, a new buffer is created and added to the list. This is a dirty hack, but if your application is likely to play similar videos (same codec, same frame size) it will never allocate more than a few buffers. I have attached my version, because a patch would likely be too far off the original.
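The scheme described here can be sketched in plain user-space C (malloc stands in for dma_alloc_coherent, and all names are illustrative rather than the actual mxc_vpu.c symbols):

```c
#include <stddef.h>
#include <stdlib.h>

/* A buffer, once allocated, stays on this list forever. */
struct dma_buf {
    size_t size;
    void *vaddr;             /* would be the DMA virtual address */
    int in_use;
    struct dma_buf *next;
};

static struct dma_buf *pool;

/* Satisfy a request from an idle buffer of the same size if possible;
 * only fall back to a fresh allocation when none is available. */
void *pool_alloc(size_t size)
{
    struct dma_buf *b;

    for (b = pool; b; b = b->next) {
        if (!b->in_use && b->size == size) {  /* reuse, no new alloc */
            b->in_use = 1;
            return b->vaddr;
        }
    }
    b = malloc(sizeof(*b));
    if (!b)
        return NULL;
    b->vaddr = malloc(size);  /* stand-in for dma_alloc_coherent() */
    b->size = size;
    b->in_use = 1;
    b->next = pool;
    pool = b;
    return b->vaddr;
}

/* "Freeing" only marks the buffer idle; memory is never returned. */
void pool_free(void *vaddr)
{
    for (struct dma_buf *b = pool; b; b = b->next)
        if (b->vaddr == vaddr)
            b->in_use = 0;
}
```

Note that a buffer is only reused when the requested size matches exactly, which is why this works best when the videos share codec and frame size.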

Secondly, I had to remove all of the memory in my program that was dynamically allocated and freed to play a video. Otherwise, memory fragmentation eventually catches up and causes the software to fail.

Tarek
Senior Contributor I

Set the VPU decoder property frame-plus=1.

This will reduce the amount of memory allocated!

Danube
Contributor IV

Hi,

Have you tried testing again with the Linux 3.0.35_4.1.0 BSP?

ieio
Contributor IV

Mr Chang,

I tested the latest BSP and haven't experienced the VPU memory allocation fragmentation error anymore.

What has been changed?

EricNelson
Senior Contributor II

Hi Tarek,

Junping Mao (Jack) posted a patch in https://community.freescale.com/message/318063#318063 that pre-allocates contiguous memory for video playback.

I suspect that a similar patch is needed for your use case.

This issue of contiguous memory allocation is pretty critical for apps like yours.

ieio
Contributor IV

Thanks Eric,

I applied the patch and the system is more stable now; nevertheless, I still experience some problems related to vpu_alloc_dma_buffer:

single_stream_s invoked oom-killer: gfp_mask=0x8d1, order=11, oom_adj=0, oom_score_adj=0                

[<8003d0dc>] (unwind_backtrace+0x0/0xfc) from [<800b19f4>] (T.393+0x6c/0x18c)                           

[<800b19f4>] (T.393+0x6c/0x18c) from [<800b1b7c>] (T.391+0x68/0x220)                                    

[<800b1b7c>] (T.391+0x68/0x220) from [<800b1f64>] (out_of_memory+0x230/0x318)                           

[<800b1f64>] (out_of_memory+0x230/0x318) from [<800b5c48>] (__alloc_pages_nodemask+0x634/0x6ec)         

[<800b5c48>] (__alloc_pages_nodemask+0x634/0x6ec) from [<80040434>] (__dma_alloc+0xd4/0x2fc)            

[<80040434>] (__dma_alloc+0xd4/0x2fc) from [<80040730>] (dma_alloc_coherent+0x54/0x60)                  

[<80040730>] (dma_alloc_coherent+0x54/0x60) from [<80384db8>] (vpu_alloc_dma_buffer+0x2c/0x64)          

[<80384db8>] (vpu_alloc_dma_buffer+0x2c/0x64) from [<803855b4>] (vpu_ioctl+0x7c4/0x8c8)                 

[<803855b4>] (vpu_ioctl+0x7c4/0x8c8) from [<800f5b00>] (do_vfs_ioctl+0x80/0x5e0)                        

[<800f5b00>] (do_vfs_ioctl+0x80/0x5e0) from [<800f6098>] (sys_ioctl+0x38/0x60)                          

[<800f6098>] (sys_ioctl+0x38/0x60) from [<80037540>] (ret_fast_syscall+0x0/0x30)                        

Mem-info:                                                                                               

DMA per-cpu:                                                                                            

CPU    0: hi:   90, btch:  15 usd:  88                                                                  

Normal per-cpu:                                                                                         

CPU    0: hi:    0, btch:   1 usd:   0                                                                  

active_anon:1561 inactive_anon:11 isolated_anon:0                                                       

active_file:1 inactive_file:0 isolated_file:0                                                          

unevictable:0 dirty:0 writeback:0 unstable:0                                                           

free:40134 slab_reclaimable:273 slab_unreclaimable:1510                                                

mapped:93 shmem:14 pagetables:38 bounce:0                                                              

DMA free:159680kB min:1692kB low:2112kB high:2536kB active_anon:5620kB inactive_anon:44kB active_file:0ks

lowmem_reserve[]: 0 7 7 7                                                                               

Normal free:856kB min:68kB low:84kB high:100kB active_anon:624kB inactive_anon:0kB active_file:4kB inacto

lowmem_reserve[]: 0 0 0 0                                                                               

DMA: 214*4kB 191*8kB 127*16kB 170*32kB 117*64kB 34*128kB 29*256kB 21*512kB 19*1024kB 15*2048kB 15*4096kBB

Normal: 122*4kB 47*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB 0*8192kB B

12 total pagecache pages                                                                                

0 pages in swap cache                                                                                   

Swap cache stats: add 0, delete 0, find 0/0                                                             

Free swap  = 0kB                                                                                        

Total swap = 0kB                                                                                        

65536 pages of RAM                                                                                      

40282 free pages                                                                                        

19660 reserved pages                                                                                    

1244 slab pages                                                                                         

91 pages shared                                                                                         

0 pages swap cached                                                                                     

[ pid ]   uid  tgid total_vm      rss cpu oom_adj oom_score_adj name                                    

[ 1255]     0  1255      439       48   0     -17         -1000 udevd                                   

[ 2291]     0  2291      564       35   0       0             0 rc_mxc.S                                

[ 2296]     0  2296      592       50   0       0             0 sh                                      

[ 2298]     0  2298    13817     1485   0       0             0 single_stream_s                         

[ 2451]     0  2451      548       17   0       0             0 top                                     

Out of memory: Kill process 2298 (single_stream_s) score 2 or sacrifice child                           

Killed process 2298 (single_stream_s) total-vm:55268kB, anon-rss:5576kB, file-rss:364kB                 

[ALLOC] mem alloc cpu_addr = 0xff300000                                                                 

[FREE] freed paddr=0x12000000                                                                           

imx-ipuv3 imx-ipuv3.0: IPU Warning - IPU_INT_STAT_10 = 0x00080000                                       

imx-ipuv3 imx-ipuv3.0: IPU Warning - IPU_INT_STAT_5 = 0x00800000            

Any clues?

EricNelson
Senior Contributor II

Hi ieio,

You did catch that Jack's patch was for the SABRE SD board, didn't you? In order for it to work on Nitrogen6 or SABRE Lite boards, you'll need to add the dma_declare_coherent_memory call into arch/arm/mach-mx6/board-mx6q_sabrelite.c.

Without some re-factoring, it's not really a general-purpose solution, so we haven't added it to our kernels.

Regards,


Eric

ieio
Contributor IV

Actually I have a custom board; anyway, I added the following to arch/arm/mach-mx6/customboard.c:

    voutdev = imx6q_add_v4l2_output(0);
    if (vout_mem.res_msize && voutdev) {
            dma_declare_coherent_memory(&voutdev->dev,
                                        vout_mem.res_mbase,
                                        vout_mem.res_mbase,
                                        vout_mem.res_msize,
                                        DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE);
    }
    imx6q_add_v4l2_capture(0, &capture_data[0]);

and as I said, thanks to this patch the system is now more stable, but I unfortunately still experience problems with VPU DMA allocation and the OOM killer.

i.


Tarek
Senior Contributor I

Hi ieio,

Are you sure the system is more stable with the patch?

I've changed the sabrelite file, and now the system crashes when I try to run more than two 1080p streams.

Out of memory: Kill process 5852 (unity-2d-launch) score 45 or sacrifice child

Killed process 5852 (unity-2d-launch) total-vm:171452kB, anon-rss:15604kB, file-rss:19252kB

Before the patch, the VPU would fail to allocate physical memory and the pipeline would fail without killing the whole system?!

ieio
Contributor IV

Hi Tarek,

This is the behavior I see after the patch: the OOM killer kills my app, but Linux remains stable and I can restart the application.

Before the patch, the VPU failed to allocate physical memory and I was not able to restart the application.


ieio
Contributor IV

Hi all,

I am experiencing the exact same problem on a different board, based on the i.MX6S.

Have you solved this problem?

Regards.

Tarek
Senior Contributor I

Hi ieio,

No, not really. I changed my strategy: now I allocate all the VPU decoders I may need at startup, so I don't need to allocate/deallocate dynamically.
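That strategy can be sketched in plain C (illustrative names, with malloc as a stand-in for the real VPU allocation, not the actual decoder API): grab every decoder's resources once at startup, while physical memory is still unfragmented, and afterwards only claim and release slots.

```c
#include <stddef.h>
#include <stdlib.h>

#define MAX_DECODERS 16

/* Illustrative stand-in for a VPU decoder context. */
struct decoder {
    void *dma_mem;   /* would be the decoder's contiguous buffers */
    int busy;
};

static struct decoder decoders[MAX_DECODERS];

/* Allocate every decoder's buffers once, at startup, while memory
 * is still unfragmented. Returns 0 on success, -1 on failure. */
int decoders_init(size_t bufsize)
{
    for (int i = 0; i < MAX_DECODERS; i++) {
        decoders[i].dma_mem = malloc(bufsize); /* stand-in for VPU alloc */
        if (!decoders[i].dma_mem)
            return -1;
        decoders[i].busy = 0;
    }
    return 0;
}

/* Claim and release never touch the allocator at runtime,
 * so no further fragmentation can occur. */
struct decoder *decoder_claim(void)
{
    for (int i = 0; i < MAX_DECODERS; i++) {
        if (!decoders[i].busy) {
            decoders[i].busy = 1;
            return &decoders[i];
        }
    }
    return NULL;  /* all decoders in use */
}

void decoder_release(struct decoder *d)
{
    d->busy = 0;
}
```

The trade-off is that the maximum decoder count and buffer sizes must be known up front, which fits a fixed 16-camera layout.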

ieio
Contributor IV

Hi Tarek,

May I ask how many VPU decoders you are using? Is each VPU decoder used in just one pipeline, or dynamically linked into several pipelines? I tried to use a limited number of VPU decoders, but I experienced problems dynamically changing the pipeline. My input is an MPEG stream as well, and I need to change streams dynamically.

i.

Tarek
Senior Contributor I

Hi ieio,

I have 16 VPU decoders for 16 pipelines, so each pipeline uses one VPU decoder. I couldn't change them dynamically, so I did it this way.

What exactly are you changing?

ieio
Contributor IV

Hi Tarek,

I created a simple loop that switches from one stream to another: I unref a pipeline and create a new one each time I switch from one flow to another.

May I ask how you select which pipeline should be displayed when you are running them concurrently?

thanks,

i.

Tarek
Senior Contributor I

Hi ieio,

You can set the pipeline state to NULL if you don't want it to display.

You can also set the pipeline state to NULL, change the stream, then set it back to PLAYING. For example, if your source is filesrc, set the state to NULL, change the filesrc "location" property to the new file, then set it to PLAYING.

This way you can avoid the need for multiple pipelines.

In my case I had to have 16 pipelines, because the program may display all 16 at the same time.

ieio
Contributor IV

Thanks Tarek,

your suggestion works.

Do you run some pipelines in parallel?

I am now using just one pipeline: I put it in the PAUSED and then NULL state, and then I change the "location" property before PLAYING again.

Anyway, if I run 2 instances of my code I run into the same problem.

q1 and q2 are two processes running in parallel, using different portions of the screen.

q1: page allocation failure: order:11, mode:0xd1
[<8003d0dc>] (unwind_backtrace+0x0/0xfc) from [<800b5c5c>] (warn_alloc_failed+0x9c/0x118)
[<800b5c5c>] (warn_alloc_failed+0x9c/0x118) from [<800b6948>] (__alloc_pages_nodemask+0x494/0x6ec)
[<800b6948>] (__alloc_pages_nodemask+0x494/0x6ec) from [<80040434>] (__dma_alloc+0xd4/0x2fc)
[<80040434>] (__dma_alloc+0xd4/0x2fc) from [<80040730>] (dma_alloc_coherent+0x54/0x60)
[<80040730>] (dma_alloc_coherent+0x54/0x60) from [<803b3ba0>] (vpu_alloc_dma_buffer+0x2c/0x64)
[<803b3ba0>] (vpu_alloc_dma_buffer+0x2c/0x64) from [<803b439c>] (vpu_ioctl+0x7c4/0x8c8)
[<803b439c>] (vpu_ioctl+0x7c4/0x8c8) from [<800f69a0>] (do_vfs_ioctl+0x80/0x5e0)
[<800f69a0>] (do_vfs_ioctl+0x80/0x5e0) from [<800f6f38>] (sys_ioctl+0x38/0x60)
[<800f6f38>] (sys_ioctl+0x38/0x60) from [<8003769c>] (__sys_trace_return+0x0/0x24)
Mem-info:
DMA per-cpu:
CPU    0: hi:   90, btch:  15 usd:  88
Normal per-cpu:
CPU    0: hi:    6, btch:   1 usd:   0
active_anon:1139 inactive_anon:9 isolated_anon:0
active_file:3 inactive_file:94 isolated_file:0
unevictable:0 dirty:0 writeback:0 unstable:0
free:46144 slab_reclaimable:274 slab_unreclaimable:1563
mapped:103 shmem:14 pagetables:57 bounce:0
DMA free:160012kB min:1564kB low:1952kB high:2344kB active_anon:948kB inactive_anon:0kB active_file:0kB inactive_file:60kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:186944kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:60kB slab_unreclaimable:140kB kernel_stack:96kB pagetables:4kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:93 all_unreclaimable? yes
lowmem_reserve[]: 0 39 39 39
Normal free:24368kB min:336kB low:420kB high:504kB active_anon:3608kB inactive_anon:36kB active_file:12kB inactive_file:484kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:40384kB mlocked:0kB dirty:0kB writeback:0kB mapped:432kB shmem:56kB slab_reclaimable:1036kB slab_unreclaimable:6128kB kernel_stack:328kB pagetables:224kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 191*4kB 182*8kB 158*16kB 102*32kB 89*64kB 55*128kB 32*256kB 16*512kB 22*1024kB 13*2048kB 16*4096kB 1*8192kB 0*16384kB 0*32768kB = 160012kB
Normal: 502*4kB 581*8kB 416*16kB 207*32kB 54*64kB 5*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB 0*8192kB 0*16384kB 0*32768kB = 24032kB
315 total pagecache pages
0 pages in swap cache
Swap cache stats: add 0, delete 0, find 0/0
Free swap  = 0kB
Total swap = 0kB
65536 pages of RAM
46031 free pages
11564 reserved pages
1283 slab pages
270 pages shared
0 pages swap cached
Physical memory allocation error!
Physical memory allocation error!

I have strace logs:

hwbuf allocator zone(614400) destroied.

hwbuf allocator zone(614400) created

hwbuf allocator zone(614400) destroied.

hwbuf allocator zone(614400) created

hwbuf allocator zone(614400) destroied.

[1;34mVS1 destroyed, force=0!

[0mhwbuf allocator zone(462848) destroied.

[INFO]    Product Info: i.MX6Q/D/S

[1;32mvpudec versions :smileyhappy:

[0m) = 669

write(1, "\33[1;32m\tplugin: 3.0.5\n\33[0m", 26 [1;32m    plugin: 3.0.5

[0m) = 26

write(1, "\33[1;32m\twrapper: 1.0.28(VPUWRAPP"..., 80 [1;32m    wrapper: 1.0.28(VPUWRAPPER_ARM_LINUX Build on Apr 12 2013 17:28:04)

[0m) = 80

write(1, "\33[1;32m\tvpulib: 5.4.10\n\33[0m", 27 [1;32m    vpulib: 5.4.10

[0m) = 27

write(1, "\33[1;32m\tfirmware: 2.1.8.34588\n\33["..., 34 [1;32m    firmware: 2.1.8.34588

[0m) = 34

ioctl(3, VIDIOC_QUERYCAP or VT_OPENQRY, 0x7ec0a878) = -1 EPERM (Operation not permitted)

write(2, "Unable to set the pipeline to th"..., 49Unable to set the pipeline to the playing state.

) = 49

write(1, "[ERR]\tmem allocation failed!\n", 29[ERR]    mem allocation failed!

) = 29

It seems that the ioctl (decoded by strace as VIDIOC_QUERYCAP or VT_OPENQRY) fails.

When it works correctly the log is:

hwbuf allocator zone(614400) destroied.

hwbuf allocator zone(614400) created

hwbuf allocator zone(614400) destroied.

hwbuf allocator zone(614400) created

hwbuf allocator zone(614400) destroied.

[1;34mVS1 destroyed, force=0!

[0mhwbuf allocator zone(462848) destroied.

[INFO]    Product Info: i.MX6Q/D/S

[1;32mvpudec versions :smileyhappy:

[0m) = 669

write(1, "\33[1;32m\tplugin: 3.0.5\n\33[0m", 26 [1;32m    plugin: 3.0.5

[0m) = 26

write(1, "\33[1;32m\twrapper: 1.0.28(VPUWRAPP"..., 80 [1;32m    wrapper: 1.0.28(VPUWRAPPER_ARM_LINUX Build on Apr 12 2013 17:28:04)

[0m) = 80

write(1, "\33[1;32m\tvpulib: 5.4.10\n\33[0m", 27 [1;32m    vpulib: 5.4.10

[0m) = 27

write(1, "\33[1;32m\tfirmware: 2.1.8.34588\n\33["..., 34 [1;32m    firmware: 2.1.8.34588

[0m) = 34

ioctl(3, VIDIOC_QUERYCAP or VT_OPENQRY, 0x7ec0a878) = 0

mmap2(NULL, 5236743, PROT_READ|PROT_WRITE, MAP_SHARED, 3, 0x19000) = 0x2b414000

SYS_288(0x1, 0x1, 0, 0x7ec0a860, 0x194c8) = 0

fcntl64(7, F_SETFL, O_RDONLY|O_NONBLOCK) = 0

fcntl64(8, F_SETFL, O_RDONLY|O_NONBLOCK) = 0

timer_delete(0x1)                       = 0

mq_notify(16, ptrace: umoven: Input/output error

{...})                    = 9

mq_getsetattr(9, {mq_flags=O_RDONLY|0x10, mq_maxmsg=0, mq_msgsize=0, mq_curmsg=12}, ptrace: umoven: Input/output error

{...}) = 0

SYS_286(0x9, 0x7ec0a658, 0x7ec0a664, 0x10, 0x9) = 0

gettimeofday({265634, 766627}, NULL)    = 0

SYS_290(0x9, 0x7ec0a600, 0x14, 0, 0x7ec0a614) = 20

SYS_297(0x9, 0x7ec0a5e4, 0, 0, 0x7ec0a796) = 108

SYS_297(0x9, 0x7ec0a5e4, 0, 0, 0x2)     = 20

close(9)                                = 0

mq_notify(16, ptrace: umoven: Input/output error

{...})                    = 9

mq_getsetattr(9, {mq_flags=O_RDONLY|0x10, mq_maxmsg=0, mq_msgsize=0, mq_curmsg=12}, ptrace: umoven: Input/output error

{...}) = 0

SYS_286(0x9, 0x7ec0a628, 0x7ec0a634, 0x10, 0x9) = 0


Yossi, I think the interaction with the VPU that fails is the ioctl(3, VIDIOC_QUERYCAP or VT_OPENQRY, ...) call, where fd 3 is /dev/mxc_vpu:

lrwx------    1 root     root            64 Jan  4 02:42 3 -> /dev/mxc_vpu


Tarek
Senior Contributor I

Hi ieio,

I have 32 concurrent pipelines but only 16 can be active at the same time.

Try the gst-launch command line first: run 2 or more pipelines at the same time and see if that works.

ieio
Contributor IV

I can run 4 pipelines at the same time using gst-launch, but in that case I do not stop them and do not change the src property every 3 seconds.

Are you running the concurrent pipelines in the same process? Are you using different threads?

Thanks,

i.

Tarek
Senior Contributor I

I don't think it's related to threads vs. processes. Try increasing the 3 seconds: make it 3 minutes or so. I think it takes much more than 3 seconds to start a single stream.

T
