Kernel boot getting stuck after DRM initialisation


sd05
Contributor III

Hello community

We are facing a problem during kernel boot. Any idea what could be going wrong? Please find the logs below:

[ 1.634013] imx-drm soc@0:bus@32c00000:display-subsystem: bound 32e10000.mipi_dsi (ops imx_sec_dsim_ops)
[ 1.643973] [drm] Initialized imx-drm 1.0.0 20120507 for soc@0:bus@32c00000:display-subsystem on minor 0
[ 1.868568] max1720x 3-0036: Alert: TEMP MAX!
[ 1.872946] max1720x 3-0036: Alert: VOLT MAX!
[ 1.877307] max1720x 3-0036: Alert: CURR MAX!
[ 22.646610] rcu: INFO: rcu_preempt self-detected stall on CPU
[ 22.652439] rcu: 0-....: (5268 ticks this GP) idle=e3e/1/0x4000000000000002 softirq=59/60 fqs=2557
[ 22.661583] (t=5253 jiffies g=-1035 q=1)
[ 22.665609] Task dump for CPU 0:
[ 22.668858] migration/0 S 0 11 2 0x0000002a
[ 22.674377] Call trace:
[ 22.676914] dump_backtrace+0x0/0x140
[ 22.680598] show_stack+0x14/0x20
[ 22.683954] sched_show_task+0x108/0x138
[ 22.687899] dump_cpu_task+0x40/0x50
[ 22.691507] rcu_dump_cpu_stacks+0x94/0xd0
[ 22.695610] rcu_sched_clock_irq+0x75c/0x9c8
[ 22.699909] update_process_times+0x2c/0x68
[ 22.704129] tick_sched_handle.isra.0+0x2c/0x68
[ 22.708667] tick_sched_timer+0x48/0x98
[ 22.712515] __hrtimer_run_queues+0x110/0x1b0
[ 22.716888] hrtimer_interrupt+0xe4/0x240
[ 22.720935] arch_timer_handler_phys+0x30/0x40
[ 22.725406] handle_percpu_devid_irq+0x80/0x140
[ 22.729965] generic_handle_irq+0x24/0x38
[ 22.733987] __handle_domain_irq+0x60/0xb8
[ 22.738099] gic_handle_irq+0x5c/0x148
[ 22.741862] el1_irq+0xb8/0x180
[ 22.745018] smpboot_thread_fn+0x60/0x2a8
[ 22.749059] kthread+0xf0/0x120
[ 22.752219] ret_from_fork+0x10/0x18
[ 85.658584] rcu: INFO: rcu_preempt self-detected stall on CPU
[ 85.664345] rcu: 0-....: (20995 ticks this GP) idle=e3e/1/0x4000000000000002 softirq=59/60 fqs=10422
[ 85.673657] (t=21005 jiffies g=-1035 q=1)
[ 85.677762] Task dump for CPU 0:
[ 85.680995] migration/0 S 0 11 2 0x0000002a
[ 85.686491] Call trace:
[ 85.688948] dump_backtrace+0x0/0x140
[ 85.692623] show_stack+0x14/0x20
[ 85.695946] sched_show_task+0x108/0x138
[ 85.699881] dump_cpu_task+0x40/0x50
[ 85.703464] rcu_dump_cpu_stacks+0x94/0xd0
[ 85.707569] rcu_sched_clock_irq+0x75c/0x9c8
[ 85.711846] update_process_times+0x2c/0x68
[ 85.716044] tick_sched_handle.isra.0+0x2c/0x68
[ 85.720582] tick_sched_timer+0x48/0x98
[ 85.724428] __hrtimer_run_queues+0x110/0x1b0
[ 85.728790] hrtimer_interrupt+0xe4/0x240
[ 85.732814] arch_timer_handler_phys+0x30/0x40
[ 85.737265] handle_percpu_devid_irq+0x80/0x140
[ 85.741811] generic_handle_irq+0x24/0x38
[ 85.745828] __handle_domain_irq+0x60/0xb8
[ 85.749930] gic_handle_irq+0x5c/0x148
[ 85.753689] el1_irq+0xb8/0x180
[ 85.756869] __schedule+0x50/0x498
[ 85.760283] preempt_schedule_irq+0x4c/0x78
[ 85.764473] arm64_preempt_schedule_irq+0x10/0x20
[ 85.769188] el1_irq+0xd0/0x180
[ 85.772335] schedule+0x4/0xe0
[ 85.775403] kthread+0xf0/0x120
[ 85.778551] ret_from_fork+0x10/0x18
[ 148.670587] rcu: INFO: rcu_preempt self-detected stall on CPU
[ 148.676346] rcu: 0-....: (6246 ticks this GP) idle=e3e/1/0x4000000000000002 softirq=61/61 fqs=18287
[ 148.685571] (t=36759 jiffies g=-1035 q=1)
[ 148.689676] Task dump for CPU 0:
[ 148.692908] migration/0 R running task 0 11 2 0x0000002a
[ 148.699964] Call trace:
[ 148.702422] dump_backtrace+0x0/0x140
[ 148.706098] show_stack+0x14/0x20
[ 148.709420] sched_show_task+0x108/0x138
[ 148.713357] dump_cpu_task+0x40/0x50
[ 148.716944] rcu_dump_cpu_stacks+0x94/0xd0
[ 148.721047] rcu_sched_clock_irq+0x75c/0x9c8
[ 148.725328] update_process_times+0x2c/0x68
[ 148.729526] tick_sched_handle.isra.0+0x2c/0x68
[ 148.734064] tick_sched_timer+0x48/0x98
[ 148.737906] __hrtimer_run_queues+0x110/0x1b0
[ 148.742274] hrtimer_interrupt+0xe4/0x240
[ 148.746298] arch_timer_handler_phys+0x30/0x40
[ 148.750748] handle_percpu_devid_irq+0x80/0x140
[ 148.755286] generic_handle_irq+0x24/0x38
[ 148.759303] __handle_domain_irq+0x60/0xb8
[ 148.763409] gic_handle_irq+0x5c/0x148
[ 148.767168] el1_irq+0xb8/0x180
[ 148.770326] _raw_spin_unlock_irq+0x10/0x48
[ 148.774522] preempt_schedule_irq+0x4c/0x78
[ 148.778714] arm64_preempt_schedule_irq+0x10/0x20
[ 148.783428] el1_irq+0xd0/0x180
[ 148.786576] preempt_schedule_common+0x38/0x48
[ 148.791030] preempt_schedule+0x28/0x30
[ 148.794873] _raw_spin_unlock_irq+0x34/0x48
[ 148.799077] cpu_stopper_thread+0x70/0x120
[ 148.803181] smpboot_thread_fn+0x1e4/0x2a8
[ 148.807286] kthread+0xf0/0x120
[ 148.810434] ret_from_fork+0x10/0x18
[ 211.682581] rcu: INFO: rcu_preempt self-detected stall on CPU
[ 211.688340] rcu: 0-....: (21965 ticks this GP) idle=e3e/1/0x4000000000000002 softirq=61/61 fqs=26148
[ 211.697652] (t=52511 jiffies g=-1035 q=1)
[ 211.701759] Task dump for CPU 0:
[ 211.704992] migration/0 R running task 0 11 2 0x0000002a
[ 211.712052] Call trace:
[ 211.714508] dump_backtrace+0x0/0x140
[ 211.718181] show_stack+0x14/0x20
[ 211.721503] sched_show_task+0x108/0x138
[ 211.725436] dump_cpu_task+0x40/0x50
[ 211.729017] rcu_dump_cpu_stacks+0x94/0xd0
[ 211.733122] rcu_sched_clock_irq+0x75c/0x9c8
[ 211.737402] update_process_times+0x2c/0x68
[ 211.741596] tick_sched_handle.isra.0+0x2c/0x68
[ 211.746138] tick_sched_timer+0x48/0x98
[ 211.749980] __hrtimer_run_queues+0x110/0x1b0
[ 211.754347] hrtimer_interrupt+0xe4/0x240
[ 211.758371] arch_timer_handler_phys+0x30/0x40
[ 211.762821] handle_percpu_devid_irq+0x80/0x140
[ 211.767362] generic_handle_irq+0x24/0x38
[ 211.771379] __handle_domain_irq+0x60/0xb8
[ 211.775486] gic_handle_irq+0x5c/0x148
[ 211.779245] el1_irq+0xb8/0x180
[ 211.782392] _raw_spin_unlock_irq+0x10/0x48
[ 211.786586] preempt_schedule_irq+0x4c/0x78
[ 211.790780] arm64_preempt_schedule_irq+0x10/0x20
[ 211.795494] el1_irq+0xd0/0x180
[ 211.798642] preempt_schedule_common+0x10/0x48
[ 211.803096] preempt_schedule+0x28/0x30
[ 211.806940] smpboot_thread_fn+0x1d4/0x2a8
[ 211.811042] kthread+0xf0/0x120
[ 211.814194] ret_from_fork+0x10/0x18
[ 242.695092] INFO: task swapper/0:1 blocked for more than 120 seconds.
[ 242.701570] Not tainted 5.4.3-imx8mm+g34ac1cd70762 #1
[ 242.707155] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 242.714999] swapper/0 D 0 1 0 0x00000028
[ 242.720504] Call trace:
[ 242.722979] __switch_to+0xe4/0x148
[ 242.726481] __schedule+0x214/0x498
[ 242.729985] schedule+0x40/0xe0
[ 242.733141] schedule_timeout+0x1b8/0x260
[ 242.737163] wait_for_completion+0x94/0x120
[ 242.741372] __flush_work.isra.0+0xfc/0x210
[ 242.745571] flush_work+0xc/0x18
[ 242.748837] deferred_probe_initcall+0x58/0xb0
[ 242.753297] do_one_initcall+0x50/0x190
[ 242.757192] kernel_init_freeable+0x198/0x22c
[ 242.761567] kernel_init+0x10/0x100
[ 242.765071] ret_from_fork+0x10/0x18
[ 242.768738] INFO: task kworker/2:1:33 blocked for more than 120 seconds.
[ 242.775450] Not tainted 5.4.3-imx8mm+g34ac1cd70762 #1
[ 242.781031] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 242.788879] kworker/2:1 D 0 33 2 0x00000028
[ 242.794417] Workqueue: events deferred_probe_work_func
[ 242.799570] Call trace:
[ 242.802025] __switch_to+0xe4/0x148
[ 242.805529] __schedule+0x214/0x498
[ 242.809039] schedule+0x40/0xe0
[ 242.812197] schedule_timeout+0x1b8/0x260
[ 242.816219] wait_for_completion+0x94/0x120
[ 242.820423] __flush_work.isra.0+0xfc/0x210
[ 242.824624] flush_work+0xc/0x18
[ 242.827891] drain_all_pages+0x174/0x258
[ 242.831853] start_isolate_page_range+0x170/0x270
[ 242.836577] alloc_contig_range+0xe8/0x410
[ 242.840691] cma_alloc+0x118/0x230
[ 242.844111] dma_alloc_contiguous+0x98/0xb0
[ 242.848312] __dma_direct_alloc_pages+0x88/0x1f8
[ 242.852947] arch_dma_alloc+0xb8/0x178
[ 242.856713] dma_direct_alloc+0x2c/0x58
[ 242.860566] dma_alloc_attrs+0x7c/0xe8
[ 242.864362] drm_gem_cma_create+0x4c/0x78
[ 242.868389] drm_gem_cma_dumb_create+0x40/0xa0
[ 242.872855] drm_mode_create_dumb+0x8c/0xa8
[ 242.877057] drm_client_framebuffer_create+0x8c/0x1c8
[ 242.882153] drm_fb_helper_generic_probe+0x44/0x1a8
[ 242.887046] __drm_fb_helper_initial_config_and_unlock+0x280/0x470
[ 242.893243] drm_fbdev_client_hotplug+0xd4/0x188
[ 242.897877] drm_fbdev_generic_setup+0x98/0x140
[ 242.902438] imx_drm_bind+0x100/0x150
[ 242.906134] try_to_bring_up_master+0x164/0x1c0
[ 242.910679] __component_add+0xa0/0x168
[ 242.914527] component_add+0x10/0x18
[ 242.918132] imx_sec_dsim_probe+0x18/0x20
[ 242.922168] platform_drv_probe+0x50/0xa0
[ 242.926188] really_probe+0xd4/0x308
[ 242.929779] driver_probe_device+0x54/0xe8
[ 242.933892] __device_attach_driver+0x80/0xb8
[ 242.938265] bus_for_each_drv+0x74/0xc0
[ 242.942118] __device_attach+0xdc/0x138
[ 242.945970] device_initial_probe+0x10/0x18
[ 242.950167] bus_probe_device+0x90/0x98
[ 242.954015] deferred_probe_work_func+0x64/0x98
[ 242.958568] process_one_work+0x198/0x320
[ 242.962591] worker_thread+0x1f0/0x420
[ 242.966352] kthread+0xf0/0x120
[ 242.969509] ret_from_fork+0x10/0x18
[ 274.694591] rcu: INFO: rcu_preempt self-detected stall on CPU
[ 274.700348] rcu: 0-....: (37686 ticks this GP) idle=e3e/1/0x4000000000000002 softirq=61/61 fqs=33992
[ 274.709660] (t=68265 jiffies g=-1035 q=1)
[ 274.713768] Task dump for CPU 0:
[ 274.717000] migration/0 R running task 0 11 2 0x0000002a
[ 274.724063] Call trace:
[ 274.726516] dump_backtrace+0x0/0x140
[ 274.730187] show_stack+0x14/0x20
[ 274.733516] sched_show_task+0x108/0x138
[ 274.737444] dump_cpu_task+0x40/0x50
[ 274.741030] rcu_dump_cpu_stacks+0x94/0xd0
[ 274.745133] rcu_sched_clock_irq+0x75c/0x9c8
[ 274.749409] update_process_times+0x2c/0x68
[ 274.753608] tick_sched_handle.isra.0+0x2c/0x68
[ 274.758146] tick_sched_timer+0x48/0x98
[ 274.761992] __hrtimer_run_queues+0x110/0x1b0
[ 274.766364] hrtimer_interrupt+0xe4/0x240
[ 274.770384] arch_timer_handler_phys+0x30/0x40
[ 274.774834] handle_percpu_devid_irq+0x80/0x140
[ 274.779375] generic_handle_irq+0x24/0x38
[ 274.783394] __handle_domain_irq+0x60/0xb8
[ 274.787495] gic_handle_irq+0x5c/0x148
[ 274.791254] el1_irq+0xb8/0x180
[ 274.794406] __schedule+0x460/0x498
[ 274.797901] preempt_schedule_irq+0x4c/0x78
[ 274.802095] arm64_preempt_schedule_irq+0x10/0x20
[ 274.806808] el1_irq+0xd0/0x180
[ 274.809956] schedule+0x50/0xe0
[ 274.813107] smpboot_thread_fn+0x288/0x2a8
[ 274.817214] kthread+0xf0/0x120
[ 274.820361] ret_from_fork+0x10/0x18
[ 300.110595] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [migration/0:11]
[ 300.117829] Modules linked in:
[ 300.120916] CPU: 0 PID: 11 Comm: migration/0 Not tainted 5.4.3-imx8mm+g34ac1cd70762 #1
[ 300.128841] Hardware name: Variscite DART-MX8MM (DT)
[ 300.133822] pstate: 60000005 (nZCv daif -PAN -UAO)
[ 300.138623] pc : __schedule+0x240/0x498
[ 300.142466] lr : __schedule+0x460/0x498
[ 300.146309] sp : ffff800010dbbbb0
[ 300.149630] x29: ffff800010dbbbb0 x28: ffff00007d0a8d80
[ 300.154952] x27: ffff800010dbbc10 x26: ffff00007d0a92f8
[ 300.160275] x25: ffff800010859dd0 x24: ffff8000108413d4
[ 300.165597] x23: ffff00007d0a91e0 x22: ffff800010841284
[ 300.170920] x21: 00000000000000e0 x20: 0000000000000060
[ 300.176243] x19: ffff00007d0a8d80 x18: 0000000000000014
[ 300.181565] x17: 0000000068bdfef8 x16: 0000000031e25db8
[ 300.186887] x15: 0000000000000000 x14: 003d090000000000
[ 300.192206] x13: ffff800010c96000 x12: 0000000000000000
[ 300.197524] x11: ffff800010d50000 x10: 0000000000007d00
[ 300.202845] x9 : 0000000000000000 x8 : 00000045dc041b00
[ 300.208168] x7 : 7fffffffffffffff x6 : 00000000281aba1c
[ 300.213490] x5 : 00ffffffffffffff x4 : 0000000000000008
[ 300.218812] x3 : ffff00007fb77c80 x2 : 0000000000000000
[ 300.224135] x1 : 742c7533b9dbb800 x0 : 0000000000000000
[ 300.229460] Call trace:
[ 300.231915] __schedule+0x240/0x498
[ 300.235410] preempt_schedule_irq+0x4c/0x78
[ 300.239605] arm64_preempt_schedule_irq+0x10/0x20
[ 300.244318] el1_irq+0xd0/0x180
[ 300.247466] preempt_schedule+0x4/0x30
[ 300.251231] cpu_stopper_thread+0x70/0x120
[ 300.255335] smpboot_thread_fn+0x1e4/0x2a8
[ 300.259442] kthread+0xf0/0x120
[ 300.262590] ret_from_fork+0x10/0x18
[ 327.114550] BUG: workqueue lockup - pool cpus=0 node=0 flags=0x0 nice=0 stuck for 203s!
[ 327.136430] Showing busy workqueues and worker pools:
[ 327.142819] workqueue events: flags=0x0
[ 327.147102] pwq 4: cpus=2 node=0 flags=0x0 nice=0 active=2/256
[ 327.153131] in-flight: 33:deferred_probe_work_func BAR(1) deferred_probe_work_func
[ 327.161289] pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=2/256
[ 327.167314] pending: push_to_pool, vmstat_shepherd
[ 327.179643] workqueue mm_percpu_wq: flags=0x8
[ 327.184626] pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=2/256
[ 327.190644] in-flight: 12:drain_local_pages_wq BAR(33)
[ 327.196159] pending: vmstat_update
[ 327.202313] workqueue pm: flags=0x4
[ 327.206466] pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/256
[ 327.212484] pending: pm_runtime_work
[ 327.233440] pool 0: cpus=0 node=0 flags=0x0 nice=0 hung=203s workers=3 idle: 93 5
[ 327.244595] pool 4: cpus=2 node=0 flags=0x0 nice=0 hung=321s workers=3 idle: 213 219
[ 328.747613] Console: switching to colour frame buffer device 80x80

4 Replies

sd05
Contributor III

What could be the root cause of this?


sd05
Contributor III

We are using the DART-MX8M-MINI SoM, for which Variscite has already done the DDR test, but we are still facing this issue. Do we need to run the test again? We have only added certain peripherals on our prototype.
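
(For reference, if the DDR does need to be re-checked on the prototype, one quick but non-exhaustive sanity check is U-Boot's built-in memory test, run from the U-Boot console before the kernel starts. This is only a rough sketch: it assumes CONFIG_CMD_MEMTEST is enabled in the Variscite U-Boot build, and the address range below is a placeholder that must be adjusted to unused DDR on this SoM. It is not a substitute for the full NXP/Variscite DDR stress test tool.)

At the U-Boot prompt (arguments are start address, end address, pattern, iterations):

=> mtest 0x50000000 0x51000000 0xA5A5A5A5 10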


igorpadykov
NXP Employee

This board and its BSPs are supported by Variscite directly; please post the issue on their support portal:

https://www.variscite.com/support/

 

Best regards
igor


igorpadykov
NXP Employee

Hi Sunidhi

 

What board is used in this case? If it is a custom board, has it passed the DDR test?

From the log ("Hardware name: Variscite DART-MX8MM (DT)") it appears to be a third-party board.

Which BSP is used in this case?

http://variwiki.com/index.php?title=DART-MX8M-MINI

 

Best regards
igor
