imx8qm-mek instability running 5.15.32 with RT kernel


vik
Contributor II

Hi,

We are running the latest 5.15.32-2.0.0 BSP on the imx8qm-mek with the RT kernel patch (patch-5.15.32-rt39.patch) applied. The platform becomes unstable during Linux boot when HDMI is enabled. Some details of our Yocto build:
  MACHINE=imx8qmmek DISTRO=fslc-wayland source setup-environment build-imx8qmmek-wayland
Download and apply the RT kernel patch:
  patch-5.15.32-rt39.patch
Update the kernel configuration with the following options:
  CONFIG_KVM=n
  CONFIG_EXPERT=y
  CONFIG_PREEMPT_RT=y
Change the bootloader fdt_file to use imx8qm-mek-hdmi.dtb.
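
For reference, a rough by-hand equivalent of the kernel-side steps above (illustrative only; in the Yocto build the same patch and options are applied through the kernel recipe):

  # illustrative only: apply the RT patch and set the options on a kernel tree
  cd linux-imx
  patch -p1 < patch-5.15.32-rt39.patch
  ./scripts/config --file .config --disable KVM --enable EXPERT --enable PREEMPT_RT
  make olddefconfig
  # and in the U-Boot environment:
  #   => setenv fdt_file imx8qm-mek-hdmi.dtb
  #   => saveenv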

The crash we occasionally see (roughly 1 in 10 boots):

[ 1.808385] imx-drm display-subsystem: bound imx-drm-dpu-bliteng.2 (ops dpu_bliteng_ops)
[ 1.808601] imx-drm display-subsystem: bound imx-drm-dpu-bliteng.5 (ops dpu_bliteng_ops)
[ 1.809649] imx-drm display-subsystem: bound imx-dpu-crtc.0 (ops dpu_crtc_ops)
[ 1.810686] imx-drm display-subsystem: bound imx-dpu-crtc.1 (ops dpu_crtc_ops)
[ 1.811685] imx-drm display-subsystem: bound imx-dpu-crtc.3 (ops dpu_crtc_ops)
[ 1.812756] imx-drm display-subsystem: bound imx-dpu-crtc.4 (ops dpu_crtc_ops)
[ 1.813750] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000058
[ 1.813757] Mem abort info:
[ 1.813758] ESR = 0x96000004
[ 1.813761] EC = 0x25: DABT (current EL), IL = 32 bits
[ 1.813765] SET = 0, FnV = 0
[ 1.813768] EA = 0, S1PTW = 0
[ 1.813771] FSC = 0x04: level 0 translation fault
[ 1.813774] Data abort info:
[ 1.813775] ISV = 0, ISS = 0x00000004
[ 1.813778] CM = 0, WnR = 0
[ 1.813780] [0000000000000058] user address but active_mm is swapper
[ 1.813785] Internal error: Oops: 96000004 [#1] PREEMPT_RT SMP
[ 1.813790] Modules linked in:
[ 1.813800] CPU: 2 PID: 111 Comm: kworker/2:2 Not tainted 5.15.32+g685a4b266ff4 #1
[ 1.813807] Hardware name: Freescale i.MX8QM MEK (DT)
[ 1.813812] Workqueue: pm pm_runtime_work
[ 1.813827] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
[ 1.813834] pc : genpd_runtime_suspend+0xf0/0x2c0
[ 1.813843] lr : genpd_runtime_suspend+0xe4/0x2c0
[ 1.813849] sp : ffff80000a3ebc50
[ 1.813852] x29: ffff80000a3ebc50 x28: 0000000000000000 x27: 0000000000000000
[ 1.813861] x26: ffff800009a18000 x25: ffff000810c85080 x24: 0000000068680971
[ 1.813870] x23: 0000000000000000 x22: ffff00081467e880 x21: ffff000810c853d0
[ 1.813878] x20: ffff800008820310 x19: ffff000814165000 x18: ffffffffffffffff
[ 1.813886] x17: 0000000000000000 x16: ffff80000882dee4 x15: 0000008df9746c20
[ 1.813894] x14: 0000000000000370 x13: 0000000000000001 x12: 0000000000000000
[ 1.813902] x11: 0000000000000000 x10: 0000000000000960 x9 : ffff80000a3ebaa0
[ 1.813910] x8 : ffff000810a22540 x7 : ffff0008f95bc440 x6 : 0000000000000000
[ 1.813918] x5 : 0000000000000000 x4 : 0000000000000000 x3 : ffff000810a222f0
[ 1.813926] x2 : 0000000000000000 x1 : 0000000000000000 x0 : 0000000000000000
[ 1.813937] Call trace:
[ 1.813940] genpd_runtime_suspend+0xf0/0x2c0
[ 1.813947] __rpm_callback+0x48/0x150
[ 1.813953] rpm_callback+0x6c/0x80
[ 1.813959] rpm_suspend+0x100/0x550
[ 1.813965] pm_runtime_work+0xd4/0xf0
[ 1.813972] process_one_work+0x1d0/0x354
[ 1.813981] worker_thread+0x134/0x45c
[ 1.813987] kthread+0x18c/0x1a0
[ 1.813995] ret_from_fork+0x10/0x20
[ 1.814007] Code: d63f0020 f9411660 52800002 f9412261 (f9402c03)
[ 1.814013] ---[ end trace 0000000000000002 ]---
[ 1.834058] imx6q-pcie 5f000000.pcie: PCIe PLL is locked.
[ 1.834105] imx6q-pcie 5f010000.pcie: PCIe PLL is locked.
[ 1.834167] imx6q-pcie 5f000000.pcie: iATU unroll: disabled
[ 1.834174] imx6q-pcie 5f000000.pcie: Detected iATU regions: 6 outbound, 6 inbound
[ 1.834189] imx6q-pcie 5f000000.pcie: host bridge /bus@5f000000/pcie@0x5f000000 ranges:
[ 1.834274] imx6q-pcie 5f000000.pcie: IO 0x006ff80000..0x006ff8ffff -> 0x0000000000
[ 1.834355] imx6q-pcie 5f000000.pcie: MEM 0x0060000000..0x006fefffff -> 0x0060000000
[ 1.834468] imx6q-pcie 5f010000.pcie: iATU unroll: disabled
[ 1.834471] imx6q-pcie 5f010000.pcie: Detected iATU regions: 6 outbound, 6 inbound
[ 1.834479] imx6q-pcie 5f010000.pcie: host bridge /bus@5f000000/pcie@0x5f010000 ranges:
[ 1.834497] imx6q-pcie 5f000000.pcie: iATU unroll: disabled
[ 1.834501] imx6q-pcie 5f000000.pcie: Detected iATU regions: 6 outbound, 6 inbound
[ 1.834500] imx6q-pcie 5f010000.pcie: IO 0x007ff80000..0x007ff8ffff -> 0x0000000000
[ 1.834513] imx6q-pcie 5f010000.pcie: MEM 0x0070000000..0x007fefffff -> 0x0070000000
[ 1.834577] imx6q-pcie 5f010000.pcie: iATU unroll: disabled
[ 1.834580] imx6q-pcie 5f010000.pcie: Detected iATU regions: 6 outbound, 6 inbound
[ 1.862983] virtio_rpmsg_bus virtio1: creating channel rpmsg-openamp-demo-channel addr 0x1e
[ 2.793981] mmc1: new ultra high speed SDR104 SDHC card at address aaaa
[ 2.794792] mmcblk1: mmc1:aaaa SE32G 29.7 GiB
[ 2.797105] mmcblk1: p1 p2
[ 2.834852] imx6q-pcie 5f000000.pcie: Phy link never came up
[ 2.834852] imx6q-pcie 5f010000.pcie: Phy link never came up
[ 2.837024] imx6q-pcie: probe of 5f000000.pcie failed with error -110
[ 2.837661] imx6q-pcie: probe of 5f010000.pcie failed with error -110
[ 2.842108] ata1: SATA link down (SStatus 0 SControl 300)

Do you have any idea what the reason could be? Is there something we are missing in the configuration?

1 Reply

vik
Contributor II

An update on this topic.

When we updated the kernel config with:
CONFIG_NR_CPUS=2
CONFIG_SCHED_MC=n
CONFIG_SCHED_SMT=n

we were able to reproduce the crash on every boot, which made it easier to debug.
Running the kernel with kgdb, we managed to get more details on the crash:

0xffff8000090b8ef8 in dev_gpd_data (dev=0xffff000827349000) at /home/vik/Workspace/V4.0/2171-platform-linux-imx6/01-Yocto/build-imx8/workspace/sources/linux-imx/include/linux/pm_domain.h:223
223 return to_gpd_data(dev->power.subsys_data->domain_data);
(gdb) bt full
#0 0xffff8000090b8ef8 in dev_gpd_data (dev=0xffff000827349000)
at /home/vik/Workspace/V4.0/2171-platform-linux-imx6/01-Yocto/build-imx8/workspace/sources/linux-imx/include/linux/pm_domain.h:223
No locals.
#1 genpd_drop_performance_state (dev=0xffff000827349000)
at /home/vik/Workspace/V4.0/2171-platform-linux-imx6/01-Yocto/build-imx8/workspace/sources/linux-imx/drivers/base/power/domain.c:439
gpd_data = <optimized out>
prev_state = <optimized out>
gpd_data = <optimized out>
prev_state = <optimized out>
#2 genpd_runtime_suspend (dev=0xffff000827349000)
at /home/vik/Workspace/V4.0/2171-platform-linux-imx6/01-Yocto/build-imx8/workspace/sources/linux-imx/drivers/base/power/domain.c:989
genpd = 0xffff0008109bd880
suspend_ok = <optimized out>
gpd_data = 0xffff000827c05b80
td = 0xffff000827c05b98
runtime_pm = <optimized out>
time_start = 92688223125
elapsed_ns = <optimized out>
ret = <optimized out>
__func__ = "genpd_runtime_suspend"
#3 0xffff800008818328 in __rpm_callback (cb=0xffff8000090b8d7c <genpd_runtime_suspend>, dev=0xffff000827349000)
at /home/vik/Workspace/V4.0/2171-platform-linux-imx6/01-Yocto/build-imx8/workspace/sources/linux-imx/drivers/base/power/runtime.c:398
retval = 0
idx = <optimized out>
use_links = false
#4 0xffff80000881849c in rpm_callback (cb=cb@entry=0xffff8000090b8d7c <genpd_runtime_suspend>, dev=dev@entry=0xffff000827349000)
at /home/vik/Workspace/V4.0/2171-platform-linux-imx6/01-Yocto/build-imx8/workspace/sources/linux-imx/drivers/base/power/runtime.c:525
retval = <optimized out>
#5 0xffff800008819b70 in rpm_suspend (dev=dev@entry=0xffff000827349000, rpmflags=rpmflags@entry=10)
at /home/vik/Workspace/V4.0/2171-platform-linux-imx6/01-Yocto/build-imx8/workspace/sources/linux-imx/drivers/base/power/runtime.c:665
callback = 0xffff8000090b8d7c <genpd_runtime_suspend>
parent = 0x0
retval = <optimized out>
repeat = <optimized out>
#6 0xffff80000881a2d4 in pm_runtime_work (work=<optimized out>)
at /home/vik/Workspace/V4.0/2171-platform-linux-imx6/01-Yocto/build-imx8/workspace/sources/linux-imx/drivers/base/power/runtime.c:960
dev = 0xffff000827349000
req = <optimized out>
#7 0xffff80000806e3c0 in process_one_work (worker=worker@entry=0xffff000810a71700, work=0xffff000827349190)
at /home/vik/Workspace/V4.0/2171-platform-linux-imx6/01-Yocto/build-imx8/workspace/sources/linux-imx/kernel/workqueue.c:2306
pwq = 0xffff0008ff3da400
pool = 0xffff0008ff3d4cc0
cpu_intensive = false
work_data = 18446462637374809093
collision = 0x0
#8 0xffff80000806ea40 in worker_thread (__worker=0xffff000810a71700)
at /home/vik/Workspace/V4.0/2171-platform-linux-imx6/01-Yocto/build-imx8/workspace/sources/linux-imx/kernel/workqueue.c:2453
work = <optimized out>
worker = 0xffff000810a71700

Due to stack corruption we couldn't get more details, but it helped us add some additional pointer/value checks in genpd_drop_performance_state(). We created this patch:

diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 7bef4b6638a2..09891e892866 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -436,7 +436,18 @@ static int genpd_set_performance_state(struct device *dev, unsigned int state)

 static int genpd_drop_performance_state(struct device *dev)
 {
-   unsigned int prev_state = dev_gpd_data(dev)->performance_state;
+   struct generic_pm_domain_data *gpd_data = NULL;
+   unsigned int prev_state = 0;
+
+   if (dev->power.subsys_data && dev->power.subsys_data->domain_data)
+       gpd_data = dev_gpd_data(dev);
+   else
+       return 0;
+
+   if (gpd_data)
+       prev_state = gpd_data->performance_state;
+   else
+       return 0; 

    if (!genpd_set_performance_state(dev, 0))
        return prev_state;

--
2.29.0
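
In case it helps anyone hitting the same issue, the fix can be carried in a Yocto build through a normal bbappend; the layer, recipe and file names below are just an example, not our exact tree:

  # example linux-imx_%.bbappend (names are illustrative)
  FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
  SRC_URI += "file://0001-genpd-guard-against-missing-domain-data.patch"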
 
After applying this patch we didn't see the crash in more than 3000 reboots.