i.MX8MQ: Some NVMe drives non functional when L1SS is enabled

andreysmirnov
Contributor IV

Hi Folks.

I'm trying to use various NVMe drives on the i.MX8MQ and, so far, I've found at least two consumer NVMe SSDs that exhibit an issue I'm hoping to get some help with. The issue manifests as follows:

  1. The PCIe link is established, the device is enumerated by the Linux PCI subsystem, and up to this point everything looks good. The device is visible in "lspci", config space is readable, etc.
  2. However, the device never comes up as an NVMe block device. "nvme list" shows an empty list, and the nvme reset work scheduled by the kernel times out after a while. Tracing it further down into the kernel, it looks like the very first read of the NVMe controller status register returns 0xFFFF_FFFF, as if the PCIe device weren't present (a few commands for inspecting this state are sketched right after this list).
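For reference, the link/ASPM state and config space can be inspected from userspace while the NVMe probe is timing out, along these lines (01:00.0 is just an example address; substitute the BDF reported by lspci on your board):

    # ASPM / L1 substate status of the endpoint (example BDF 01:00.0)
    lspci -s 01:00.0 -vvv | grep -E 'LnkCap|LnkSta|ASPM|L1Sub'

    # Raw config-space read of Vendor/Device ID; 0xffffffff here would mean
    # config space itself has gone away, not just the BAR0 MMIO window that
    # the nvme driver reads the controller status register through
    setpci -s 01:00.0 0x00.L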

I've been able to bisect this down to the CONFIG_PCIEASPM_POWER_SUPERSAVE=y kernel configuration option, which enables L1 substates (L1SS) for PCIe devices. Setting the ASPM policy to any other value "fixes" the problem and the drives in question work as expected.
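As a narrower workaround than rebuilding the kernel, the ASPM policy can also be changed at runtime, and kernels that have the per-device ASPM sysfs attributes (roughly 5.5 and later, with CONFIG_PCIEASPM enabled) allow disabling the L1 substates for just the offending endpoint. A rough sketch, with 0000:01:00.0 standing in for the NVMe device's address:

    # Switch the global policy away from POWER_SUPERSAVE at runtime
    echo powersave > /sys/module/pcie_aspm/parameters/policy

    # Or leave the policy alone and turn off only the L1 substates for
    # this one device (ASPM and PCI-PM flavours of L1.1 / L1.2)
    cd /sys/bus/pci/devices/0000:01:00.0/link
    echo 0 > l1_1_aspm
    echo 0 > l1_2_aspm
    echo 0 > l1_1_pcipm
    echo 0 > l1_2_pcipm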
 

This looks like a PCIe hardware compatibility or maybe a driver timing related problem, but I'm not sure how to debug it further to come up with a better fix than just disabling L1SS completely. Any advice?
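If it helps, the L1 substate timing parameters (Common_Mode_Restore_Time, T_POWER_ON) advertised and programmed on both ends of the link can be dumped for comparison between a working and a non-working configuration; lspci prints them from the L1 PM Substates extended capability (the BDFs below are just examples for the root port and the NVMe endpoint):

    # Dump the L1 PM Substates capability and control registers of both
    # the root port (00:00.0) and the endpoint (01:00.0)
    for dev in 00:00.0 01:00.0; do
        echo "=== $dev ==="
        lspci -s "$dev" -vvv | grep -A4 'L1 PM Substates'
    done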

 

NVMe drives exhibiting the problem: 

  1. Kingston OM3PGP4512P (PCIe Gen 4 x 4)
  2. Samsung MZ9LQ256HBJD (PCIe Gen 3 x 4)

i.MX8MQ boards tried:

  1. ZII Ultra Zest Board: https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/arch/arm64/boot/dts/freescale/...
  2. i.MX8MQ EVK

Linux kernels tested:

  1. Vanilla 5.17 + custom patch stack (unrelated to PCIe/NVMe)
  2. NXP 5.10 kernel
  3. NXP 5.15 kernel