Hi NXP Team,
We are trying to connect an NVMe SSD (WD SN720) to PCIe2 of the i.MX8MQ processor on our custom board, running BSP L5.4.3_1.0.0. Please find the schematics attached.
# lspci
0000:00:00.0 PCI bridge: Synopsys, Inc. DWC_usb3 (rev 01)
0000:01:00.0 Non-Volatile memory controller: Sandisk Corp WD Black 2018/PC SN720 NVMe SSD
# dmesg | grep pcie
[ 1.619930] imx6q-pcie 33c00000.pcie: 33c00000.pcie supply epdev_on not found, using dummy regulator
[ 1.629439] imx6q-pcie 33c00000.pcie: host bridge /soc@0/pcie@33c00000 ranges:
[ 1.636726] imx6q-pcie 33c00000.pcie: No bus range found for /soc@0/pcie@33c00000, using [bus 00-ff]
[ 1.646068] imx6q-pcie 33c00000.pcie: IO 0x27f80000..0x27f8ffff -> 0x00000000
[ 1.657523] imx6q-pcie 33c00000.pcie: MEM 0x20000000..0x27efffff -> 0x20000000
[ 1.873843] imx6q-pcie 33c00000.pcie: Link up
[ 1.901730] imx6q-pcie 33800000.pcie: 33800000.pcie supply epdev_on not found, using dummy regulator
[ 1.911086] imx6q-pcie 33800000.pcie: host bridge /soc@0/pcie@33800000 ranges:
[ 1.927280] imx6q-pcie 33800000.pcie: IO 0x1ff80000..0x1ff8ffff -> 0x00000000
[ 1.937649] imx6q-pcie 33800000.pcie: MEM 0x18000000..0x1fefffff -> 0x18000000
[ 1.973849] imx6q-pcie 33c00000.pcie: Link up
[ 1.978222] imx6q-pcie 33c00000.pcie: Link up, Gen2
[ 1.983196] imx6q-pcie 33c00000.pcie: PCI host bridge to bus 0000:00
[ 2.139805] pcieport 0000:00:00.0: PME: Signaling with IRQ 218
[ 2.145821] pcieport 0000:00:00.0: AER: enabled with IRQ 218
[ 2.157823] imx6q-pcie 33800000.pcie: Link up
[ 2.166136] imx6q-pcie 33800000.pcie: Link up
[ 2.175555] imx6q-pcie 33800000.pcie: Link up, Gen1
[ 2.185548] imx6q-pcie 33800000.pcie: PCI host bridge to bus 0001:00
[ 2.502174] pcieport 0001:00:00.0: PME: Signaling with IRQ 221
[ 2.511696] pcieport 0001:00:00.0: AER: enabled with IRQ 221
# dmesg | grep nvme
[ 2.261612] nvme nvme0: pci function 0000:01:00.0
[ 2.319528] nvme 0000:01:00.0: enabling device (0000 -> 0002)
[ 2.695517] nvme nvme0: Removing after probe failure status: -19
Observation:
The Linux NVMe driver initializes successfully, but the probe then fails with the following error:
"nvme nvme0: Removing after probe failure status: -19"
[ 2.626903] csts: 4294967295
We traced this in the driver: when nvme_pci_configure_admin_queue() calls nvme_enable_ctrl(), the read of the NVME_REG_CSTS register returns 4294967295 (0xFFFFFFFF), where ideally it should return zero.
By contrast, when nvme_pci_configure_admin_queue() calls nvme_disable_ctrl(), the same CSTS read returns 0:
[ 2.506076] csts success: 0
=========
drivers/nvme/host/core.c
==> nvme_wait_ready() is called by both nvme_enable_ctrl() and nvme_disable_ctrl(). We added the printk() calls below for debugging:
	while ((ret = ctrl->ops->reg_read32(ctrl, NVME_REG_CSTS, &csts)) == 0) {
		if (csts == ~0) {
			printk("csts: %u\n", csts);
			printk("%s: %s: %d\n", __FILE__, __func__, __LINE__);
			return -ENODEV;
		}
		printk("csts success: %u\n", csts);
========
Hi kinjalGediya,
This looks like a WD SN720-specific driver issue; it may be worth posting on the WD community forum as well:
Best regards,
igor
Hi igor,
Thank you for the quick reply.
I checked the link you shared, but in our scenario we are not getting a /dev/nvme* node at all.
Let me know if you need any further information.