PCIe NVMe SSD Not recognised


thomasculverhou
Contributor III

I have a Freescale LS2080 box for which I am developing a custom Linux 4.1.8 kernel using the Freescale Yocto project.

I have an NVMe SSD attached to the LS2080 via a PCIe adapter card, but the disk is not recognised when I boot the board with my custom Linux kernel. In fact, it is not recognised in U-Boot either when I run:

=> pci

I plugged the same combination of NVMe disk and PCIe card into a Debian Linux 3.16.7 desktop PC and it was detected and mounted without problem. I also repeated the experiment with a different PCIe card and a different NVMe SSD, with the same result.

When building the LS2080 kernel using the Yocto project, I have enabled the NVMe block device driver and I have verified that this module is present in the kernel when booting on the board.  The PCIe slot on the board is working fine because I have tried it with a PCIe Ethernet card and a PCIe SATA disk. 

The only thing I can think of is that the Ethernet card is a x1 PCIe device and the SATA disk is x2, while the NVMe SSD is x4 - perhaps the x4 lanes aren't working on the board, or the DIP switches are set incorrectly?

I suspect that I am missing something in the kernel configuration or device tree, but I'm not sure what. When I add the NVMe driver to the kernel using menuconfig, its dependencies should be resolved automatically.
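For reference, the relevant options in a 4.1-era kernel .config would look something like the fragment below (CONFIG_BLK_DEV_NVME was the NVMe driver option in that kernel generation; worth double-checking against the .config in your own build tree):

```
# Illustrative .config fragment for Linux 4.1 (the NVMe driver lived in drivers/block)
CONFIG_PCI=y
CONFIG_BLK_DEV_NVME=y
```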

Can anyone provide insight into what I am missing?

1 Solution
thomasculverhou
Contributor III

It turns out that the PBL binary file supplied with the Freescale Yocto project does not work for the LS2085A board (bear in mind that I have an LS2085A which I am configuring to behave as an LS2080A).

The Yocto build for ls2080ardb produces this PBL binary:

build_ls2080ardb_release/tmp/deploy/images/ls2080ardb/rcw/rcw_0x2a_0x41/PBL_0x2a_0x41_1867_533_1867.bin

I flash this onto the board by interrupting U-Boot in vbank0, following the LS2080A SDK 1.0 Deployment Guide:

=> tftp 0x80000000 PBL_0x2a_0x41_1867_533_1867.bin; erase 0x585400000 +$filesize; cp.b 0x80000000 0x585400000 $filesize

I then reboot into vbank4:

=> qixis_reset altbank

With the Yocto Project PBL_0x2a_0x41_1867_533_1867.bin, the NVMe SSD is not recognised. In the U-Boot output I see something like:

PCIe1: disabled
PCIe2: disabled
PCIe3: Root Complex no link, regs @ 0x3600000
PCIe4: Root Complex no link, regs @ 0x3700000

Following ufedor's suggestion, I opened the PBL file in QCVS. I created a new QorIQ configuration project for the LS2085A and selected the following checkbox options in the SerDes blocks:

- enable SATA -> 1.5
- enable XFI -> 10.3125
- enable SGMII -> 1.25
- enable PCIe -> Gen1 2.5

I then selected:

- 0x2A for SerDes1: XFI1-8 = 10.3125, PLL = 11111111
- 0x41 for SerDes2: PCIe3 (2.5), 4 lanes; PCIe4 (2.5), 2 lanes; SATA1 (1.5), 1 lane; SATA2 (1.5), 1 lane

So nominally, the protocols are the same as those specified in the Yocto Project PBL file. I created a PBL file with this configuration and uploaded it to the LS2085A as above. When I boot into vbank4, the U-Boot output now looks like this:

PCIe1: disabled
PCIe2: disabled
PCIe3: Root Complex x4 gen2, regs @ 0x3600000
PCI:
         01:00.0        - 8086:0953 - Mass storage controller
PCIe3: Bus 00 - 01
PCIe4: Root Complex x2 gen2, regs @ 0x3700000
PCI:
         03:00.0        - 144d:a802 - Mass storage controller
PCIe4: Bus 02 - 03

(Note that this time I installed two NVMe disks.) Both NVMe disks are recognised with the correct PCI vendor and device IDs.
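As a sanity check, the vendor:device pairs can be pulled out of the U-Boot output with a small throwaway script (a sketch; the line format is assumed from the output above):

```python
import re

# Matches U-Boot PCI enumeration lines such as:
#   01:00.0        - 8086:0953 - Mass storage controller
LINE = re.compile(r"(\d{2}:\d{2}\.\d)\s+-\s+([0-9a-f]{4}):([0-9a-f]{4})\s+-\s+(.*)")

def parse_pci_lines(text):
    """Return (bdf, vendor, device, class name) tuples found in U-Boot output."""
    return [m.groups() for m in map(LINE.search, text.splitlines()) if m]

output = """
PCIe3: Root Complex x4 gen2, regs @ 0x3600000
         01:00.0        - 8086:0953 - Mass storage controller
PCIe4: Root Complex x2 gen2, regs @ 0x3700000
         03:00.0        - 144d:a802 - Mass storage controller
"""

for bdf, vendor, device, name in parse_pci_lines(output):
    # 8086 and 144d are the Intel and Samsung PCI vendor IDs respectively,
    # so both NVMe cards enumerate as expected.
    print(bdf, vendor, device, name)
```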

The PBL file from Yocto and the one from QCVS differed in the SerDes PLL and Protocol Configuration (bits 911-896) and in the settings for the Layerscape Chassis Expansion Area (bits 1023-912); see the QCVS Component Inspector's PBL Properties tab for the PBL file.

In the Yocto project PBL, PLL2 was not powered down (SRDS_PLL_PD_PLL2 = 0b0) and SRDS_DIV_PEX_S2 was set to 0b00 (train up to a max rate of 8G). With these set to 0b1 and to 0b10 or 0b01 respectively, the NVMe drive is recognised.
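To make the divider field concrete, here is a tiny decoder for SRDS_DIV_PEX; the value-to-rate mapping is my reading of the usual QorIQ RCW convention (0b00 = up to 8 GT/s, 0b01 = up to 5 GT/s, 0b10 = up to 2.5 GT/s) and should be verified against the LS2085A reference manual:

```python
# Hypothetical decoder for the 2-bit SRDS_DIV_PEX field. The mapping is an
# assumption based on common QorIQ RCW encodings -- check the reference manual.
PEX_MAX_RATE = {
    0b00: "up to 8 GT/s (Gen3)",
    0b01: "up to 5 GT/s (Gen2)",
    0b10: "up to 2.5 GT/s (Gen1)",
}

def decode_div_pex(field):
    return PEX_MAX_RATE.get(field, "reserved")

# The Yocto PBL had 0b00; the working QCVS PBL used 0b01 or 0b10.
print(decode_div_pex(0b00))
print(decode_div_pex(0b01))
```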

5 Replies
ufedor
NXP TechSupport

The issue could be a complicated one. In such cases it is recommended to create a technical case; see "How could I create a Service Request?".

When creating the case, please provide additional information: company name, project phase, and application type.

ufedor
NXP TechSupport

Please provide the U-Boot boot log when the PCIe card is inserted.

thomasculverhou
Contributor III

This is the boot log using the U-Boot present on the board when it arrived (vbank0); the result is the same when I boot using U-Boot compiled from the Yocto project and loaded into vbank4 NOR flash:

U-Boot 2015.01Layerscape2-SDK+g16c10aa (May 14 2015 - 15:17:11)
SoC:  LS2085E (0x87010010)
Clock Configuration:
       CPU0(A57):1800 MHz  CPU1(A57):1800 MHz  CPU2(A57):1800 MHz
       CPU3(A57):1800 MHz  CPU4(A57):1800 MHz  CPU5(A57):1800 MHz
       CPU6(A57):1800 MHz  CPU7(A57):1800 MHz
       Bus:      600  MHz  DDR:      1866.667 MT/s     DP-DDR:   1600 MT/s
Reset Configuration Word (RCW):
       00: 48303830 48480048 00000000 00000000
       10: 00000000 00200000 00200000 00000000
       20: 00c12980 00002580 00000000 00000000
       30: 00000e0b 00000000 00000000 00000000
       40: 00000000 00000000 00000000 00000000
       50: 00000000 00000000 00000000 00000000
       60: 00000000 00000000 00027000 00000000
       70: 412a0000 00000000 00000000 00000000
Board: LS2085E-RDB, Board Arch: V1, Board version: D, boot from vBank: 0
FPGA: v1.18
SERDES1 Reference : Clock1 = 156.25MHz Clock2 = 156.25MHz
SERDES2 Reference : Clock1 = 100MHz Clock2 = 100MHz
I2C:   ready
DRAM:  Initializing DDR....using SPD
Detected UDIMM 18ASF1G72AZ-2G1A1
Detected UDIMM 18ASF1G72AZ-2G1A1
DP-DDR:  Detected UDIMM 18ASF1G72AZ-2G1A1
19.5 GiB
DDR    15.5 GiB (DDR4, 64-bit, CL=13, ECC on)
       DDR Controller Interleaving Mode: 256B
       DDR Chip-Select Interleaving Mode: CS0+CS1
DP-DDR 4 GiB (DDR4, 32-bit, CL=11, ECC on)
       DDR Chip-Select Interleaving Mode: CS0+CS1
Waking secondary cores to start from fff1b000
All (8) cores are up.
Using SERDES1 Protocol: 42 (0x2a)
Using SERDES2 Protocol: 65 (0x41)
Flash: 128 MiB
NAND:  2048 MiB
MMC:   FSL_SDHC: 0
EEPROM: Invalid ID (ff ff ff ff)
PCIe1: disabled
PCIe2: disabled
PCIe3: Root Complex no link, regs @ 0x3600000
PCIe4: Root Complex no link, regs @ 0x3700000
In:    serial
Out:   serial
Err:   serial
Error! Not a FIT image
SATA link 0 timeout.
AHCI 0001.0301 32 slots 1 ports 6 Gbps 0x1 impl SATA mode
flags: 64bit ncq pm clo only pmp fbss pio slum part ccc apst
Found 0 device(s).
SCSI:  Net:   crc32+
fsl-mc: Booting Management Complex ... SUCCESS
fsl-mc: Management Complex booted (version: 7.0.3, boot status: 0x1)
fsl-mc: Deploying data path layout ... SUCCESS
DPNI10, DPNI8
Error: DPNI8 address not set.
, DPNI1, DPNI2
Error: DPNI2 address not set.
, DPNI3
Error: DPNI3 address not set.
, DPNI4
Error: DPNI4 address not set.
, DPNI7, DPNI9
Error: DPNI9 address not set.
Hit any key to stop autoboot:  0
=>
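Incidentally, the SerDes protocol selectors can be read straight out of the RCW dump above: the word printed at offset 0x70 is 0x412a0000, and its top two bytes match the "Using SERDES... Protocol" lines. A quick sketch (byte positions inferred by matching this dump against those lines, not taken from the reference manual):

```python
# Word printed at RCW dump offset 0x70 in the U-Boot log above.
rcw_word = 0x412a0000

# Inferred from this dump: SerDes2 selector in bits 31:24, SerDes1 in bits 23:16.
serdes2 = (rcw_word >> 24) & 0xFF
serdes1 = (rcw_word >> 16) & 0xFF

print(f"SERDES1 Protocol: {serdes1} (0x{serdes1:x})")  # SERDES1 Protocol: 42 (0x2a)
print(f"SERDES2 Protocol: {serdes2} (0x{serdes2:x})")  # SERDES2 Protocol: 65 (0x41)
```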

If I instead insert, e.g., a network card into the same PCIe slot, the PCIe section of the U-Boot output shows:

PCIe1: disabled
PCIe2: disabled
PCIe3: Root Complex x1 gen1, regs @ 0x3600000
     01:00.0    - 8086:10d3 - Network controller
PCIe3: Bus 00 - 01
PCIe4: Root Complex no link, regs @ 0x3700000

thomasculverhou
Contributor III

As a side note, in the U-Boot source code (tmp/work/ls2080ardb-fsl-linux/u-boot-ls2/2015.10+fslgit-r0/git/include/pci_ids.h) I notice that NVMe storage is not represented at all.

The class ID is defined in the kernel source code tmp/work-shared/ls2080ardb/kernel-source/drivers/block/nvme-core.c:

#define PCI_CLASS_STORAGE_EXPRESS 0x010802

This is not present in the U-Boot source code - perhaps this explains why the device is not recognised in U-Boot?
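For what it's worth, that class code decomposes into the standard PCI triple of base class, subclass, and programming interface:

```python
# Decompose the PCI class code used by the kernel's NVMe driver.
PCI_CLASS_STORAGE_EXPRESS = 0x010802

base_class = PCI_CLASS_STORAGE_EXPRESS >> 16          # 0x01: mass storage controller
sub_class  = (PCI_CLASS_STORAGE_EXPRESS >> 8) & 0xFF  # 0x08: non-volatile memory
prog_if    = PCI_CLASS_STORAGE_EXPRESS & 0xFF         # 0x02: NVMe interface

print(hex(base_class), hex(sub_class), hex(prog_if))  # 0x1 0x8 0x2
```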
