imx6 solo linux kernel memory map and PCIe device?


imx6 solo linux kernel memory map and PCIe device?

4,533 Views
jpa
Contributor IV

I'm trying to bring up a Linux (Yocto) PCIe device driver that I've used successfully under the QorIQ SDK, now porting it to the imx6.

The driver fails when calling pci_request_regions. 

I'm thinking this may be because:

Virtual kernel memory layout:

    vector  : 0xffff0000 - 0xffff1000   (   4 kB)

    fixmap  : 0xfff00000 - 0xfffe0000   ( 896 kB)

and lspci for my PCIe device says:

Region 0: Memory at fffc0000 (32-bit, non-prefetchable) [disabled] [size=256K]
Region 1: Memory at fffc0000 (32-bit, non-prefetchable) [disabled] [size=256K]

Does it make sense that the vector/fixmap memory space and the memory mapped region for the PCIe device are conflicting?  Or am I confusing memory spaces?

If it does, is there a (relatively) easy way to move the vector space to another address location for the imx6 in the context of Yocto and u-boot?

Or must I redesign the PCIe device?

John

15 Replies

jpa
Contributor IV

For what it's worth, this https://community.freescale.com/thread/328248

seems like a very similar, if not the same, problem.   There's some kind of issue with pci_request_regions and Lattice ECP FPGAs. 

John

fabio_estevam
NXP Employee

Hi John,

Could you also try this with a 4.1 kernel?

If the problem still happens with this version, then I would suggest you post this to linux-pci@vger.kernel.org so that someone could potentially help. Please also Cc the mx6 PCI driver maintainers, as shown by

./scripts/get_maintainer.pl -f drivers/pci/host/pci-imx6.c

Regards,

Fabio Estevam

jpa
Contributor IV

Fabio,

I'm not sure I understand what you mean by '4.1 kernel'.   I'm using 3.14.28, which I believe is the latest released.  Are you referring to "L3.0.101_4.1.1" ?  Isn't that a significant step backwards, particularly for pcie fixes?

The problem seems to be limited to the imx6, as the exact same FPGA firmware works with the QorIQ P1010, acknowledging that they're running different kernels.

John

fabio_estevam
NXP Employee

I meant kernel 4.1 from kernel.org:

https://www.kernel.org/

It would be interesting to see if such problem happens with this version as well and get the feedback from the PCI maintainers.

Regards,

Fabio Estevam

jpa
Contributor IV

Fabio,

I think I'd need a lot of help to get from a manufacturer's custom BSP layered on top of Freescale's Yocto release to a full-custom kernel.  But I'm up for it if you are. ;-)

John

fabio_estevam
NXP Employee

Running 4.1 is very simple:

After doing the git clone

make imx_v6_v7_defconfig

make

make <your.dtb>

Then copy zImage and dtb to your SD card.

Regards,

Fabio Estevam

jpa
Contributor IV

We seem to have found some success by enabling PCIe in u-boot (2014.04).  Now when I halt the boot at the u-boot prompt and type "pci handle 01:00.00", the result shows that the BAR addresses have been re-mapped to new, non-duplicate locations.  If I continue with the kernel boot, my driver no longer fails when reserving the new addresses.

My guess is that u-boot is doing something to initialize the PCIe properly that the kernel boot is not, but this is beyond my expertise. 

I was not successful building 4.1.  My guess is that your instructions assume a native build, not a cross-compile.  I know the Yocto documentation talks about the possibility of a complete custom kernel build, and if I get time, I may explore that further. 

fabio_estevam
NXP Employee

Prior to building I export the cross compile:

export ARCH=arm

export CROSS_COMPILE=/usr/bin/arm-linux-gnueabi-

Adjust your CROSS_COMPILE as needed to match your toolchain.

Interesting finding that U-Boot makes things work.

It would still be interesting to know if 4.1 works fine without PCI being enabled in U-boot or not.

Then we can try to find the initialization that is missing from 3.14.

jpa
Contributor IV

Fabio,

Enabling PCIe in u-boot allowed our driver to load, but we had intermittent errors where the kernel wouldn't start.  I believe you may have written the comment in u-boot indicating that we have to pick either u-boot or the kernel for configuring PCIe, but not both, because of a PCIe reset/enumeration problem with the imx6.

So I'm trying your suggestion to use kernel 4.1.2.  I used the DTB from the board manufacturer.  The kernel panics with "buggy DT: spidev listed directly in DT" and also can't find the rootfs.  I think it's looking for it on mmc1 while the new kernel puts it at mmc0, and I think I know how to fix that second problem with a change to u-boot.

John

jpa
Contributor IV

Fabio,

{Edit: Sorry, our messages crossed.  I can try re-enabling pcie in u-boot and kernel with 4.1.2 kernel and see if that has the intermittent kernel hang}

4.1.2 doesn't help, the device's registers still don't get remapped into the proper I/O space.

dmesg is attached, note that

pci 0000:01:00.0: reg 0x10: [mem 0xfffc0000-0xffffffff]

pci 0000:01:00.0: reg 0x14: [mem 0xfffc0000-0xffffffff]

never gets remapped, and its BARs are not assigned.

John

fabio_estevam
NXP Employee

Hi John,

Please report this problem to the linux-pci list with the PCI maintainers on Cc (see my earlier post where I sent the details).

jpa
Contributor IV

As reported before, with PCIe enabled in both u-boot and kernel, we experienced intermittent hangs at "Starting Kernel".  I'd guess that it was 1 out of 10 times, but it was very unpredictable.  Kernel is 3.14.28, u-boot is 2014.04.   Our board vendor replicated this behavior with other carrier boards and PCIe devices. 

If we only enabled PCIe in the kernel (and not in u-boot), our device was not being assigned resources, so the base address registers remained at the same (invalid) locations that were specified in the device design.

I determined that the resources weren't being allocated to our device because our class was set to 0: at some point the kernel ignores a device and does not assign resources if its class is unknown.  Assigning a non-zero class to the device caused the kernel to assign resources and remap the BARs, and everything worked.  With PCIe enabled in the kernel only, we never saw the intermittent hang.

I'm guessing the behavior of ignoring devices with class=0 is new, which would explain why we didn't encounter the same problem with other processor builds: they were using older kernels.  I did not go back to the kernel source of the other builds to confirm this.

This is now closed as far as I'm concerned.

John

fabio_estevam
NXP Employee

John,

With mainline U-Boot + mainline kernel it is possible to have PCIe enabled in both U-Boot and the kernel.

About your SPI error, it comes from:

commit 956b200a846e324322f6211034c734c65a38e550
Author: Mark Brown <broonie@kernel.org>
Date:   Fri Mar 27 16:36:04 2015 -0700

    spi: spidev: Warn loudly if instantiated from DT as "spidev"

    Since spidev is a detail of how Linux controls a device rather than a
    description of the hardware in the system we should never have a node
    described as "spidev" in DT, any SPI device could be a spidev so this
    is just not a useful description.

    In order to help prevent users from writing such device trees generate a
    warning if spidev is instantiated as a DT node without an ID in the match
    table.

    Signed-off-by: Mark Brown <broonie@kernel.org>

fabio_estevam
NXP Employee

Why don't you simply use the mx6 pci driver instead?

It is located at drivers/pci/host/pci-imx6.c

Regards,

Fabio Estevam

jpa
Contributor IV

Fabio,

The driver in question isn't the driver for the imx6 PCIe root complex, but rather a driver for a device that I'm connecting to the PCIe bus, with the imx6 as master.  We designed the device (well, sort of: it's an FPGA, so our driver is heavily based on sample code from the manufacturer), so I don't think your answer applies.

As I understand it, the FPGA registers get memory mapped into the imx6 memory space.

John
