LX2162A PCIe 64-bit memory BAR space limitation


471 Views
ggdavisiv
Contributor I

Greetings,

 

I have an application using the LX2162A where pcie3 is configured as an RC connected to a switch with several endpoints that together require a few hundred GiB of 64-bit prefetchable memory. I understand that the System Memory Map table in the QorIQ LX2162A Reference Manual, Rev. 1, 12/2021, shows a 32GiB memory region available for mapping pcie3 resources. Unfortunately, this is not sufficient for my application. However, the table comment states that the "High-speed I/O (PCI Express)" region extends over the (0x0080_0000_0000-0x00FF_FFFF_FFFF) address range, which implies that there is potentially a 512GiB region available for mapping PCIe resources. With that in mind, I observe the following during pcie3 initialization:

layerscape-pcie 3600000.pcie: Detected iATU regions: 256 outbound, 24 inbound

Since each iATU region is capable of mapping up to 4GiB, this suggests that the pcie3 RC controller could map up to 1TiB of PCIe BAR space, as long as multiple iATU regions are programmed to cover mappings larger than 4GiB.
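
(To illustrate the idea, here is a rough, hypothetical sketch of how a window larger than 4GiB could be split across consecutive iATU regions; program_iatu_outbound() is a placeholder for the controller-specific register writes, not an actual driver API:)

#include <stdint.h>
#include <stdio.h>

#define IATU_REGION_SIZE (4ULL * 1024 * 1024 * 1024) /* assumed 4GiB max per iATU region */

/*
 * Hypothetical stand-in for the register writes that configure one
 * outbound iATU region (region index, CPU base, PCI base, size).
 */
static void program_iatu_outbound(int index, uint64_t cpu_base,
                                  uint64_t pci_base, uint64_t size)
{
    printf("iATU region %3d: CPU 0x%012llx -> PCI 0x%012llx, size 0x%llx\n",
           index, (unsigned long long)cpu_base,
           (unsigned long long)pci_base, (unsigned long long)size);
}

/* Split one large MEM window into consecutive 4GiB-or-smaller iATU regions. */
static int map_large_window(uint64_t cpu_base, uint64_t pci_base, uint64_t size)
{
    int index = 0;

    while (size > 0) {
        uint64_t chunk = size < IATU_REGION_SIZE ? size : IATU_REGION_SIZE;

        program_iatu_outbound(index++, cpu_base, pci_base, chunk);
        cpu_base += chunk;
        pci_base += chunk;
        size -= chunk;
    }
    return index; /* number of iATU regions consumed */
}

int main(void)
{
    /* A 16GiB prefetchable window would consume four of the 256 regions. */
    int used = map_large_window(0x9400000000ULL, 0x9400000000ULL,
                                16ULL * 1024 * 1024 * 1024);
    printf("regions used: %d\n", used);
    return 0;
}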

With that in mind, I tried to utilize portions of the "High-speed I/O (PCI Express)" region for mapping pcie3 64-bit memory resources beginning at address 0x00A0_0000_0000 and above; attempting to read or write that address space results in a hard lockup of the processor performing the access.

This brings me to my question: can the CCN-508 System Address Map (SAM) be programmed to open up more of the "High-speed I/O (PCI Express)" region, to support cases that require more than 32GiB of address space for mapping 64-bit BARs?

I would like to understand whether address steering or decoding on those 32GiB PCIe regions precludes this possibility, or whether the steering is done solely by the CCN-508 SAM, in which case it may be possible to change the existing defaults to enable much larger memory regions as needed, with address decoding then done by the PCIe RC outbound iATU regions. Is this possible, or are the address regions fixed with no possibility of expanding them further?

3 Replies

427 Views
yipingwang
NXP TechSupport

"With that in mind, when I tried to utilize portions of the "High-speed I/O (PCI Express)" region for mapping pcie3 64-bit memory resources beginning at address 0x00A0_0000_0000 and above, attempting to read/write the address space results in hard lockup for the processor performing the read/write. "

How are you reading/writing to this address space?


337 Views
ggdavisiv
Contributor I

I'm using devmem2 to perform test accesses of 64-bit memory BARs.
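
(For context, devmem2 essentially mmap()s /dev/mem and dereferences the requested offset; a minimal 64-bit-read equivalent, assuming the kernel permits /dev/mem access to the region, looks roughly like this:)

#include <fcntl.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <physical-address>\n", argv[0]);
        return 1;
    }

    uint64_t phys = strtoull(argv[1], NULL, 0);
    long page = sysconf(_SC_PAGESIZE);
    uint64_t base = phys & ~((uint64_t)page - 1); /* page-align for mmap */

    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    void *map = mmap(NULL, page, PROT_READ | PROT_WRITE, MAP_SHARED,
                     fd, (off_t)base);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    /* volatile so the compiler actually issues the bus access */
    volatile uint64_t *p = (volatile uint64_t *)((char *)map + (phys - base));
    printf("Read at 0x%" PRIx64 ": 0x%016" PRIx64 "\n", phys, *p);

    munmap(map, page);
    close(fd);
    return 0;
}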

Here is an example of mapping a 16GiB 64-bit memory BAR within the area reserved for PCIe3 in the LX2162ARM System Memory Map (I quirked all but one endpoint to zero out their 64-bit BARs so that at least one endpoint's 16GiB 64-bit BAR could be successfully mapped within the available 32GiB address space):

#dmesg|grep 08:00.0|grep assigned
[    3.621071] pci 0000:08:00.0: BAR 0: assigned [mem 0x9400000000-0x97ffffffff 64bit pref]
[    3.629192] pci 0000:08:00.0: BAR 2: assigned [mem 0x9011000000-0x9011ffffff]
[    3.636330] pci 0000:08:00.0: BAR 3: assigned [mem 0x9012000000-0x9012ffffff]
#devmem2 0x9400000000 l
/dev/mem opened.
Memory mapped at address 0xffffa9a2d000.
Read at address  0x9400000000 (0xffffa9a2d000): 0x0000000000000000
#devmem2 0x9500000000 l
/dev/mem opened.
Memory mapped at address 0xffffaf13b000.
Read at address  0x9500000000 (0xffffaf13b000): 0xA1A5A5B781A5A4A1
#devmem2 0x9600000000 l
/dev/mem opened.
Memory mapped at address 0xffff9f07a000.
Read at address  0x9600000000 (0xffff9f07a000): 0xFEFFFFDFFFB7BFFF
#devmem2 0x9700000000 l
/dev/mem opened.
Memory mapped at address 0xffff9703c000.
Read at address  0x9700000000 (0xffff9703c000): 0x0000000000000000

The above example works for one device that requires 64-bit memory support, but there are other devices that must also be mapped, and the total address space required far exceeds the 32GiB System Memory Map area reserved for PCIe3, which is shared by all downstream I/O, memory, and config space mappings.

As I previously asked, is it possible to set up the LX2162A CCN-508 System Address Map (SAM) to support mapping PCIe 64-bit resources above and beyond the defaults stated in the LX2162ARM?

Experiments attempting to use addresses from 0xA0_00000000 and beyond to map much larger PCIe3 64-bit memory resources result in a hang, e.g.:

#dmesg|grep 08:00.0|grep assigned
[    3.626000] pci 0000:08:00.0: BAR 0: assigned [mem 0xa000000000-0xa3ffffffff 64bit pref]
[    3.634122] pci 0000:08:00.0: BAR 2: assigned [mem 0x9011000000-0x9011ffffff]
[    3.641260] pci 0000:08:00.0: BAR 3: assigned [mem 0x9012000000-0x9012ffffff]
#devmem2 0xa000000000 l
/dev/mem opened.
Memory mapped at address 0xffffbc61b000.
# Hang followed by watchdog reset occurs here.

Is it possible to customize the CCN-508 SAM settings after Power-On Reset to enable mapping PCIe3 address resources beyond the default 32GiB limit, or are the PCIe reserved address spaces fixed by other hardware address steering/decoding that cannot be changed?

 


261 Views
yipingwang
NXP TechSupport

"it infers that the pcie3 RC controller is capable of mapping up to 1TiB of PCIe BAR space as long as multiple iATU regions are programmed to enable access to >4GiB mappings"

As mentioned in the memory map in the RM, the PCIe3 RC controller is only capable of addressing 32GB. The 256 outbound windows that you see in the logs are not 4GB each. In the dmesg log that you shared earlier:

#dmesg|grep 08:00.0|grep assigned
[ 3.621071] pci 0000:08:00.0: BAR 0: assigned [mem 0x9400000000-0x97ffffffff 64bit pref]
[ 3.629192] pci 0000:08:00.0: BAR 2: assigned [mem 0x9011000000-0x9011ffffff]
[ 3.636330] pci 0000:08:00.0: BAR 3: assigned [mem 0x9012000000-0x9012ffffff]

you can see that BAR 0, BAR 2, and BAR 3 have different sizes (BAR 0 spans 0x94_0000_0000-0x97_FFFF_FFFF, i.e. 16GiB, while BAR 2 and BAR 3 are 16MiB each).

The reason you are getting a hard lockup is that you are trying to access the address space assigned to PCIe5 (0x00A0_0000_0000-0x00A7_FFFF_FFFF).
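
(One way to check which window an address falls in before touching it is to scan /proc/iomem for the containing range; a minimal sketch, run as root so the addresses are not masked:)

#include <stdio.h>
#include <stdlib.h>

/* Print every /proc/iomem entry whose range contains the given address. */
int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <physical-address>\n", argv[0]);
        return 1;
    }

    unsigned long long addr = strtoull(argv[1], NULL, 0);
    FILE *f = fopen("/proc/iomem", "r");
    if (!f) { perror("fopen /proc/iomem"); return 1; }

    char line[256];
    while (fgets(line, sizeof(line), f)) {
        unsigned long long start, end;
        if (sscanf(line, " %llx-%llx", &start, &end) == 2 &&
            addr >= start && addr <= end)
            fputs(line, stdout); /* matching resource line, with its label */
    }
    fclose(f);
    return 0;
}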

In most cases 32GB is a sufficient size for a PCIe address space; however, we would like to understand what application you have that demands more than 32GB of addressing from the PCIe3 RC.
