This discussion complements my original "Implementing PCIe support for bridges in iMX6Q WEC7 BSPs" post.
The iMX6 PCIe bus has only one slot for one device, but a PCI-to-PCI bridge allows the use of multiple endpoints.
The first step is to enable PCI bus traversal and enumeration, as outlined in the "WEC7/2013 - Traversing PCIe bus and enumerating devices on it" post.
Once the OAL allows access to PCI configuration spaces beyond a bridge, the PCI Bus Driver will be able to find, identify, and activate endpoints.
First of all, the PCI Bus Driver needs information to identify the PCI-to-PCI bridge connected to the Root Complex of the iMX6.
There is no need for separate PCI bridge drivers, as bridge handling is implemented within the PCI Bus Driver.
However, Registry information about the bridge is absolutely necessary.
The following information must be present in the Registry to allow the PCI Bus Driver to correctly identify a bridge:
;; These device identifiers must exist for proper discovery.
"VendorID"=multi_sz:"VID1" [,"VID2" [,...]]
"DeviceID"=multi_sz:"DID1" [,"DID2" [,...]]
More than one vendor/device pair may be specified; up to four pairs can be parsed and handled.
Because no driver DLL is necessary for transparent bridges, no DLL name is specified - just the VID/DID pairs and the device specifications.
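As an illustration, such a Registry entry might look as follows. The key path follows the usual WEC7 PCI template convention, and the specific values are only an example (the VID/DID pair here is a PLX PEX 8605 PCIe switch; Class 06/SubClass 04 is the standard class code for a PCI-to-PCI bridge):

```
; Hypothetical template entry for a transparent PCI-to-PCI bridge.
[HKEY_LOCAL_MACHINE\Drivers\BuiltIn\PCI\Template\PCIBridge]
   "Class"=dword:06
   "SubClass"=dword:04
   "VendorID"=multi_sz:"10B5"
   "DeviceID"=multi_sz:"8605"
; Note: no "Dll" value - transparent bridges are handled by the
; PCI Bus Driver itself.
```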
PCI Bus Driver should be able to:
- find the upstream port of the bridge and identify the resources required;
- find all downstream ports of the bridge;
- traverse the bus and find all endpoints behind the bridge;
- identify and enumerate resources required by endpoints;
- assign non-conflicting resources to endpoints;
- configure bridge ports for access to the resources of the endpoints.
The entire MMIO and I/O space is shared by all endpoints on the PCIe bus - unlike the PCI configuration space, which requires ATU re-programming for access.
There is, however, one major restriction: each endpoint gets an MMIO window no smaller than 1 MB.
The iMX6 has 14 MB of MMIO space allocated for the PCI controller; therefore, no more than 14 different endpoints requiring MMIO can be connected.
In addition, each downstream bridge port that has native MMIO space takes 1 MB of it for its own use, reducing the total available to endpoints.
Whether the bridge itself needs a driver to program it, or can operate natively in transparent mode, does not matter - the MMIO space will be taken either way.
The PCI Bus Driver verifies that the resources required by endpoints do not conflict with each other.
Typically there is no need to explicitly specify MMIO bases and ranges for endpoints in the Registry, as the PCI Bus Driver will do the assignment and populate the copy in the "Active" devices' branch.
Each upstream bridge port is assigned pass-through MMIO base and MMIO size (and I/O, too) for access to downstream ports and endpoints connected to these ports.
Each downstream bridge port is assigned pass-through MMIO base and MMIO size (and I/O, too) for access to the endpoint connected to it.
Non-connected downstream ports would not get any resources assigned.
The following PCI configuration elements are programmed by the PCI Bus Driver:
Primary Bus Number;
Secondary Bus Number;
Subordinate Bus Number;
I/O Base and I/O Limit (for either 16-bit or 32-bit access);
Memory Base and Memory Limit (for either 32-bit or 64-bit access).
Prefetchable memory may be programmed to the same values as I/O base and limit.
BARs may or may not be programmed, depending on whether the bridge needs native MMIO.
BARs and the Memory Window must not overlap, because BARs provide access to the bridge's native registers, while the Memory Window provides access to the resources beyond the bridge.
There is only one exception to this behavior - older PCI-to-PCI bridges may operate with "subtractive decode", meaning they claim any transaction that no positively decoding device on the bus claims. For such bridges, specifying MMIO and I/O ranges in the Registry may be a must.
The entire physical address range from 0x0100_0000 to 0x010F_FFFF will be "sliced" into I/O windows assigned to endpoints, aligned on 4 KB boundaries.
The entire physical address range from 0x0110_0000 to 0x01DF_FFFF will be "sliced" into MMIO windows assigned to endpoints, aligned on 1 MB boundaries.
Each endpoint with several MMIO BARs will have all of them accommodated within the same 1 MB window assigned to its downstream port.
I would expect that if an endpoint needs more than 1 MB of MMIO space, then the corresponding downstream port will get assigned 2, 3 or more megabytes so that the endpoint request would be satisfied. This also implies that if an endpoint needs more than a megabyte for MMIO, then the total number of usable endpoints will be reduced accordingly.
There will be gaps in the MMIO space of the PCI controller and attempts to access such space would result in memory exceptions. Robust device drivers should either verify the access range for their MMIO, or implement exception handling, or do both.
This is an example of the MMIO space assignment for a bridge with six downstream ports, four of which are populated:
0x0110_0000:0x011F_FFFF - MMIO window for downstream bridge port 1
BAR0 0x01100000 size 0x20000
BAR1 0x01120000 size 0x10000
0x0120_0000:0x012F_FFFF - MMIO window for downstream bridge port 2
BAR0 0x01200000 size 0x20000
downstream bridge port 3 - not populated, no resources assigned
0x0130_0000:0x013F_FFFF - MMIO window for downstream bridge port 4
BAR0 0x01300000 size 0x40000
BAR1 0x01340000 size 0x8000
0x0140_0000:0x014F_FFFF - MMIO window for downstream bridge port 5
BAR0 0x01400000 size 0x40000
BAR1 0x01440000 size 0x8000
downstream bridge port 6 - not populated, no resources assigned
In the example above, the devices at BDFs 7:0:0 and 8:0:0 are likely the same kind of device, requesting the same number and sizes of MMIO windows.
Senior Software Engineer
Adeneo Embedded US