
i.MX6Q - PCI Express Endpoint

Question asked by DeWayne Gibson on Feb 22, 2016
Latest reply on Feb 29, 2016 by Yuri Muhin

Background:

The customer is using the i.MX6Q as a PCI Express Endpoint - NOT a Root Complex.

The operating system is µC/OS-III (not Linux), incorporating code adapted from the i.MX6 SABRE Platform SDK.

 

5 Questions:

  • Q1: The PCIe spec provides a 1-second period after reset for an endpoint to perform self-initialization.  It takes nearly that long for the customer's system to copy and validate the boot image from SPI flash memory, so the endpoint is currently unable to be ready within that timeframe.  Does NXP have any recommendations on how to bring up the PCIe interface in time?  Note that the customer requires support for High-Assurance Boot (HAB).

 

  • Q2: PCIe defines additional methods of resetting an endpoint besides the slot reset.  Specifically, the in-band Hot Reset and Link Disable should cause the endpoint to reset its state, including its Configuration Space contents.  Assuming the i.MX6 PCIe core does reset in these cases, how can software running on the i.MX6 know that this has happened?  There does not appear to be any way for the PCIe core to interrupt the ARM cores.  How can the customer recover from a hot reset or a link down/up event?
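One way to detect a reset that happened behind the CPU's back, absent an interrupt, is a "sentinel" scheme: after endpoint initialization, write a magic value into some register that the PCIe core clears back to its reset default, then poll it from a timer task.  The sketch below illustrates the idea; the choice of register and the polling cadence are assumptions, not taken from the i.MX6 reference manual.

```c
#include <stdint.h>
#include <stdbool.h>

/* Any value that differs from the core's reset default for the chosen
 * register.  The register itself is an assumption: it must be one the
 * PCIe core resets on Hot Reset / Link Disable. */
#define PCIE_SENTINEL 0xCAFEBABEu

/* Call once after the endpoint's PCIe initialization completes. */
static inline void plant_sentinel(volatile uint32_t *reg)
{
    *reg = PCIE_SENTINEL;
}

/* Poll periodically (e.g. from a uC/OS-III timer task).  If the sentinel
 * has vanished, the core was reset and the endpoint must re-run its PCIe
 * initialization sequence. */
static inline bool core_was_reset(volatile uint32_t *reg)
{
    return *reg != PCIE_SENTINEL;
}
```

In a real system the polling task would also re-check link state before re-initializing, since a link down/up without a core reset needs different recovery.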

 

  • Q3: It is common for an endpoint’s programming model to provide one or more “doorbell” or “mailbox” registers that the host system writes to trigger an I/O operation.  The customer uses BAR0/1 to map host-initiated PCIe transactions to memory in the i.MX6, but is forced to poll work queues because, again, there does not appear to be any way for the PCIe core to interrupt the ARM cores on a write from the host (the root complex; the i.MX6 is the endpoint).  Any suggestions?
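For reference, a minimal polled doorbell along the lines described might look like the sketch below: the host writes a nonzero command code into a BAR-mapped word, and an endpoint task polls and acknowledges it.  The structure layout and names are illustrative assumptions, not part of any NXP API.

```c
#include <stdint.h>
#include <stdbool.h>

/* Lives in i.MX6 memory that BAR0/1 exposes to the host.  The host
 * (root complex) writes a nonzero command into `doorbell`; the endpoint
 * polls it because the PCIe core cannot interrupt the ARM cores on
 * inbound writes. */
typedef struct {
    volatile uint32_t doorbell;   /* host writes a command code here */
    volatile uint32_t status;     /* endpoint posts completion status */
} mailbox_t;

/* Returns true and stores the command if the host rang the doorbell;
 * clearing the word re-arms it for the next command. */
static bool poll_doorbell(mailbox_t *mb, uint32_t *cmd_out)
{
    uint32_t cmd = mb->doorbell;
    if (cmd == 0)
        return false;
    mb->doorbell = 0;
    *cmd_out = cmd;
    return true;
}
```

The polling task's period then sets the worst-case command latency, which is the trade-off the question is trying to avoid.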

 

  • Q4: The i.MX6Q PCIe EP/RC Validation System page in the i.MX Community describes performance improvements when “cache is enabled” but does not describe how to do that.  Can you elaborate?  Is there any way to use the IPU, or some other method, to improve PCIe performance?  The customer would like to reduce the overhead caused by transferring only 8 to 32 bytes at a time.
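Independent of the cache question, one way to amortize per-transaction overhead is to accumulate the small 8-32 byte records in a local staging buffer and push them across the link as one large burst (via memcpy or DMA).  The sketch below shows that batching pattern only; the buffer size and API are assumptions, not NXP code.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>
#include <stddef.h>

#define STAGE_SIZE 4096u   /* assumed burst size; tune to the system */

/* Staging buffer for coalescing small records into one PCIe burst. */
typedef struct {
    uint8_t buf[STAGE_SIZE];
    size_t  used;
} stager_t;

/* Queue a small record.  Returns false when the buffer is full, in which
 * case the caller should flush (stage_take) and retry. */
static bool stage_write(stager_t *s, const void *data, size_t len)
{
    if (len > STAGE_SIZE - s->used)
        return false;
    memcpy(s->buf + s->used, data, len);
    s->used += len;
    return true;
}

/* Hand the accumulated burst to the transfer engine (memcpy/DMA) and
 * reset the buffer; returns the burst length in bytes. */
static size_t stage_take(stager_t *s)
{
    size_t n = s->used;
    s->used = 0;
    return n;
}
```

With 16-byte records, this turns 256 small link transactions into a single 4 KiB transfer per flush, at the cost of added latency for the records waiting in the buffer.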

 

  • Q5: The customer assumes the PCIe iATU address space (0x01000000 to 0x01ffbfff) should be mapped as Device memory in the ARM MMU.  Is this correct?
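For concreteness, mapping a 1 MiB section of that window as Shareable Device memory in the ARMv7-A short-descriptor format means TEX=000, C=0, B=1.  The bit layout below is from the ARMv7-A architecture; whether Device memory is in fact the right attribute for the iATU window is exactly what the question asks NXP to confirm.

```c
#include <stdint.h>

/* ARMv7-A short-descriptor first-level "section" entry fields. */
#define SEC_TYPE   (0x2u)        /* bits[1:0] = 0b10: section entry       */
#define SEC_B      (1u << 2)     /* B bit                                 */
#define SEC_C      (1u << 3)     /* C bit (left clear for Device memory)  */
#define SEC_AP_RW  (0x3u << 10)  /* AP[1:0] = 0b11, AP[2] = 0: full R/W   */

/* Build a section descriptor mapping 1 MiB at phys_base as Shareable
 * Device memory (TEX=000, C=0, B=1), domain 0, full access. */
static inline uint32_t device_section(uint32_t phys_base)
{
    return (phys_base & 0xFFF00000u) | SEC_AP_RW | SEC_B | SEC_TYPE;
}
```

The iATU window would then need 16 such entries (0x01000000 through 0x01F00000) to cover the full range, with the translation table walked by the MMU after a TLB invalidate.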
