Hi,
I use an i.MX8MM to communicate with a Zynq 7035 over PCIe.
The Zynq 7035 board has been tested with a PC (x86) over PCIe, and it runs stably 24/7.
But with the i.MX8MM it only runs for about 20 hours; after that the communication crashes.
I have tried many ways to restore the communication; the easiest is:
Step 1. Remove the PCIe endpoint and the bridge:
# remove the PCIe endpoint (the Zynq 7035)
echo 1 > /sys/bus/pci/devices/0000\:01\:00.0/remove
# remove the PCIe bridge (root port)
echo 1 > /sys/bus/pci/devices/0000\:00\:00.0/remove
Step 2. Rescan the PCIe bus:
echo 1 > /sys/bus/pci/rescan
After that the communication works again. (It may crash again some hours later...)
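For anyone hitting the same issue, the two recovery steps above can be wrapped in a small helper script. This is only a sketch of the manual procedure in this thread; the BDF addresses 0000:01:00.0 (Zynq endpoint) and 0000:00:00.0 (i.MX8MM root port) are taken from the logs here and may differ on other boards:

```shell
#!/bin/sh
# Hypothetical recovery helper based on the remove/rescan steps above.

# Build the sysfs "remove" path for a PCI device given its BDF address.
remove_path() {
    printf '/sys/bus/pci/devices/%s/remove' "$1"
}

recover_pcie() {
    # Step 1: remove the endpoint first, then the bridge (root port).
    echo 1 > "$(remove_path 0000:01:00.0)"
    echo 1 > "$(remove_path 0000:00:00.0)"
    # Step 2: rescan the whole PCI bus so both devices are re-enumerated.
    echo 1 > /sys/bus/pci/rescan
}

# usage (as root): recover_pcie
```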
The kernel version is 5.4.160, following the 5.4-2.1.x-imx tag from the Freescale GitHub repository.
I have no idea how to fix the problem; it looks like a kernel bug.
Can someone help me?
Best regards.
When I run echo 1 > rescan, the kernel reports a warning:
[183402.475074] .........
[183402.479225] pci 0000:00:00.0: PME# supported from D0 D1 D3hot D3cold
[183402.487424] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
[183402.495878] ............
This warning message is not reported during kernel boot.
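Before resorting to the remove/rescan recovery, it can help to probe whether the endpoint is still enumerated and reporting a negotiated link. A minimal sketch, assuming the device path from the logs in this thread and the standard sysfs link attributes (current_link_speed, current_link_width):

```shell
#!/bin/sh
# Probe the negotiated PCIe link state of the Zynq endpoint via sysfs.
# The BDF 0000:01:00.0 is an assumption taken from the logs above.

DEV=/sys/bus/pci/devices/0000:01:00.0

link_ok() {
    # Returns 0 if the device is still enumerated and reports a link speed.
    [ -e "$1/current_link_speed" ] && grep -q 'GT/s' "$1/current_link_speed"
}

if link_ok "$DEV"; then
    echo "link up: $(cat "$DEV/current_link_speed") x$(cat "$DEV/current_link_width")"
else
    echo "endpoint missing or link down - trigger remove/rescan recovery"
fi
```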
Hi
It may be recommended to try the NXP Linux releases from the source.codeaurora.org/external/imx/linux-imx repository:
https://source.codeaurora.org/external/imx/linux-imx/tree/?h=lf-5.10.y
It is different from "https://github.com/Freescale/linux-fslc", as described for example on:
From a hardware perspective, the issue may be caused by poor soldering (one can try resoldering the chip)
or by instability of the PCIe clock generator; one can check Table 9, "PCIe recommendations", in the
i.MX 8M Mini Hardware Developer's Guide.
Best regards
igor
Hi igor,
The kernel version I use is 5.4.x.
I have compared the 5.4-2.1.x-imx branch on GitHub with the linux-5.4.y branch at git.kernel.org/linux-stable, and they are the same.
So, since the i.MX branch matches upstream stable, I think it is a hardware issue.
Sometimes, after a PCI rescan, the i.MX8MM PCIe bridge link is limited to 2.5 GT/s when it should be 5 GT/s:
root@imx8mm:~/module/xdma# echo 1 > /sys/bus/pci/rescan
[190079.798131] ......
[190079.804983] pci 0000:01:00.0: PME# supported from D0 D1 D2 D3hot
[190079.811197] pci 0000:01:00.0: 2.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x1 link at 0000:00:00.0 (capable of 4.000 Gb/s with 5 GT/s x1 link)
[190079.826921] pci 0000:00:00.0: BAR 0: assigned [mem 0x18000000-0x180fffff]
[190079.833857] ......
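When the link comes back downgraded to 2.5 GT/s like in the log above, one thing worth trying before a full remove/rescan is asking the root port to retrain the link. A sketch using setpci from pciutils, assuming the root port BDF 00:00.0 from this thread: the Link Control register sits at offset 0x10 in the PCI Express capability, and bit 5 is "Retrain Link" per the PCIe specification. This is not something from the thread itself, just a standard technique offered as a suggestion:

```shell
#!/bin/sh
# Sketch: trigger a PCIe link retrain on the root port after a speed
# downgrade. Requires pciutils (setpci) and root privileges.

# OR the "Retrain Link" bit (bit 5) into a 16-bit hex register value.
set_retrain_bit() {
    printf '%04x' $(( 0x$1 | 0x20 ))
}

retrain_link() {
    bdf=$1                                   # e.g. 00:00.0, from the logs above
    # Read Link Control (PCIe capability offset 0x10), set Retrain Link,
    # and write it back.
    val=$(setpci -s "$bdf" CAP_EXP+0x10.w)
    setpci -s "$bdf" CAP_EXP+0x10.w="$(set_retrain_bit "$val")"
    # Give the link a moment to renegotiate, then show the new speed.
    sleep 1
    cat "/sys/bus/pci/devices/0000:$bdf/current_link_speed"
}

# usage (as root): retrain_link 00:00.0
```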