iMX6Q PCIe stuck / hanged kernel or HW (SoC)

wooosaiiii
Contributor III

Hello all,

we are using PCIe for communication between two iMX6Q SoMs. One iMX6Q runs Linux (fslc 4.1.15) and the other runs bare-metal firmware based on the Freescale SDK. The Linux side acts as the root complex (RC) and the bare-metal side implements the endpoint (EP).

We have developed drivers for both sides and communication works as expected until the bare-metal EP side is reset/power-cycled...

After an EP reset, any access to BAR memory (for example a read() syscall on our device driver, which reads BAR memory) causes the RC / Linux side to hang. The kernel/SoC is completely stuck and the only option is a power cycle!

We can prevent the kernel / HW from hanging by performing this sequence whenever we want to reset the EP:

echo 1 > /sys/bus/pci/devices/0000\:01\:00.0/remove
sleep 5                                        
(reset/power cycle EP device)              
sleep 5                                    
echo 1 > /sys/class/pci_bus/0000\:01/rescan

However, we don't always know when the EP is going to be reset, since that information is not sent to the Linux side. We therefore need a better way of detecting this condition and preventing the kernel / HW from hanging.

We have also noticed that by reading the RC "Command and Status" register we can tell whether accessing BAR memory is safe:

*48.11.2 Command and Status Register (PCIE_RC_Command)*

Address: 1FF_C000h base + 4h offset = 1FF_C004h

Value after fresh reboot:

> [root@host ~]# ./memtool -32 0x01FFC004 1
> Reading 0x1 count starting at address 0x01FFC004
>
> 0x01FFC004:  00100547
>
> [root@host ~]#

Value after EP reset / power cycle:

> [root@host ~]# ./memtool -32 0x01FFC004 1
> Reading 0x1 count starting at address 0x01FFC004
>
> 0x01FFC004:  00100000
>
> [root@host ~]#

Value after running above commands:

> [root@host ~]# echo 1 > /sys/bus/pci/devices/0000\:01\:00.0/remove
> [root@host ~]# echo 1 > /sys/class/pci_bus/0000\:01/rescan
> [root@host ~]# ./memtool -32 0x01FFC004 1
> Reading 0x1 count starting at address 0x01FFC004
>
> 0x01FFC004:  00100006
>
> [root@host ~]#

Short explanation:

*If you read 0x01FFC004 and the value is 0x00100000, do not access the PCI EP device (the kernel will hang).*

When the value is 0x00100547 or 0x00100006, you can safely access the PCI device without a kernel hang.

Example output:

> [root@host ~]# ./memtool -32 0x01FFC004 1
> Reading 0x1 count starting at address 0x01FFC004
>
> 0x01FFC004:  00100006
> [root@host ~]# dd if=/dev/imx6ep bs=1 count=1
> 1+0 records in
> 1+0 records out
> 1 byte copied, 0.03237 s, 0.0 kB/s
> [root@host ~]#
>
>     (RESET EP HERE)
>
> [root@host ~]# ./memtool -32 0x01FFC004 1
> Reading 0x1 count starting at address 0x01FFC004
>
> 0x01FFC004:  00100000
>
> [root@host ~]# dd if=/dev/imx6ep bs=1 count=1
>
>       (STUCK KERNEL / HW HERE)

We therefore wrote a Linux driver function that checks this register, and we call it before every access to BAR memory:

/* Check the PCIe RC Command and Status register on iMX6Q */
static int imx6ep_pci_rc_status(struct pci_dev *pdev)
{
        u32 val;

        /* Read the RC Command and Status register (0x01FFC004, config offset 0x04) */
        if (pci_bus_read_config_dword(pdev->bus->parent, 0, 0x004, &val))
                return -EIO;

        /* dbg("RC status value: 0x%08X\n", val); */

        /* A value of 0x00100000 indicates the link is broken */
        if (val == 0x00100000)
                return -EIO;

        return 0;
}
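
For context, this is roughly how such a check would sit in a driver's read path. The struct layout, field names and transfer size below are purely illustrative, not our actual code:

#include <linux/fs.h>
#include <linux/io.h>
#include <linux/pci.h>
#include <linux/uaccess.h>

/* Illustrative per-device state: status_flag points into an ioremapped EP BAR */
struct imx6ep_priv {
        struct pci_dev *pdev;
        void __iomem *status_flag;
};

static ssize_t imx6ep_read(struct file *filp, char __user *buf,
                           size_t count, loff_t *ppos)
{
        struct imx6ep_priv *private = filp->private_data;
        u32 val;

        /* Refuse to touch BAR memory if the RC status register says the link is gone */
        if (imx6ep_pci_rc_status(private->pdev))
                return -EIO;

        val = ioread32(private->status_flag);

        if (count > sizeof(val))
                count = sizeof(val);
        if (copy_to_user(buf, &val, count))
                return -EFAULT;

        return count;
}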

But calling this function everywhere feels like an inefficient and dirty workaround.

Do you have any idea what actually causes the SoC / kernel to hang, and how to prevent it?

We also saw: https://community.nxp.com/thread/304284#316162

where Charles Powe had a similar problem with the iMX6Q hanging itself, but that thread is quite old and we would like to know whether anything has been done to resolve the matter.

Is this problem related to ERR005184 or ERR005723?

Thanks for any suggestions and solutions to our problem!

Primoz

11 Replies

wooosaiiii
Contributor III

Hello,

we did more testing on this issue. It looks like the CPU hangs only if the driver does a PCI write (iowrite32) while the LTSSM is in a state other than 0x11 (L0). We pass through other LTSSM states for a split second when we reset/power-cycle the EP device, and that is enough to sometimes leave us with stuck hardware!

This is how we can reproduce the bug:

1) force the LTSSM into the "disabled" state:

memtool -32 1FFC708=7198004  # force LTSSM into DISABLED (0x19)

2) do a PCI write while the LTSSM is disabled:

a) in kernel space (our EP driver):

iowrite32(0xFFFFFFFF, &private->status_flag);

b) or in user space:

[root@host ~]# lspci -vvv -s 01:00  | grep -i Region
        Region 0: Memory at 01100000 (32-bit, non-prefetchable) [size=32K]
        Region 2: Memory at 01108000 (32-bit, non-prefetchable) [size=32K]
        Region 3: Memory at 01110000 (32-bit, non-prefetchable) [size=256]
[root@host ~]# memtool -32 01110000=0xFFFFFFFF                                                                                                                           
Writing 32-bit value 0xFFFFFFFF to address 0x01110000
[root@host ~]# memtool -32 01110000 1         
Reading 0x1 count starting at address 0x01110000

0x01110000:  FFFFFFFF

[root@host ~]# memtool -32 1FFC708=7198004
Writing 32-bit value 0x7198004 to address 0x01FFC708
[root@host ~]# memtool -32 01110000=0xFFFFFFFF
Writing 32-bit value 0xFFFFFFFF t

In both cases a) and b) we get a CPU hang.

However, if we force the LTSSM back into L0 (0x11) after it was disabled (0x19), we don't observe a CPU hang! Here is proof:

[root@host ~]# memtool -32 01110000=0xFFFFFFFF
Writing 32-bit value 0xFFFFFFFF to address 0x01110000
[root@host ~]# memtool -32 01110000 1
Reading 0x1 count starting at address 0x01110000

0x01110000:  FFFFFFFF

[root@host ~]# memtool -32 1FFC708=7198004
Writing 32-bit value 0x7198004 to address 0x01FFC708
[root@host ~]# memtool -32 1FFC708=7118004
Writing 32-bit value 0x7118004 to address 0x01FFC708
[root@host ~]# memtool -32 01110000=0xFFFFFFFF
[root@host ~]# memtool -32 01110000=0xFFFFFFFF
[root@host ~]# memtool -32 01110000=0xFFFFFFFF
[root@host ~]# memtool -32 01110000 1         
Reading 0x1 count starting at add[  103.668537] (235) imx6q_pcie_abort_handler: PCIe abort: addr = 0x76fe7000 fsr = 0x1018 PC = 0x00010b4c LR = 0x00010b3c instr=e082303
ress 0x01110000

[  103.683314] (282) imx6q_pcie_abort_handler: could not correct imprecise abort error [instr=e0823003]
[  103.694091] Unhandled fault: imprecise external abort (0x1018) at 0x76fe7000
[  103.701144] pgd = bd71c000
[  103.703855] [76fe7000] *pgd=4d55c831, *pte=01110703, *ppte=01110e33
#401 Feb 21 10:51:43 kernel: [  103.668537] (235) imx6q_pcie_abort_handler: PCIe abort: addr = 0x76fe7000 fsr = 0x1018 PC = 0x00010b4c LR = 0x00010b3c instr=e0823003
#402 Feb 21 10:51:43 kernel: [  103.683314] (282) imx6q_pcie_abort_handler: could not correct imprecise abort error [instr=e0823003]
#403 Feb 21 10:51:43 kernel: [  103.694091] Unhandled fault: imprecise external abort (0x1018) at 0x76fe7000
#404 Feb 21 10:51:43 kernel: [  103.701144] pgd = bd71c000
#405 Feb 21 10:51:43 kernel: [  103.703855] [76fe7000] *pgd=4d55c831, *pte=01110703, *ppte=01110e33
Bus error (core dumped)
[root@host ~]#

As stated before, the abort handler (imx6q_pcie_abort_handler) is only called when we do a PCI read on the broken bus!

Is this expected hardware behavior?

Must we avoid touching BARs when the LTSSM is in an incorrect state?

Can this be considered an erratum?

Any help?

Thanks,

Primoz

per_orback
Contributor II

Hi wooosaiiii,

Did you ever find a good solution for this problem? I am currently working on a project where we use the iMX6Q, and we are experiencing problems very much like what you are describing. We have an Ethernet controller as PCIe EP, and when it resets, the iMX freezes and hangs forever. We are using the 4.14.78 kernel and the Intel igb EP driver.

Best regards

Per

wooosaiiii
Contributor III

Hi Per,

we found a workaround, though we could not fully eliminate the issue!

Since we were using the PCIe bus for communication between two iMX6Q processors, we were in control of both sides (RC & EP): the driver on the Linux side (RC) and the driver on the bare-metal side (EP).

We ended up allocating buffers in the Linux driver (dma_alloc_coherent()) and having the EP-side driver write directly into those buffers. We limited the use of iowrite32() in the Linux driver to a bare minimum (the negotiation phase). We don't normally expect the EP to be reset/hot-unplugged during this negotiation phase, so we assume we are safe!

Since you're using an Intel IGB PCIe card, I believe you cannot do much about this issue.

Possible things to try:

- maybe you could wrap calls to iowrite32() in the Intel driver with a check for LTSSM != 0x11? (rough sketch after this list)

- extend imx6q_pcie_abort_handler() to at least handle ioread32() calls safely?
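
For the first idea, here is a minimal sketch of what such a guarded write could look like. It assumes the LTSSM state can be read from the DesignWare port-logic debug register at DBI offset 0x728 (state in bits [5:0], 0x11 = L0), the register pci-imx6.c consults in its link-up check; the helper name and the ioremap of the DBI space are illustrative:

#include <linux/io.h>

#define IMX6_PCIE_DBI_PHYS      0x01ffc000      /* RC/DBI register space on i.MX6Q */
#define PCIE_PL_DEBUG_R0        0x728           /* LTSSM state in bits [5:0] */
#define LTSSM_STATE_L0          0x11

/* dbi_base = ioremap(IMX6_PCIE_DBI_PHYS, SZ_4K), done once at driver init */
static void __iomem *dbi_base;

/* Write to EP MMIO only while the link is in L0; otherwise report an error */
static int imx6ep_guarded_iowrite32(u32 val, void __iomem *addr)
{
        u32 ltssm = readl(dbi_base + PCIE_PL_DEBUG_R0) & 0x3f;

        if (ltssm != LTSSM_STATE_L0)
                return -EIO;

        iowrite32(val, addr);
        return 0;
}

Note that this only narrows the window: the link can still drop between the check and the write, which matches the in-flight-write hang described further down in this thread.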

adeel
Contributor III

Hi Primoz,

If you allocated memory with dma_alloc_coherent(), did it solve the problem? What does this problem have to do with DMA? If you were able to write to the mapped memory (BAR) from user space, I think DMA is not required.

Best Regards,

Adeel

wooosaiiii
Contributor III

adeel, you are right that DMA has nothing to do with this issue!

However, by allocating DMA buffers on the Linux side (RC) and then passing their physical addresses to the EP side, the EP can send data via DMA directly into Linux DDR memory, right? When finished, it sends an MSI interrupt and Linux can then touch the data safely, since it is reading from its own DDR memory.

What I want to stress is that the Linux side will not hang when it accesses data in its own memory, even if the EP is reset / offline! If it accessed broken EP memory in that situation, it would be stuck / hung forever. That is what we are trying to avoid here!

In our case, the only time we touch EP-side memory is when we transfer the DMA buffer physical addresses to the EP's BARs. This happens during probe() in the Linux-side driver. After that we never touch the BARs again and thus assume we are safe!
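
To make this concrete, here is a rough sketch of that probe-time handshake. The driver name, buffer size and the BAR0 "address slot" offset are illustrative assumptions, not our exact code:

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/sizes.h>

static int imx6ep_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
        void __iomem *bar0;
        dma_addr_t rx_phys;
        void *rx_buf;
        int ret;

        ret = pci_enable_device(pdev);
        if (ret)
                return ret;

        ret = pci_request_regions(pdev, "imx6ep");
        if (ret)
                goto err_disable;

        /* The buffer lives in RC-side DDR; the EP later DMAs its data into it */
        rx_buf = dma_alloc_coherent(&pdev->dev, SZ_64K, &rx_phys, GFP_KERNEL);
        if (!rx_buf) {
                ret = -ENOMEM;
                goto err_regions;
        }

        bar0 = pci_iomap(pdev, 0, 0);
        if (!bar0) {
                ret = -ENOMEM;
                goto err_free;
        }

        /*
         * The only MMIO write to the EP: hand over the buffer address while
         * the link is known to be up (we are still in probe()). After this
         * point the Linux side never touches the BARs again.
         */
        iowrite32(lower_32_bits(rx_phys), bar0 + 0x0);
        pci_iounmap(pdev, bar0);

        return 0;

err_free:
        dma_free_coherent(&pdev->dev, SZ_64K, rx_buf, rx_phys);
err_regions:
        pci_release_regions(pdev);
err_disable:
        pci_disable_device(pdev);
        return ret;
}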

Hope it helps somehow,

BR,

Primoz

adeel
Contributor III

> However, by allocating DMA buffers on the Linux side (RC) and then passing their physical addresses to the EP side, the EP can send data via DMA directly into Linux DDR memory, right?

No, that's wrong. DMA has nothing to do with PCI; the EP can't transfer data with DMA (at least I can't think of how it would).

I can understand that the DMA workaround has solved your RC freeze problem, but how this workaround works is hard to imagine. Also, your concern was write(), and for read() you already got an abort exception. With the DMA buffers in place, you don't see any more freezes, even with write()?

Sorry if my comments trouble you. I just wanted to understand the problem and whether there is a possible solution; in doing so I might be wasting your time, so feel free to reply or close this thread.

wooosaiiii
Contributor III

Hello all,

we still have a problem with the iMX6Q hanging on BAR memory access...

However, we have developed our driver & kernel to a point where only writes to BAR memory still truly hang the SoC.

Here is what we have done:

1) we programmed the RC's PCIe config space register "PCIE_RC_PMCSR" to set the No_Soft_Reset bit. This helps retain the "PCIE_RC_Command" register values after EP power-cycling, so the problems from the posts above are eliminated!
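
For reference, point 1 can also be expressed through the standard PCI power-management capability of the root port. A minimal sketch, assuming pdev is the root port's pci_dev; whether the bit is writable through the ordinary config accessors (rather than only through the memory-mapped PCIE_RC_PMCSR register) is an assumption here:

#include <linux/pci.h>

/* Set No_Soft_Reset in the root port's PMCSR so its configuration
 * (e.g. the Command/Status register) is retained across power transitions */
static int imx6_rc_set_no_soft_reset(struct pci_dev *pdev)
{
        int pm = pci_find_capability(pdev, PCI_CAP_ID_PM);
        u16 pmcsr;

        if (!pm)
                return -ENODEV;

        pci_read_config_word(pdev, pm + PCI_PM_CTRL, &pmcsr);
        pmcsr |= PCI_PM_CTRL_NO_SOFT_RESET;
        pci_write_config_word(pdev, pm + PCI_PM_CTRL, pmcsr);

        return 0;
}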

2) we added some PCI abort handling to the abort handler in "drivers/pci/host/pci-imx6.c". The default abort handler simply returns 0 (OK, proceed), whereas our handler is a bit more sophisticated:

/* Added for PCI abort handling */
static int imx6q_pcie_abort_handler(unsigned long addr,
                unsigned int fsr, struct pt_regs *regs)
{
        int reg, ret = 0;
        unsigned long instr, pc;

        pc = instruction_pointer(regs) - 4;
        instr = *(unsigned long *)pc;
#if 1
        pr_info("(%d) %s: PCIe abort: addr = 0x%08lx fsr = 0x%03x PC = 0x%08lx LR = 0x%08lx instr=%08lx\n",
                        __LINE__, __func__, addr, fsr, regs->ARM_pc, regs->ARM_lr, instr);
#endif
#if 1
        /* dsb sy - Data Synchronization Barrier instruction */
        if (instr == 0xf57ff04f) {
                pc -= 4;
                instr = *(unsigned long *)pc;
        }

        /*
         * If the instruction being executed was a read,
         * make it look like it read all-ones.
         */
        if ((instr & 0x0c500000) == 0x04100000) {
                /* LDR instruction */
                reg = (instr >> 12) & 15;
                regs->uregs[reg] = -1;
                regs->ARM_pc = pc + 4;
                ret = 0;
                goto out;
        }
#endif
        pr_err("(%d) %s: could not correct imprecise abort error [instr=%08lx]\n",
                        __LINE__, __func__, instr);

        ret = 1;
out:
        return ret;     /* 0 = OK, 1 = NOT OK, -1 = kernel continue? */
}

Using this handler we can hook into the imprecise abort thrown by the CPU on EP reset. We can detect whether the cause of the imprecise abort was a read instruction (Linux ioread32()) and then fix it up by returning 0xFFFFFFFF. Thus any call to ioread32() in our custom EP driver code returns 0xFFFFFFFF while PCI communication is broken.

Also, any process that causes an imprecise abort we don't handle in the abort handler is killed (hence returning 1 in the handler).

3) However, we still hit a case where the SoC hangs without imx6q_pcie_abort_handler() ever being called.

This happens when we reset/power-cycle the EP while an iowrite32() or any other write to BAR memory is in progress.

This is how we can reproduce the bug:

- we add the endless loop below to write() in our custom driver (just for testing!):

        pr_warn("entering endless loop - replicating SoC hang\n");
        while (1) {
                iowrite32(1, &private->status_flag);
        }

- in the loop we repeatedly write to MMIO (EP's BAR memory),

- we then reset/power-cycle our EP (over another ssh terminal using custom serial protocol)

- we observe an instant SoC hang, without the PCIe abort handler being called!

[root@host ~]# echo 1 > /dev/imx6ep
[   48.440212] imx6ep: entering endless loop - replicating SoC hang
#817 Feb 14 10:34:55 kernel: [   48.440212] imx6ep: entering endless loop - replicating SoC hang
#818 Feb 14 10:35:19 sshd[1880]: Accepted password for root from 192.168.89.8 port 58206 ssh2
using serial port
send: CPUCTRL_RESET_ID seq=41 [03]

(STUCK SOC HERE)

How can we handle such a situation?

It seems others have the exact same problem: https://community.nxp.com/message/538005

NXP, is there any support for this issue?

Regards,

Primož

igorpadykov
NXP Employee

Hi Primož

NXP has a service for helping customers with porting custom drivers:

http://www.nxp.com/support/nxp-professional-services:PROFESSIONAL-SERVICE
ProSupport@nxp.com

Best regards
igor

wooosaiiii
Contributor III

Hello,

thank you for the outsourcing offer, but we prefer to develop in-house if possible. ;) Any help from the community & NXP is welcome!

What I would like to know is how to hook into any abort handler that might be called when writing to non-existent PCI device memory.

Currently we are able to intercept such an abort only when reading from the broken PCI device. We do that with:

        /* Added for PCI abort handling */
        hook_fault_code(16 + 6, imx6q_pcie_abort_handler, SIGBUS, 0,
                "imprecise external abort");

and then handle the fault in imx6q_pcie_abort_handler().

But when performing write operations, the handler for fault code 22 (0b10110) is not called!?

This is from the Cortex-A9 documentation:

4.3.15 Data Fault Status Register

Bits [3:0] (Status) indicate the type of exception generated. To determine the data fault, bits [12] and [10] must be used in conjunction with bits [3:0]. The following encodings are in priority order, highest first:
1. 0b000001 alignment fault
2. 0b000100 instruction cache maintenance fault
3. 0bx01100 1st level translation, synchronous external abort
4. 0bx01110 2nd level translation, synchronous external abort
5. 0b000101 translation fault, section
6. 0b000111 translation fault, page
7. 0b000011 access flag fault, section
8. 0b000110 access flag fault, page
9. 0b001001 domain fault, section
10. 0b001011 domain fault, page
11. 0b001101 permission fault, section
12. 0b001111 permission fault, page
13. 0bx01000 synchronous external abort, nontranslation
14. 0bx10110 asynchronous external abort
15. 0b000010 debug event.

Any ideas?

wooosaiiii
Contributor III

Hello all,

I think we can eliminate our custom EP device driver as a potential cause of the bug.

Here is output showing the kernel / HW getting stuck using only raw memory dumps, without our custom EP driver loaded:

1) FIND MEMORY MAPPED BARS:

    [root@host ~]# dmesg | grep -i BAR
    [    1.443654] pci 0000:00:00.0: BAR 0: assigned [mem 0x01000000-0x010fffff]
    [    1.443686] pci 0000:00:00.0: BAR 8: assigned [mem 0x01100000-0x011fffff]
    [    1.443707] pci 0000:00:00.0: BAR 6: assigned [mem 0x01200000-0x0120ffff pref]
    [    1.443738] pci 0000:01:00.0: BAR 0: assigned [mem 0x01100000-0x01107fff]
    [    1.443779] pci 0000:01:00.0: BAR 2: assigned [mem 0x01108000-0x0110ffff]
    [    1.443816] pci 0000:01:00.0: BAR 3: assigned [mem 0x01110000-0x011100ff]
    [root@host ~]#

2) LET'S READ BAR0 ON THE EP:

    [root@host ~]# ./memtool -32 0x01100000 1
    Reading 0x1 count starting at address 0x01100000

    0x01100000:  00000000

    [root@host ~]#

3) LET'S TAKE DOWN THE EP AND THE LINK BY RESETTING (BASICALLY POWER-CYCLING) THE EP DEVICE:
 
    [root@host ~]# linuxcpuctrlbus ResetEPDevice
    using serial port
    send: CPUCTRL_RESET_ID seq=41 [03]
    recv: CPUCTRL_RESET_ID seq=41
    [root@host ~]#

4) LET'S READ BAR0 ON THE EP AGAIN:

    [root@host ~]#
    [root@host ~]# ./memtool -32 0x01100000 1
    Reading 0x1 count starting at add

    (STUCK KERNEL HERE)

As you can see, dumping the memory-mapped EP BAR after an EP/link reset produces a kernel / HW hang!

Can anyone (igorpadykov maybe?) with a similar setup try the same and report back with results?

Another good question: "Can I take the EP device down asynchronously, without notifying the RC about it?"

IMHO we should notify the RC in such a case, but on the other hand I wouldn't expect the RC to hang itself if we don't. :(

Thanks,

Primoz

igorpadykov
NXP Employee

Hi Primoz

please try the NXP official BSPs at the link below:

http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/i.mx-applications-process...

L4.1.15

http://git.freescale.com/git/cgit.cgi/imx/linux-2.6-imx.git/?h=imx_4.1.15_1.0.0_ga

please check the RC-EP mode examples at:

https://community.freescale.com/docs/DOC-95014 

To narrow down the problem, it may be worth trying to reproduce the issue on an i.MX6Q Sabre board.


Best regards
igor
