IP packet stall on GbE-NIC via PCIe.

george
Senior Contributor II

Dear all,

We have manufactured a prototype of a product based on the i.MX6Q (MCIMX6Q5EYM10AC).

It has two GbE ports.

     a)  One uses the built-in MAC with an external PHY.

     b)  The other is a GbE NIC on PCIe.

    "a" works correctly.

    "b" occasionally stalls under heavy network load.

However, the stall does not occur when the link speed is 10 Mbps or 100 Mbps.

It occurs only when the link speed is 1 Gbps.
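For reference, we pin the link to a lower speed with ethtool when testing this (a minimal sketch; eth0 is assumed to be the PCIe NIC):

# Force the port to 100 Mbps full duplex to check whether the
# stall is really specific to 1 Gbps operation.
ethtool -s eth0 speed 100 duplex full autoneg off

# Restore autonegotiation afterwards.
ethtool -s eth0 autoneg on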

According to packet analysis with a network analyzer, the response packet to the peer device is lost.

And the kernel log is silent when this happens.

     * The chip used on the NIC is an Intel i210-AT.

After investigating many things, we identified roughly three failure conditions.

     1)  When using iperf3, a read is issued, but no response returns from the peer (at least, that is how it appears from the kernel side).

          The CPU is idle at that point.

     2)  Data is put into the transmit queue; normally, once it is transmitted, a register flag on the i210 changes and the data is cleared.

          However, the flag does not change, the data cannot be sent, and then tx_detect_hang occurs.

     3)  Or a NETDEV watchdog timeout is raised by the upper layer.

* Either the ACK/packet never arrives or a flag never changes; in each case, an operation indispensable to the data-flow handshake is missing.

* No useful information appears in the dmesg/kernel output.
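For completeness, the NIC-side counters can also be checked when the stall occurs (a sketch; eth0 is assumed to be the i210 port, and the igb driver's statistic names vary by version):

# Dump driver/NIC statistics; on igb these include per-queue
# error/restart counters that should move on a tx hang.
ethtool -S eth0

# Register-level dump, to see whether the i210's TX state advances.
ethtool -d eth0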

We ran the same test with GbE chips other than Intel, and the results were the same.

  • MARVELL - 88E8053
  • Realtek - RTL8111D
  • Broadcom - BCM5751

We are using Timesys Linux (kernel 3.0.35).

Next, we will run the same test using the i.MX6Q SD board and the Freescale BSP.

We will also do further packet analysis on PCIe.

If anyone has information that could help solve this problem, please let me know.

And if you need more information from me, please tell me.

Best Regards,

George

gfine
NXP Employee

Hi George,

Can you have the customer run the following commands (after a failing run) and post the results?

1) lspci

2) lshw -class network

3) cat /proc/net/dev

Also, can they try the same with 3.10.17?
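If it is easier, the three commands above can be collected in one go with a small script like this (a sketch; lshw may need to be installed on the target):

#! /bin/bash
# Collect the NIC diagnostics above into one timestamped log.
LOG=nic-diag-$(date +%s).log
{
    echo "== lspci ==";               lspci
    echo "== lshw -class network =="; lshw -class network
    echo "== /proc/net/dev ==";       cat /proc/net/dev
    echo "== last kernel lines ==";   dmesg | tail -n 50
} > "$LOG" 2>&1
echo "wrote $LOG"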

Cheers,

Glen

george
Senior Contributor II

Dear Glen,

I got the following information from the customer in answer to your questions.

--- Before the phenomenon occurred ---

 1) lspci

00:00.0 Class 0604: 16c3:abcd

01:00.0 Class 0604: 12d8:2303

02:01.0 Class 0604: 12d8:2303

02:02.0 Class 0604: 12d8:2303

03:00.0 Class 0200: 8086:1532

 2) lshw -class network

*-network

description: Ethernet interface

product: Intel Corporation

vendor: Intel Corporation

physical id: 0

bus info: pci@0000:03:00.0

logical name: eth0

version: 03

serial: 60:e0:0e:14:00:42

size: 1Gbit/s

capacity: 1Gbit/s

width: 32 bits

clock: 33MHz

capabilities: pm msi msix pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation

configuration: autonegotiation=on broadcast=yes driver=igb

driverversion=3.0.6-k2 duplex=full firmware=15.255-15 ip=192.168.3.219

latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s

resources: irq:154 memory:1200000-121ffff ioport:1000000(size=32) memory:1220000-1223fff

*-network DISABLED

description: Ethernet interface

physical id: 1

logical name: eth1

serial: 60:e0:0e:14:00:43

capabilities: ethernet physical

configuration: broadcast=yes driver=fec driverversion=Revision: 1.0 link=no multicast=yes

 3) cat /proc/net/dev

Inter-|   Receive                                                |  Transmit
 face |    bytes packets errs drop fifo frame compressed multicast|    bytes packets errs drop fifo colls carrier compressed
    lo:      1152      16    0    0    0     0          0         0      1152      16    0    0    0     0       0          0
  eth0:       120       2    0    0    0     0          0         0         0       0    0    0    0     0       0          0
  eth1:         0       0    0    0    0     0          0         0         0       0    0    0    0     0       0          0

--- After the phenomenon occurred ---

 1) lspci

00:00.0 Class 0604: 16c3:abcd

01:00.0 Class 0604: 12d8:2303

02:01.0 Class 0604: 12d8:2303

02:02.0 Class 0604: 12d8:2303

03:00.0 Class 0200: 8086:1532

 2) lshw -class network

*-network

description: Ethernet interface

product: Intel Corporation

vendor: Intel Corporation

physical id: 0

bus info: pci@0000:03:00.0

logical name: eth0

version: 03

serial: 60:e0:0e:14:00:42

size: 1Gbit/s

capacity: 1Gbit/s

width: 32 bits

clock: 33MHz

capabilities: pm msi msix pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation

configuration: autonegotiation=on broadcast=yes driver=igb

driverversion=3.0.6-k2 duplex=full firmware=15.255-15 ip=192.168.3.219

latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s

resources: irq:154 memory:1200000-121ffff ioport:1000000(size=32) memory:1220000-1223fff

*-network DISABLED

description: Ethernet interface

physical id: 1

logical name: eth1

serial: 60:e0:0e:14:00:43

capabilities: ethernet physical

configuration: broadcast=yes driver=fec driverversion=Revision: 1.0 link=no multicast=yes

 3) cat /proc/net/dev

Inter-|   Receive                                                |  Transmit
 face |    bytes packets errs drop fifo frame compressed multicast|    bytes packets errs drop fifo colls carrier compressed
    lo:      1728      24    0    0    0     0          0         0      1728      24    0    0    0     0       0          0
  eth0: 829518035  550560    0   33   33     0          0         0   2159864   32716    0    0    0     0       0          0
  eth1:         0       0    0    0    0     0          0         0         0       0    0    0    0     0       0          0

Was there any information in this that helps solve the problem?

They have not used 3.10.17 yet.

Though I will recommend it to them, I suspect PCIe cannot be fully used on it yet.

Best Regards,

George

gfine
NXP Employee

Also, I noticed an exception in the dmesg log which looks like a problem with interrupt handling. Does this same exception occur on all the PCIe cards?

Glen

george
Senior Contributor II

Dear Glen,

I have one more GbE-NIC. --> PLANEX GbE LAN Adapter

This NIC uses the MARVELL 88E8053 chip.


However, so far it has not been detected by either Ltib or Yocto.

I think the current consumption of this NIC may have affected my test environment.

Next, I will investigate the voltage at each point on the PCB.


My customer tested with four chips.

  • intel - i210-AT
  • MARVELL - 88E8053
  • Realtek - RTL8111D
  • Broadcom - BCM5751

And they say: the same result occurs no matter which chip is used.

Best Regards,

George

gfine
NXP Employee

Hi George,

I can understand the desire to not move to the latest BSP version, and what that involves.

I cannot say whether the patch is relevant until I can run the base build and then apply the patch. It looks like the patch addresses a latency issue where the transmit state is recycled to idle if a certain flag is not asserted; whether this applies to your problem is questionable without an in-depth debug. I would ask that you save the original file(s), apply the patch, and see if it fixes the problem.
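Something like the following keeps the originals around while testing (a sketch; the file path and patch name below are placeholders, not the actual patch):

# Keep pristine copies of whatever files the patch touches, then apply it.
# igb_main.c and the patch file name are hypothetical here.
cp drivers/net/ethernet/intel/igb/igb_main.c{,.orig}
patch -p1 --dry-run < latency-fix.patch   # first check that it applies cleanly
patch -p1 < latency-fix.patch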

Currently I have a base configured and, like in your experience, I need to add the PCIe adapter to the network config in Linux. I am working on that today and should have some news within the next few days.

Can you explain what you mean by 'stall'? Does the adapter stop sending packets and stop working, or does it start, then stop, then start again, and stop again? The more detailed the explanation, the easier it is for me to identify the problem.

Cheers,

Glen

george
Senior Contributor II

Dear Glen,

"stall" which I say is the phenomenon which TX-Packet stops.

A typescript file got on that occasion is attached here.

I use iperf-c xxxx first in it.

And then like the following files were executed next.

#! /bin/bash -f
while :
do
  echo -n "======> "
  date +"%c / %s"
  iperf3 -c xxxx
done
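Incidentally, a variant of this loop that also snapshots the NIC counters and recent kernel messages each pass might make the exact moment of the stall easier to pin down (a sketch; eth0 is assumed, and the grep pattern is only a guess at the relevant counters):

#! /bin/bash
# Same loop, but record counters and recent kernel messages per iteration.
while :
do
  echo "======> $(date +'%c / %s')"
  iperf3 -c xxxx
  ethtool -S eth0 | grep -iE 'err|drop|restart'   # NIC error/restart counters
  dmesg | tail -n 20                              # catch tx_detect_hang / reset messages
done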

In the first loop, packets stop and a dump is displayed.

Though no dump is displayed after that, packet transmission keeps failing and the transfer rate falls.

In that case, the NIC is always reset with the following messages:

e1000e 0000:01:00.0: eth0: Reset adapter

e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx

Best Regards,

George

gfine
NXP Employee

Hi George,

Thank you.  I needed to know what to look for.

I have researched the error in the log; it looks like an exception happened earlier than the 'reset'. In researching the error, I came across a possible workaround.

Can you retest after changing the ASPM policy in /sys/module/pcie_aspm/parameters/policy? This may or may not change the failing scenario, but it is worth a try.

It can be modified in sysfs, or by passing pcie_aspm=off as part of the kernel parameters in u-boot.
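Concretely, either of these should work (a sketch; note the sysfs policy file accepts "performance" rather than "off", and u-boot variable handling depends on the board):

# At runtime, via sysfs: "performance" keeps links out of the ASPM
# low-power states (the policy file itself does not accept "off").
cat /sys/module/pcie_aspm/parameters/policy
echo performance > /sys/module/pcie_aspm/parameters/policy

# Or at boot, from the u-boot prompt, disable ASPM entirely:
setenv bootargs "${bootargs} pcie_aspm=off"
boot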

Cheers,

Glen

george
Senior Contributor II

Dear Glen,

Thank you for suggesting a workaround.

Regrettably, I lent out the SDP to another customer today.

However, I can ask my customer to try your workaround.

And I will report the result to you as soon as possible.

And I will report the result to you as soon as possible.

Is your workaround equivalent to unchecking "PCI Express ASPM control" in the kernel configuration?


# CONFIG_PCIEASPM is not set

If your workaround works, it will bring progress to my customer's product.

Best Regards,

George

gfine
NXP Employee

Hi George,

I would think that unchecking it in the kernel config would have a similar effect to disabling it at boot. But I do not know whether there may be side effects to doing that.

Cheers,


Glen

george
Senior Contributor II

Dear Glen,

My customer ran the operation test today using your recommended parameter.

The behavior shows little difference; it stalls just as it did before.

Did you see any improvement in the symptoms on your hardware?

Best Regards,

George

timesyssupport
Senior Contributor II

Hello George,

We have not undertaken testing of an external PCIe GigE card on this platform. There is now 3.10.17 kernel support for the SDB as well. Can you please submit a support request detailing the issue at https://linuxlink.timesys.com/support?

Thanks,

Timesys Support

gfine
NXP Employee

Hi George,

As I said, I wasn't sure it would make a difference, but it was worth a try, as ASPM on PCIe is known for creating timing errors.

Earlier you said they are using the Timesys BSP, which we have no control over. Please verify that this problem is seen on the Freescale 3.0.35 BSP as well.

I am still in the final stages of getting the 6Q system up and running as close as possible to what you have described. It took a while to get the mini-PCIe adapter, and to get the system to recognize it.

timesyssupport, can you check whether this is a problem in your BSP?

Glen

george
Senior Contributor II

Dear Glen,

I contacted Timesys support, as you recommended.

However, the customer's information cannot be provided to Timesys, for reasons particular to my customer.

Therefore, the reproduction test at Timesys is making little progress.

How is your reproduction test going?

Were you able to get the mini-PCIe adapter recognized?

I have newly found something that may be suspicious.

I would like your comments on the following.

I do not have the SDP now, because I lent it out to another customer as previously promised.

However, I now have a SabreLite from Boundary Devices.

I ran the same repetitive iperf test on the SabreLite as on the SDP.

The result was the same as on the SDP.

    • kernel 3.0.35 (ltib + Boundary's kernel) is NG.
    • kernel 3.10.17 (Yocto master-next repo) works fine.

I also investigated things other than the iperf test.

I found that the progression of kernel time in 3.0.35 is strange.

Compared with hwclock, it lags by about 10%.

Could this kernel-time delay cause the iperf fault symptoms?

* For example, reassembly of fragmented packets might fail.

Has a delay of the kernel clock on the SDP (kernel 3.0.35) been reported?
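For reference, this is roughly how the drift can be checked (a sketch; hwclock needs a working battery-backed RTC, and its options differ between busybox and util-linux):

#! /bin/bash
# Rough check of kernel-clock drift against the RTC.
hwclock --systohc          # start both clocks from the same point
sleep 600                  # 600 seconds as counted by the kernel clock
hwclock --show             # if the kernel clock runs ~10% slow, the RTC
                           # will show noticeably more than 600 s elapsed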

Best Regards,

George

gfine
NXP Employee

Hi George,

There are quite a few reported timing problems for 3.0.35, which can be found at kernel.org. There are memory cache (slab) issues as well. Finding one specific patch is probably impractical, as there are many patches covering a number of timing and memory issues.

Also, be aware that we support our own BSP, but if the BSP is from Timesys or Boundary, the customer has to get resolution through them, as we have no control over those BSPs or builds.

I can understand the customer's reluctance to move to the newer BSP, but it may be the only path available to circumvent the problem.

Cheers,

Glen

george
Senior Contributor II

Dear Glen,

Based on your hint, I replaced the memory manager with the one included in a newer kernel.

Then I ran the repetitive iperf test using that kernel.

And finally I was able to confirm that the fault phenomenon no longer occurs in 3.0.35.

I built the new kernel as follows.

The kernel source code used as the base is the newest imx_3.0.35_4.1.0.

The kernel source code used for the back-port is linux-3.0.101.tar.bz2.

Using ltib, the kernel configuration (.default) was nearly the default provided by ltib.

The directories related to mm were replaced first.

    • mm/ included in 3.0.101 was used.
    • arch/arm/mm/ included in 3.0.101 was used.

However, in order to keep the build consistent, the following files were kept from 3.0.35:

    • mm/dmapool.c
    • mm/memblock.c
    • arch/arm/mm/dma-mapping.c

And, in order to keep the build consistent, the following were also taken from 3.0.101 (see the sketch after this list):

    • include/linux/memcontrol.h
    • include/linux/mm.h
    • include/linux/mmzone.h
    • include/linux/swap.h
    • include/linux/cpuset.h
    • include/linux/migrate.h
    • include/linux/fs.h
    • include/linux/sunrpc/cache.h
    • include/linux/memory.h
    • include/trace/events/vmscan.h
    • fs/nfsd/
    • fs/nfs/
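The replacement procedure was roughly as follows (a sketch; the tree paths are assumptions, and the preserved-file list matches the one above):

#! /bin/bash
# Sketch of the mm/ replacement described above; paths are assumptions.
NEW=~/linux-3.0.101          # back-port source tree
OLD=~/imx_3.0.35_4.1.0       # kernel tree being modified

# Preserve the 3.0.35 files that must be kept for the build to match.
mkdir -p /tmp/keep
for f in mm/dmapool.c mm/memblock.c arch/arm/mm/dma-mapping.c; do
  cp $OLD/$f /tmp/keep/$(basename $f)
done

# Replace the memory-manager directories with the 3.0.101 versions.
rm -rf $OLD/mm $OLD/arch/arm/mm
cp -a $NEW/mm $OLD/mm
cp -a $NEW/arch/arm/mm $OLD/arch/arm/mm

# Restore the preserved 3.0.35 files.
cp /tmp/keep/dmapool.c     $OLD/mm/dmapool.c
cp /tmp/keep/memblock.c    $OLD/mm/memblock.c
cp /tmp/keep/dma-mapping.c $OLD/arch/arm/mm/dma-mapping.c

# The headers and fs/nfs(d) directories listed above are then
# copied over from 3.0.101 in the same way.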

This kernel seems to be stable.

But this kernel cannot be used for the customer's product as-is.

More adjustment and verification would be needed.

However, I think the above proves that this fault phenomenon is not related to Freescale at all.

And following your advice, my customer has now started work to move their board to 3.10.17.

Best Regards,

George

gfine
NXP Employee

Hi George,

Glad to hear you were able to get a working kernel. But, as you experienced, it took several iterations, and that would be problematic for the customer. Thank you for taking the time and effort.

timesyssupport, can you see whether this is a feasible path for patching your kernel?

Please let us know if they run into any problems with 3.10.17.

Cheers,

Glen

george
Senior Contributor II

Dear Glen,

My customer has not tried it yet, because their kernel config has been as follows up to now:

#
# Bus support
#
CONFIG_ARM_AMBA=y
CONFIG_PCI=y
CONFIG_PCI_SYSCALL=y
CONFIG_ARCH_SUPPORTS_MSI=y
# CONFIG_PCI_MSI is not set
# CONFIG_PCI_STUB is not set
# CONFIG_PCI_IOV is not set
# CONFIG_PCIEPORTBUS is not set
# CONFIG_PCCARD is not set
CONFIG_ARM_ERRATA_764369=y
# CONFIG_PL310_ERRATA_769419 is not set

That is, they have not enabled these options, including PCIEPORTBUS.

And they are not even using MSI.

Is my understanding correct that your recommendation is the following kernel config plus the kernel parameter pcie_aspm=off?

#
# Bus support
#
CONFIG_ARM_AMBA=y
CONFIG_PCI=y
CONFIG_PCI_SYSCALL=y
CONFIG_ARCH_SUPPORTS_MSI=y
CONFIG_PCI_MSI=y
# CONFIG_PCI_STUB is not set
# CONFIG_PCI_IOV is not set
CONFIG_PCIEPORTBUS=y
CONFIG_PCIEAER=y
# CONFIG_PCIE_ECRC is not set
# CONFIG_PCIEAER_INJECT is not set
CONFIG_PCIEASPM=y
# CONFIG_PCIEASPM_DEBUG is not set
# CONFIG_PCCARD is not set
CONFIG_ARM_ERRATA_764369=y
CONFIG_PL310_ERRATA_769419=y

I will tell the customer to test using these same settings.
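On the running target, the effective settings can then be double-checked like this (a sketch; /proc/config.gz requires CONFIG_IKCONFIG_PROC to be enabled):

# Confirm which PCIe options the running kernel was built with.
zcat /proc/config.gz | grep -E 'PCIEPORTBUS|PCIEASPM|PCI_MSI'

# Confirm the ASPM policy currently in effect.
cat /sys/module/pcie_aspm/parameters/policy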

Did you see the same stall as I did in your hardware environment?

BR,

George

george
Senior Contributor II

Dear Glen,

I also tried Yocto, but it could not detect the NIC.

.

Switching to clocksource mxc_timer1

imx6q-pcie 1ffc000.pcie: phy link never came up

PCI host bridge to bus 0000:00

.

However, it was able to detect the NIC by using imx6q-sabresd-ldo.dtb with the following patch:

=> Re: PCIe with PCIe Switch PI7C9X2G303EL or PLX8603 with 3.10.17_1.0.0_ga
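For reference, selecting that dtb from u-boot looked roughly like this (a sketch; the fdt_file variable name depends on the board's u-boot environment):

setenv fdt_file imx6q-sabresd-ldo.dtb   # point the boot script at the LDO dtb
saveenv
boot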

[  196.780872] e1000e: Intel(R) PRO/1000 Network Driver - 2.3.2-k

[  196.787309] e1000e: Copyright(c) 1999 - 2013 Intel Corporation.

[  196.793375] e1000e 0000:01:00.0: Disabling ASPM L0s L1

[  196.798619] PCI: enabling device 0000:01:00.0 (0140 -> 0142)

[  196.804761] e1000e 0000:01:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode

[  196.944894] e1000e 0000:01:00.0 eth1: registered PHC clock

[  196.953230] e1000e 0000:01:00.0 eth1: (PCI Express:2.5GT/s:Width x1) 68:05:ca:24:8b:cf

[  196.961386] e1000e 0000:01:00.0 eth1: Intel(R) PRO/1000 Network Connection

[  196.968662] e1000e 0000:01:00.0 eth1: MAC: 3, PHY: 8, PBA No: E46981-008

And I tested it using iperf on the Linux system built by ltib.

About 12 hours after starting the test, it is still working correctly.

You mentioned bad interrupt handling in kernel 3.0.35; was it corrected in 3.10.17?

If that becomes clear, my customer will change kernel versions.

Best Regards,

George

gfine
NXP Employee

Hi George,

Thank you. I am busy getting the hardware together to replicate this. I got the mini-to-full PCIe adapter and am waiting on a GbE adapter.

Cheers,


Glen

george
Senior Contributor II

Dear Glen,

My supposition is that this phenomenon is related to the ENGR00255406 patch, and that the patch is not a flawless workaround.

Please tell me your opinion.

BR,

George
