There are Receive FIFO overflow counts in the ENET module, how to debug?


2,705 Views
zhshuangp
Contributor II

Hi guys,

On the i.MX6Q, when the ENET module receives UDP packets from the other side, we see a large number of IEEE_R_MACERR counts, as shown below. As far as I know from the community, there should be no Rcv FIFO overflow counts as long as the bandwidth is below 400 Mbps, but they occur on our board, and I am sure the bandwidth is below 100 Mbps in our system. How can we debug this? Did we make any mistake?

root@imx6qsabresd:/usr/application# ethtool -S eth0

NIC statistics:

     tx_dropped: 0

     tx_packets: 13

     tx_broadcast: 1

     tx_multicast: 8

     tx_crc_errors: 0

     tx_undersize: 0

     tx_oversize: 0

     tx_fragment: 0

     tx_jabber: 0

     tx_collision: 0

     tx_64byte: 4

     tx_65to127byte: 9

     tx_128to255byte: 0

     tx_256to511byte: 0

     tx_512to1023byte: 0

     tx_1024to2047byte: 0

     tx_GTE2048byte: 0

     tx_octets: 1014

     IEEE_tx_drop: 0

     IEEE_tx_frame_ok: 13

     IEEE_tx_1col: 0

     IEEE_tx_mcol: 0

     IEEE_tx_def: 0

     IEEE_tx_lcol: 0

     IEEE_tx_excol: 0

     IEEE_tx_macerr: 0

     IEEE_tx_cserr: 0

     IEEE_tx_sqe: 0

     IEEE_tx_fdxfc: 0

     IEEE_tx_octets_ok: 1014

     rx_packets: 18813

     rx_broadcast: 10

     rx_multicast: 0

     rx_crc_errors: 0

     rx_undersize: 0

     rx_oversize: 0

     rx_fragment: 0

     rx_jabber: 0

     rx_64byte: 3

     rx_65to127byte: 517

     rx_128to255byte: 440

     rx_256to511byte: 406

     rx_512to1023byte: 1053

     rx_1024to2047byte: 16394

     rx_GTE2048byte: 0

     rx_octets: 25916856

     IEEE_rx_drop: 0

     IEEE_rx_frame_ok: 9176

     IEEE_rx_crc: 0

     IEEE_rx_align: 0

     IEEE_rx_macerr: 10122

     IEEE_rx_fdxfc: 0

     IEEE_rx_octets_ok: 11595470

root@imx6qsabresd:/usr/application# ifconfig

eth0      Link encap:Ethernet  HWaddr 2A:6E:65:F8:B5:24

          inet addr:169.254.30.91  Bcast:169.254.255.255  Mask:255.255.0.0

          inet6 addr: fe80::286e:65ff:fef8:b524/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:18190 errors:9711 dropped:37 overruns:9711 frame:9711

          TX packets:13 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:18248589 (17.4 MiB)  TX bytes:926 (926.0 B)
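For reference, a minimal sketch of how the overflow counters could be sampled while the stream is running (the 2-second interval is just an example, and the counter names are the ones from the output above):

# watch whether the Rcv FIFO overflow counters keep growing during the stream
while true; do
    ethtool -S eth0 | grep IEEE_rx_macerr
    cat /sys/class/net/eth0/statistics/rx_fifo_errors
    sleep 2
done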

6 Replies

2,191 Views
b36401
NXP Employee

You can stop all services that significantly load the network and check with iperf at what bandwidth the overruns start to appear.
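A minimal sketch of such a test, assuming iperf runs as a UDP server on the board while a PC offers increasing UDP rates (the address and rate are only examples):

# on the i.MX6Q board: UDP server, report every second
iperf -s -u -i 1

# on the PC: offer a fixed UDP rate for 5 seconds, then step it up
iperf -c 169.254.30.91 -u -t 5 -b 100M

# on the board, between steps: check whether the overflow counter moved
ethtool -S eth0 | grep IEEE_rx_macerr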

Have a great day,

Victor

-----------------------------------------------------------------------------------------------------------------------

Note: If this post answers your question, please click the Correct Answer button. Thank you!

-----------------------------------------------------------------------------------------------------------------------


2,191 Views
zhshuangp
Contributor II

Hi Victor,

Thanks for your reply.

We have checked with the iperf tool, and there seem to be no Rcv FIFO overflows in the iperf tests. But in our system the problem can be reproduced every time. The iperf test results are below.

[xll@Lu zhuangp]$ iperf -c 169.254.30.91 -u -t 5 -b 100M

[ ID] Interval       Transfer     Bandwidth

[  3]  0.0- 5.0 sec  59.9 MBytes   101 Mbits/sec

[  3] Sent 42736 datagrams

[  3] Server Report:

[  3]  0.0- 5.0 sec  59.9 MBytes   101 Mbits/sec   0.012 ms    0/42735 (0%)

[  3]  0.0- 5.0 sec  1 datagrams received out-of-order

[xll@Lu zhshuangp]$ iperf -c 169.254.30.91 -u -t 5 -b 200M

[ ID] Interval       Transfer     Bandwidth

[  3]  0.0- 5.0 sec   121 MBytes   203 Mbits/sec

[  3] Sent 86207 datagrams

[  3] Server Report:

[  3]  0.0- 5.0 sec   121 MBytes   203 Mbits/sec   0.001 ms    0/86206 (0%)

[  3]  0.0- 5.0 sec  1 datagrams received out-of-order

[xll@Lu zhshuangp]$ iperf -c 169.254.30.91 -u -t 5 -b 300M

[ ID] Interval       Transfer     Bandwidth

[  3]  0.0- 5.0 sec   180 MBytes   302 Mbits/sec

[  3] Sent 128205 datagrams

[  3] Server Report:

[  3]  0.0- 5.0 sec   180 MBytes   302 Mbits/sec   0.011 ms    0/128204 (0%)

[  3]  0.0- 5.0 sec  1 datagrams received out-of-order

[xll@Lu zhshuangp]$ iperf -c 169.254.30.91 -u -t 5 -b 400M

[ ID] Interval       Transfer     Bandwidth

[  3]  0.0- 5.0 sec   242 MBytes   406 Mbits/sec

[  3] Sent 172414 datagrams

[  3] Server Report:

[  3]  0.0- 5.0 sec   242 MBytes   406 Mbits/sec   0.016 ms    0/172413 (0%)

[  3]  0.0- 5.0 sec  1 datagrams received out-of-order

[xll@Lu zhshuangp]$ iperf -c 169.254.30.91 -u -t 5 -b 500M

[ ID] Interval       Transfer     Bandwidth

[  3]  0.0- 5.0 sec   305 MBytes   511 Mbits/sec

[  3] Sent 217331 datagrams

[  3] Server Report:

[  3]  0.0- 5.0 sec   305 MBytes   511 Mbits/sec   0.024 ms    0/217330 (0%)

[  3]  0.0- 5.0 sec  1 datagrams received out-of-order

[xll@Lu zhshuangp]$ iperf -c 169.254.30.91 -u -t 5 -b 600M

[ ID] Interval       Transfer     Bandwidth

[  3]  0.0- 5.0 sec   345 MBytes   578 Mbits/sec

[  3] Sent 245742 datagrams

[  3] WARNING: did not receive ack of last datagram after 10 tries.

[xll@Lu zhshuangp]$
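Since the plain iperf stream does not reproduce the overflow while our own stream does, one thing worth trying is making the iperf traffic resemble the real stream more closely, for example by matching the datagram size (the 1400-byte length below is only an example, not the size our application actually uses):

iperf -c 169.254.30.91 -u -t 5 -b 100M -l 1400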


2,191 Views
b36401
NXP Employee

Please note that these IEEE_rx_macerr counts may be caused by corrupted (or already bad) packets.

I mean that the issue may originate on the other side (the side that sends these UDP packets).

Please check the /sys/class/net/eth0/statistics/rx_*_errors entries for the errors.
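For example (just a convenience over reading each counter file one by one), all of those counters can be printed with their names in a single command:

grep . /sys/class/net/eth0/statistics/rx_*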

Have a great day,

Victor

-----------------------------------------------------------------------------------------------------------------------

Note: If this post answers your question, please click the Correct Answer button. Thank you!

-----------------------------------------------------------------------------------------------------------------------


2,191 Views
zhshuangp
Contributor II

root@imx6qsabresd:/sys/class/net/eth0/statistics# cat rx_bytes

669122

root@imx6qsabresd:/sys/class/net/eth0/statistics# cat rx_compressed

0

root@imx6qsabresd:/sys/class/net/eth0/statistics# cat rx_dropped   

0

root@imx6qsabresd:/sys/class/net/eth0/statistics# cat rx_errors 

412

root@imx6qsabresd:/sys/class/net/eth0/statistics# cat rx_fifo_errors

412

root@imx6qsabresd:/sys/class/net/eth0/statistics# cat rx_frame_errors

0

root@imx6qsabresd:/sys/class/net/eth0/statistics# cat rx_length_errors

0

root@imx6qsabresd:/sys/class/net/eth0/statistics# cat rx_missed_errors

0

root@imx6qsabresd:/sys/class/net/eth0/statistics# cat rx_over_errors  

0

root@imx6qsabresd:/sys/class/net/eth0/statistics# cat rx_packets    

811

root@imx6qsabresd:/sys/class/net/eth0/statistics# ifconfig

eth0      Link encap:Ethernet  HWaddr 3E:33:AD:B6:C7:23 

          inet addr:169.254.30.91  Bcast:169.254.255.255  Mask:255.255.0.0

          inet6 addr: fe80::3c33:adff:feb6:c723/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:823 errors:412 dropped:0 overruns:412 frame:0

          TX packets:89 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:673940 (658.1 KiB)  TX bytes:37274 (36.4 KiB)


2,191 Views
zhshuangp
Contributor II

In order to fix this bug, I have tried the following ideas, but none of them made a difference.

(1) Even if a packet is received correctly by the ENET module, I also discard it in the driver when its length is more than 100 bytes (please refer to the source code attached below), in order to verify that the root cause of this issue is not the CPU's processing capability.

debug.png

(2) Change the NAPI weight:

netif_napi_add(ndev, &fep->napi, fec_enet_rx_napi, NAPI_POLL_WEIGHT/4);

(3) As far as I know, there is erratum ERR004512, which may be related to this issue, but there is one point I do not understand. When the i.MX6 ENET is connected to the other side (the side that sends these UDP packets), the issue can be reproduced every time, but if the same sender is connected to a PC Ethernet port (RJ45), everything is fine, and the average speed we measure is about 160 Mbps. ERR004512 says the limit is about 470 Mbps, which is more than 160 Mbps, but does that figure refer to the peak rate rather than the average rate? Are those two concepts different here? Besides, if we reduce the UDP send rate from 160 Mbps to 28 Mbps, the issue can still be reproduced every time, so I guess it is not related to the UDP send rate; it may be related to our specific streaming mode, but what is the root cause? I have no idea how to figure it out.
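Regarding the erratum mentioned in (3): one mitigation that is sometimes suggested for RX FIFO overruns is enabling IEEE 802.3x flow control, so the MAC can send pause frames before the FIFO fills. Whether this helps depends on the link partner honoring pause frames, so treat the following only as a sketch of something to try:

# check the current pause-frame settings
ethtool -a eth0

# try enabling RX/TX pause frames (the link partner must support them too)
ethtool -A eth0 rx on tx on

If the overflow counter stops growing with pause frames enabled, that would point at short bursts exceeding what the ENET RX FIFO can absorb, which is the behavior ERR004512 describes.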


1,933 Views
Ryze
Contributor I

Hi zhshuangp:

Have you solved the problem yet? I am using an i.MX6Q and have the same problem. Can you give me some advice, please?
