Our kernel is 3.0.35 and our U-Boot is 2009.
I have referenced this thread https://boundarydevices.com/i-mx6-ethernet/ trying to improve throughput, but when I add "enable_wait_mode=off" from U-Boot, the kernel hangs and cannot boot to the filesystem.
My other questions are below.
Environment:
i5 notebook
i.MX6 board
Q1: I run iperf.exe -s -w 256K -i 1 -M 64 on the notebook (IP address 10.10.10.1)
and iperf -c 10.10.10.1 -i 1 -w 256K -M 64 on the i.MX6,
and we get
[ 3] 0.0- 1.0 sec 41.8 MBytes 350 Mbits/sec
[ 3] 1.0- 2.0 sec 38.5 MBytes 323 Mbits/sec
[ 3] 2.0- 3.0 sec 42.4 MBytes 355 Mbits/sec
[ 3] 3.0- 4.0 sec 49.0 MBytes 411 Mbits/sec
[ 3] 4.0- 5.0 sec 49.4 MBytes 414 Mbits/sec
[ 3] 5.0- 6.0 sec 50.2 MBytes 422 Mbits/sec
[ 3] 6.0- 7.0 sec 50.4 MBytes 423 Mbits/sec
[ 3] 7.0- 8.0 sec 44.1 MBytes 370 Mbits/sec
[ 3] 8.0- 9.0 sec 44.5 MBytes 373 Mbits/sec
[ 3] 9.0-10.0 sec 47.1 MBytes 395 Mbits/sec
[ 3] 0.0-10.0 sec 458 MBytes 384 Mbits/sec
We have checked the i.MX6 document; the throughput limit is 475 Mbps, and the actual measurement is close to 400 Mbps, so the result above makes sense.
But if I instead set -M 128:
iperf.exe -s -w 256K -i 1 -M 128 on the notebook
iperf -c 10.10.10.1 -i 1 -w 256K -M 128 on the i.MX6
I get:
WARNING: attempt to set TCP maximum segment size to 128, but got 536
------------------------------------------------------------
Client connecting to 10.10.10.1, TCP port 5001
TCP window size: 256 KByte (WARNING: requested 256 KByte)
------------------------------------------------------------
[ 3] local 10.10.10.2 port 47427 connected with 10.10.10.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 5.88 MBytes 49.3 Mbits/sec
[ 3] 1.0- 2.0 sec 6.38 MBytes 53.5 Mbits/sec
[ 3] 2.0- 3.0 sec 6.25 MBytes 52.4 Mbits/sec
[ 3] 3.0- 4.0 sec 6.38 MBytes 53.5 Mbits/sec
[ 3] 4.0- 5.0 sec 6.25 MBytes 52.4 Mbits/sec
[ 3] 5.0- 6.0 sec 6.25 MBytes 52.4 Mbits/sec
[ 3] 6.0- 7.0 sec 6.38 MBytes 53.5 Mbits/sec
[ 3] 7.0- 8.0 sec 6.25 MBytes 52.4 Mbits/sec
[ 3] 8.0- 9.0 sec 6.25 MBytes 52.4 Mbits/sec
[ 3] 9.0-10.0 sec 6.25 MBytes 52.4 Mbits/sec
[ 3] 0.0-10.0 sec 62.6 MBytes 52.5 Mbits/sec
Why is the result so different between -M 64 and -M 128? I also tried -M 256, -M 512, -M 1024, -M 1280, and -M 1518, and the throughput roughly doubles with each step.
Q2: Why is -M 64 so different from the others?
Thank you.
Hello Bernie,
Boundary Devices has their own Linux and U-Boot BSP for their board, so you may want to post on their community forum so they can share the changes they made to get better ENET performance. The only workaround the device itself has is the one documented in the errata file, which you can see here:
http://cache.freescale.com/files/32bit/doc/errata/IMX6DQCE.pdf