About the Gigabit Ethernet performance gap between Android and Linux

Document created by waterzhou on Nov 13, 2013

Version 1

    Gigabit Ethernet is one of the most attractive features of our i.MX6 platform, and many customers want to take full advantage of it.

Recently, however, many people have reported a large performance gap between running Android and running Linux. Let me explore the issue here.


    Same hardware, same kernel, different performance: why?

    In Linux, data throughput can reach 400 Mbps; in Android JB, it only gets to 200 Mbps.

    From the information below, we can see the gap is related to dropped frames.

     root@android:/ # busybox ifconfig eth0

     eth0      Link encap:Ethernet  HWaddr 00:04:9F:02:6C:E1

               inet addr:  Bcast:  Mask:

               inet6 addr: fe80::204:9fff:fe02:6ce1/64 Scope:Link

               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

               RX packets:7382672 errors:71828 dropped:789 overruns:71828 frame:71828

               TX packets:4147006 errors:0 dropped:0 overruns:0 carrier:0

               collisions:0 txqueuelen:1000

               RX bytes:2568845018 (2.3 GiB)  TX bytes:284789020 (271.5 MiB)
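The overruns figure above is the telling one: roughly 1% of inbound frames are lost, and even a sub-percent loss rate is enough to collapse TCP throughput because every loss triggers a retransmission and a congestion-window backoff. A quick back-of-the-envelope from the counters (a sketch; the exact accounting of ifconfig counters varies by driver):

```python
# Counters copied from the ifconfig output above.
rx_packets = 7382672   # successfully received frames
rx_errors  = 71828     # overruns: the RX FIFO filled before the CPU drained it
rx_dropped = 789       # frames dropped after reception

lost = rx_errors + rx_dropped
loss_rate = lost / (rx_packets + lost)
print(f"{loss_rate:.2%}")   # -> 0.97%
```

A ~1% loss rate on a Gigabit link easily explains throughput being cut in half.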

     In the TCP stack, there are three buffer settings involved in an iperf test: tcp_mem, tcp_rmem and tcp_wmem.

     Each of them is described by three values, which strongly influence the iperf result.

     In Linux, I took a snapshot of them:

     root@sabresd_6dq:/# cat /proc/sys/net/ipv4/tcp_mem

     18240      24320     36480

     root@sabresd_6dq:/# cat /proc/sys/net/ipv4/tcp_rmem

     4096       87380     778240

     root@sabresd_6dq:/# cat /proc/sys/net/ipv4/tcp_wmem

     4096       16384     778240


     In Android, I captured them for comparison:

     root@sabresd_6dq:/# cat /proc/sys/net/ipv4/tcp_mem

     9285      12380     18570

     root@sabresd_6dq:/# cat /proc/sys/net/ipv4/tcp_rmem

     4096       87380     396160

     root@sabresd_6dq:/# cat /proc/sys/net/ipv4/tcp_wmem

     4096       16384     396160


     The tcp_mem values define how the TCP stack behaves with respect to kernel memory management. The first value tells the kernel the low threshold.

The second tells the kernel at which point to start pressuring memory usage down. The third tells the kernel the maximum number of memory pages it may use.

If that limit is reached, TCP streams and packets start getting dropped until memory usage falls back to a safe level.

     In tcp_rmem, the first value defines the minimum receive buffer for each TCP connection and this buffer is always allocated to a TCP socket.The second one defines

the default receive buffer size. The third one specifies the maximum receive buffer that can be allocated for a TCP socket.

     tcp_wmem likewise uses three values to describe the TCP send buffer for each TCP socket.
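The per-socket defaults can also be observed from user space: on Linux, a freshly created TCP socket starts with its receive and send buffers set to tcp_rmem[1] and tcp_wmem[1]. A minimal sketch, assuming a Linux host:

```python
import socket

def tcp_buffer_defaults():
    """Return (socket value, sysctl default) pairs for the receive and send buffers."""
    # Second field of each sysctl file is the per-socket default.
    with open("/proc/sys/net/ipv4/tcp_rmem") as f:
        rmem_default = int(f.read().split()[1])
    with open("/proc/sys/net/ipv4/tcp_wmem") as f:
        wmem_default = int(f.read().split()[1])
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
        sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
    finally:
        s.close()
    return (rcvbuf, rmem_default), (sndbuf, wmem_default)

print(tcp_buffer_defaults())
```

The kernel then auto-tunes each socket's buffers between the min and max values as the connection runs.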


     We can check where these values come from in the kernel code. The algorithm is in kernel_imx/net/ipv4/sysctl_net_ipv4.c, around line 450.


    limit = nr_free_buffer_pages() / 8;

    limit = max(limit, 128UL);

    sysctl_tcp_mem[0] = limit / 4 * 3;

    sysctl_tcp_mem[1] = limit;

    sysctl_tcp_mem[2] = sysctl_tcp_mem[0] * 2;

    /* Set per-socket limits to no more than 1/128 the pressure threshold */

    limit = ((unsigned long)sysctl_tcp_mem[1]) << (PAGE_SHIFT - 7);

    max_wshare = min(4UL*1024*1024, limit);

    max_rshare = min(6UL*1024*1024, limit);

    sysctl_tcp_wmem[0] = SK_MEM_QUANTUM;

    sysctl_tcp_wmem[1] = 16*1024;

    sysctl_tcp_wmem[2] = max(64*1024, max_wshare);

    sysctl_tcp_rmem[0] = SK_MEM_QUANTUM;

    sysctl_tcp_rmem[1] = 87380;

    sysctl_tcp_rmem[2] = max(87380, max_rshare);
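To see exactly how the two snapshots above fall out of this algorithm, here is a small sketch (Python, for illustration only) that replays the computation for a given `limit`, i.e. nr_free_buffer_pages() / 8, taking PAGE_SHIFT = 12 and SK_MEM_QUANTUM = 4096 as on i.MX6:

```python
def tcp_sysctl_defaults(limit, page_shift=12):
    """Replay the kernel's boot-time computation of tcp_mem, tcp_rmem, tcp_wmem.

    limit is nr_free_buffer_pages() / 8, in pages, assumed already >= 128.
    """
    SK_MEM_QUANTUM = 4096  # one page on i.MX6
    # tcp_mem is in pages: low threshold, pressure threshold, hard maximum.
    tcp_mem = [limit // 4 * 3, limit, (limit // 4 * 3) * 2]
    # Per-socket limits: no more than 1/128 of the pressure threshold, in bytes.
    byte_limit = tcp_mem[1] << (page_shift - 7)
    max_wshare = min(4 * 1024 * 1024, byte_limit)
    max_rshare = min(6 * 1024 * 1024, byte_limit)
    tcp_wmem = [SK_MEM_QUANTUM, 16 * 1024, max(64 * 1024, max_wshare)]
    tcp_rmem = [SK_MEM_QUANTUM, 87380, max(87380, max_rshare)]
    return tcp_mem, tcp_rmem, tcp_wmem

print(tcp_sysctl_defaults(24320))  # limit observed in the Linux snapshot
print(tcp_sysctl_defaults(12380))  # limit observed in the Android snapshot
```

With limit = 24320 this reproduces the Linux snapshot exactly (tcp_mem = 18240/24320/36480, rmem/wmem max = 778240), and with limit = 12380 the Android one (9285/12380/18570, max = 396160): the entire difference is driven by how much free RAM each OS leaves at boot.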


     From the above algorithm, we can see that tcp_mem, tcp_wmem[2] and tcp_rmem[2] are all derived from nr_free_buffer_pages(), which is the

amount of free RAM allocatable within ZONE_DMA and ZONE_NORMAL.

     So here we find the root cause of the performance gap between Android and Linux: there is a big difference in free RAM when running the two operating systems.

In fact, in Android, Google has introduced a mechanism to tune these values through system properties.

We are currently using AOSP's default values; you can find them in device/fsl/imx6/etc/init.rc. Wi-Fi and Ethernet both use net.tcp.buffersize.wifi.

# Define TCP buffer sizes for various networks

#   ReadMin, ReadInitial, ReadMax, WriteMin, WriteInitial, WriteMax,

    setprop net.tcp.buffersize.default 4096,87380,110208,4096,16384,110208

    setprop net.tcp.buffersize.wifi    524288,1048576,2097152,262144,524288,1048576

    setprop net.tcp.buffersize.lte     524288,1048576,2097152,262144,524288,1048576

    setprop net.tcp.buffersize.umts    4094,87380,110208,4096,16384,110208

    setprop net.tcp.buffersize.hspa    4094,87380,262144,4096,16384,262144

    setprop net.tcp.buffersize.hsupa   4094,87380,262144,4096,16384,262144

    setprop net.tcp.buffersize.hsdpa   4094,87380,262144,4096,16384,262144

    setprop net.tcp.buffersize.hspap   4094,87380,1220608,4096,16384,1220608

    setprop net.tcp.buffersize.edge    4093,26280,35040,4096,16384,35040

    setprop net.tcp.buffersize.gprs    4092,8760,11680,4096,8760,11680

    setprop net.tcp.buffersize.evdo    4094,87380,262144,4096,16384,262144
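Each property value is six comma-separated integers in the order given by the comment above: read min/initial/max, then write min/initial/max. A hypothetical helper (not part of AOSP) to make the layout explicit:

```python
def parse_tcp_buffersize(prop):
    """Split an Android net.tcp.buffersize.* property value into rmem/wmem triples."""
    vals = [int(v) for v in prop.split(",")]
    assert len(vals) == 6, "expected ReadMin,ReadInitial,ReadMax,WriteMin,WriteInitial,WriteMax"
    rmem, wmem = vals[:3], vals[3:]   # receive triple, then send triple
    return rmem, wmem

# The wifi entry above, which Ethernet also uses:
rmem, wmem = parse_tcp_buffersize("524288,1048576,2097152,262144,524288,1048576")
print(rmem, wmem)
```

Note that the triples map onto tcp_rmem and tcp_wmem, but there is no property that touches tcp_mem.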


I tried changing the above values but unfortunately saw no obvious improvement. Why?

That raises another question: how do tcp_mem and tcp_rmem work together in the kernel? In Android, we only have a way to tune the tcp_rmem and tcp_wmem settings, not tcp_mem.

Take "iperf -c" as an example: the receive buffers fill up at the rate the Gigabit Ethernet link delivers data, and that memory is accounted against tcp_mem.

If tcp_mem is small, the pressure threshold is hit more often, and once the hard maximum is exceeded the kernel starts dropping frames, which triggers TCP retransmissions. In the end, performance degrades. It is like surfing a Gigabit link with a low-end notebook:

you still can't enjoy the full performance of Gigabit Ethernet.

Why does the kernel calculate tcp_mem this way in ipv4? Probably to balance a single high-bandwidth connection against many concurrent connections.

You can imagine that if we hard-coded tcp_mem to a large fixed value, the board might start denying connections because memory allocation fails

during TCP initialization. Here are several methods to improve our Android Ethernet performance.

  • Enlarge the memory size in the board design phase.

     I have double-checked this on our SabreAuto board, which has 2 GB of memory: its download speed reaches 270 Mbps, about 50 Mbps more than Sabresd.

  • Use an older Android version; if ICS is an option, you can abandon JB4.3. Compared with newer Android versions, older ones consume less memory, leaving more free memory available. Using ICS we can reach 380 Mbps downloading, while JB4.3 only gets to 210 Mbps.
  • If you are using Sabresd's Gigabit Ethernet for a critical use case, you can trade off against other memory consumers such as the GPU. I have verified that if we disable the GPU, performance reaches 340 Mbps in JB4.3, about a 50% improvement.
  • Change the tcp_mem algorithm to enlarge its maximum threshold. For example, change "sysctl_tcp_mem[2] = sysctl_tcp_mem[0] * 2" to "sysctl_tcp_mem[2] = sysctl_tcp_mem[0] * 3" above; you will see there is no frame dropping any more. Or you can refer to How To: Network / TCP / UDP Tuning to hard-code it. But as its author notes, this is not recommended for systems that support multiple users or multiple connections, because it may cause the board to deny connections due to failed memory allocation.
  • Tune tcp_rmem and tcp_wmem through the following patches in Android; you will get 320 Mbps in both directions. But if you use the ifconfig tool to set a static IP, these parameters will not be applied, because AOSP's framework currently only supports DHCP. In that case, you can manually echo the parameters in the console before testing.

               Gerrit Code Review

  • Change the kernel's scheduler policy config.

     Disabling CONFIG_FAIR_GROUP_SCHED and enabling only CONFIG_RT_GROUP_SCHED contributes some further enhancement.

     With the above changes, tested on Sabresd RevC1 with Android 4.3 GA, the speed in both directions can reach 390~400 Mbps.
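For the static-IP case mentioned above, the values can be echoed by hand before the test. A device-side fragment, run as root on the board; the numbers here are only an example, taken from the net.tcp.buffersize.wifi property:

```shell
# Run as root on the board before starting iperf.
# Order in each file is: min  default  max.
echo "524288 1048576 2097152" > /proc/sys/net/ipv4/tcp_rmem
echo "262144 524288 1048576"  > /proc/sys/net/ipv4/tcp_wmem
cat /proc/sys/net/ipv4/tcp_rmem   # verify the new values took effect
```

These settings do not survive a reboot, so they must be reapplied each time before testing.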
