Sabre Lite iMX6x Gigabit ethernet throughput problem


shaileshl
Contributor I

We are currently evaluating the i.MX6 Sabre Lite platform with the default Timesys kernel and the BSP provided by Boundary Devices.

We are using a TCP/IP client-server program to measure the throughput of the Gigabit Ethernet interface.

We are measuring the throughput over a direct one-to-one link between the Sabre Lite platform and a Windows 7 PC using a crossover cable. The link is established at 1000 Mbps (as shown on the Windows PC and in the Linux status message), but the throughput turns out to be very low.

Sending about 1.5 MB of data to the Windows PC takes around 125 ms (measured with gettimeofday() around the send() socket call), which works out to only about 100 Mbps (1.5 MB × 8 bits ÷ 0.125 s ≈ 96 Mbps). This is very poor throughput considering the Gigabit interface and the processing power of the quad-core i.MX6Q.
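For reference, below is a simplified sketch of how we time the transfer. It is not our exact test code; the chunk size and buffer handling are illustrative. Note also that send() returns once the data has been copied into the kernel socket buffer, so timing it this way may not reflect the wire rate exactly.

/* Simplified sketch of the throughput measurement (illustrative, not our
 * exact test program). send() returns as soon as the data is copied into
 * the kernel socket buffer, so the measured time may not match the wire
 * rate exactly. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>

#define CHUNK (64 * 1024)       /* bytes per send() call (assumed)  */
#define TOTAL (1536 * 1024)     /* ~1.5 MB, as in the test above    */

double send_timed(int sockfd)
{
    static char buf[CHUNK];
    struct timeval t0, t1;
    size_t sent = 0;

    memset(buf, 0xA5, sizeof(buf));
    gettimeofday(&t0, NULL);
    while (sent < TOTAL) {
        ssize_t n = send(sockfd, buf, CHUNK, 0);
        if (n <= 0) {
            perror("send");
            break;
        }
        sent += (size_t)n;
    }
    gettimeofday(&t1, NULL);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    double mbps = (sent * 8.0) / (secs * 1e6);
    printf("sent %zu bytes in %.3f s -> %.1f Mbit/s\n", sent, secs, mbps);
    return mbps;
}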

Freescale does not seem to have provided any performance benchmarks for the iMX6Q peripherals.

Has anyone been able to get much better Ethernet throughput on the i.MX6Q platform? Is there anything we can do to improve the throughput by modifying Ethernet interface parameters or kernel options? Are there any patches available for a higher-throughput Freescale Ethernet driver?

Please provide suggestions for improving the throughput.


FranciscoCarril
Contributor V

Please take a look at the Chip Errata:  IMX6DQCE

ERR004512 ENET: 1 Gb Ethernet MAC (ENET) system limitation

Description:
The theoretical maximum performance of 1 Gbps ENET is limited to 470 Mbps (total for Tx and Rx). The actual measured performance in optimized environment is up to 400 Mbps.

Projected Impact:
Minor. Limitation of ENET throughput to around 400 Mbps. ENET remains fully compatible to 1Gb standard in terms of protocol and physical signaling.

Workarounds:
No workaround.

Proposed Solution:
No fix scheduled.

Linux BSP Status:
No software workaround available.

shaileshl
Contributor I

Thanks for pointing to some helpful information.

But as mentioned, the measured performance is up to 400 Mbps, which I understand must be the physical-layer throughput. Even if we take the TCP/IP protocol overhead to be 25 to 30%, we should still be getting around 300 Mbps at the Linux application level (roughly 400 Mbps × 0.70-0.75). Does that make sense?

Any idea what is limiting the throughput to 100 Mbps?

What needs to be done to increase the throughput to nearer 300 Mbps?

Are any modifications required in the FEC driver to increase the throughput?

Currently I am using the default MTU setting; would the throughput increase if we changed the MTU or any other parameter?
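Apart from the MTU, one thing we are considering is enlarging the socket send buffer before the transfer. A rough sketch of what we have in mind is below; the 512 KB value is only a guess for experimentation, not something taken from the BSP, and whether it helps will depend on the kernel defaults and the TCP window actually negotiated.

/* Rough sketch: enlarging the socket send buffer before the transfer.
 * The 512 KB value is an assumed test value, not a recommendation from
 * the BSP; the kernel may clamp or double the requested size. */
#include <stdio.h>
#include <sys/socket.h>

int tune_sndbuf(int sockfd)
{
    int sndbuf = 512 * 1024;    /* assumed test value */

    if (setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF,
                   &sndbuf, sizeof(sndbuf)) < 0) {
        perror("setsockopt(SO_SNDBUF)");
        return -1;
    }

    socklen_t len = sizeof(sndbuf);
    if (getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &sndbuf, &len) == 0)
        printf("effective SO_SNDBUF: %d bytes\n", sndbuf);
    return 0;
}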


In my kernel configuration, only the FEC Ethernet controller and NAPI support are enabled. As far as I understand, whether "Enable FEC 1588 timestamping" is enabled or disabled should not matter.


Can you provide any more pointers or analysis in this regard, either from your own work or from Freescale?


Any help in this regard would be highly appreciated.


FranciscoCarril
Contributor V


Could you test with this recommendation?


1. If a customer only gets about 200 Mbps bandwidth on the i.MX6 platform, DVFS is probably enabled.

If DVFS is enabled, it will affect the performance of many modules, such as SATA, MMC card, USB device, and Ethernet.

You can test the Ethernet performance with DVFS disabled: echo 0 > /sys/devices/platform/imx_dvfscore.0/enable


shaileshl
Contributor I

I checked and found that DVFS is disabled by default at boot-up.

One more thing I did was to use iperf to check the throughput. With iperf, the throughput reached a maximum of 300 Mbps, which indicates that the issue is not with the Ethernet driver or the Ethernet interface settings, but with something in our client-server application.

We are now analysing the client-server application.
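One hypothesis we want to rule out is that the application is doing many small writes and Nagle's algorithm is throttling that pattern. A rough sketch of disabling it for a test run is below; this is only an assumption about our code, not a confirmed cause.

/* Rough sketch: disabling Nagle's algorithm for a test run. This is only
 * one hypothesis about why the application is slower than iperf; the
 * socket handling here is illustrative, not our actual code. */
#include <stdio.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int disable_nagle(int sockfd)
{
    int one = 1;

    if (setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                   &one, sizeof(one)) < 0) {
        perror("setsockopt(TCP_NODELAY)");
        return -1;
    }
    return 0;
}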

Thanks for the support.
