imx6q: PACKET_MMAP Performance?


2,214 Views
pev
Contributor I

Hi all,

I've got a board based on the i.MX6Q - it's running the 3.0.35 kernel (via Yocto) and is generally pretty stable. My code fires out a lot of UDP packets via a gigabit Ethernet interface. Using a simple standard socket / sendto() TX-only test loop I can get a reasonable 490 Mbit/s throughput on the wire before the CPU load tops out. Top out it does, though, and after a quick dig with perf I get the feeling that it's mostly the copying and context switching while sending the packets that causes this. I'm pretty sure the Ethernet driver itself is fine, as it's the standard e1000e driver, and I can also easily saturate the gigabit link running iperf multithreaded.

So, I was planning on converting my code to use PACKET_MMAP to see what can be achieved as this seems to work well on other platforms. Doing a quick and dirty test using packet_mmap.c (from Linux packet mmap - IwzWiki) I top out at around 265Mbit/s before the CPU maxes out which is pretty bad. Note that the same version of the code compiled on my x86 box behaves exactly as expected. If I run  "perf top" on the target I see that about 60-70% of the CPU time is spent in v7_flush_kern_dcache_area which doesn't seem right. Has anyone any experience doing similar on any of the imx6 boards / kernels? Unfortunately I don't have the dev kit so cant easily test on a later kernel! (if anyone fancies spending 10 mins replicating on their board it would be appreciated!)

Cheers!

Labels (4)
0 Kudos
Reply
3 Replies

1,251 Views
MarekVasut
Senior Contributor I

Do you know of erratum ERR004512?

ERR004512
ENET: 1 Gb Ethernet MAC (ENET) system limitation

Description:
The theoretical maximum performance of the 1 Gbps ENET is limited to 470 Mbps (total for Tx and Rx). The actual measured performance in an optimized environment is up to 400 Mbps.

Projected Impact:
Minor. Limitation of ENET throughput to around 400 Mbps. ENET remains fully compatible with the 1 Gb standard in terms of protocol and physical signaling. If the TX and RX peak data rate is higher than 400 Mbps, there is a risk of ENET RX FIFO overrun.

Workarounds:
There is no workaround for the throughput limitation. To prevent overrun of the ENET RX FIFO, enable pause frames.

Proposed Solution:
No fix scheduled

Linux BSP Status:
No software workaround available

0 Kudos
Reply

1,251 Views
pev
Contributor I

Hi Marek,

As I mentioned, on our custom board I can saturate the gigabit link using iperf multithreaded (> 700 Mbit/s). As I understand it, that erratum is for the Sabre boards? I'm guessing that with a 470 Mbit/s cutoff they're using a USB2-based controller rather than PCIe, right?

0 Kudos
Reply

1,251 Views
MarekVasut
Senior Contributor I

The erratum applies to all i.MX6 parts, but only to their on-chip gigabit FEC (ENET) interface. Any PCIe-connected Ethernet interface is NOT affected by it.

0 Kudos
Reply