iMX8QM 10G ethernet performance issues

martin_lovric
Contributor II

Hello everyone,

I'm having an issue with 10G Ethernet performance on the Toradex Apalis iMX8QM SoM.

We have the SoM and an AQVC107 10G Ethernet controller on our custom carrier board. The SoM interfaces with the controller via a PCIe x2 link and runs a custom Yocto Linux distribution based on Boot2Qt, with kernel version 4.14.

The device is connected to an Nvidia Pegasus system over 10G Ethernet, and I used iperf3 to test the bandwidth. On the device I ran the iperf3 server, and on the Nvidia system the iperf3 client in UDP mode. With this setup I achieved about ~1.6 Gbits/sec with ~38% receiver packet loss.

If I reverse the roles, so that the Nvidia system runs the server and the device is the client, I get better results: about ~2 Gbits/sec with ~0.028% receiver packet loss.

I also tried running the iperf3 client on the Nvidia system with the -w 32k parameter and managed to get ~1.2 Gbits/sec with less than 0.5% packet loss on the device.
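For reference, the invocations were along these lines (the UDP target bandwidth and test duration shown here are placeholders, not the exact values used):

  # on the Apalis iMX8QM (server)
  iperf3 -s

  # on the Nvidia Pegasus (client), UDP towards the device
  iperf3 -c <device-ip> -u -b 10G -t 30

  # the variant with the reduced socket buffer mentioned above
  iperf3 -c <device-ip> -u -b 10G -w 32k -t 30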

Could you help me figure out what is causing these huge packet losses when the device (iMX8QM) runs the iperf3 server? We have checked the HW design and ruled it out as a possible cause; you can find the schematics of the PCIe connection to the controller in the attachment.

Could the problem be in the PCIe driver? Is there perhaps some configuration I'm not aware of?
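For what it is worth, the negotiated link speed and width of the controller can be read with lspci (the bus address below is only an example):

  lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'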

 

BR!


5 Replies
Yuri
NXP Employee

@martin_lovric 
Hello,

   please try to use iperf2 instead of iperf3.

  https://community.nxp.com/t5/i-MX-Processors/TCP-Network-performace-issues/m-p/876395
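For example (the -P 4 parallel-streams option is only an illustration; unlike iperf3, iperf2 handles parallel streams in separate threads):

  # on the device (server)
  iperf -s

  # on the Nvidia side (client), TCP, 4 parallel streams
  iperf -c <device-ip> -P 4 -t 30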

 

Regards,
Yuri.

martin_lovric
Contributor II

Hello Yuri,

Thanks for this advice. With iperf 2.0.10 I managed to run the test over TCP, but the maximum speed I achieved was ~1.8 Gbits/sec:

Screenshot from 2021-04-13 13-59-46.png

 

I've noticed that while the iperf server is handling the incoming connection, ksoftirqd/0 has high CPU usage:

ksoftirqd.png

 

Is this normal behavior, and could it cause issues with the 10G Ethernet interface?

Also, do you have any idea what could be limiting the 10G Ethernet interface to only ~1.8 Gbits/sec?
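The per-CPU distribution of the receive work can be checked with, for example:

  # which CPU the NIC interrupts land on
  cat /proc/interrupts

  # per-CPU NET_RX softirq counts
  grep NET_RX /proc/softirqs

  # which core each ksoftirqd thread runs on and its CPU usage
  ps -eo pid,psr,pcpu,comm | grep ksoftirqd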

Yuri
NXP Employee

@martin_lovric 
Hello,

  I think your result is not so bad, taking into account i.MX8 PCIe performance.

https://community.nxp.com/t5/i-MX-Processors/PCIe-Bandwidth/m-p/1063269

 

Regards,
Yuri.

martin_lovric
Contributor II

But a Gen3 PCIe x2 link should be able to reach 8 Gbits/sec of bandwidth, am I correct? Unless there is a limitation on the iMX8QM side that I'm not aware of.
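For reference, the raw line-rate arithmetic (before TLP/DLLP framing and DMA overhead, so achievable throughput is lower):

  Gen3: 8 GT/s per lane, 128b/130b encoding -> ~7.9 Gbit/s per lane -> ~15.8 Gbit/s for x2
  Gen2: 5 GT/s per lane, 8b/10b encoding    ->  4.0 Gbit/s per lane ->  8.0 Gbit/s for x2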

During the UDP test, the kernel thread "ksoftirqd/0" also had high CPU usage on core 0, and I noticed increased UDP packet loss when I tried to run a CPU-intensive application during the test.

Could the CPU be the bottleneck for the PCIe performance? If it is, how can we solve this?
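For example, would something along these lines be the right direction, i.e. moving the NIC interrupt off core 0 or spreading receive processing with RPS (the IRQ number and interface name below are only placeholders)?

  # move the NIC's IRQ to CPU1 (hex CPU mask)
  echo 2 > /proc/irq/<irq-number>/smp_affinity

  # let RPS distribute receive processing over CPUs 0-3
  echo f > /sys/class/net/<interface>/queues/rx-0/rps_cpus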

Yuri
NXP Employee (accepted solution)

@martin_lovric 
Hello,

   below are my comments:

 1) Now we have only the following i.MX8 PCIe performance estimations.

 https://www.nxp.com/webapp/Download?colCode=AN13164

 

2) i.MX8QM system performance may also be restricted by memory bandwidth, but no estimation for that is provided.

Regards,
Yuri.