iMX8QM 10G ethernet performance issues



martin_lovric
Contributor II

Hello everyone,

I'm having an issue with 10G Ethernet performance on the Toradex Apalis iMX8QM SoM.

We have the SoM and an AQVC107 10G Ethernet controller on our custom carrier board. The SoM interfaces with the controller via a PCIe x2 link and runs a custom Yocto Linux distribution based on Boot2Qt, with kernel version 4.14.

The device is connected to an NVIDIA Pegasus system over 10G Ethernet. I used iperf3 to test the bandwidth: the device ran the iperf3 server, and the NVIDIA system ran the iperf3 client in UDP mode. With this setup I achieved ~1.6 Gbit/s with ~38% receiver packet loss.
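For completeness, here is a sketch of the invocations behind those numbers. The device IP, offered load, and duration are placeholders I made up, not values from this post; the script only prints the commands so they can be reviewed first.

```shell
#!/bin/sh
# Hypothetical reproduction of the UDP test setup described above.
# DEVICE_IP, the -b offered load, and the -t duration are placeholders.
DEVICE_IP=192.168.1.10
echo "on the iMX8QM device : iperf3 -s"
echo "on the NVIDIA client : iperf3 -c $DEVICE_IP -u -b 10G -t 30"
```

The -w 32k variant simply adds that flag to the client invocation.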

If I reverse the roles, so that the NVIDIA system runs the server and the device is the client, I get better results: ~2 Gbit/s with ~0.028% receiver packet loss.

Running the iperf3 client on the NVIDIA system with the -w 32k parameter, I managed to get ~1.2 Gbit/s with less than 0.5% packet loss on the device.

Could you help me figure out what is causing these huge packet losses when the device (iMX8QM) runs the iperf3 server? We've checked the HW design and ruled it out as a possible cause; you can find the schematics of the PCIe connection to the controller in the attachment.

Could the problem be in the PCIe driver? Is there perhaps some configuration I'm not aware of?

 

BR!

1 Solution
Yuri
NXP Employee

@martin_lovric 
Hello,

Below are my comments:

1) Currently we have only the following i.MX8 PCIe performance estimations:

https://www.nxp.com/webapp/Download?colCode=AN13164

2) i.MX8QM system performance may also be restricted by memory bandwidth, but an estimation for that is not provided.

Regards,
Yuri.

 


5 Replies
Yuri
NXP Employee

@martin_lovric 
Hello,

Please try using iperf2 instead of iperf3:

  https://community.nxp.com/t5/i-MX-Processors/TCP-Network-performace-issues/m-p/876395

 

Regards,
Yuri.

martin_lovric
Contributor II

Hello Yuri,

Thanks for the advice. With iperf 2.0.10 I managed to run a test using TCP, but the maximum speed I achieved was ~1.8 Gbit/s:

Screenshot from 2021-04-13 13-59-46.png

 

I've noticed that while the iperf server is handling an incoming connection, ksoftirqd/0 shows high CPU usage:

ksoftirqd.png

 

Is this normal behavior, and could it cause issues with the 10G Ethernet interface?

Also, do you have any idea why the 10G Ethernet interface reaches only ~1.8 Gbit/s?
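To see whether all receive processing really lands on core 0, the per-CPU softirq and interrupt counters can be inspected. The paths are standard Linux procfs; the "eth" name match is an assumption about the board's interface naming.

```shell
#!/bin/sh
# Per-CPU NET_RX softirq counts: if a single column dominates, all
# network receive processing is funneled through one core.
grep -E 'CPU|NET_RX' /proc/softirqs
# How the NIC's interrupt lines are spread across CPUs ("eth" is an
# assumed interface-name match; adjust for your board):
grep -i eth /proc/interrupts || true
```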

Yuri
NXP Employee

@martin_lovric 
Hello,

I think your result is not so bad, taking into account i.MX8 PCIe performance:

https://community.nxp.com/t5/i-MX-Processors/PCIe-Bandwidth/m-p/1063269

 

Regards,
Yuri.

martin_lovric
Contributor II

But a PCIe x2 link should be able to reach about 8 Gbit/s of raw bandwidth even at Gen2, and roughly twice that at Gen3. Am I correct? Unless there is a limitation on the iMX8QM side I'm not aware of.
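As a sanity check on that expectation, here is my own back-of-envelope calculation (not from NXP documentation) of raw PCIe bandwidth per direction. Note that ~8 Gbit/s corresponds to a Gen2 x2 link, while Gen3 x2 is roughly double that, before TLP/DLLP protocol overhead takes its further cut.

```shell
#!/bin/sh
# Raw PCIe bandwidth per direction, in Mbit/s, ignoring protocol
# overhead (TLP headers, DLLPs), which reduces usable throughput more.
LANES=2
GEN2_PER_LANE=4000                  # 5 GT/s with 8b/10b encoding
GEN3_PER_LANE=$((8000 * 128 / 130)) # 8 GT/s with 128b/130b encoding
echo "Gen2 x${LANES}: $((GEN2_PER_LANE * LANES)) Mbit/s raw"
echo "Gen3 x${LANES}: $((GEN3_PER_LANE * LANES)) Mbit/s raw"
```

The speed and width the link actually trained at can be read from the LnkSta line of `lspci -vv` on the device.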

During the UDP test the kernel thread ksoftirqd/0 also had high CPU usage on core 0, and I noticed increased UDP packet loss when I ran a CPU-intensive application alongside the test.

Could the CPU be the bottleneck for PCIe performance? If so, how can we solve it?
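One common mitigation sketch, assuming the NIC driver exposes the standard sysfs receive queues: enable Receive Packet Steering (RPS) so that receive processing can run on several cores instead of only core 0. The interface name and CPU mask are assumptions; adjust them for your board.

```shell
#!/bin/sh
# Allow CPUs 0-3 (hex mask f) to run receive packet processing for the
# interface. IFACE and MASK are assumptions -- adjust for your board.
IFACE="${IFACE:-eth0}"
MASK=f
for q in /sys/class/net/"$IFACE"/queues/rx-*/rps_cpus; do
    [ -w "$q" ] || continue   # skip silently if the queue file is absent
    echo "$MASK" > "$q"
    echo "enabled RPS mask $MASK on $q"
done
```

Pinning the controller's MSI interrupts to other cores via /proc/irq/&lt;n&gt;/smp_affinity is the complementary knob; whether either helps depends on how the driver sets up its queues.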
