Check the performance of the pfe0 interface

Jeff-CF-Huang
Contributor II

Hi Sir,

The bitrate of the pfe0 interface should be around 900 Mbit/sec, but it was approximately 650 Mbit/sec.

(screenshot JeffCFHuang_0-1729587368855.png: iperf3 result showing roughly 650 Mbit/sec on pfe0)

How can I identify the bottleneck when transferring data through the pfe0 interface?

Best regards,

Jeff

1 Solution
chenyin_h
NXP Employee

Hello, @Jeff-CF-Huang 

Thanks for your reply.

From my experience, first check the interrupts used in your test; in this specific test they are PFE0 and DMA. Find the corresponding IRQ numbers and check whether affinity is supported under /proc/irq/XXX/smp_affinity (a quick sketch of these checks follows the link). You may refer to the following link for further reading:

https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/7/html/tuning_guide/...
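
As a minimal sketch of those checks (the "pfe"/"dma" patterns and the IRQ number 75 below are only assumptions; use whatever names and numbers /proc/interrupts actually reports on your BSP):

# 1) Find the IRQ numbers and their per-CPU counters.
grep -iE 'pfe|dma' /proc/interrupts

# 2) For a given IRQ number (75 is a placeholder), check the current affinity mask.
cat /proc/irq/75/smp_affinity

# 3) Optionally pin it to one core, e.g. CPU2 (bitmask 0x4); this only takes effect
#    if the interrupt controller supports changing the affinity.
echo 4 > /proc/irq/75/smp_affinity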

From my test, without any tuning the iperf test reaches the 1G line rate after 5-20 seconds thanks to irqbalance. If there are multiple tasks in the system, keeping irqbalance may still be the better choice, since IRQ binding can impact other tasks running in the system.

 

BR

Chenyin

7 Replies
chenyin_h
NXP Employee

Hello, @Jeff-CF-Huang 

Thanks for your reply.

I have reproduced the issue with your script after running it several times; the throughput may drop to 700-800 Mbps for 5-20 seconds during a 100-second test.

I looked into the issue. In my opinion, it is caused by the heavy IRQ-handling load during the test: the A53 cores are not very powerful, and an iperf benchmark already generates a huge number of interrupts on one core. Once the ADC is enabled there are many additional interrupts, and if all of them are handled on a single core, the benchmark result can be affected. Since irqbalance is enabled by default, after several seconds the interrupts get balanced to other cores, the load on the core that was limiting TCP performance drops, and the rest of the test can again reach the line rate.
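
One possible way to confirm this (a sketch; the "pfe" and "adc" names in /proc/interrupts are assumptions and may differ on your BSP) is to watch how the per-CPU interrupt counters grow while iperf3 and the ADC capture are running:

# Per-CPU interrupt counters for the PFE and ADC lines, refreshed every second.
watch -n 1 "grep -iE 'pfe|adc' /proc/interrupts"

# Per-core CPU utilisation, including softirq time, if sysstat is installed.
mpstat -P ALL 1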

If you are worried that network throughput will be impacted, I suggest binding the corresponding IRQs (PFE0) to a dedicated core so that they are isolated from the rest of the system load, for example as sketched below.
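
A rough sketch of such a binding, assuming the PFE0 interrupt turned out to be IRQ 75 (a placeholder number; take the real one from /proc/interrupts):

# Pin the PFE0 IRQ to CPU3 only (bitmask 0x8).
echo 8 > /proc/irq/75/smp_affinity

# If irqbalance should keep managing everything else, restart it with this IRQ
# excluded (its --banirq option) so it does not move the line again.
irqbalance --banirq=75

# Optionally keep iperf3 itself off that core, e.g. run it on CPUs 0-2.
taskset -c 0-2 iperf3 -c 192.168.1.20 -t 100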

 

BR

Chenyin

Jeff-CF-Huang
Contributor II

Hi Chenyin,

Thank you for your suggestion.

Could you explain how to bind hardware resources to a specific core or group of cores?

Best regards,

Jeff Huang

chenyin_h
NXP Employee

Hello, @Jeff-CF-Huang 

Sorry, I could not reproduce the issue on my side; the logs are attached for your reference.

May I know whether any additional settings are needed to trigger this issue?

 

BR

Chenyin

Jeff-CF-Huang
Contributor II

Hi Chenyin,

Thanks for your reply.

After checking, the issue is not easily reproducible.

I’ve tried the following commands, which are the most likely to trigger the issue.

#!/bin/bash
# Enable ADC channel 4 (in_voltage4) on both IIO ADC devices, give each a
# 4096-sample buffer, and start buffered capture.
echo 1 > /sys/bus/iio/devices/iio:device1/scan_elements/in_voltage4_en
echo 4096 > /sys/bus/iio/devices/iio:device1/buffer/length
echo 1 > /sys/bus/iio/devices/iio:device1/buffer/enable

echo 1 > /sys/bus/iio/devices/iio:device0/scan_elements/in_voltage4_en
echo 4096 > /sys/bus/iio/devices/iio:device0/buffer/length
echo 1 > /sys/bus/iio/devices/iio:device0/buffer/enable

# Toggle each capture buffer once (disable, then re-enable).
echo 0 > /sys/bus/iio/devices/iio:device1/buffer/enable
echo 1 > /sys/bus/iio/devices/iio:device1/buffer/enable

echo 0 > /sys/bus/iio/devices/iio:device0/buffer/enable
echo 1 > /sys/bus/iio/devices/iio:device0/buffer/enable

# Run the TCP benchmark against the iperf3 server for 100 seconds.
iperf3 -c 192.168.1.20 -t 100

 

Best regards,

Jeff Huang

chenyin_h
NXP Employee

Hello, @Jeff-CF-Huang 

Thanks for your post.

May I know whether you are working with the S32G2 or the S32G3? Which BSP version are you using for the performance benchmark? Could you please share more information?

I just tested TCP performance on the RDB2 with BSP42, and it could reach the line rate of the 1G port.

 

BR

Chenyin

Jeff-CF-Huang
Contributor II

We are working with the S32G399 and BSP40, and using iperf3 as the client.
If we don't enable the ADC, the rate can reach approximately 950 Mbit/sec.

(screenshot JeffCFHuang_0-1729676409869.png: iperf3 result showing roughly 950 Mbit/sec with the ADC disabled)

The commands below show how to enable the ADC:

echo 1 > /sys/bus/iio/devices/iio:device1/scan_elements/in_voltage4_en
echo 4096 > /sys/bus/iio/devices/iio:device1/buffer/length
echo 1 > /sys/bus/iio/devices/iio:device1/buffer/enable

 
