UDP packet order on LS1046

zwg132883
Contributor III

Hi,

My CPU is the LS1046, and all the Ethernet ports run on DPAA1. When MAC10 receives UDP packets, there are some out-of-order packets which can't be corrected.

Our board uses MAC10 running at 10G and has the FMC tool integrated; running the FMC tool command reports no errors. The RCW is set to RR_FFSSPPPH_1133_5559.

fmc -c /etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559/config.xml -p /etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559/policy_ipv4.xml -a

For the 10G MAC10, using the fmc tool reduces the out-of-order packets to 3%, compared with not using the fmc tool. But the out-of-order packets can't be reduced to zero. I don't know why the FMC tool can't correct all of the out-of-order packets. Thanks.

yipingwang
NXP TechSupport

After executing the command "fmc -c /etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559/config.xml -p /etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559/policy_ipv4.xml -a", you are using core-affined queues to implement order preservation.

This is because in DPAA1 each interface by default uses one pool channel shared across all software portals as well as the dedicated channel of each CPU. In the Linux kernel, PCD frame queues use dedicated channels. You could refer to the section "5. Dedicated and Pool Channels Usage in Linux Kernel" in https://community.nxp.com/docs/DOC-329916 for details.

So normally out-of-order packets should be zero. What is your test environment?

You need to use multiple flows. After the FMC policy is applied, each flow binds to one core, so all 4 cores are used when there are multiple flows. Please configure multiple iperf clients (different source and destination addresses) to connect to the iperf server to create multiple flows. In a real scenario, one user application uses one flow.
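For reference, a minimal sketch of the FMC apply/clean cycle as used in this thread, with a comment on each step; the XML paths are the LS1046ARDB ones already shown above:

# clean up the previously applied PCD settings (fmc -x is used this way in the LSDK example later in this thread)
fmc -x
# apply the PCD: -c selects the configuration file, -p the policy file, -a applies them
fmc -c /etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559/config.xml \
    -p /etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559/policy_ipv4.xml \
    -a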

 

zwg132883
Contributor III

Thanks yiping.

The document at https://community.nxp.com/docs/DOC-329916 is all related to the PPC CPUs. Do you have instructions for the ARM LS1046 CPU, something like "Dedicated Channel and Pool Channel Used in Linux Kernel and USDPAA"?

My test environment is very simple: the traffic to eth5 is only one data flow. I also tested the 1G Ethernet ports with iperf3, and they had the same issue. Using the same test method, my PPC (T1042) cards' out-of-order packets can be corrected to zero after integrating the FMC tool.

Why can't the ARM LS1046 be corrected? Have you tested the FMC tool on the LS1046? If so, could you give me a detailed description of the test instructions?

yipingwang
NXP TechSupport

Deploy LSDK 21.08 images to LS1046ARDB target boards.

On the server board:

root@ls1046ardb:/etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559# fmc -x
root@ls1046ardb:/etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559# fmc -c config.xml -p policy_ipv4.xml -a
root@ls1046ardb:/etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559# iperf -u -s

On the client board:

root@localhost:/etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559# fmc -x
root@localhost:/etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559# fmc -c config.xml -p policy_ipv4.xml -a
root@localhost:/etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559# iperf -c 100.1.1.1 -u -b 1000M -P 10 -t 30
------------------------------------------------------------
Client connecting to 100.1.1.1, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11.22 us (kalman adjust)
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 11] local 100.1.1.7 port 49019 connected with 100.1.1.1 port 5001
[ 10] local 100.1.1.7 port 48899 connected with 100.1.1.1 port 5001
[ 12] local 100.1.1.7 port 57404 connected with 100.1.1.1 port 5001
[ 5] local 100.1.1.7 port 34366 connected with 100.1.1.1 port 5001
[ 7] local 100.1.1.7 port 49393 connected with 100.1.1.1 port 5001
[ 9] local 100.1.1.7 port 36360 connected with 100.1.1.1 port 5001
[ 8] local 100.1.1.7 port 36252 connected with 100.1.1.1 port 5001
[ 4] local 100.1.1.7 port 48311 connected with 100.1.1.1 port 5001
[ 3] local 100.1.1.7 port 33748 connected with 100.1.1.1 port 5001
[ 6] local 100.1.1.7 port 54853 connected with 100.1.1.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 11] 0.0-30.0 sec 2.16 GBytes -527589369.28 bits/sec
[ 11] Sent 1575199 datagrams
[ 10] 0.0-30.0 sec 2.69 GBytes -373514684.49 bits/sec
[ 10] Sent 1968437 datagrams
[ 12] 0.0-30.0 sec 2.67 GBytes -380966871.50 bits/sec
[ 12] Sent 1949419 datagrams
[ 5] 0.0-30.0 sec 2.69 GBytes -374737246.88 bits/sec
[ 5] Sent 1965318 datagrams
[ 9] 0.0-30.0 sec 2.76 GBytes -354650032.02 bits/sec
[ 9] Sent 2016586 datagrams
[ 8] 0.0-30.0 sec 2.17 GBytes -523211217.84 bits/sec
[ 8] Sent 1586374 datagrams
[ 3] 0.0-30.0 sec 2.70 GBytes -371557519.04 bits/sec
[ 3] Sent 1973433 datagrams
[ 6] 0.0-30.0 sec 2.14 GBytes -532251631.67 bits/sec
[ 6] Sent 1563298 datagrams
[ 5] Server Report:
[ 5] 0.0-30.0 sec 2.00 GBytes 572 Mbits/sec 921588.537 ms 2143835024/ 0 (inf%)
[ 9] Server Report:
[ 9] 0.0-30.0 sec 2.00 GBytes 572 Mbits/sec 918588.537 ms 2144277013/ 0 (inf%)
[ 9] 0.00-30.01 sec 6 datagrams received out-of-order
[ 12] Server Report:
[ 12] 0.0-30.0 sec 2.00 GBytes 572 Mbits/sec 2322588.537 ms 2144407509/ 0 (inf%)
[ 12] 0.00-30.01 sec 3 datagrams received out-of-order
[ 6] Server Report:
[ 6] 0.0-30.0 sec 2.00 GBytes 572 Mbits/sec 1412588.537 ms 2145019793/ 0 (inf%)
[ 11] Server Report:
[ 11] 0.0-30.0 sec 2.00 GBytes 572 Mbits/sec 919588.537 ms 2144597350/ 0 (inf%)
[ 11] 0.00-30.01 sec 8 datagrams received out-of-order
[ 3] Server Report:
[ 3] 0.0-30.0 sec 2.00 GBytes 572 Mbits/sec 1935588.537 ms 2143820827/ 0 (inf%)
[ 3] 0.00-30.01 sec 1 datagrams received out-of-order
[ 8] Server Report:
[ 8] 0.0-30.0 sec 2.00 GBytes 572 Mbits/sec 919588.537 ms 2144567267/ 0 (inf%)
[ 10] Server Report:
[ 10] 0.0-30.0 sec 2.00 GBytes 572 Mbits/sec 930588.537 ms 2143893581/ 0 (inf%)
[ 10] 0.00-30.01 sec 2 datagrams received out-of-order
[ 7] 0.0-30.0 sec 2.75 GBytes -357951651.59 bits/sec
[ 7] Sent 2008159 datagrams
[ 7] Server Report:
[ 7] 0.0-30.0 sec 2.00 GBytes 572 Mbits/sec 1897588.537 ms 2143889499/ 0 (inf%)
[ 4] 0.0-30.0 sec 2.73 GBytes -362306528.99 bits/sec
[ 4] Sent 1997045 datagrams
[SUM] 0.0-30.0 sec 25.5 GBytes 7.29 Gbits/sec
[SUM] Sent 18603268 datagrams
[ 4] Server Report:
[ 4] 0.0-30.3 sec 2.00 GBytes 567 Mbits/sec 16562588.537 ms 2144331459/ 0 (inf%)
root@localhost:/etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559#

zwg132883
Contributor III

Hi Yiping,
I saw your test report; there are a few out-of-order UDP packets. But my test result had more out-of-order UDP packets than yours. Please see the attachment. Are there any problems in my test steps? Thanks.

yipingwang
NXP TechSupport

On the server side, please execute command "iperf -u -s".

zwg132883
Contributor III

Hi Yiping,
    My card only has the iperf3 tool. iperf3 is the newer version, and it does not accept "iperf3 -u -s":
root@odin:/# iperf3 -u -s
iperf3: parameter error - some option you are trying to set is client only

 

Usage: iperf3 [-s|-c host] [options]
       iperf3 [-h|--help] [-v|--version]

 

Server or Client:
  -p, --port      #         server port to listen on/connect to
  -f, --format   [kmgtKMGT] format to report: Kbits, Mbits, Gbits, Tbits
  -i, --interval  #         seconds between periodic throughput reports
  -F, --file name           xmit/recv the specified file
  -A, --affinity n/n,m      set CPU affinity
  -B, --bind      <host>    bind to a specific interface
  -V, --verbose             more detailed output
  -J, --json                output in JSON format
  --logfile f               send output to a log file
  --forceflush              force flushing output at every interval
  -d, --debug               emit debugging output
  -v, --version             show version information and quit
  -h, --help                show this message and quit
Server specific:
  -s, --server              run in server mode
  -D, --daemon              run the server as a daemon
  -I, --pidfile file        write PID file
  -1, --one-off             handle one client connection then exit

 

I think this issue can be reproduced regardless of the iperf version; which version of iperf is used is not the key point.
But how do I resolve my current problem?

yipingwang
NXP TechSupport

Please refer to my attached iperf3 test result; there are no out-of-order packets.

Did you use a Linux kernel from a formal NXP LSDK release?

Could you please try the Linux kernel image from the LSDK 21.08 release?

$ wget https://www.nxp.com/lgfiles/sdk/lsdk2108/boot_LS_arm64_lts_5.10.tgz

zwg132883
Contributor III

Hi Yiping,
      My iperf3 test environment is two LS1046 cards. The out-of-order packets are on the server side. On the client side there are no out-of-order packets, just like in your result.
Could you please send me your report from the server side? These are the summary results on my server side.
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5] (sender statistics not available)
[SUM]  0.0-30.0 sec  1127 datagrams received out-of-order
[  5]   0.00-30.04  sec  3.43 MBytes   958 Kbits/sec  302.565 ms  30/2515 (1.2%)  receiver
[  6] (sender statistics not available)
[SUM]  0.0-30.0 sec  1131 datagrams received out-of-order
[  6]   0.00-30.04  sec  3.42 MBytes   956 Kbits/sec  299.002 ms  33/2513 (1.3%)  receiver
[  9] (sender statistics not available)
[SUM]  0.0-30.0 sec  1082 datagrams received out-of-order
[  9]   0.00-30.04  sec  3.41 MBytes   951 Kbits/sec  176.398 ms  19/2486 (0.76%)  receiver
[ 11] (sender statistics not available)
[SUM]  0.0-30.0 sec  1111 datagrams received out-of-order
[ 11]   0.00-30.04  sec  3.37 MBytes   941 Kbits/sec  299.515 ms  31/2472 (1.3%)  receiver
[ 13] (sender statistics not available)
[SUM]  0.0-30.0 sec  1130 datagrams received out-of-order
[ 13]   0.00-30.04  sec  3.46 MBytes   965 Kbits/sec  271.778 ms  33/2537 (1.3%)  receiver
[ 15] (sender statistics not available)
[SUM]  0.0-30.0 sec  1091 datagrams received out-of-order
[ 15]   0.00-30.04  sec  3.43 MBytes   958 Kbits/sec  369.664 ms  33/2517 (1.3%)  receiver
[ 17] (sender statistics not available)
[SUM]  0.0-30.0 sec  1104 datagrams received out-of-order
[ 17]   0.00-30.04  sec  3.41 MBytes   952 Kbits/sec  206.089 ms  32/2500 (1.3%)  receiver
[ 19] (sender statistics not available)
[SUM]  0.0-30.0 sec  1137 datagrams received out-of-order
[ 19]   0.00-30.04  sec  3.44 MBytes   960 Kbits/sec  177.326 ms  24/2515 (0.95%)  receiver
[ 21] (sender statistics not available)
[SUM]  0.0-30.0 sec  1142 datagrams received out-of-order
[ 21]   0.00-30.04  sec  3.44 MBytes   961 Kbits/sec  204.638 ms  25/2518 (0.99%)  receiver
[ 23] (sender statistics not available)
[SUM]  0.0-30.0 sec  1073 datagrams received out-of-order
[ 23]   0.00-30.04  sec  3.40 MBytes   949 Kbits/sec  328.883 ms  30/2492 (1.2%)  receiver
[SUM]   0.00-30.04  sec  34.2 MBytes  9.55 Mbits/sec  263.586 ms  290/25065 (1.2%)  receiver
CPU Utilization: local/receiver 0.3% (0.0%u/0.2%s), remote/sender 0.0% (0.0%u/0.0%s)
iperf3: the client has unexpectedly closed the connection
iperf 3.2

My Linux kernel version is v4.14.47, so the LSDK version is not 21.08.
Upgrading the kernel may take a long time.
Could you please help verify whether the NXP LSDK works well with Linux v4.14.47?
And how do I download the list of LSDK patches related to the out-of-order issue, based on Linux 4.14?

yipingwang
NXP TechSupport

This is the log on the server side.

root@ls1046ardb:/etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559# fmc -c config.xml -p policy_ipv4.xml -a
root@ls1046ardb:/etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559# iperf -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 100.1.1.1 port 5001 connected with 100.1.1.2 port 48762
[ 4] local 100.1.1.1 port 5001 connected with 100.1.1.2 port 34738
[ 6] local 100.1.1.1 port 5001 connected with 100.1.1.2 port 45080
[ 10] local 100.1.1.1 port 5001 connected with 100.1.1.2 port 55858
[ 7] local 100.1.1.1 port 5001 connected with 100.1.1.2 port 33969
[ 11] local 100.1.1.1 port 5001 connected with 100.1.1.2 port 38818
[ 5] local 100.1.1.1 port 5001 connected with 100.1.1.2 port 45524
[ 8] local 100.1.1.1 port 5001 connected with 100.1.1.2 port 40491
[ 12] local 100.1.1.1 port 5001 connected with 100.1.1.2 port 39511
[ 9] local 100.1.1.1 port 5001 connected with 100.1.1.2 port 57602
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 11] 0.0-30.0 sec 3.04 GBytes 872 Mbits/sec 1.241 ms 449376/2673333 (17%)
[ 5] 0.0-30.0 sec 2.51 GBytes 719 Mbits/sec 1.056 ms 396122/2230713 (18%)
[ 5] 0.00-30.00 sec 29 datagrams received out-of-order
[ 9] 0.0-30.0 sec 2.05 GBytes 586 Mbits/sec 0.005 ms 830714/2326517 (36%)
[ 10] 0.0-30.0 sec 1.94 GBytes 556 Mbits/sec 0.937 ms 818548/2236157 (37%)
[ 8] 0.0-30.0 sec 1.49 GBytes 427 Mbits/sec 1.201 ms 1131365/2219427 (51%)
[ 12] 0.0-30.0 sec 1.33 GBytes 380 Mbits/sec 0.003 ms 1367096/2337784 (58%)
[ 3] 0.0-30.2 sec 2.04 GBytes 580 Mbits/sec 15.595 ms 821689/2313818 (36%)
[ 3] 0.00-30.25 sec 54 datagrams received out-of-order
[ 4] 0.0-30.3 sec 2.19 GBytes 622 Mbits/sec 15.590 ms 1074065/2673921 (40%)
[ 4] 0.00-30.25 sec 10 datagrams received out-of-order
[ 6] 0.0-30.2 sec 1.59 GBytes 451 Mbits/sec 15.485 ms 1174911/2336022 (50%)
[ 6] 0.00-30.25 sec 21 datagrams received out-of-order
[ 7] 0.0-30.2 sec 1.62 GBytes 460 Mbits/sec 14.620 ms 1492724/2674795 (56%)
[ 7] 0.00-30.25 sec 2 datagrams received out-of-order
[SUM] 0.0-30.3 sec 19.8 GBytes 5.62 Gbits/sec 15.595 ms 9556610/24022487 (40%)
[SUM] 0.00-30.25 sec 116 datagrams received out-of-order

zwg132883
Contributor III

Hi Yiping,
    I saw your test result on the server side; there were also some out-of-order datagrams,
like this:
  "[ 5] 0.00-30.00 sec 29 datagrams received out-of-order"
Did you test with the latest LSDK 21.08?
If yes, can I assume the LS1046 has always had this out-of-order issue?

yipingwang
NXP TechSupport

Yes, I used the latest LSDK, and there are a few out-of-order datagrams on the server side when performing iperf testing on MAC10.

zwg132883
Contributor III

Hi Yiping

     Have you analyzed why there are some out-of-order packets? Will you try to resolve it? I think the few out-of-order packets on MAC10 in my card may be similar to what you see.

yipingwang
NXP TechSupport

On the server side:

root@ls1046ardb:/etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559# iperf -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 100.1.1.1 port 5001 connected with 100.1.1.7 port 59706
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0-10.0 sec 1.22 GBytes 1.05 Gbits/sec 0.017 ms 2145700350/2146591999 (1e+02%)

On the client side:

root@localhost:/etc/fmc/config/private/ls1046ardb/RR_FFSSPPPH_1133_5559# iperf -c 100.1.1.1 -b 1000M -u
------------------------------------------------------------
Client connecting to 100.1.1.1, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11.22 us (kalman adjust)
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 100.1.1.7 port 53384 connected with 100.1.1.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.22 GBytes 1.05 Gbits/sec
[ 3] Sent 891649 datagrams
[ 3] Server Report:
[ 3] 0.0- 3.3 sec 2.00 GBytes 5.22 Gbits/sec 927588.537 ms 2146298738/ 0 (inf%)

After the FMC policy is applied, each flow binds to one core, so all 4 cores are used when there are multiple flows.

Please configure multiple iperf clients (different source and destination addresses) to connect to the iperf server to create multiple flows. In a real scenario, one user application uses one flow.

zwg132883
Contributor III

Hi Yiping,
    Does your test above make the out-of-order count become 0 on the server side? If yes, how do you test?
As I said above, my environment is a test between two LS1046 cards, so one card is the server and the other is the client.
Each has only one IP. Why should I configure multiple iperf clients? And how do I configure them?

yipingwang
NXP TechSupport

I used the above commands to perform the iperf test on the LS1046ARDB; out-of-order packets become 0 on the server side.

I use the same test environment as you.

Configure multiple iperf clients with different source IP addresses to simulate multiple flows.

You could use the iperf "--bind" parameter to bind the IP address, as sketched below.
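For illustration only, a rough sketch of creating several UDP flows from a single client board by adding secondary addresses to the client interface and binding one iperf client to each; the interface name eth5 and the extra 100.1.1.x addresses are placeholders, not values from this setup:

# add a few extra source addresses on the client interface (placeholder addresses)
ip addr add 100.1.1.11/24 dev eth5
ip addr add 100.1.1.12/24 dev eth5
ip addr add 100.1.1.13/24 dev eth5

# start one UDP client per source address; each (source, destination) pair is a
# separate flow, so the PCD can steer them to different cores on the server
# (with iperf3 the client bind option is likewise -B/--bind)
for src in 100.1.1.7 100.1.1.11 100.1.1.12 100.1.1.13; do
    iperf -c 100.1.1.1 -u -b 1000M -t 30 -B "$src" &
done
wait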

zwg132883
Contributor III

Hi Yiping

   I tested using iperf3 --bind (IP); it still doesn't help.

This is my result:

On the server side:

-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Time: Mon, 03 Apr 2023 13:39:17 GMT
Accepted connection from 192.168.6.18, port 51567
     Cookie: jy6jx6fy3dzibtn4kerhm3uigsrtvwuwfp6t
[  5] local 192.168.6.17 port 5201 connected to 192.168.6.18 port 59000
Starting Test: protocol: UDP, 1 streams, 1448 byte blocks, omitting 0 seconds, 10 second test, tos 0
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]  0.00-1.00   sec  1.09 MBytes 9.16 Mbits/sec  0.001 ms  0/791 (0%)
[  5]  1.00-2.00   sec  1.14 MBytes 9.57 Mbits/sec  0.002 ms  0/826 (0%)
[  5]  2.00-3.00   sec  1.14 MBytes 9.57 Mbits/sec  43.969 ms  26/852 (3.1%)
[  5]  3.00-4.00   sec  1.14 MBytes 9.56 Mbits/sec  64.958 ms  3/828 (0.36%)
[  5]  4.00-5.00   sec  1.14 MBytes 9.57 Mbits/sec  70.608 ms  -2/824 (-0.24%)
[  5]  5.00-6.00   sec  1.14 MBytes 9.56 Mbits/sec  0.002 ms  -27/798 (-3.4%)
[  5]  6.00-7.00   sec  1.14 MBytes 9.57 Mbits/sec  0.003 ms  0/826 (0%)
[  5]  7.00-8.00   sec  1.14 MBytes 9.57 Mbits/sec  68.999 ms  27/853 (3.2%)
[  5]  8.00-9.00   sec  1.14 MBytes 9.56 Mbits/sec  59.311 ms  -2/823 (-0.24%)
[  5]  9.00-10.00  sec  1.14 MBytes 9.57 Mbits/sec  52.925 ms  3/829 (0.36%)
[  5] 10.00-10.05  sec  56.6 KBytes 9.41 Mbits/sec  63.971 ms  0/40 (0%)
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5] (sender statistics not available)
[SUM]  0.0-10.0 sec 2800 datagrams received out-of-order
[  5]  0.00-10.05  sec  11.4 MBytes 9.52 Mbits/sec  63.971 ms  28/8290 (0.34%)  receiver
CPU Utilization: local/receiver 0.1% (0.0%u/0.1%s), remote/sender 1.9% (0.3%u/1.6%s)

On the client side I run:

iperf3 -c 192.168.6.17 -u -b 1000M -B 192.168.6.18

I think binding a host IP is meant for the case where multiple clients send packets to the server, but in my environment there is only one Ethernet configuration on the client, so it may not resolve my problem.

yipingwang
NXP TechSupport

Do you still encounter the out-of-order issue when using the following commands on the LS1046 target board?

Server: iperf3 -s

Client: iperf3 -c 192.168.6.17 -b 1000M -u

zwg132883
Contributor III

Yes, I still encounter the out-of-order issue when using those commands on the LS1046 target board.
You can see my test result.
On the server side:
root@odin:~# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.6.18, port 43454
[  5] local 192.168.6.17 port 5201 connected to 192.168.6.18 port 57286
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-1.00   sec  1.09 MBytes  9.17 Mbits/sec  29.077 ms  16/808 (2%)  
[  5]   1.00-2.00   sec  1.14 MBytes  9.56 Mbits/sec  0.005 ms  -16/809 (-2%)  
[  5]   2.00-3.00   sec  1.14 MBytes  9.57 Mbits/sec  0.002 ms  0/826 (0%)  
[  5]   3.00-4.00   sec  1.14 MBytes  9.56 Mbits/sec  33.363 ms  28/853 (3.3%)  
[  5]   4.00-5.00   sec  1.14 MBytes  9.57 Mbits/sec  0.002 ms  -28/798 (-3.5%)  
[  5]   5.00-6.00   sec  1.14 MBytes  9.57 Mbits/sec  31.373 ms  29/855 (3.4%)  
[  5]   6.00-7.00   sec  1.14 MBytes  9.56 Mbits/sec  68.889 ms  -1/824 (-0.12%)  
[  5]   7.00-8.00   sec  1.14 MBytes  9.57 Mbits/sec  50.453 ms  -2/824 (-0.24%)  
[  5]   8.00-9.00   sec  1.14 MBytes  9.57 Mbits/sec  35.075 ms  2/828 (0.24%)  
[  5]   9.00-10.00  sec  1.14 MBytes  9.56 Mbits/sec  0.002 ms  -28/797 (-3.5%)  
[  5]  10.00-10.08  sec  96.2 KBytes  9.64 Mbits/sec  0.010 ms  0/68 (0%)  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[SUM]  0.0-10.1 sec  2357 datagrams received out-of-order
[  5]   0.00-10.08  sec  11.4 MBytes  9.53 Mbits/sec  0.010 ms  0/8290 (0%)  receiver
iperf3: the client has unexpectedly closed the connection

 

On the client side:
root@odin:/# iperf3 -c 192.168.6.17 -b 1000M -u
Connecting to host 192.168.6.17, port 5201
[  5] local 192.168.6.18 port 57286 connected to 192.168.6.17 port 5201
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[  5]   0.00-1.00   sec  1.19 MBytes  9.95 Mbits/sec  859  
[  5]   1.00-2.00   sec  1.14 MBytes  9.57 Mbits/sec  826  
[  5]   2.00-3.00   sec  1.14 MBytes  9.56 Mbits/sec  825  
[  5]   3.00-4.00   sec  1.14 MBytes  9.57 Mbits/sec  826  
[  5]   4.00-5.00   sec  1.14 MBytes  9.57 Mbits/sec  826  
[  5]   5.00-6.00   sec  1.14 MBytes  9.56 Mbits/sec  825  
[  5]   6.00-7.00   sec  1.14 MBytes  9.57 Mbits/sec  826  
[  5]   7.00-8.00   sec  1.14 MBytes  9.56 Mbits/sec  825  
[  5]   8.00-9.00   sec  1.14 MBytes  9.57 Mbits/sec  826  
[  5]   9.00-10.00  sec  1.14 MBytes  9.57 Mbits/sec  826  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  11.4 MBytes  9.60 Mbits/sec  0.000 ms  0/8290 (0%)  sender
[  5]   0.00-10.08  sec  11.4 MBytes  9.53 Mbits/sec  0.010 ms  0/8290 (0%)  receiver

iperf Done.

yipingwang
NXP TechSupport

The transfer bitrate is too low on your target board.

It seems that the out-of-order issue is caused by a packet loss problem.

Please check the FMan Rx port statistics on the server side.

root@localhost:~# ls /sys/devices/platform/soc/1a00000.fman/1a91000.port/statistics/*

/sys/devices/platform/soc/1a00000.fman/1a91000.port/statistics/port_dealloc_buf
/sys/devices/platform/soc/1a00000.fman/1a91000.port/statistics/port_discard_frame
/sys/devices/platform/soc/1a00000.fman/1a91000.port/statistics/port_enq_total
/sys/devices/platform/soc/1a00000.fman/1a91000.port/statistics/port_frame
/sys/devices/platform/soc/1a00000.fman/1a91000.port/statistics/port_rx_bad_frame
/sys/devices/platform/soc/1a00000.fman/1a91000.port/statistics/port_rx_filter_frame
/sys/devices/platform/soc/1a00000.fman/1a91000.port/statistics/port_rx_large_frame
/sys/devices/platform/soc/1a00000.fman/1a91000.port/statistics/port_rx_out_of_buffers_discard

 root@localhost:~# cat /sys/devices/platform/soc/1a00000.fman/1a91000.port/statistics/*
fm0-port-rx7 counter: 0
fm0-port-rx7 counter: 0
fm0-port-rx7 counter: 2988831
fm0-port-rx7 counter: 2988831
fm0-port-rx7 counter: 0
fm0-port-rx7 counter: 0
fm0-port-rx7 counter: 0
fm0-port-rx7 counter: 0
root@localhost:~#
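To see whether frames are being dropped while the iperf test is running, one option is to poll the discard counters from the same sysfs path; a small sketch (the 1a91000.port path is the Rx port shown in the listing above and may differ on other boards or ports):

# poll the FMan Rx discard counters once per second during the iperf run
watch -n 1 'cat /sys/devices/platform/soc/1a00000.fman/1a91000.port/statistics/port_discard_frame /sys/devices/platform/soc/1a00000.fman/1a91000.port/statistics/port_rx_out_of_buffers_discard'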

 

For details regarding packet loss issue debugging, please refer to https://community.nxp.com/t5/Layerscape-Knowledge-Base/Debugging-Packet-LOSS-and-QMAN-Enqueue-Reject...

zwg132883
Contributor III

Hi Yiping,

It may not be caused by a packet loss problem. As I said, the PPC (T1042) card resolved this out-of-order issue after the FMC tool was integrated, and we use the same test method on both.
# cat /sys/devices/platform/soc/1a00000.fman/1a91000.port/statistics/* 
        fm0-port-rx7 counter: 0
        fm0-port-rx7 counter: 0
        fm0-port-rx7 counter: 648793
        fm0-port-rx7 counter: 648793
        fm0-port-rx7 counter: 0
        fm0-port-rx7 counter: 0
        fm0-port-rx7 counter: 0
        fm0-port-rx7 counter: 0

Is there any method to check the statistics for out-of-order packets?

I'm also wondering whether it is possible to bind the DPAA Rx and Tx driver processing to one CPU core?
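As a quick application-level experiment (not the same as pinning the DPAA driver's Rx/Tx processing), the receiving iperf3 server can be pinned to a single core, either with taskset or with iperf3's own -A affinity option shown in the help output earlier in this thread; a small sketch, assuming core 0:

# pin the iperf3 server to CPU core 0 using taskset (application-level affinity only)
taskset -c 0 iperf3 -s
# equivalently, use iperf3's built-in affinity option
iperf3 -s -A 0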
