After running some network performance tests with iperf3, it can be seen that the maximum capacity of the link is not being reached.
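For context, commands along the following lines would produce logs like the ones below; the server address (10.0.0.1) and the UDP offered load are assumptions rather than the actual test parameters:

iperf3 -c 10.0.0.1 -u -b 10G -t 10   # UDP: offered load set above the expected link capacity
iperf3 -c 10.0.0.1 -t 10             # TCP: single stream, default settings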
UDP
[ ID] Interval           Transfer     Bitrate         Total Datagrams
[ 5] 0.00-1.00 sec 673 MBytes 5.65 Gbits/sec 524771
[ 5] 1.00-2.00 sec 678 MBytes 5.69 Gbits/sec 528683
[ 5] 2.00-3.00 sec 677 MBytes 5.68 Gbits/sec 527809
[ 5] 3.00-4.00 sec 690 MBytes 5.79 Gbits/sec 537840
[ 5] 4.00-5.00 sec 699 MBytes 5.87 Gbits/sec 545084
[ 5] 5.00-6.00 sec 695 MBytes 5.83 Gbits/sec 542207
[ 5] 6.00-7.00 sec 680 MBytes 5.70 Gbits/sec 529956
[ 5] 7.00-8.00 sec 665 MBytes 5.58 Gbits/sec 518615
[ 5] 8.00-9.00 sec 653 MBytes 5.47 Gbits/sec 508780
[ 5] 9.00-10.00 sec 641 MBytes 5.38 Gbits/sec 499949
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-10.00 sec 6.59 GBytes 5.66 Gbits/sec 0.000 ms 0/5263694 (0%) sender
[ 5] 0.00-10.64 sec 38.1 MBytes 30.0 Mbits/sec 0.242 ms 5233880/5263562 (99%) receiver
TCP
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[ 5] 0.00-1.00 sec 1.18 MBytes 9.93 Mbits/sec 0 403 KBytes
[ 5] 1.00-2.00 sec 3.88 MBytes 32.5 Mbits/sec 141 465 KBytes
[ 5] 2.00-3.00 sec 4.75 MBytes 39.8 Mbits/sec 0 520 KBytes
[ 5] 3.00-4.00 sec 5.25 MBytes 44.0 Mbits/sec 1 389 KBytes
[ 5] 4.00-5.00 sec 3.75 MBytes 31.5 Mbits/sec 1 298 KBytes
[ 5] 5.00-6.00 sec 2.88 MBytes 24.1 Mbits/sec 0 328 KBytes
[ 5] 6.00-7.00 sec 3.00 MBytes 25.2 Mbits/sec 0 345 KBytes
[ 5] 7.00-8.00 sec 3.25 MBytes 27.3 Mbits/sec 0 355 KBytes
[ 5] 8.00-9.00 sec 3.38 MBytes 28.3 Mbits/sec 0 357 KBytes
[ 5] 9.00-10.00 sec 3.62 MBytes 30.4 Mbits/sec 0 357 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 34.9 MBytes 29.3 Mbits/sec 143 sender
[ 5] 0.00-10.19 sec 34.1 MBytes 28.0 Mbits/sec receiver
Some reconfigurations of the Linux kernel were made according to the recommendations mentioned by @yipingwang in another similar thread [1].
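As an illustration only (the actual recommendations from [1] are not reproduced here), kernel tuning for high-bandwidth links typically raises the socket buffer limits, for example:

sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864
sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"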
I got a similar iperf performance result to the following with the LSDK 21.08 default software environment and two lx2160ardb boards connected together. Is your use case terminating traffic instead of forwarding?
root@localhost:/# iperf -c 100.1.1.64 -P 16 -t 30
------------------------------------------------------------
Client connecting to 100.1.1.64, TCP port 5001
TCP window size: 442 KByte (default)
------------------------------------------------------------
[ 4] local 100.1.1.65 port 44980 connected with 100.1.1.64 port 5001
[ 5] local 100.1.1.65 port 44982 connected with 100.1.1.64 port 5001
[ 3] local 100.1.1.65 port 44978 connected with 100.1.1.64 port 5001
[ 6] local 100.1.1.65 port 44984 connected with 100.1.1.64 port 5001
[ 7] local 100.1.1.65 port 44986 connected with 100.1.1.64 port 5001
[ 8] local 100.1.1.65 port 44988 connected with 100.1.1.64 port 5001
[ 12] local 100.1.1.65 port 44996 connected with 100.1.1.64 port 5001
[ 9] local 100.1.1.65 port 44990 connected with 100.1.1.64 port 5001
[ 10] local 100.1.1.65 port 44992 connected with 100.1.1.64 port 5001
[ 11] local 100.1.1.65 port 44994 connected with 100.1.1.64 port 5001
[ 13] local 100.1.1.65 port 44998 connected with 100.1.1.64 port 5001
[ 14] local 100.1.1.65 port 45000 connected with 100.1.1.64 port 5001
[ 15] local 100.1.1.65 port 45002 connected with 100.1.1.64 port 5001
[ 16] local 100.1.1.65 port 45004 connected with 100.1.1.64 port 5001
[ 17] local 100.1.1.65 port 45006 connected with 100.1.1.64 port 5001
[ 18] local 100.1.1.65 port 45008 connected with 100.1.1.64 port 5001
[ ID] Interval Transfer Bandwidth
[ 7] 0.0-30.0 sec 1.29 GBytes 369 Mbits/sec
[ 12] 0.0-30.0 sec 1.26 GBytes 362 Mbits/sec
[ 11] 0.0-30.0 sec 1.21 GBytes 346 Mbits/sec
[ 14] 0.0-30.0 sec 1.24 GBytes 355 Mbits/sec
[ 16] 0.0-30.0 sec 1.27 GBytes 364 Mbits/sec
[ 4] 0.0-30.0 sec 1.23 GBytes 352 Mbits/sec
[ 5] 0.0-30.0 sec 1.27 GBytes 362 Mbits/sec
[ 9] 0.0-30.0 sec 1.24 GBytes 356 Mbits/sec
[ 13] 0.0-30.0 sec 1.28 GBytes 365 Mbits/sec
[ 15] 0.0-30.0 sec 1.25 GBytes 357 Mbits/sec
[ 3] 0.0-30.1 sec 1.25 GBytes 356 Mbits/sec
[ 6] 0.0-30.0 sec 1.31 GBytes 376 Mbits/sec
[ 10] 0.0-30.1 sec 1.28 GBytes 365 Mbits/sec
[ 17] 0.0-30.0 sec 1.23 GBytes 353 Mbits/sec
[ 18] 0.0-30.0 sec 1.24 GBytes 354 Mbits/sec
[ 8] 0.0-30.1 sec 1.28 GBytes 366 Mbits/sec
[SUM] 0.0-30.1 sec 20.1 GBytes 5.75 Gbits/sec
root@localhost:/# iperf -c 100.1.1.64 -P 16 -t 30 -u -b 40G
------------------------------------------------------------
Client connecting to 100.1.1.64, UDP port 5001
Sending 1470 byte datagrams, IPG target: 0.27 us (kalman adjust)
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 100.1.1.65 port 47092 connected with 100.1.1.64 port 5001
[ 4] local 100.1.1.65 port 51398 connected with 100.1.1.64 port 5001
[ 5] local 100.1.1.65 port 51049 connected with 100.1.1.64 port 5001
[ 6] local 100.1.1.65 port 43814 connected with 100.1.1.64 port 5001
[ 7] local 100.1.1.65 port 45999 connected with 100.1.1.64 port 5001
[ 8] local 100.1.1.65 port 57700 connected with 100.1.1.64 port 5001
[ 9] local 100.1.1.65 port 60702 connected with 100.1.1.64 port 5001
[ 10] local 100.1.1.65 port 43959 connected with 100.1.1.64 port 5001
[ 11] local 100.1.1.65 port 39367 connected with 100.1.1.64 port 5001
[ 12] local 100.1.1.65 port 52498 connected with 100.1.1.64 port 5001
[ 13] local 100.1.1.65 port 46112 connected with 100.1.1.64 port 5001
[ 14] local 100.1.1.65 port 48562 connected with 100.1.1.64 port 5001
[ 15] local 100.1.1.65 port 33950 connected with 100.1.1.64 port 5001
[ 16] local 100.1.1.65 port 37260 connected with 100.1.1.64 port 5001
[ 17] local 100.1.1.65 port 35469 connected with 100.1.1.64 port 5001
[ 18] local 100.1.1.65 port 32931 connected with 100.1.1.64 port 5001
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-30.2 sec 275 MBytes 76.3 Mbits/sec
[ 5] Sent 196141 datagrams
[ 5] Server Report:
[ 5] 0.0-30.2 sec 192 KBytes 51.9 Kbits/sec 11.167 ms 3101/ 0 (inf%)
[ 10] 0.0-30.2 sec 277 MBytes 77.0 Mbits/sec
[ 10] Sent 197856 datagrams
[ 12] 0.0-30.2 sec 278 MBytes 77.1 Mbits/sec
[ 12] Sent 198121 datagrams
[ 14] 0.0-30.2 sec 276 MBytes 76.6 Mbits/sec
[ 14] Sent 196964 datagrams
[ 12] Server Report:
[ 12] 0.0-30.2 sec 193 KBytes 52.4 Kbits/sec 14.995 ms 3719/ 0 (inf%)
[ 10] Server Report:
[ 10] 0.0-30.2 sec 193 KBytes 52.4 Kbits/sec 20.850 ms 2843/ 0 (inf%)
[ 14] Server Report:
[ 14] 0.0-30.2 sec 192 KBytes 52.1 Kbits/sec 6.310 ms 7829/ 0 (inf%)
[ 6] 0.0-30.2 sec 279 MBytes 77.3 Mbits/sec
[ 6] Sent 198680 datagrams
[ 7] 0.0-30.2 sec 279 MBytes 77.3 Mbits/sec
[ 7] Sent 198676 datagrams
[ 8] 0.0-30.2 sec 275 MBytes 76.2 Mbits/sec
[ 8] Sent 195864 datagrams
[ 16] 0.0-30.2 sec 272 MBytes 75.5 Mbits/sec
[ 16] Sent 194226 datagrams
[ 7] Server Report:
[ 7] 0.0-30.2 sec 194 KBytes 52.6 Kbits/sec 14.559 ms 2173/ 0 (inf%)
[ 8] Server Report:
[ 8] 0.0-30.2 sec 191 KBytes 51.8 Kbits/sec 12.627 ms 831/ 0 (inf%)
[ 6] Server Report:
[ 6] 0.0-30.2 sec 194 KBytes 52.6 Kbits/sec 14.019 ms 8185/ 0 (inf%)
[ 16] Server Report:
[ 16] 0.0-30.2 sec 190 KBytes 51.4 Kbits/sec 0.001 ms 4134/ 0 (inf%)
[ 3] 0.0-30.2 sec 275 MBytes 76.4 Mbits/sec
[ 3] Sent 196386 datagrams
[ 13] 0.0-30.2 sec 273 MBytes 75.8 Mbits/sec
[ 13] Sent 194996 datagrams
[ 17] 0.0-30.2 sec 274 MBytes 75.9 Mbits/sec
[ 17] Sent 195257 datagrams
[ 13] Server Report:
[ 13] 0.0-30.2 sec 190 KBytes 51.6 Kbits/sec 9.668 ms 7045/ 0 (inf%)
[ 3] Server Report:
[ 3] 0.0-30.2 sec 192 KBytes 52.0 Kbits/sec 6.817 ms 2735/ 0 (inf%)
[ 17] Server Report:
[ 17] 0.0-30.2 sec 191 KBytes 51.7 Kbits/sec 0.446 ms 9778/ 0 (inf%)
[ 11] 0.0-30.2 sec 277 MBytes 76.7 Mbits/sec
[ 11] Sent 197250 datagrams
[ 15] 0.0-30.2 sec 275 MBytes 76.3 Mbits/sec
[ 15] Sent 196217 datagrams
[ 18] 0.0-30.2 sec 274 MBytes 76.0 Mbits/sec
[ 18] Sent 195509 datagrams
[ 18] Server Report:
[ 18] 0.0-30.2 sec 191 KBytes 51.7 Kbits/sec 3.367 ms 6324/ 0 (inf%)
[ 11] Server Report:
[ 11] 0.0-30.2 sec 193 KBytes 52.2 Kbits/sec 7.802 ms 4433/ 0 (inf%)
[ 15] Server Report:
[ 15] 0.0-30.2 sec 192 KBytes 51.9 Kbits/sec 3.930 ms 2006/ 0 (inf%)
[ 4] 0.0-30.2 sec 277 MBytes 76.8 Mbits/sec
[ 4] Sent 197401 datagrams
[ 9] 0.0-30.2 sec 279 MBytes 77.3 Mbits/sec
[ 9] Sent 198661 datagrams
[SUM] 0.0-30.2 sec 4.31 GBytes 1.22 Gbits/sec
[SUM] Sent 3148205 datagrams
Hi @yipingwang. Appreciate the feedback!
My use case is terminating traffic, not forwarding. However, I would like to know whether it is possible to reach 40G on the QSFP port.
Please specify "-P 8". Please refer to the following iperf result provided by the testing team.
This is the netperf result; it is about 35.6 Gbps.
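A minimal sketch of the suggested invocation, reusing the server address from the run above:

iperf -c 100.1.1.64 -P 8 -t 30   # eight parallel TCP streams spread the load across cores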
You could use DPDK.
Please refer to the DPDK ipsecgw performance figures below.
LSDK 20.12, kernel 5.4
name                                   | Framesize | Throughput (Kpps) | Throughput (Mbps)
dpdk_ipsecgw_caam_offload_50g_16c_perf | 82        | 23,953            | 19,546
                                       | 408       | 13,967            | 47,825
                                       | 1,442     | 4,275             | 50,000
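For reference, the upstream DPDK ipsec-secgw example application is launched roughly as follows; the core list, port masks and SA/SP configuration file are illustrative placeholders, not the parameters behind the numbers above:

./ipsec-secgw -l 1,2 -n 4 -- -p 0x3 -P -u 0x1 --config="(0,0,1),(1,0,2)" -f ep0.cfg
# -p selects the two ports, -u marks port 0 as the unprotected (plaintext) side,
# --config maps each port/queue to an lcore, and -f points at the SA/SP/route rules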