IP Forwarding performance on LS2088/LX2160

sedat_altun
Contributor III

Hi,

I am trying to measure the IP forwarding performance of the LS2088 and LX2160 CPUs on LSDK Linux.

During the test, ingress traffic with a single source and destination IP address is processed by only a single core.

So I am measuring only single-core performance. Even though that single core is 100% busy, the other cores are not involved in packet processing.

Is there any way to distribute traffic with a single source and destination address among all the cores?

Thanks in advance.

7 Replies

yipingwang
NXP TechSupport

The DPAA2 platform supports flow steering, and multiple flows will be distributed across the cores.

Here is an example script for the LX2160ARDB.

#!/bin/bash

mac_list="dpmac.5 dpmac.6"

# create a DPNI with key masking support for each dpmac
for mac in $mac_list
do
    ls-addni --options=DPNI_OPT_HAS_KEY_MASKING $mac
done

echo "===== show dpmac and dpni ====="
ls-listmac

# please check which interface names connect to dpmac.5 and dpmac.6!!
eth_list=(eth2 eth3)
echo "===== eth list is ${eth_list[@]} ====="

# MAC/IP address configuration on eth2 and eth3
for i in $(seq 0 1)
do
    ifconfig ${eth_list[$i]} hw ether 00:E0:0C:00:77:0$(($i + 1))
    ifconfig ${eth_list[$i]} 192.85.$(($i + 1)).1/24 up
done

# static ARP entries and flow-steering rules: each destination IP address is
# steered to its own receive queue/core (actions 0..15)
for i in $(seq 2 17)
do
    arp -s 192.85.1.$i 00:10:94:00:00:01
    arp -s 192.85.2.$i 00:10:94:00:00:02
    ethtool -N ${eth_list[0]} flow-type ip4 dst-ip 192.85.2.$i action $(($i - 2))
    ethtool -N ${eth_list[1]} flow-type ip4 dst-ip 192.85.1.$i action $(($i - 2))
done

ifconfig

# disable GRO and pause frames on eth2 and eth3
for eth in ${eth_list[@]}
do
    ethtool -K $eth gro off
    ethtool -A $eth tx off
done

# enable IPv4 forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
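
To verify the result, a rough sketch (assuming the eth2/eth3 interface names from the script, and mpstat from the sysstat package being installed): list the installed steering rules and watch the per-core load while traffic is running.

# list the flow-steering rules installed by the script above
ethtool -n eth2
ethtool -n eth3

# per-core CPU utilisation while the test traffic is running
mpstat -P ALL 1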


sedat_altun
Contributor III

Hi yipingwang

Thank you very much for your reply.

But you misunderstood me.

I don't want to steer specific frames (source/destination IP) to a dedicated core; I want to distribute the frames with a specific source and destination IP address among all the cores of the LX2160. But by default, on the LX2160, traffic with a specific source and destination IP address is processed by a single core. The single-core IP forwarding performance is near 6 Gbps, but I want to get at least 9 Gbps forwarding performance by using all the cores.

My test setup is depicted in the figure below. The sender PC-1 has IP address 1.1.1.1 and the receiver PC-2 has IP address 2.2.2.2; the RDB has two IP addresses, 1.1.1.2 for RX and 2.2.2.3 for TX.

I am measuring the IP forwarding performance between PC-1 and PC-2, but by reading the /proc/softirqs file I observe that only a single core is involved in processing the frames.
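
For reference, this is roughly how I am sampling the per-core NET_RX counters (only one core's column increases during the test):

watch -n 1 'grep NET_RX /proc/softirqs'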

How can I let the ingress frames from PC-1 to PC-2 be processed by all cores of the LX2160?

Thank you very much.

(Figure 1: Test setup)


yipingwang
NXP TechSupport

For a single flow, the traffic will be distributed to a single core on the DPAA2 platform; distributing single-flow traffic to all cores is not supported in software. It would also cause frames to go out of order.

If you want the performance to reach 9 Gbps, you can use flow steering. For the LX2160ARDB's IPFWD performance, the measured data can reach around 50 Gbps at packet sizes of 1024 bytes and above.
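
As a rough sketch (assuming the addressing from the example script above, iperf3 on both PCs, a route on PC-1 to 192.85.2.0/24 via the board, and a hypothetical interface name eth0 on PC-2), the test traffic has to contain several flows, for example several destination IP addresses, so that each flow matches a different steering rule and is handled by a different core:

# on PC-2 (receiver): add extra addresses and start one iperf3 server per address
for i in $(seq 2 5)
do
    ip addr add 192.85.2.$i/24 dev eth0
    iperf3 -s -B 192.85.2.$i -D
done

# on PC-1 (sender): one UDP stream per destination address; each stream matches
# a different flow-steering rule on the board and is processed by a different core
for i in $(seq 2 5)
do
    iperf3 -u -b 3G -c 192.85.2.$i -t 60 &
done
wait

In a real setup the static ARP entries in the script above would of course need to point at the PCs' real MAC addresses.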


sedat_altun
Contributor III

Thank you very much yipingwang,

On DPAA (DPAA1), single-flow traffic can be distributed to all cores. So DPAA2 is not backward compatible with DPAA1 in this respect.

Out-of-order frames are not an issue for us; we only need to get at least 9 Gbps with a single flow.

So it is not possible to distribute single-flow traffic to all cores with software (driver) changes.

Is there any way to downgrade DPAA2 and use it with the features of DPAA1?

If there is no way to find a solution with DPAA2, we will have to change the CPU and our custom board, and find a new CPU with which we can get 9 Gbps performance for a single flow.

 


yipingwang
NXP TechSupport

DPAA2 is different from DPAA1.

You could raise a paid service request to have single-flow traffic distributed to all cores supported in software.


sedat_altun
Contributor III

If it is possible to distribute single-flow traffic among the cores, we can apply for the paid service.

How can we apply? Could you please inform us about the procedure?

Thank you very much.


yipingwang
NXP TechSupport

I have sent an email to you.
