LX2160A/LX2080A - DPDK question with PTP function

allenwu622
Contributor III

Hi NXP team:

 

When the LX2160A/LX2080A initializes DPDK from the test.sh script for our application, the PTP function stops working.
In this condition, can the LX2160A still use the PTP function?
If we want to use PTP (through the kernel stack) and DPDK on a single Ethernet port, how should DPDK be configured correctly?

 

Please refer to the console log below and the attached file (the DPDK initialization script).

root@localhost:~# ptp4l -H -s -m -i eth0 --tx_timestamp_timeout=20 -f G.8275.2.cfg
ptp4l[87.630]: selected /dev/ptp0 as PTP clock
ptp4l[87.630]: port 0: hybrid_e2e only works with E2E
ptp4l[87.644]: port 1: INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[87.644]: port 0: INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[92.218]: selected local clock 807871.fffe.12c429 as best master
ptp4l[94.052]: port 1: new foreign master 000580.fffe.07f653-5
ptp4l[96.929]: selected local clock 807871.fffe.12c429 as best master
ptp4l[98.045]: selected best master clock 000580.fffe.07f653
ptp4l[98.045]: updating UTC offset to 37
ptp4l[98.045]: port 1: LISTENING to UNCALIBRATED on RS_SLAVE
ptp4l[101.240]: master offset -1729499211274331771 s0 freq +0 path delay 3189
ptp4l[102.239]: master offset -1729499211274332136 s1 freq -366 path delay 3479
ptp4l[103.237]: master offset -771 s2 freq -1137 path delay 3479
ptp4l[103.237]: port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
ptp4l[104.236]: master offset 329 s2 freq -268 path delay 3479
ptp4l[105.234]: master offset 659 s2 freq +161 path delay 3326
ptp4l[106.232]: master offset 544 s2 freq +244 path delay 3260
ptp4l[107.231]: master offset 289 s2 freq +152 path delay 3260
ptp4l[108.229]: master offset -12 s2 freq -63 path delay 3326
ptp4l[109.228]: master offset 13 s2 freq -41 path delay 3326
ptp4l[110.226]: master offset -35 s2 freq -85 path delay 3326
ptp4l[111.224]: master offset -381 s2 freq -442 path delay 3769
^Croot@localhost:~# ls-listni
dprc.1/dpni.2 (interface: eth0, end point: dpmac.3)
dprc.1/dpni.1 (interface: eth1, end point: dpmac.6)
dprc.1/dpni.0 (interface: eth2, end point: dpmac.17)
root@localhost:~# ./test.sh
+ echo dpni.2
+ restool dpni destroy dpni.2
dpni.2 is destroyed
+ export DPDMAI_COUNT=0
+ DPDMAI_COUNT=0
+ export DPIO_COUNT=8
+ DPIO_COUNT=8
+ /usr/local/dpdk/dpaa2/dynamic_dpl.sh dpni dpni -b 00:00:00:00:17:00
parent - dprc.1
Creating Non nested DPRC
NEW DPRCs
dprc.1
dprc.2
Using board type as 2160
Using High Performance Buffers

##################### Container dprc.2 is created ####################

Container dprc.2 have following resources :=>

* 1 DPMCP
* 16 DPBP
* 8 DPCON
* 16 DPSECI
* 2 DPNI
* 8 DPIO
* 2 DPCI
* 0 DPDMAI
* 0 DPRTC


######################### Configured Interfaces #########################

Interface Name Endpoint Mac Address
============== ======== ==================
dpni.2 UNCONNECTED 00:00:00:00:17:01
dpni.3 UNCONNECTED 00:00:00:00:17:02

+ export DPIO_COUNT=8
+ DPIO_COUNT=8
+ export DPDMAI_COUNT=8
+ DPDMAI_COUNT=8
+ export DPMCP_COUNT=2
+ DPMCP_COUNT=2
+ /usr/local/dpdk/dpaa2/dynamic_dpl.sh dpni.3 dpni -b 00:00:00:00:18:00
parent - dprc.1
Creating Non nested DPRC
NEW DPRCs
dprc.1
dprc.3
dprc.2
Using board type as 2160
Using High Performance Buffers

##################### Container dprc.3 is created ####################

Container dprc.3 have following resources :=>

* 2 DPMCP
* 16 DPBP
* 8 DPCON
* 16 DPSECI
* 2 DPNI
* 8 DPIO
* 2 DPCI
* 8 DPDMAI
* 0 DPRTC


######################### Configured Interfaces #########################

Interface Name Endpoint Mac Address
============== ======== ==================
dpni.4 dpni.3 00:00:00:00:18:01
dpni.5 UNCONNECTED 00:00:00:00:18:02

+ ls-addni --no-link --mac-addr=80:78:71:12:c4:29
Created interface: eth0 (object:dpni.6, endpoint: )
+ sleep 2
+ echo dprc.2
+ restool dpdmux create --default-if=1 --num-ifs=2 --method DPDMUX_METHOD_CUSTOM --manip=DPDMUX_MANIP_NONE --option=DPDMUX_OPT_CLS_MASK_SUPPORT --container=dprc.1
dpdmux.0 is created under dprc.1
+ restool dprc connect dprc.1 --endpoint1=dpdmux.0.0 --endpoint2=dpmac.3
+ restool dprc connect dprc.1 --endpoint1=dpdmux.0.1 --endpoint2=dpni.6
+ restool dprc connect dprc.1 --endpoint1=dpdmux.0.2 --endpoint2=dpni.2
+ restool dprc assign dprc.1 --object=dpdmux.0 --child=dprc.2 --plugged=1
+ echo dprc.2
root@localhost:~# ptp4l -H -s -m -i eth0 --tx_timestamp_timeout=20 -f G.8275.2.cfg
ptp4l[240.582]: selected /dev/ptp0 as PTP clock
ptp4l[240.582]: port 0: hybrid_e2e only works with E2E
ptp4l[240.601]: port 1: INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[240.604]: port 0: INITIALIZING to LISTENING on INIT_COMPLETE
ptp4l[246.375]: selected local clock 807871.fffe.12c429 as best master
ptp4l[247.006]: port 1: new foreign master 000580.fffe.07f653-5
ptp4l[250.999]: selected best master clock 000580.fffe.07f653
ptp4l[250.999]: updating UTC offset to 37
ptp4l[250.999]: port 1: LISTENING to UNCALIBRATED on RS_SLAVE
ptp4l[254.294]: master offset -863600868970450888 s0 freq -441 path delay 863600868970456647
ptp4l[255.292]: master offset -863600868970450768 s1 freq -321 path delay 863600868970456647
ptp4l[256.291]: master offset -1089 s2 freq -1410 path delay 863600868970456647
ptp4l[256.291]: port 1: UNCALIBRATED to SLAVE on MASTER_CLOCK_SELECTED
ptp4l[257.289]: master offset -431800435712170306 s2 freq -249999999 path delay 1295401304682627354
ptp4l[258.287]: master offset -431800435707937952 s2 freq -249999999 path delay 1295401304682627354
ptp4l[259.286]: master offset -431800435806817514 s2 freq -249999999 path delay 1295401304785740265
ptp4l[260.284]: master offset -431800435905697073 s2 freq -249999999 path delay 1295401304888853177
^Croot@localhost:~# ls-listni
dprc.1/dpni.6 (interface: eth0, end point: dpdmux.0.1)
dprc.1/dpni.1 (interface: eth1, end point: dpmac.6)
dprc.1/dpni.0 (interface: eth2, end point: dpmac.17)
dprc.1/dprc.3/dpni.5
dprc.1/dprc.3/dpni.4 (end point: dpni.3)
dprc.1/dprc.2/dpni.3 (end point: dpni.4)
dprc.1/dprc.2/dpni.2 (end point: dpdmux.0.2)
root@localhost:~# restool -m
MC firmware version: 10.36.0
root@localhost:~# uname -r
4.19.90-rt35

Thanks.

1 Accepted Solution
SebastianG
NXP TechSupport

Hi @allenwu622,

This is a response from the specialist team:

----

Why do you require a single interface from the MAC?

In my understanding, DPDK cannot support PTP at the moment.

In general, we recommend a HW split solution in order to support PTP with DPDK.

NXP's dpdmux is designed so that the application does not notice that multiple interfaces are involved.

----
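For reference, the HW split described above is the dpdmux wiring already attempted in the test.sh trace from the original post. A rough sketch reusing the same restool commands and object IDs as that log (dpmac.3, dpni.6, dpni.2, dpdmux.0 and dprc.2 are specific to that setup and will differ elsewhere):

# Create a dpdmux in the root container with one uplink and two downlinks.
restool dpdmux create --default-if=1 --num-ifs=2 \
    --method DPDMUX_METHOD_CUSTOM --manip=DPDMUX_MANIP_NONE \
    --option=DPDMUX_OPT_CLS_MASK_SUPPORT --container=dprc.1

# Uplink: attach the dpdmux to the physical MAC.
restool dprc connect dprc.1 --endpoint1=dpdmux.0.0 --endpoint2=dpmac.3

# Downlink 1: kernel-owned dpni (eth0), where ptp4l runs with HW timestamping.
restool dprc connect dprc.1 --endpoint1=dpdmux.0.1 --endpoint2=dpni.6

# Downlink 2: dpni assigned to the DPDK child container dprc.2.
restool dprc connect dprc.1 --endpoint1=dpdmux.0.2 --endpoint2=dpni.2
restool dprc assign dprc.1 --object=dpdmux.0 --child=dprc.2 --plugged=1

With this wiring, the DPDK application only sees its own dpni inside dprc.2, while PTP frames reach the kernel-side dpni according to whatever classification rules are programmed into the dpdmux.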


11 Replies
SebastianG
NXP TechSupport

Hi @allenwu622,

You can run make config and check the kernel configuration to see whether the PTP function is enabled.

You can also refer to sections 7.7.5 and 7.7.6 of the LLDP user guide (https://www.nxp.com/docs/en/user-guide/UG10081_LLDP_6.1.55_2.2.0.pdf) to find commands for verifying the PTP clock.
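As a quick illustration (a minimal sketch; eth0 is just an example interface name, and /proc/config.gz is only present when the kernel was built with IKCONFIG):

# Is the PTP clock subsystem enabled in the running kernel?
zcat /proc/config.gz | grep CONFIG_PTP_1588_CLOCK

# Is a PTP hardware clock device exposed?
ls /dev/ptp*

# Does the port report hardware timestamping and a PHC index?
ethtool -T eth0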

Regards

allenwu622
Contributor III

Hi @SebastianG :


PTP works before DPDK is initialized from test.sh.
We tested the PTP function on LSDK 20.04, LSDK 21.08, LLDP 5.15, and LLDP 6.1; it does not work correctly with DPDK on any of them.
Please refer to the link below:
https://community.nxp.com/t5/QorIQ/1588-and-DPDK/m-p/1174969/highlight/true
We want a single Ethernet port to serve both PTP through the kernel stack and our applications through DPDK.
Can the LX2160A/LX2080A support this use case? How can it be done?
Or, in this condition, can PTP only be used from DPDK?


Thanks.

SebastianG
NXP TechSupport

Hi @allenwu622,

After talking with the specialist team, here is their response:

----

To support PTP with DPDK, we need dpdmux to split the traffic.
Please refer to the attached guide to enable dpdmux; PTP traffic can then be handled by the ethX kernel port.

----
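As an illustration only (the interface name and object IDs are taken from the log earlier in this thread, not a definitive configuration): after the split, the kernel-owned dpni should show a dpdmux endpoint rather than a dpmac, and ptp4l is then run on that interface while the DPDK application uses the dpni assigned to its own child container.

# The kernel-owned interface now terminates on a dpdmux downlink:
ls-listni
#   dprc.1/dpni.6 (interface: eth0, end point: dpdmux.0.1)

# PTP with hardware timestamping on the kernel interface:
ptp4l -H -s -m -i eth0 --tx_timestamp_timeout=20 -f G.8275.2.cfg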

Regards

allenwu622
Contributor III

Hi @SebastianG :

 

I have followed your document to test DPDK and PTP.
The example uses two Ethernet ports. I changed the commands to test a single Ethernet port, as our application requires, but it still fails. How do I run dpdk-pkt_split_app correctly on a single Ethernet port? Please refer to the attached file for more detail.

In this case, the LX2160A should use DPDK together with the kernel stack for the PTP application.
However, PTP gives different test results with DPDK and without DPDK under the same conditions; please refer to the picture below. The PTP packet handling rules seem to change, and this causes the issue.

allenwu622_0-1733971644270.png

Thanks.

SebastianG
NXP TechSupport

Hi @allenwu622,

Could you please tell me the following details?

  1. Do you use dpdmux to split the traffic in your case? I ask because dpdmux can split traffic from a single MAC to multiple interfaces.
  2. Could you please tell me what kind of packets need to be processed in DPDK?

Regards,

 

allenwu622
Contributor III

Hi @SebastianG :

 

  1. Do you use dpdmux to split the traffic in your case? I ask because dpdmux can split traffic from a single MAC to multiple interfaces.
    • In this case, we need to set a single MAC and a single interface on one port. See the ls-listni output in dpdk_ptp_log.txt for this result.
  2. Could you please tell me what kind of packets need to be processed in DPDK?
    • I mean that PTP should use the kernel stack and produce the same result with and without DPDK. However, the DPDK test result is not the same as the non-DPDK test result.


Thanks.

SebastianG
NXP TechSupport

Hi @allenwu622,

Sorry for the late response,

Just to let you know that I am still working on your questions. When I have any update, I will let you know.

Regards

allenwu622
Contributor III

Hi @SebastianG :

Thank you for your support.
I am looking forward to your reply.

Thanks.

allenwu622
Contributor III

Hi @SebastianG :

Thank you for your support.
I am looking forward to your reply.

Thanks.

 

SebastianG
NXP TechSupport

Hi @allenwu622,

Just to let you know that I am working on your questions; when I have any update, I will let you know.

Regards