SDK1.1 Freescale DPA Ethernet Driver JUMBO Frame issue

Solved

amrutkulkarni
Contributor II

Hi All,

Currently I am using the SDK 1.1 source with a P4040 processor, which uses Freescale's DPA Ethernet driver.

  1. When JUMBO frames are disabled, I am able to transfer UDP datagrams of up to 65507 bytes (the maximum UDP can carry), and it works perfectly.
  2. When JUMBO frames are enabled, I can transfer only a maximum of 16272 bytes, a huge drop in the maximum UDP datagram size from 65507 to 16272 bytes.
  3. Below is the kernel config option:

CONFIG_DPA_MAX_FRM_SIZE=9600

For tests #1 and #2 the MTU on both ends is 1500 bytes. In test #2, only JUMBO frame support was enabled in the kernel, as shown above.

Is this a bug in the DPA Ethernet driver? How can I fix the issue?

Just for testing purposes, when I set different MAX_FRM_SIZE values I get different maximum UDP datagram sizes:

  1. 9600 - 16272 bytes
  2. 4500 - 24000 bytes
  3. 3000 - 28000 bytes

Please let me know if anyone has any hints or answers to the questions above.

Thank you for your help and time!

Regards,

Amrut

4 Replies

yipingwang
NXP TechSupport

In SDK 1.0, the default MAXFRM is 1522, allowing for MTUs up to 1500. The maximum value one can use as the MTU for any interface is (MAXFRM - 22) bytes, where 22 is the size of an Ethernet + VLAN header (18 bytes) plus the Layer 2 FCS (4 bytes).
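As a quick sanity check of that formula (plain POSIX shell arithmetic, using the MAXFRM value discussed in this thread):

MAXFRM=9600
echo $((MAXFRM - 22))    # prints 9578: the largest usable MTU for this MAXFRM

This matches the ifconfig command shown further below.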

If MAXFRM is set much higher than the current MTU, a more insidious scenario might ensue. Because usually there are no statically-defined buffer pools in the device tree, the DPAA Ethernet driver will conservatively allocate all new skbuffs large enough to accommodate MAXFRM, plus some DPAA-private room. (Note that it is from these buffer pools that FMan will draw its buffers for Rx frames.) This causes a lot of memory to be wasted, and in cases where the actual MTU is smaller (e.g. 1500) but MAXFRM is jumbo-sized (e.g. 9600), there will be high pressure on the buffer pools, possibly leading to memory exhaustion. In the particular case of badly fragmented packets, which can happen if one is receiving large packets over a small-MTU network, the reassembled IP datagrams would simply be too large for the userspace sockets to accommodate, and will be (nearly) silently dropped.

So please adjust MAXFRM and MTU together to make them suitable for your system.

To change MAXFRM, besides editing the kernel configuration file, you can also set the U-Boot bootargs as follows:

setenv bootargs root=/dev/ram rw console=ttyS0,115200 ramdisk_size=300000 fsl_fman_phy_max_frm=9600
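If the setting needs to persist across reboots, standard U-Boot can store the modified environment with saveenv (a sketch; this assumes your board's environment storage is writable):

setenv bootargs root=/dev/ram rw console=ttyS0,115200 ramdisk_size=300000 fsl_fman_phy_max_frm=9600
saveenv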

Configure MTU:

ifconfig fm2-gb2 mtu 9578
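To confirm that jumbo frames actually pass end-to-end after this change, a quick check with ping can help (a sketch; the payload size assumes the MTU of 9578 set above, minus 20 bytes of IP header and 8 bytes of ICMP header, and -M do forbids fragmentation so an undersized link fails loudly):

ping -c 4 -M do -s 9550 <target host IP>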

In addition, MAXFRM directly influences the partitioning of FMan's internal MURAM among the available Ethernet ports, because it determines the value of an FMan internal parameter called "FIFO Size". Depending on the value of MAXFRM and the number of ports being probed, some ports may not be probed because there is not enough MURAM for all of them. In such cases, one will see a message similar to the following in the boot console:

cpu6/6: ! MAJOR FM Error [CPU-6, b2.3.1/linux-2.6/drivers/net/dpa/NetCommSw/Peripherals/FM/fm.c:2047 FmSetSizeOfFifo]: Resource Is Unavailable;
cpu6/6: Requested fifo size and extra size exceed total FIFO size.


Have a great day,
Yiping Wang


amrutkulkarni
Contributor II

Hi Yiping,


Thank you for your reply. If I understand your explanation correctly, this means that once MAX_FRM_SIZE is set to 9600 bytes, we cannot change the RX buffer allocation size to match a new MTU. Is this correct?

I ask because I previously worked on a Marvell Ethernet driver, in which the RX buffer size can be changed dynamically according to the new MTU.

In our system we need JUMBO frames only in some specific cases; in all other cases the MTU is always 1500.


Do you think it would be easy to modify the DPA driver to change the RX buffer size dynamically according to the MTU?


Thank you for your time and support!


Regards,

Amrut


vasanthsri
Contributor III

Hello Amrut,

Were you able to set up jumbo frames on the P4040 processor and test them?

I followed the advice in the Freescale document (Document Number: QORIQSDK-8.6.2 MTU and MAXFRM), but I am still not able to successfully test the ping command with jumbo frames between two hosts. I am able to modify the MTU size (manually, using ifconfig) after updating CONFIG_DPA_MAX_FRM_SIZE, but a ping between the two hosts (with the MTU set properly on both) with a packet length >8K failed.

For example:

ping -c 4 -s 8972 -M do <target host IP> resulted in packet loss.

2 packets transmitted, 0 received, 100% packet loss, time 1007ms
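(One way to narrow such a failure down, sketched with the fm2-gb2 interface name from earlier in this thread as an assumption: confirm the MTU really took effect on both hosts, and watch on the target whether the oversized echo requests arrive at all. Note that any switch between the two hosts must also be configured for jumbo frames.

ip link show fm2-gb2            # on both hosts: verify the reported mtu
tcpdump -n -i fm2-gb2 icmp      # on the target: do the 8972-byte requests arrive?

If nothing shows up on the target, the frames are most likely being dropped in between, e.g. by a switch with a smaller frame-size limit.)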

Thanks.

Accepted Solution

yipingwang
NXP TechSupport

The DPAA Linux device driver sets up buffer pools for use by the FMan for receiving packets. The driver may allocate a pool dynamically, or it can use static configuration information from the device tree. Each dpa-ethernet node may optionally have a "fsl,bman-buffer-pools" property. This property consists of an array of phandles, each one pointing at a buffer pool node. The buffer pool nodes are contained within the bman-portals node. Here is an example:

buffer-pool@1 {
    compatible = "fsl,p4080-bpool", "fsl,bpool";
    fsl,bpid = <1>;
    fsl,bpool-ethernet-cfg = <0 256 0 192 0 0x40000000>;
};

buffer-pool@2 {
    compatible = "fsl,p4080-bpool", "fsl,bpool";
    fsl,bpid = <2>;
    fsl,bpool-ethernet-cfg = <0 100 0 1728 0 0xa0000000>;
};

Each buffer pool node has a compatible property which declares it as a buffer pool, plus an fsl,bpid property which defines the bpid for the pool. There is also an optional "fsl,bpool-ethernet-cfg" property, which has the format:

<count size base_address>

where each of the three fields is a 64-bit quantity spanning two cells, which is why the examples above contain six cells (the first pool, for instance, holds count = 256 buffers of size = 192 bytes, based at 0x40000000). An Ethernet node then references its buffer pools through the fsl,bman-buffer-pools property, for example:

ethernet@1 {
    compatible = "fsl,p4080-dpa-ethernet-init", "fsl,dpa-ethernet-init";
    fsl,bman-buffer-pools = <&bp1 &bp2>;
    fsl,qman-frame-queues-rx = <10 1 11 1>;
    fsl,qman-frame-queues-tx = <12 1 13 1>;
    fsl,fman-mac = <&enet1>;
};

The FMan driver will then choose an appropriate buffer pool according to the size of the incoming Ethernet frame.

For more details about this topic, please refer to the SDK documentation.


Have a great day,
Yiping Wang
