MACLESS Interface to USDPAA Buffer Transfer


ramkrishnan
Contributor III

This is the setup I have: a T1024RDB with MAC4 (labeled eth0) connected to a tester. In the modified dts file I have a macless interface, through which I want to connect the Linux kernel stack to USDPAA so that I can send and receive IP packets from the tester connected to the ETH0 physical interface.

 

In the USDPAA app I have created the macless fsl,qman-frame-queues-tx queues specified in the dts, so that traffic egresses out of ETH0 (MAC4).

I have the PCD set up to classify IP packets and send them to USDPAA on the FQID specified in the fsl,qman-frame-queues-rx of the macless interface.
I have setup an IP interface on the macless interface.
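For reference, the macless node driving this setup looks roughly like the sketch below. This is an illustration assuming the SDK 2.0 macless binding; the FQID bases, queue counts, and MAC address are placeholders, not the values from my attached dts.

```dts
/* Sketch of a macless ethernet node (SDK 2.0 binding assumed).
 * FQID bases, counts, and the MAC address are illustrative. */
ethernet@16 {
	compatible = "fsl,dpa-ethernet-macless";
	fsl,bman-buffer-pools = <&bp9>;
	/* <fqid-base count>: the kernel's RX queues are USDPAA's
	 * TX direction, and vice versa */
	fsl,qman-frame-queues-rx = <4000 8>;
	fsl,qman-frame-queues-tx = <4008 8>;
	local-mac-address = [02 00 c0 a8 0a 01];
};
```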

 

On the T1024 Linux shell, if I ping the IP address of the tester (i.e. ARP, or ICMP if I enter a dummy MAC address), I can see the packets on the tester.

 

But if I ping from the tester, the Linux Ethernet driver crashes with the following message:

 

kernel BUG at <PATH>/tmp/work-shared/t1024rdb/kernel-source/drivers/net/ethernet/freescale/sdk_dpaa/dpaa_eth_shared.c:248!

 

It looks like the packet does reach the Ethernet driver on the RX FQID specified in the PCD, and the driver crashes because the kernel does not recognize the BPID. The BPID is 9 (based on the dts file).

 

Can you please help in figuring out how to send buffers from USDPAA to the Linux kernel driver?

 

I do not want to use the shared-MAC mode because I need that MAC to switch MPLS traffic while redirecting IP traffic to the Linux stack.

 

Thank you,

Ram Krishnan

Original Attachment has been moved to: t1024rdb-usdpaa.dts.zip

3 Replies

ramkrishnan
Contributor III

[Attached image: pastedImage_0.png — diagram of the intended configuration]

This was the configuration I was trying to accomplish, which is described under

QorIQ SDK 2.0 Documentation->Linux Kernel Drivers->Ethernet->Linux Ethernet Driver for DPAA 1.x Family->Linux DPAA 1.x Ethernet Drivers->Ethernet Advanced Drivers->Macless DPAA Ethernet Driver->Configuration

Unfortunately, I am not sure if shared-MAC would work, because I need all 4 ports for switching MPLS and VLAN traffic at line speed, while the management traffic, which could arrive on any of the interfaces, needs to reach the Linux host. But this could also be my lack of knowledge about the DPAA functionality; I will also look into the shared interfaces.

Thanks,

Ram


ramkrishnan
Contributor III

It seems to work if I make a minor change in the dts file.

For the ethernet@16 node I need to keep the same buffer pools as the MAC interfaces; for example, in this case I have kept bp9 shared between the macless interface and the MAC interfaces. The second change is to set the third tuple (the base address) in the definition of bp9 to 0 instead of 0xdeadbeef. This tells the Ethernet driver to create a dpa_bp placeholder, while the pool itself is seeded by USDPAA. This way the driver also has a pool allocation for bp9 and will return buffers to the pool when it is done with them.

fsl,bpool-ethernet-cfg = <0 1728 0 1728 0 0x0>;

I am not sure if this is a legitimate way of solving it, but it seems to work. The PCD can then be configured to forward all management traffic to one of the fsl,qman-frame-queues-rx configured for the macless port.
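Putting the two changes together, the relevant dts fragments look roughly like this. This is a sketch assuming the SDK 2.0 bindings; the count/size values match the property above, everything else (node names, compatible strings) is illustrative.

```dts
/* bp9 shared between the MAC interfaces and the macless interface.
 * fsl,bpool-ethernet-cfg is <count size base-address> as 64-bit
 * pairs; a base address of 0 (instead of 0xdeadbeef) makes the
 * kernel create a dpa_bp placeholder while USDPAA seeds the pool. */
bp9: buffer-pool@9 {
	compatible = "fsl,t1024-bpool", "fsl,bpool";
	fsl,bpid = <9>;
	fsl,bpool-ethernet-cfg = <0 1728 0 1728 0 0x0>;
};

ethernet@16 {
	compatible = "fsl,dpa-ethernet-macless";
	fsl,bman-buffer-pools = <&bp9>;	/* same pool as the MAC nodes */
	/* frame-queue properties unchanged from before */
};
```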


bpe
NXP Employee

What you are doing is definitely not the intended use case for a macless interface: its main purpose is communication between partitions, rather than sending something out on a real MAC or tapping traffic from a Linux private MAC. Provided that the buffers are recycled to the correct buffer pool, yes, it may work, but the safer way is still to arrange for a shared MAC, for which this mode is legal and documented.
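For completeness, a shared-MAC node is declared along these lines. This is a sketch assuming the SDK's shared-MAC binding; the fman MAC label, FQID bases, and counts are placeholders you would take from your board's dts.

```dts
/* Sketch of a shared-MAC node: the kernel and USDPAA share MAC4.
 * fm1mac4 and the FQID values are illustrative placeholders. */
ethernet@16 {
	compatible = "fsl,t1024-dpa-ethernet-shared", "fsl,dpa-ethernet-shared";
	fsl,fman-mac = <&fm1mac4>;
	fsl,bman-buffer-pools = <&bp9>;
	fsl,qman-frame-queues-rx = <4000 8>;
	fsl,qman-frame-queues-tx = <4008 8>;
};
```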


Have a great day,

Platon

-----------------------------------------------------------------------------------------------------------------------
Note: If this post answers your question, please click the Correct Answer button. Thank you!
-----------------------------------------------------------------------------------------------------------------------
