Issue DPAA2 PMD and chained mbuf


womble
Contributor II

I am encountering some odd behaviour which I think might be caused by the DPAA2 DPDK poll mode driver. I have a DPDK 22.11.1 LTS application (on a LX2160A SoC) which does the following:

1. Receive a burst of mbufs from an input queue and process each mbuf as follows:

2. Allocate a trailer for the mbuf using rte_pktmbuf_alloc. This allocation comes from the same pool used for the received packets.

3. Chain the trailer to the received mbuf using rte_pktmbuf_chain(rx, trailer).

4. Transmit the resulting mbuf chain via a second DPAA2 port.

The issue I am seeing is that the call to rte_pktmbuf_alloc fails (after processing a few thousand packets) with a panic:
"PANIC in __rte_mbuf_raw_sanity_check():"
assert "m->next == ((void *)0)" failed. 

This sanity check verifies the consistency of mbufs allocated from the pool, including that the "next" pointer is NULL, as it should be for a freshly allocated mbuf. I only see this panic when earlier processing has created mbuf chains using the rte_pktmbuf_chain API.

I think the issue is that the PMD is releasing a chained mbuf back to the pool (after transmit) without first setting its next field to NULL. This causes a panic when a later allocation from the pool goes through an API that calls __rte_mbuf_raw_sanity_check. It would be very helpful if someone familiar with the DPDK PMD code could confirm this.

I could put together a reproducible example if someone from NXP is available to help with this.


June_Lu
NXP TechSupport

You can refer to test/test/test_mbuf.c to see how allocation and freeing of mbufs are exercised.


womble
Contributor II
Hi June,

Yes, I can see there are tests for the DPDK mbufs. Unfortunately that does not really help.

The problem is that the mempool gets corrupted when mbufs are returned to it after being sent by the DPAA2 PMD. This only appears to happen when all of the following conditions are true:

1. NXP's DPAA2 PMD is used to send the packets; the exact same DPDK code does not fail with other PMDs.
2. rte_pktmbuf_chain is used to attach another mbuf to the end of the mbuf being transmitted.
3. Enough packets have been processed that mbufs freed by NXP's PMD after transmit are returned to the pool and handed back out by rte_pktmbuf_alloc.

Put another way, the DPAA2 PMD is the only thing returning mbufs to the mempool in question, and the sanity check failure occurs when allocating mbufs from that pool.