I am encountering some odd behaviour which I think might be caused by the DPAA2 DPDK poll mode driver. I have a DPDK 22.11.1 LTS application (on a LX2160A SoC) which does the following:
1. receive a burst of mbufs from an input queue and, for each mbuf:
2. allocate a trailer for the mbuf using rte_pktmbuf_alloc(). This allocation comes from the same mempool used for received packets.
3. chain the trailer to the received mbuf using rte_pktmbuf_chain(rx, trailer).
4. transmit the resulting mbuf chain via a second dpaa2 port.
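For reference, the datapath described above looks roughly like this. This is a minimal sketch rather than the actual application: the port IDs, queue ID, mempool handle and trailer length are placeholders, and error handling is trimmed.

```c
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

/* rx_port, tx_port, pool and trailer_len stand in for the real
 * application's configuration; both ports share the same mempool. */
static void
forward_with_trailer(uint16_t rx_port, uint16_t tx_port,
                     struct rte_mempool *pool, uint16_t trailer_len)
{
    struct rte_mbuf *bufs[BURST_SIZE];
    uint16_t nb_rx = rte_eth_rx_burst(rx_port, 0, bufs, BURST_SIZE);

    for (uint16_t i = 0; i < nb_rx; i++) {
        /* step 2: trailer from the same pool as the RX mbufs;
         * this is the call that panics in __rte_mbuf_raw_sanity_check() */
        struct rte_mbuf *trailer = rte_pktmbuf_alloc(pool);
        if (trailer == NULL) {
            rte_pktmbuf_free(bufs[i]);
            continue;
        }
        trailer->data_len = trailer_len;
        trailer->pkt_len = trailer_len;

        /* step 3: append the trailer as a second segment */
        if (rte_pktmbuf_chain(bufs[i], trailer) != 0) {
            rte_pktmbuf_free(trailer);
            rte_pktmbuf_free(bufs[i]);
            continue;
        }

        /* step 4: transmit the two-segment chain on the second port */
        if (rte_eth_tx_burst(tx_port, 0, &bufs[i], 1) != 1)
            rte_pktmbuf_free(bufs[i]);
    }
}
```

This fragment needs a configured DPDK EAL and ethdev ports to run, so it is illustrative only.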
The issue I am seeing is that the call to rte_pktmbuf_alloc fails (after processing a few thousand packets) with a panic:
"PANIC in __rte_mbuf_raw_sanity_check():"
assert "m->next == ((void *)0)" failed.
This sanity check verifies the consistency of mbufs allocated from the pool, including that the "next" pointer is NULL, as it should be for an mbuf sitting in the pool. I only see this panic when previous operations have created an mbuf chain using the rte_pktmbuf_chain API.
I think the issue is that the PMD releases a chained mbuf back to the pool (after transmit) without first setting the next field to NULL. A subsequent allocation from the pool then panics in any API that calls __rte_mbuf_raw_sanity_check. It would be very helpful if someone familiar with the DPDK PMD code could confirm this.
I could look to put together a reproducible example if someone from NXP is available to help with this.
You can refer to test/test/test_mbuf.c to see how allocation and freeing of mbufs are exercised.