Why is my DPDK rx packet count different from DPAA stats?

Contributor I

Hi All.

I'm developing a userspace DPDK application using LSDK 19.09, running on LS1046A, and using pktgen-dpdk running on a PC to generate test traffic. I count the received packets in my code, using the value returned by rte_eth_rx_burst(), and also read the stats from DPDK/DPAA.

In packets per second, the DPAA rx stats approximately match the pktgen-dpdk tx rate (within ~0.1%, with some second-to-second variation). But the packet count from rte_eth_rx_burst() is consistently around 10% lower. This happens at data rates of ~80 Mb/s, 300 Mb/s, and 700 Mb/s. My application forwards all packets, and the pktgen receive count matches my transmit count, which matches my rte_eth_rx_burst() count. I can't figure out where my packets are going.

The count of available buffers returned by rte_mempool_avail_count() stays reasonably constant, so I'm not leaking buffers.

None of the DPDK/DPAA stats from rte_eth_stats_get() or rte_eth_xstats_get() show any packet errors (e.g. missed, mbuf allocation, FCS, undersized). The port is running in promiscuous mode.

Packets are of normal size with no mbuf chaining; a cumulative count of mbuf.nb_segs matches the packet count in my code.

There is one place in my code where I receive packets:

        /* Read up to n_toread packets from rx queue 0 of this port. */
        nb_rxd = rte_eth_rx_burst(port, 0, app.mbuf_rx.array, n_toread);

        port_statistics[port].rx += nb_rxd;

Could rte_eth_rx_burst() be reading more packets than I ask for, and/or returning an incorrect count? Are there any other ways I could be losing packets?

I have one of my four 1GbE ports set up for Linux, and three ports available in userspace. I wonder if some of my packets could be leaking into the Linux driver? They don't show up in the stats from ifconfig.

Cheers,

Mark

Edit: The DPDK example app l2fwd also exhibits about a 10% packet loss in my test setup. It does not, unfortunately, show the DPDK/DPAA collected stats.

22 Replies

NXP Employee

We have recently updated the behavior so that you can dump the dropped packets.

Based on your feedback, I have created an internal ticket, and we will try to support this in the next release.


NXP TechSupport

When the OS sends more packets than are read by a single rte_eth_rx_burst() call, rx packets get stuck in the tap PMD and cannot be received, because trigger_seen is updated and subsequent calls do not return any packets.

The fix is to not update trigger_seen unless fewer than the maximum number of packets were received, allowing the next call to receive the rest.

Which LSDK version are you using? Please check whether the attached patch has been applied.
