Out of Buffers Discard Counter

sedat_altun
Contributor III

Hi,

I have a problem with out-of-buffer discards while using the T4240RDB as a router; any ideas would be appreciated.

Below I will try to explain the case.

I am using a T4240RDB with SDK 2.0 and an IXIA as a packet generator.

I am sending packets at a high rate from the IXIA to the 10G interface fm2-mac10, and at the same time I am pinging the 1G interface fm1-mac1. After 10 seconds I start getting reply timeouts on fm1-mac1.

When I looked at the statistics of fm2-mac10 and fm1-mac1, the "FMBM_RODC – Out of Buffers Discard Counter" is increasing on both, and the CPU load is 100%. I assume that ingress packets at this rate cannot be processed in time, so the buffers run out and packets are dropped on both fm2-mac10 and fm1-mac1.

My question is: since fm1-mac1 and fm2-mac10 belong to different FMANs, why does running out of buffers on fm2-mac10 cause packet drops on fm1-mac1? I am asking because the packets on fm1-mac1 are critical and I do not want them to be dropped; fm2-mac10 packets may be dropped.

Is there any way to avoid dropping fm1-mac1 packets in this case?

Maybe if I could separate the external buffers of fm1-mac1 and fm2-mac10, I would avoid packet drops on fm1-mac1. Is there any way to separate the BMan buffer pools for these MAC interfaces?

Thank you 

2 Replies
bpe
NXP Employee

Performance is the key factor here. If the software is not fast enough
to handle frames and recycle buffers in time, your buffer pools will
starve sooner or later, no matter how much memory is allocated to them.
The main suggestion is to monitor CPU load with mpstat or a similar tool
to see what your CPUs are doing, and to use a PCD policy that
distributes the traffic processing load equally between the available CPUs.
It is also very important what you are doing with the ingress traffic.
For L3 forwarding, provided that things are done correctly, you can
expect much more than 10 Gbps. See the reproducibility guide here.
As for pinging, I do not quite understand how this can be critical, as it
is only a diagnostic tool. There is no HW acceleration for ICMP except L3
checksumming; if you are flood pinging at 1 Gbps, you should not expect
the target device to respond to all echo requests.
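
To illustrate the monitoring part, here is a minimal sketch in plain Python 3. It only assumes a standard Linux /proc/stat on the target (nothing DPAA-specific) and the 1-second sampling interval is arbitrary. A single saturated core next to mostly idle ones is the usual sign that the PCD policy is not spreading the ingress traffic:

import time

def snapshot():
    # Return {cpu_name: (total_jiffies, idle_jiffies)} from /proc/stat.
    cpus = {}
    with open("/proc/stat") as f:
        for line in f:
            # Per-CPU lines look like "cpu0 user nice system idle iowait ..."
            if line.startswith("cpu") and line[3].isdigit():
                name, *vals = line.split()
                vals = [int(v) for v in vals]
                idle = vals[3] + vals[4]          # idle + iowait columns
                cpus[name] = (sum(vals), idle)
    return cpus

before = snapshot()
time.sleep(1.0)
after = snapshot()

for cpu in sorted(before, key=lambda c: int(c[3:])):
    total = after[cpu][0] - before[cpu][0]
    idle = after[cpu][1] - before[cpu][1]
    busy = 100.0 * (total - idle) / total if total else 0.0
    print("{:>6}: {:5.1f}% busy".format(cpu, busy))

If the sysstat package is installed on the target, "mpstat -P ALL 1" gives you the same per-CPU picture without any scripting.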

Buffer pools in Linux are allocated and seeded automatically;
the user does not have control over this. You can read the pools used by
a network interface as described here and check the free buffer count
as described here.
If you wish to have more precise control over buffer pools, consider
moving your application to USDPAA, where static buffer pool parameters
can be specified in the Device Tree.
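
While the links above are unavailable, a rough way to watch the effect from Linux is to poll the generic per-interface statistics in sysfs. The sketch below is Python 3 and uses only the standard /sys/class/net/<if>/statistics counters; the interface names are the ones from this thread, and these are not the DPAA buffer-pool counters themselves (the FMBM_RODC and free-buffer counts are exposed by the SDK elsewhere, and their exact location depends on the release):

import time

IFACES = ["fm1-mac1", "fm2-mac10"]            # names taken from this thread
COUNTERS = ["rx_packets", "rx_dropped", "rx_errors"]

def read(iface, counter):
    # Generic Linux network statistics, one integer per file.
    path = "/sys/class/net/{}/statistics/{}".format(iface, counter)
    with open(path) as f:
        return int(f.read())

prev = {(i, c): read(i, c) for i in IFACES for c in COUNTERS}
while True:
    time.sleep(1.0)
    for i in IFACES:
        deltas = []
        for c in COUNTERS:
            now = read(i, c)
            deltas.append("{}/s={}".format(c, now - prev[(i, c)]))
            prev[(i, c)] = now
        print(i, " ".join(deltas))

Running this while the IXIA floods fm2-mac10 lets you see whether drops start appearing on fm1-mac1 at the same moment the other port saturates.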

Note that static buffer pool assignment is only supported in the shared and MAC-less modes, which means you would be using the pools from USDPAA.


Hope this helps,
Platon

-------------------------------------------------------------------------------
Note:
- If this post answers your question, please click the "Mark Correct" button. Thank you!

- We follow threads for 7 weeks after the last post; later replies are ignored.
Please open a new thread and refer to the closed one if you have a related question at a later point in time.
-------------------------------------------------------------------------------

sedat_altun
Contributor III

Hi,

Thank you very much for your reply.

Ping is not critical for me, I just gave it as an example. All the ingress packets on fm1-mac1 are critical; I only used ping for testing purposes.
