DMA Bandwidth control T4240Qds rev 1



vigneshwarensan
Contributor III

We are working on a T4240QDS rev 1 board.

We have a 10 Gbps fibre optic card connected to slot 7. Slot 7 is configured as a x8 link at 5 GT/s (the card itself supports only a x4 link width at 5 GT/s).

The bandwidth that I get from the card is around 2.5 Gbps, which is rather slow. I checked the manual, and it says that the DMA performs bandwidth control on the 8 channels.

Section 24.4.1.7 on page 1988 says that bandwidth control can and should be disabled to get higher performance.

How do I disable bandwidth control on my rev 1 board?

UPDATE:

Using @lunminliang's suggestion, I checked the kernel code and found that in ./drivers/dma/fsldma.h the bandwidth control value is defined as:

#define FSL_DMA_MR_BWC 0x08000000

This corresponds to a value of MRn[BWC] = "1000"

Is this the bottleneck for the 10Gbps card?

Would giving it a value of 0x0f000000 disable bandwidth control and therefore give me better performance?

Thanks.


lunminliang
NXP Employee

I guess you have read the DMA mode register description: if MRn[BWC] equals "1111", bandwidth sharing is disabled.

I am wondering where the bottleneck is: on the QDS, or in the card/driver? How did you analyze it? How was the 2.5 Gbps bandwidth measured?


vigneshwarensan
Contributor III

I ran a netperf benchmark against the IP address configured on the card.

Benchmark setup details: we had two rev 1 T4240QDS boards with a 10 Gbps card in each, connected back-to-back through an optical fibre cable. There were no switches/routers between these interfaces. I then ran a long netperf benchmark, which gave me around 2.5 Gbps.

I further applied a few optimizations (using jumbo frames, increasing TCP memory limits, etc.) from https://www.kernel.org/doc/ols/2009/ols2009-pages-169-184.pdf, which increased the bandwidth to around 3.0 Gbps.
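Roughly, the tuning commands were of this form (the interface name and the exact buffer values shown here are illustrative, not the ones from my setup):

```shell
# Jumbo frames: MTU must be raised on both ends of the link.
ip link set dev eth0 mtu 9000

# Raise socket buffer ceilings so TCP can keep a 10G pipe full.
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
```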

UPDATE:

I thought I would give you some more details. The CPUs run at 1666.667 MHz, the DDR at 800 MHz, and the SERDES reference clocks are: SERDES1=125MHz SERDES2=125MHz SERDES3=100MHz SERDES4=100MHz.

Also, where in the kernel can I check which DMA mode register values are being set?

What portions do you recommend I start concentrating on to identify the bottleneck?

Thanks.


yipingwang
NXP TechSupport

Hello Vigneshwaren Sankaran,


There is no need to consider the DMA, because the DPAA Ethernet driver does not use the general-purpose DMA controller at all. Please perform your test with the default kernel configuration file.


According to your performance data, it seems that you used only one core in the netperf testing. Please try to use all 24 cores of the T4240 in your performance test.

Please refer to the SDK document; the section "Netperf" introduces how to use netperf to affine multiple cores to the same Ethernet port.
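The general pattern is one netperf instance per core, each pinned with taskset, run in parallel against the same port. A dry-run sketch that only prints the commands it would launch (the server address is a placeholder; on the real boards, start netserver on the remote T4240QDS and run the printed commands instead):

```shell
#!/bin/sh
# Dry run: print one pinned netperf invocation per core.
SERVER=192.168.1.2   # placeholder address of the remote netserver
CORES=24             # T4240 has 24 virtual cores
i=0
while [ "$i" -lt "$CORES" ]; do
    echo "taskset -c $i netperf -H $SERVER -t TCP_STREAM -l 60 &"
    i=$((i + 1))
done
echo wait            # the real script would wait for all instances
```

Summing the throughput reported by all instances gives the aggregate bandwidth for the port.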


Have a great day,
Yiping

