
P2020RDB throughput

Question asked by alessandro nasorri on Mar 10, 2015

Hello I'm Alessandro.

I'm evaluating the throughput of the P2020RDB.

I've generated both a core and a minimal image with Yocto SDK 1.7 and loaded them onto a ramdisk. I haven't noticed any relevant difference in throughput between the core and the minimal image.

For my tests I used the default kernel configuration provided by Yocto for this board.

Looking at the Freescale benchmark of Apr 28, 2010, the expected result for the P2020RDB with unidirectional 64-byte traffic was 510,000 pkt/sec.

Below you can find the U-Boot boot output:


CPU0:  P2020E, Version: 2.1, (0x80ea0021)

Core:  E500, Version: 5.1, (0x80211051)

Clock Configuration:

       CPU0:1200 MHz, CPU1:1200 MHz,

       CCB:600  MHz,

       DDR:400  MHz (800 MT/s data rate) (Asynchronous), LBC:75   MHz

L1:    D-cache 32 kB enabled

       I-cache 32 kB enabled

Board: P2020RDB CPLD: V4.1 PCBA: V4.0

rom_loc: nor upper bank

SD/MMC : 4-bit Mode

eSPI : Enabled

I2C:   ready

SPI:   ready

DRAM:  Detected UDIMM

1 GiB (DDR3, 64-bit, CL=6, ECC off)

Flash: 16 MiB

L2:    512 KB enabled

NAND:  32 MiB


Bad block table found at page 65504, version 0x01

Bad block table found at page 65472, version 0x01

PCIe1: Root Complex of mini PCIe SLOT, no link, regs @ 0xffe0a000

PCIe1: Bus 00 - 00

PCIe2: Root Complex of PCIe SLOT, x1, regs @ 0xffe09000

  02:00.0     - 1095:3132 - Mass storage controller

PCIe2: Bus 01 - 02

In:    serial

Out:   serial

Err:   serial

Net:   eTSEC2 is in sgmii mode.

uploading VSC7385 microcode from ef020000

PHY reset timed out


Hit any key to stop autoboot:  0



Running a test with 64-byte layer-2 packets for a period of 120 seconds, I instead obtained the results below:


image        interrupt coalescing         buffer descriptors       throughput [pkt/sec]
P2020 core   rx-frames=0  tx-frames=0     bd-rx=256 bd-tx=256      296,914.06
P2020 core   rx-frames=22 tx-frames=22    bd-rx=256 bd-tx=256      301,933.69
P2020 core   rx-frames=22 tx-frames=22    bd-rx=128 bd-tx=128      310,937.50
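For reference, the coalescing and ring-size settings in the table above can be applied with ethtool. This is only a sketch: the interface name eth0 is a placeholder for whichever eTSEC interface carries the test traffic.

```shell
# Interrupt coalescing: raise an interrupt only every 22 frames
# (rx-frames 0 would disable RX frame coalescing, as in the first row).
ethtool -C eth0 rx-frames 22 tx-frames 22

# RX/TX buffer-descriptor ring sizes, matching the last table row.
ethtool -G eth0 rx 128 tx 128

# Read back the current settings to verify.
ethtool -c eth0
ethtool -g eth0
```

Note that these commands require root privileges and take effect immediately on the running interface.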



I've modified the interrupt coalescing and buffer descriptor settings according to the optimized configuration shown in your benchmark.

Despite these modifications, the throughput I obtained is far from the 510,000 pkt/sec of your benchmark.
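In case it helps reproduce the setup: a stream of small layer-2 frames like the one used in the test can be generated with the Linux kernel pktgen module. This is a sketch only; the interface name (eth0) and destination MAC are placeholders, and the exact pkt_size value depends on whether the 4-byte CRC is counted in the 64-byte frame size.

```shell
# Load the kernel packet generator.
modprobe pktgen

# Bind the transmit interface to the first pktgen kernel thread.
echo "add_device eth0" > /proc/net/pktgen/kpktgend_0

# Minimum-size frames: pktgen's pkt_size excludes the 4-byte CRC,
# so 60 here gives 64-byte frames on the wire.
echo "pkt_size 60" > /proc/net/pktgen/eth0
echo "count 0"     > /proc/net/pktgen/eth0   # 0 = run until stopped
echo "dst_mac 00:04:9f:00:00:01" > /proc/net/pktgen/eth0   # placeholder MAC

# Run for the 120-second test window, then stop.
echo "start" > /proc/net/pktgen/pgctrl &
sleep 120
echo "stop" > /proc/net/pktgen/pgctrl
```

The results are then read from /proc/net/pktgen/eth0 after the run completes.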


What I would like to know is:


  • Why are my results so different from your benchmark?
  • Is there an optimized kernel configuration to use? If so, could you please send it to me?


Thanks in advance