LS1021A-TWR UDP packet loss

Contributor II


I'm evaluating the network performance on an LS1021A-TWR eval board.

I'm using OpenWrt v19.07.3, building an image that boots from SD card, with the Layerscape configured to route traffic between eth1 and eth2. Two PCs are connected to the ports, using iperf 3.8.1 to generate and receive UDP traffic.

I'm observing a small, random background level of packet loss while routing between eth1 and eth2, which becomes more frequent as the packet rate/bandwidth increases. This happens at levels significantly below the maximum throughput.

This is illustrated in the attached graphic (x: bandwidth, y: packet loss, z: seconds into test).

From looking at the gianfar (gfar) driver source files, they seem to be very close to mainline (including the fix that disables EEE by default).

From 'tc -s qdisc' and 'ethtool -S' output I can see that some packets are being dropped in the driver rather than in the qdisc queue, but these drops account for less than the total packet loss I'm observing.
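For reference, the same per-interface drop counters can also be read directly from sysfs, which works even on a minimal image without ethtool installed. A sketch (IFACE=lo is used here only so the snippet runs anywhere; on the board you would substitute eth1 or eth2):

```shell
# Read basic drop/error counters for one interface from sysfs.
# Assumption: standard Linux /sys/class/net layout.
IFACE=lo
for f in rx_dropped tx_dropped rx_errors tx_errors; do
  printf '%s: ' "$f"
  cat "/sys/class/net/$IFACE/statistics/$f" 2>/dev/null || echo "n/a"
done
```

Comparing these counters against the iperf-reported loss helps localize whether packets disappear in the driver, the qdisc, or elsewhere along the forwarding path.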

Are there any known issues or config changes required to reduce the amount of background packet loss?




NXP TechSupport

Please disable CONFIG_NETFILTER and CONFIG_CPU_FREQ in Linux kernel configuration file.
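For anyone following along, disabling those symbols in a kernel .config looks roughly like this. This is a self-contained sketch that edits a standalone .config fragment with sed; in a real kernel tree you would typically use scripts/config followed by make olddefconfig so dependent symbols are resolved:

```shell
# Create a minimal .config fragment to demonstrate the change
# (the symbol list here is illustrative, not a full config).
cat > demo.config <<'EOF'
CONFIG_NET=y
CONFIG_NETFILTER=y
CONFIG_CPU_FREQ=y
EOF

# Turn each symbol off: replace "=y" with the "is not set" form
# the kernel config system uses for disabled options.
for sym in CONFIG_NETFILTER CONFIG_CPU_FREQ; do
  sed -i "s/^${sym}=y/# ${sym} is not set/" demo.config
done

cat demo.config
```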

In addition, it's recommended to use NXP's latest released OpenWrt.

$ git clone

$ git checkout OpenWrt-19.04

Contributor II

Thanks yipingwang - disabling CONFIG_CPU_FREQ stops the background packet loss.

FYI, NETFILTER was not enabled in my configuration.
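For readers who cannot rebuild the kernel immediately, a possible runtime workaround (an assumption on my part, not something confirmed in this thread) is to pin the cpufreq governor to "performance", which should stop the frequency transitions even while CONFIG_CPU_FREQ remains enabled:

```shell
# Pin every CPU's cpufreq governor to "performance" (requires root
# and a kernel with cpufreq sysfs support; the glob matches nothing
# on kernels built without CONFIG_CPU_FREQ, so the loop is a no-op).
for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
  if [ -w "$gov" ]; then
    echo performance > "$gov"
  fi
done
# Show the active governors, if any.
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor 2>/dev/null || true
echo "governor update attempted"
```

Rebuilding without CONFIG_CPU_FREQ, as suggested above, remains the confirmed fix.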
