T1040RDB upstream kernel ethernet issue



1,971 Views
stefanlange
Contributor III

Hello NXP team,

 

I have a T1040RDB here that I am experimenting with. (It is not a T1040D4RDB.)

 

I am working with the Freescale SDK QorIQ-SDK-V1.8-20150619-yocto, provided in a virtual machine by Freescale.

 

From the Yocto project I extracted the standalone SDK so that I can conveniently compile the kernel outside the Yocto environment (bitbake core-image-minimal -c populate_sdk).
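For reference, this is roughly how I generated and installed that standalone SDK; the installer file name below is only an example and varies with the SDK version and target:

# inside the Yocto build directory
bitbake core-image-minimal -c populate_sdk
# install the generated SDK installer from tmp/deploy/sdk/ (file name is an example)
sh tmp/deploy/sdk/fsl-qoriq-*-ppc64e5500-toolchain-*.sh -d /opt/freescale-sdk-64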

 

I got the Linux SDK sources from the Freescale git

http://git.freescale.com/git/cgit.cgi/ppc/sdk/linux.git/

 

and compiled them using the SDK above:

 

 

source /opt/freescale-sdk-64/environment-setup-ppc64e5500-fsl-linux
export LDFLAGS="${LDFLAGS//-Wl,/}"
make corenet64_smp_defconfig
make menuconfig
> set the FMAN Kconfig selection to FMANV3L
make uImage
make t1040rdb.dtb

 

The kernel compiles successfully.

Booting the kernel via NFS is successful; see the attached log T1040RDB_sdk_bootlog.txt.

The root file system is a 64-bit lsb-sdk network file system created with Yocto for a different project involving a T1042.

The ethernet interfaces work as intended; see the attached log T1040RDB_sdk_bootlog.txt.

All good so far.
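For reference, the U-Boot side of the NFS boot looks roughly like this; the server IP, paths, and load addresses are examples from my setup:

# U-Boot (example addresses, server IP and paths)
setenv bootargs console=ttyS0,115200 root=/dev/nfs rw nfsroot=192.168.90.200:/exports/t1040-rootfs ip=dhcp
tftp 0x1000000 uImage
tftp 0xc00000 t1040rdb.dtb
bootm 0x1000000 - 0xc00000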

 

 

Going forward, though, I would like to work with the mainline kernel.

 

I then got the upstream Linux sources from the Freescale git

http://git.freescale.com/git/cgit.cgi/ppc/upstream/linux.git/

 

and compiled them using the same SDK:

 

 

source /opt/freescale-sdk-64/environment-setup-ppc64e5500-fsl-linux
export LDFLAGS="${LDFLAGS//-Wl,/}"
make corenet64_smp_defconfig
make menuconfig
> FMAN Kconfig selection not available here. Do nothing in menuconfig.
make uImage
make fsl/t1040rdb.dtb
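(Since the FMAN Kconfig selection is not available here, a quick way to see which FMan/DPAA ethernet pieces a tree actually carries is to look for the driver sources and Kconfig symbols. The paths and symbols below are the ones used in recent mainline and may differ or be missing in older trees.)

# check which FMan/DPAA ethernet bits exist in the source tree (paths as in recent mainline)
ls drivers/net/ethernet/freescale/fman 2>/dev/null
grep -rn "FSL_FMAN\|FSL_DPAA_ETH" drivers/net/ethernet/freescale --include=Kconfig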

 

 

The kernel compiles successfully.

Booting the kernel via NFS is successful; see the attached log T1040RDB_upstream_bootlog.txt.

The root file system is the same as above.

 

But now there is an issue with the regular ethernet interfaces:

 

root@tqmt1042-64b-stk:~# ifconfig fm1-gb3 192.168.90.244

IPv6: ADDRCONF(NETDEV_UP): fm1-gb3: link is not ready

root@tqmt1042-64b-stk:~# IPv6: ADDRCONF(NETDEV_CHANGE): fm1-gb3: link becomes ready

 

root@tqmt1042-64b-stk:~# ifconfig fm1-gb3

fm1-gb3   Link encap:Ethernet  HWaddr 00:04:9f:03:05:e7

          inet addr:192.168.90.244  Bcast:192.168.90.255  Mask:255.255.255.0

          inet6 addr: fe80::204:9fff:fe03:5e7/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:0 (0.0 B)  TX bytes:932 (932.0 B)

          Memory:ffe4e6000-ffe4e6fff

 

root@tqmt1042-64b-stk:~# ping -I fm1-gb3 192.168.90.200

PING 192.168.90.200 (192.168.90.200) from 192.168.90.244 fm1-gb3: 56(84) bytes of data.

From 192.168.90.244 icmp_seq=1 Destination Host Unreachable

From 192.168.90.244 icmp_seq=2 Destination Host Unreachable

From 192.168.90.244 icmp_seq=3 Destination Host Unreachable

^C

--- 192.168.90.200 ping statistics ---

5 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3999ms

pipe 3

root@tqmt1042-64b-stk:~# ifconfig fm1-gb3

fm1-gb3   Link encap:Ethernet  HWaddr 00:04:9f:03:05:e7

          inet addr:192.168.90.244  Bcast:192.168.90.255  Mask:255.255.255.0

          inet6 addr: fe80::204:9fff:fe03:5e7/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:0 errors:0 dropped:0 overruns:0 frame:0

          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:0 (0.0 B)  TX bytes:1468 (1.4 KiB)

          Memory:ffe4e6000-ffe4e6fff

 

The ping does not work, and no packets are received on the interface.

         

The interface is attached to a NIC on my desktop PC.

I can see the incoming ARP frames from the T1040RDB in Wireshark.

The desktop PC answers with an ARP reply.

However, this reply never reaches the Linux network stack on the T1040RDB; the RX packet count does not increase.
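For reference, this is how I double-check on the target that nothing reaches the stack (assuming tcpdump is available in the root file system):

# capture ARP traffic directly on the interface; the ARP replies never show up here,
# which matches the RX packet counter staying at 0
tcpdump -i fm1-gb3 -e -n arp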

Checking the respective eMAC registers with a tool shows something interesting.

It seems that RX octets are received but somehow get stuck in the MAC:
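(The addresses used below are simply the MAC's MMIO window from the ifconfig output, Memory:ffe4e6000-ffe4e6fff, plus each statistics register's offset; memmap64 is just the physical-address reader I happen to have on the target.)

# base = 0xffe4e6000 (MAC MMIO window reported by ifconfig)
memmap64 -r 0xffe4e615c   # base + 0x15c: RDRP   (receive dropped packets)
memmap64 -r 0xffe4e61cc   # base + 0x1cc: RDRNTP (receive dropped, not truncated)
memmap64 -r 0xffe4e613c   # base + 0x13c: RERR   (receive frame errors)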

 

Receive Dropped Packets Counter Register (RDRPn)

root@tqmt1042-64b-stk:~#  memmap64 -r 0xffe4e615c

reg 0x       ffe4e615c:   0x      8200000000

 

Incremented for each dropped packet due to internal errors. Occurs when a receive FIFO overflows.

Includes also packets truncated as a result of the receive FIFO overflow.

This counter is continuously increasing with frames sent from the desktop PC to the T1040RDB.

 

Receive Dropped Not Truncated Packets Counter Register (RDRNTPn)

root@tqmt1042-64b-stk:~#  memmap64 -r 0xffe4e61cc

reg 0x       ffe4e61cc:   0x      8200000000

Incremented for each fully dropped packet (not truncated) due to internal errors. Occurs when a receive

FIFO overflows.

This counter is continuously increasing with frames sent from the desktop PC to the T1040RDB.

 

Receive Frame Error Counter Register (RERRn)

root@tqmt1042-64b-stk:~#  memmap64 -r 0xffe4e613c

reg 0x       ffe4e613c:   0x       100000000

 

Incremented for each frame received with an error (except for undersized/fragment frames):

• FIFO overflow error

• CRC error

• Payload length error

• Jabber and oversized error

• Alignment error (if supported)

• Reception of PHY/PCS error indication (0xFE, not a code error)

 

 

Is the upstream kernel that I am using supposed to function in the way I am using it?

Or have I overlooked something (e.g. regarding the initialisation of the DPAA/FMan)?
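In case it is relevant: this is how I compare what the DPAA/FMan pieces report at boot in the two kernels (the exact messages differ between the SDK and the upstream tree):

# look for FMan/QMan/BMan/DPAA related messages in the boot log
dmesg | grep -iE 'fman|qman|bman|dpaa'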

 

Maybe you can share some advice.

 

Thanks and best regards,

Stefan

Original Attachment has been moved to: T1040RDB_upstream_bootlog.txt.zip

Original Attachment has been moved to: T1040RDB_sdk_bootlog.txt.zip

6 Replies

1,554 Views
sinanakman
Senior Contributor III

Hi Stefan

The upstream tree you mention seems to be rather old (the latest commit is from 2015-11-05). Why don't you give the mainline tree at kernel.org a try? The problem you are seeing might already be fixed there.

Hope this helps

Sinan Akman


1,554 Views
stefanlange
Contributor III

Hi Sinan,

thanks for your response.

I tried the kernel.org mainline kernel, but with the same result.

But I found something interesting:

If I deactivate the IOMMU/PAMU driver in the mainline kernel, the ethernet interfaces work.

This is an acceptable solution for us at this point.

I did not investigate the root cause further; there are no IOMMU references in the ethernet device tree nodes.
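In case it helps anyone else, this is roughly what I do before rebuilding; the Kconfig symbol is FSL_PAMU in the trees I tried:

# turn off the Freescale PAMU IOMMU driver and rebuild
./scripts/config --disable FSL_PAMU
make olddefconfig
make uImage fsl/t1040rdb.dtb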

Best regards,

Stefan


1,554 Views
sinanakman
Senior Contributor III

Hi Stefan

I would recommend posting the problem to the kernel mailing list and CCing the network maintainer. If I get access to a T1040RDB board in the coming weeks, I will debug it as well.

Regards

Sinan Akman


1,554 Views
scottwood
NXP Employee

Mainline kernels do not have all the parts of the ethernet driver (despite having some FMan code merged), so I don't see how this could work regardless of whether PAMU is enabled or not. There's nothing the netdev maintainer can do about this. We're working on getting the rest of the driver submitted.


1,554 Views
sinanakman
Senior Contributor III

Hi Scott

Thanks for following up on the thread and for clarifying the status of mainlining the ethernet driver. Sorry for misleading Stefan with the netdev maintainer CC suggestion. I hope the driver will be mainlined soon so that Stefan can give it a try.

Best regards

Sinan Akman


1,554 Views
Pavel
NXP Employee

NXP now offers SDK 2.0, which supports the T1040RDB and DDR4 memory.

Perhaps this SDK is suitable for your task, and a mainline kernel is not needed.


Have a great day,
Pavel Chubakov

