The T2080 QDS system has six externally available DPAA-based Ethernet ports: two 10/100/1000Mbps RGMII ports (GETH1, GETH2) and four 10Gbps XFI ports (SFP1, SFP2, SFP3, SFP4). The two 10/100/1000Mbps ports have standard RJ-45 jacks and the four 10Gbps ports are terminated in four SFP+ cages.
According to the documentation, it should be possible to use either direct attach SFP+ cables or SFP+ optical modules. I'm attempting to implement support in VxWorks for the 10GbE ports and I've been able to get them to work using direct attach cables, but when trying a 10GBase-SR fiber module, I'm unable to establish a link.
In addition to the T2080 QDS board, I have an Intel Core i5 system with an x8 PCIe slot and an Intel 82599 NIC. This NIC has two Intel-branded 10GBase-SR modules. If I connect a multimode fiber patch cable between these two ports, I get a link between both of them, so I know the modules and my fiber patch cable are good.
If I remove one of the fiber modules and fit it into one of the SFP+ cages on the T2080 QDS and run the fiber patch from that module to the remaining module in the 82599 NIC, I can't get a link. The LED on the NIC never lights up, and the PCS status registers on the T2080 show link down. If I remove the 10GBase-SR modules and use a direct attach cable instead, I can get a link between the 82599 NIC and the T2080 QDS and exchange traffic. The LED on the 82599 NIC lights up and the PCS status registers show that the link is up. I don't think there's anything special about the Intel optical modules that would prevent them from working in another SFP+ cage, and I have used them in another device before.
As a test I tried using the same 10GBase-SR module with a fiber loopback adapter but the PCS status registers on the T2080 still showed no link.
Also, although I'm not running Linux on the board, I am booting VxWorks with U-Boot, and U-Boot exhibits the same behavior as VxWorks (I can get a link on the 10G ports with a DA cable, but not with the fiber module).
I couldn't find anything in the documentation that suggests that there's anything special that has to be done to use a 10GBase-SR fiber module compared to a direct attach cable. My suspicion is that I may need to use a different optical module, but as they cost about $200 each I'm hoping to get a little more information before I spend the money.
My question is: has anyone else tried to use the 10GbE ports on the T2080 QDS reference board using fiber SFP+ modules? If so, did it require any special finessing or did it just work out of the box with Linux? Did you have to use a particular 10GBase-SR module?
It looks like this may be due to a subtle board configuration issue. In the SFP+ cages, there is a TX disable pin. This pin must be at 0 volts in normal operation. The SFP+ direct attach cables don't care about the state of this pin, but the optical modules do: if it's not 0 volts, then transmission is disabled.
I checked my board with a voltmeter, and the TX disable pin shows 3.3 volts, so this is why the modules don't work.
According to the reference manual, this signal is controlled by BRDCFG9[3] in the QIXIS FPGA. This bit is listed in the U-Boot source in board/freescale/t208xqds/t208xqds_qixis.h as BRDCFG9_SFP_TX_EN; however, there doesn't appear to be any code that modifies it. I don't think Linux changes it either: I tested the Linux kernel image in SDK 1.9 and it shows the same behavior as my VxWorks code (i.e. the link is good with direct attach cables, but not with optical modules). I'm going to add code to modify this bit when I get back to work tomorrow and see if that fixes the issue.
Well, I tried modifying the BRDCFG9 register _and_ forcing the TX disable pins to ground with a jumper wire, but neither fixed the problem. So I'm back to square one. :/
Please check the following hwconfig modification:
I'm familiar with the hwconfig setting you refer to. Unfortunately, that applies to the XAUI riser card, not the SFP+ cages built into the T2080 QDS. The Freescale XAUI riser card has a Teranetics PHY chip and both a 10GBase-T copper port and an SFP+ port. By default it uses the 10GBase-T copper port. The hwconfig setting causes the Teranetics PHY driver in U-Boot to poke a magic register to make it switch to the SFP+ port so that you can use fiber instead of copper.
The SFP+ cages on the T2080 QDS are not set up the same way. They already function correctly, but _only_ with direct attach cables. In theory the DA cables and the fiber SFP+ modules should work exactly the same way and be plug-and-play, but for some reason the particular 10GBase-SR modules that I have don't work. (They do work in the XAUI riser card, though.)
Please provide U-Boot booting log and settings of all on-board configuration switches.
U-Boot output is below:
U-Boot 2015.01+SDKv1.9+geb3d4fc (Dec 02 2015 - 15:41:18)
CPU0: T2080E, Version: 1.1, (0x85380011)
Core: e6500, Version: 2.0, (0x80400120)
Clock Configuration:
CPU0:1800 MHz, CPU1:1800 MHz, CPU2:1800 MHz, CPU3:1800 MHz,
CCB:600 MHz,
DDR:933.333 MHz (1866.667 MT/s data rate) (Asynchronous), IFC:150 MHz
FMAN1: 700 MHz
QMAN: 300 MHz
PME: 600 MHz
L1: D-cache 32 KiB enabled
I-cache 32 KiB enabled
Reset Configuration Word (RCW):
00000000: 0c070012 0e000000 00000000 00000000
00000010: 66150002 00000000 ec027000 c1000000
00000020: 00000000 00000000 00000000 000307fc
00000030: 00000000 00000000 00000000 00000004
Board: T2080QDS, Sys ID: 0x28, Board Arch: V1, Board Version: A, boot from vBank0
FPGA: v19 (T1040QDS_2015_1208_1534), build 448 on Tue Dec 08 21:34:03 2015
SERDES Reference Clocks:
SD1_CLK1=156.25MHZ, SD1_CLK2=100.00MHz
SD2_CLK1=100.00MHz, SD2_CLK2=100.00MHz
I2C: ready
SPI: ready
DRAM: Initializing....using SPD
Detected UDIMM
6 GiB left unmapped
8 GiB (DDR3, 64-bit, CL=13, ECC on)
DDR Chip-Select Interleaving Mode: CS0+CS1
Flash: 128 MiB
L2: 2 MiB enabled
Corenet Platform Cache: 512 KiB enabled
Using SERDES1 Protocol: 102 (0x66)
Using SERDES2 Protocol: 21 (0x15)
SRIO1: disabled
SRIO2: disabled
SEC0: RNG instantiated
NAND: 512 MiB
MMC: FSL_SDHC: 0
MMC: no card present
EEPROM: NXID v1
PCIe1: Root Complex, no link, regs @ 0xfe240000
PCIe1: Bus 00 - 00
PCIe2: Root Complex, no link, regs @ 0xfe250000
PCIe2: Bus 01 - 01
PCIe3: disabled
PCIe4: Root Complex, no link, regs @ 0xfe270000
PCIe4: Bus 02 - 02
In: serial
Out: serial
Err: serial
Net: Fman1: Uploading microcode version 106.4.17
Phy 4 not found
PHY reset timed out
Phy 5 not found
PHY reset timed out
Phy 6 not found
PHY reset timed out
Phy 7 not found
PHY reset timed out
FM1@DTSEC3 [PRIME], FM1@DTSEC4, FM1@TGEC1, FM1@TGEC2, FM1@TGEC3, FM1@TGEC4
Hit any key to stop autoboot: 0
=>
All the switch settings are at the factory defaults according to the quick start guide that came with the board.
Hi,
Can you share the quick start manual of the T2080QDS board? I want to verify the DIP switch settings.
Regards,
Swaroop
You wrote:
> If I remove one of the fiber modules and fit it into one of the SFP+ cages on the T2080 QDS
Which cage on the QDS did you use, exactly?
Just as some added background, I also have a T4240 QDS board and a couple of the Freescale XAUI media adapter cards (in slots 1 and 2). These adapters have 10Gbase-T copper connections, but also include a single SFP+ cage which can be activated by poking a vendor-specific register in the PHY. (These are fairly old cards; I don't know if the process to switch to the SFP+ cage is different with newer ones.)
I've used the same module that I'm attempting to test with the T2080 in the SFP+ cage on the XAUI card on the T4240, and there it works correctly: I see the link LED on the 82599 NIC light up when I connect the fiber patch cable.
Am I correct in my assumption that plugging in a fiber module should "just work"?
I tested all four of them at one point or another. As far as I recall, I couldn't get a link with any of them using the 10GBase-SR module, but they all work with the DA cables.