MCF52235 as a slave MDIO device.

diegomacedo
Contributor I

Hello,

 

As part of my project requirements, I need to use an MCU as a slave MDIO device.

 

I'm pretty sure I can use the FEC MDIO to communicate with other MDIO devices, but given the lack of information about this bus in the Reference Manual, I need to confirm whether I can use this MCU as a slave MDIO device.

 

Thanks in advance!

 

 

Diego

6 Replies

scifi
Senior Contributor I

No, the FEC MDIO cannot be used as a slave device.

diegomacedo
Contributor I

scifi,

Thanks for your answer.

 

I'm developing an optical module that has to be controlled through an MDIO interface. Do you have any suggestions on how to provide that interface? Basically, my module has to receive commands over the MDIO bus, interpret them and take action.

 

Thank you.

 

 

Diego

TomE
Specialist II

Google for "mdio bus slave ip" and get:

 

http://www.vantis.com/products/intellectualproperty/referencedesigns/mdioslaveperipheral.cfm

http://www.latticesemi.com/documents/rd1074.pdf

 

The above is a hardware design to put into a Lattice part. I get other hits as well:

 

http://syswip.com/mdio-verification-ip

 

The Lattice document states that the MDIO bus has a maximum clock rate of 2.5MHz. You may be able to run this bus SLOWER in your application.
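
For reference, the frame a slave has to decode is fixed by IEEE 802.3 Clause 22 (this is from the standard, not the Lattice doc); the macro names below are just illustrative:

/* Clause 22 MDIO frame, sent MSB first, 64 bits total:
 *
 *   PRE(32 x '1')  ST(01)  OP(2)  PHYAD(5)  REGAD(5)  TA(2)  DATA(16)
 *
 * OP is 10 for a read, 01 for a write. On a read, the turnaround
 * (TA) is where the slave takes over driving the MDIO line. */
#define MDIO_ST       0x1   /* start bits "01" */
#define MDIO_OP_READ  0x2   /* opcode "10"     */
#define MDIO_OP_WRITE 0x1   /* opcode "01"     */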

 

The MCF52235 runs at 60MHz. The ratio of the CPU clock rate and the bus comms rate means that it *MIGHT* be possible to emulate an MDIO slave in SOFTWARE.

 

Possible, but probably not advisable. It might take months and may never work properly. You have the usual tradeoffs between "what is your time worth", "what is the time to market" and "maximum per-unit cost". The fastest way is to use a programmable device and buy the IP to go in it.

 

At the lowest level, the CPU could "bit-bang" the port and monitor it (and do all the timing) in software, like people used to emulate serial UARTs on cheap hardware 30 years ago. If the CPU has some nice capture timers (four 16-bit and four 32-bit ones; I don't know their capabilities), they may be pressed into service to capture the relevant timing edges and interrupt the CPU. You might be able to do something funky with the SPI port to try to help with the data transfer. The main critical factors are "can you recognise the start of a frame, take an interrupt and respond to it in time" and "can you afford to have interrupts locked out for the duration of the message transfer".
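
The capture-timer version is roughly this shape; mdio_sample() is a hypothetical helper, and the real DTIM/GPIO register names are in the Freescale headers:

#include <stdint.h>

extern int mdio_sample(void);  /* hypothetical: sample the MDIO pin */

static uint32_t frame;         /* bits shifted in so far            */
static int      nbits;         /* position within the 64-bit frame  */

/* Hooked to a DMA-timer capture interrupt, one hit per rising MDC
   edge. The whole point of slowing the bus down is to make the
   interrupt latency plus this handler fit in one MDC period. */
void mdc_capture_isr(void)
{
    frame = (frame << 1) | (mdio_sample() & 1);
    nbits++;
    /* Once ST/OP/PHYAD/REGAD are in, decide whether the frame is
       addressed to us and whether we must drive data after the TA. */
}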

 

Tom

 

JimDon
Senior Contributor III

Might be possible? Your kidding I assume. It looks like it would be really easy to do in software.

 

On the ColdFire, the INT inputs would do fine.

The QSPI does not support slave mode, so it would be of no help, and besides, the protocol is not byte-based.

Just interrupt on the rising edge and read the data input. On output, interrupt on the falling edge and drive the data line. Since the master controls all the timing with its clock, there are no timing worries at all.

The INT controller on ColdFire can interrupt on both edges, so you could just check on each interrupt whether you need to output or read (see the sketch below)...
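
Roughly like this; the pin helpers are made-up names, and the real setup is the usual EPORT/GPIO init:

#include <stdint.h>

extern int  mdc_level(void);    /* hypothetical: sample MDC level */
extern int  mdio_in(void);      /* hypothetical: sample MDIO      */
extern void mdio_out(int bit);  /* hypothetical: drive MDIO       */

static int driving;             /* set once we own the line (read TA) */

/* IRQ pin wired to MDC, configured for both edges. The master's
   clock sets all the timing, so we only have to keep up with it. */
void mdc_edge_isr(void)
{
    if (mdc_level()) {
        /* Rising edge: the master guarantees MDIO is stable here,
           so just sample it. */
        int bit = mdio_in();
        (void)bit;              /* ...shift into the frame decoder... */
    } else if (driving) {
        /* Falling edge in our output phase: present the next data
           bit before the master's next rising edge. */
        mdio_out(0 /* next response bit */);
    }
}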

FYI I2C is done in software all the time, and it is more complicated. Software UARTs are also still used all the time...

 

Also, forget the Lattice IP: at best it converts to WISHBONE, and then you would have to figure out how to talk to that. It just writes registers internal to the FPGA; you still have to get the data over to the MCU. Plus, messing with an FPGA would take far more time, you would have to program each board, and it adds extra cost for no good reason.

 

TomE
Specialist II

> Your kidding I assume.

 

My kidding what? You mean "you're kidding" I assume :smileyhappy:

 

Not kidding.

 

The CPU runs at 60MHz, a 17ns clock cycle, which is the instruction issue rate for executing NOPs. Correction: the TPF execution rate (NOPs take 3 clocks). To a first approximation, all bus cycles (instruction fetch, data read and write) take one clock cycle, except for successive stores, which attract a two-cycle pipeline stall, and reads (move from memory to register), which take 3 clocks instead of 2 for some reason.

 

The FLASH seems to be one-cycle-access, but there's a "factory option" of two cycles initial or something. I'd run my code from SRAM - I can trust its speed. At least this CPU doesn't have a cache to complicate the timing.

 

A 2.5MHz clock cycles in 400ns, with a 200ns high and a 200ns low period. That's 12 TPFs for the high and 12 TPFs for the low, or 4 "read instructions".

 

So when emulating an MDIO read cycle, the software has to "see" the falling edge and drive the data bit well before the next rising edge, within half of that time at most. That means "test, branch, write and loop back" in fewer than 6 clocks. I don't think so.
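
To make that concrete: the tightest polled response is something like this (the port macros are invented), and the read-test-branch alone is already most of the budget:

#include <stdint.h>

extern volatile uint8_t PORT_IN, PORT_OUT;  /* hypothetical GPIO regs */
#define MDC_MASK 0x01

/* Wait for MDC to fall, then drive the next data bit. At 60MHz the
   read (~3 clocks) plus test-and-branch leaves nothing in a ~6 clock
   budget, before any GPIO bus stalls. */
void drive_next_bit(uint8_t bit)
{
    while (PORT_IN & MDC_MASK)
        ;
    PORT_OUT = bit;
}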

 

Apart from all the other problems, you can't assume the GPIO ports run at the same speed as the CPU. Sometimes they stall the CPU for multiple clocks on reads and writes. The MCF52 is probably fast; the MCF54 has been reported as slow. I've worked on an ARM CPU (PXA320) that took 400 CPU clocks to read or write an IO port.

 

Here's someone who managed almost 2MHz as an MDIO master on an MCF52233, but that is a very tight loop blindly generating clock and data when it wants to send them, not syncing to someone else's clock:

 

https://community.freescale.com/message/52298#52298

 

Being "always ready" to receive one of these messages is a lot harder than sending them.

 

> Just interrupt on the rising edge, and read the data input.

 

I assume you mean "interrupt on the 2.5MHz clock".

 

A normally coded interrupt service routine is:

 

1 - Take the interrupt,

2 - Push 16 registers onto the stack

3 - Call the service routine

4 - Pop the 16 registers from the stack

5 - Return.

 

The register push and pop alone will take 34 clock cycles. Of course this can be coded in special-case assembly, using only a subset of the registers, but that's really not going to fly: there are only 12 clocks from interrupt to read! As well, you don't know WHEN the interrupt happened (in time) or which clock edge you're up to, so you can't decode the read or write part of the instruction.
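
Even letting the compiler do the register-subset trick only helps so much. A sketch assuming m68k GCC, whose interrupt_handler attribute saves only the registers the handler clobbers (the pin name is invented):

#include <stdint.h>

extern volatile uint8_t MDIO_PIN;  /* hypothetical MDIO data pin reg */

static uint32_t frame;             /* bits received so far           */

/* Saves/restores only the clobbered registers instead of a 34-clock
   movem.l of all 16 -- and the ~12-clock interrupt-to-read window
   at 2.5MHz is still gone before the first instruction here runs. */
void __attribute__((interrupt_handler)) mdc_isr(void)
{
    frame = (frame << 1) | (MDIO_PIN & 1);
}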


Which is why I said "capture the edge with a timer and slow the clock rate right down".

 

This device might be doing something else, like servicing other interrupts, which causes bad interrupt latency problems. That can probably be overcome with multi-level interrupts, as long as nothing else in the entire codebase disables interrupts to protect a data structure anywhere.

 

Tom

 

diegomacedo_cpq
Contributor I

Thanks for your help!! 

 
