> What is the recommended method to implement these timings?
There are good ways and bad ways. The "bad ways" involve NOPs and other hand-generated delays that seem to work "now" but may not survive a compiler update, a recompile, or a change in the clock or memory settings.
These chips usually have plenty of spare timers. The chip you're using has four DMA Timers - I'd use one of these.
Start one of these timers free-running at 1MHz or 10MHz or whatever is most useful for your timing. Then write something like:
#define LCD_TIMEOUT_NS (100)
#define DMA_TIMER_MHZ (10)
#define DMA_TIMER_NS_TO_TICKS(ns) (1 + ((ns) * DMA_TIMER_MHZ / 1000))
#define LCD_TIMEOUT_TICKS (DMA_TIMER_NS_TO_TICKS(LCD_TIMEOUT_NS))
uint32_t nTimeStart = MCF_DTIM1_DTCN;
while ((MCF_DTIM1_DTCN - nTimeStart) < LCD_TIMEOUT_TICKS)
{
    ;
}

The above assumes DMA Timer 1 is running. You should make the above "function" a Macro or an Inline Function and call it with an appropriate parameter. Note that the minimum time of a timer delay like the above is LESS than you expect: if you call it with a count of "1" the timer may roll over during your test, giving you a shorter delay than requested. That's why the macro above adds "1" to guarantee the minimum time.
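As a sketch of the "inline function" suggestion above: the loop can be wrapped so callers just pass a tick count. `MCF_DTIM1_DTCN` is the real chip's DMA Timer 1 counter register (normally pulled in from the vendor's header); the stub variable here is only so the fragment stands alone, and the `lcd_delay_ticks` name is my own invention.

```c
#include <stdint.h>

#define DMA_TIMER_MHZ (10)
#define DMA_TIMER_NS_TO_TICKS(ns) (1 + ((ns) * DMA_TIMER_MHZ / 1000))

/* Stub so this fragment compiles standalone; on the real part,
 * MCF_DTIM1_DTCN comes from the chip's register header and is the
 * free-running DMA Timer 1 counter. */
#ifndef MCF_DTIM1_DTCN
static volatile uint32_t mcf_dtim1_dtcn;
#define MCF_DTIM1_DTCN (mcf_dtim1_dtcn)
#endif

static inline void lcd_delay_ticks(uint32_t nTicks)
{
    uint32_t nTimeStart = MCF_DTIM1_DTCN;

    /* Unsigned subtraction gives the correct elapsed count even
     * when the free-running counter wraps around. */
    while ((MCF_DTIM1_DTCN - nTimeStart) < nTicks)
    {
        ;
    }
}

/* Usage: lcd_delay_ticks(DMA_TIMER_NS_TO_TICKS(100)); */
```

The "+1" in the macro survives the wrapping: asking for 100ns at 10MHz gives 2 ticks, so even if the first tick arrives immediately after the read of the counter, at least one full timer period elapses.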
100ns is pretty short. You'll probably find the overhead in the above code longer than that. The above sort of timer is usually good for delays in the order of microseconds and not nanoseconds.
If speed is important you could simply put 8 NOPs in a row at 80MHz to get a 100ns delay, except for a number of things:
- This assumes the CPU is running at 80MHz, and
- The code is running from the Cache, and
- A NOP takes one clock cycle.
The first two may or may not be true now or later, and the third is certainly false: NOPs take three clocks on this CPU. The STF is the "one clock NOP" here. So you could instead use 3 NOPs (9 clocks, a little over 100ns at 80MHz) to get the delay with an inline "asm" instruction. You'll have to check your compiler to see what the syntax for this is.
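For example, with a GCC-style compiler the three-NOP delay might look like the macro below. This is only one compiler's syntax (others use `asm(...)` or intrinsics), and it still carries all the caveats above: the delay only holds for this clock speed, this instruction timing, and code running from cache.

```c
/* Roughly 100ns at 80MHz on a part where NOP takes three clocks:
 * 3 NOPs x 3 clocks = 9 clocks. "volatile" stops the compiler
 * from deleting or reordering the delay. GCC-style syntax only;
 * check your compiler's manual for its inline-asm form. */
#define DELAY_100NS() __asm__ volatile ("nop\n\tnop\n\tnop")
```

If your toolchain accepts the single-clock instruction mentioned above, substituting it for `nop` (with the count scaled back up accordingly) gives finer-grained control.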
The "standard Linux way" of doing this sort of thing is to start up a "known timer", calibrate a simple NOP-based delay loop against that timer, and store the resulting count in a global. Then if something changes on the system the code will accommodate it.
Tom