If a delay loop like the one below is used, how long is the delay in real time? Say the clock is 8 MHz — is it as simple as dividing the counter value by 8e6 to get the real-time delay?
for(cnt2 = 35535; cnt2 != 0; cnt2--);
Or could the CPU delay function generated by the CPU bean be used instead, i.e. void Cpu_Delay100US(word us100)? For example, if a 1 ms delay is needed, would 10 be used as the argument? I appreciate any help I can get.
For a simple timing loop there will be many cycles per loop iteration, depending on the code the compiler generates. The easiest way to "calibrate" the loop is to simulate it in the debugger: note the CPU cycle count just before entering the loop and immediately after leaving it, then subtract the two values.
To test your code, I needed to add the following to prevent a timeout of the COP (watchdog) timer:
for (cnt2 = 35535; cnt2 != 0; cnt2--) __RESET_WATCHDOG();
Executing the loop on an HC08 device required a total of 782193 bus cycles, or about 22 cycles per loop iteration. The total delay will increase if any interrupts occur.
Incidentally, using an incrementing loop (rather than a decrementing one) actually required about 28 cycles per loop iteration.
I had the same code to prevent the timeout; sorry I didn't include it, but thanks for your help. So I guess my final question is: what is the difference between using the loop-cycle delay and the CPU delay generated by the CPU bean?