I’ve recently started moving a project from MQX 3.8 to MQX 4.2, and have discovered a change in the behaviour of the _time_delay() function. I believe that in MQX 3.8, _time_delay() would always round the given delay down to the nearest tick, whereas in MQX 4.2 it rounds up to the nearest tick and then adds one additional tick on top. I performed some simple tests by toggling an output and observing it on an oscilloscope, and found that MQX 4.2 increases the requested time by 1 tick.
So I started to dig into the _time_delay() function in MQX 4.2. There have indeed been some code changes since MQX 3.8; a comment found in _time_delay() is as follows:
Now, adding “time_per_tick[ms] - 1” to the “required_time[ms]” appears to be an attempt to force the tick count to round up, which I think it does. But I do not understand why there is the additional +1 tick - I believe this to be the problem.
I first noticed this when trying to get the Ethernet PHY drivers going (phy_lan8720.c). Within phy_lan8720_init() there is a loop that iterates an appropriate number of times to implement a 3 s timeout when waiting for auto-negotiation to complete. In MQX 3.8 this loop takes 3 seconds to time out, but in MQX 4.2 it takes 6 seconds. It appears that with a system tick resolution of 5 ms, requesting _time_delay(5) actually gives a 10 ms delay (+100% error) - this explains why my 3 s loop timeout is now 6 s on MQX 4.2.
I would suspect that for larger delays this extra tick is barely noticeable, but when the requested delay is 1 tick, or thereabouts, the error is quite noticeable.
Have I understood this correctly?