I have the FreeRTOS flavor of OSA running on a K22F FRDM board. The default tick frequency is 100 Hz. I need to do some homework to see if I need to tweak that.
Does anyone have a sense of how much overhead is associated with the OS tick? It is a 120 MHz M4F core, so it should be pretty zippy, but the impact is obviously non-zero. Quantitative data on the overhead would be very helpful for understanding the trade-offs.
BTW, I ran into an odd bug when FreeRTOS xTaskGenericCreate() was called by the OSA wrapper. The PRIORITY_OSA_TO_RTOS(priority) macro was clobbering the priority in OSA_TaskCreate(), causing a configASSERT() to fail. I simply passed a meaningful value directly to FreeRTOS and got it working. Not sure why that didn't work out of the box, but beware of that landmine.
First you have to define what you mean by 'overhead'. If you restructure a super-loop design that spends most of its CPU cycles doing nothing but polling interfaces and spinning, into an event-driven, multi-threaded design that only uses CPU cycles when there is actually something to do, then the overhead of the tick is negative. That is, it can save you huge amounts of time.
As you might expect, the overhead of the tick interrupt depends on the FreeRTOSConfig.h settings.
For the best performance, but also the least amount of development support, use the following settings:
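Assuming a stock Cortex-M port, the first of these is presumably the port-optimised task selection option in FreeRTOSConfig.h:

```c
/* FreeRTOSConfig.h - select the next task with the Cortex-M CLZ
   instruction (supported by the K22F's M4F core) */
#define configUSE_PORT_OPTIMISED_TASK_SELECTION  1
```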
That will make the selection of the next task use a single CLZ (count leading zeros) instruction, with the limitation that you cannot have more than 32 priorities (0 to 31).
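The stack overflow setting is presumably:

```c
/* FreeRTOSConfig.h - disable stack overflow checking
   (both method 1 and method 2) */
#define configCHECK_FOR_STACK_OVERFLOW  0
```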
That will turn off both forms of stack overflow checking, which is actually the thing that takes the longest in the tick interrupt when it is on.
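And the data collection setting is presumably:

```c
/* FreeRTOSConfig.h - disable run-time statistics gathering */
#define configGENERATE_RUN_TIME_STATS  0
```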
That will turn off run-time statistics data collection.
Also, if you want the fastest performance, don't use the trace functionality either.
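Assuming this refers to the trace facility option (rather than user-defined trace hook macros), that would be:

```c
/* FreeRTOSConfig.h - disable the trace facility (omits the extra
   structure members and functions used by trace/visualisation tools) */
#define configUSE_TRACE_FACILITY  0
```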
Depending on the application you are writing, you can turn the tick off during idle periods.
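In stock FreeRTOS that is the tickless idle option, which suppresses the tick interrupt while the idle task runs:

```c
/* FreeRTOSConfig.h - stop the tick during idle periods to save
   power and eliminate tick overhead when there is nothing to do */
#define configUSE_TICKLESS_IDLE  1
```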
Hope this helps.