I'm working on a power-optimized bare-metal application using the S32K311.
Our system runs a periodic task every 10ms, and remains idle the rest of the time.
Currently, I'm evaluating two different strategies for reducing power consumption during idle periods:
Option 1 – WFI only:
Let the MCU stay in RUN mode, and simply use __WFI() at the end of the main loop when there's nothing to do:
while (1) {
    if (task_ready) {
        task_ready = false; /* clear the flag so the task runs once per tick */
        DoSomething();
    }
    __WFI(); /* sleep until the next interrupt */
}
This approach keeps the PLL and high-speed clocks running, but clock-gates the CPU core during idle time.
Option 2 – Dynamic VLSR/Run mode switching:
Switch the MCU to VLSR mode (FIRC 3MHz) when idle, and back to RUN mode (PLL) when a task is ready:
while (1) {
    if (task_ready) {
        task_ready = false;  /* clear the flag so the task runs once per tick */
        ChangeToRunMode();   /* back to PLL clock */
        DoSomething();
    } else {
        ChangeToVLSRMode();  /* drop to FIRC 3 MHz */
        __WFI();             /* sleep at 3 MHz instead of busy-spinning */
    }
}
This lowers the clock frequency during idle, but introduces frequent MC_ME mode transitions and clock-domain switching.
My questions:
Is it practical and safe to switch between RUN and VLSR mode every few milliseconds?
Could the overhead of frequent mode transitions outweigh the power saving benefit?
Would NXP recommend one approach over the other for 10ms-periodic task scheduling?
My primary goal is to achieve maximum power saving without causing system instability or excessive latency.
I would really appreciate your technical opinion on this.
Thanks in advance!