I am working on a project currently using S32K SDK 3.0.1 that I would like to upgrade to the current SDK, 4.0.1. In this project, an S32K148 MCU is acting as a SPI slave. The MCU is running FreeRTOS, with tickless idle enabled. When FreeRTOS enters low-power mode, the MCU code starts a SPI transfer with DMA and then enters VLPS. When the SPI transfer completes, the system wakes to process the received data. This works with SDK 3.0.1.
When I use SDK 4.0.1, the SPI transfers in VLPS fail. LPSPI_DRV_SlaveIRQHandler gets called with the LPSPI1->SR TEF (Transmit Error Flag) bit set, which indicates a transmit FIFO underrun, so the transfer fails. I'm transferring 25 32-bit words at a time (100 bytes), and it is always the 23rd word that doesn't get transferred.
What changed between the SDK versions? It doesn't seem to be something in the LPSPI driver, as a diff between those doesn't show a lot of changes, and none of the changes look like they'd cause this. The only change I am making is changing the base directory for the SDK in my Makefile; no code changes and I'm doing a full rebuild after changing SDK versions.
EDIT: Some additional information:
If I reduce the transfer size to 88 bytes (22 words), it works fine with the newer SDK; note that the 23rd word that fails in the 100-byte case is just past that size. The 100-byte transfers work occasionally (maybe 1 out of 10 times).
The compiler is supported with the options listed in the release notes, section 8, "Compiler options". Also, please have a look at Chapter 7, "Known issues and limitations".
I think it could be something in the DMA configuration rather than the LPSPI.
Can you compare all the register values before the MCU enters VLPS?
I added debug prints for the LPSPI1 and DMAMUX modules, and for the DMA channel 1 registers (the channel I'm using for SPI transmit, since the error that occurs is a transmit underrun). The only difference is in the DMA CR register: the EMLM bit is set with the newer SDK and clear with the older SDK, which matches a change in the driver code. Changing the SDK code to make that register match has no effect; the problem still occurs. I know VLPS works slightly differently when the debugger is connected, and I see this whether or not the debugger is connected. Here's the output from my prints. Any other ideas?
But still it seems like the DMA fails to transfer the data to the TX FIFO in time.
Do you see any DMA errors after the underrun is detected?
Is round-robin arbitration enabled?
Round-robin arbitration is disabled (the default, with both SDKs). Enabling it did not make a difference.
I put a breakpoint in lpspi_slave_driver.c at line 444, (void)LPSPI_DRV_SlaveAbortTransfer(instance);
When the breakpoint hits, here's what I see:
The Enable Request Register ERQ shows no bits set. When the transfer is first started, bits 0 and 1 are set. Any ideas what would be clearing those bits in the middle of the transfer, if that seems like it could be the problem?
I added an additional breakpoint on the DMA error interrupt handler, and that never gets hit, so it does not appear that there are any DMA errors. Instead, it looks like the DMA gets disabled for some reason before the transfer completes.
I'm not sure that I can do this in RUN mode without changing the code architecture, since I'm relying on FreeRTOS tickless idle and expecting the code to stall waiting for the SPI interrupt to occur.
I do know that SPI works fine in RUN mode since my application has two modes of operation - a normal mode when the MCU is in RUN mode, and a low-power mode when the MCU is in VLPS. The normal mode works as expected. When switching to low power mode, the code stops any pending SPI transfer, reconfigures the clocks for low power operation, restarts the SPI transfer, then enters VLPS.
Is this what you were suggesting? I've added a watchpoint on DMA->ERQ (0x400800C) and I don't see the watchpoint getting hit. If I enable the watchpoint on the SERQ and CERQ registers (0x400801A and 0x400801B) then the code doesn't run at all, no idea why.
Good catch on the addresses. I now see that watchpoint getting hit but I haven't managed to get my code to the point where it gets hit only unexpectedly. My application starts in normal RUN, then transitions to VLPS on a command received over SPI, and I have not been able to get the breakpoint set so it only executes after that transition.
I tried another debugging approach: starting with a copy of SDK 3.0.1, I copied in modules from SDK 4.0.1 one at a time until I saw the failure. I did not see any issues with the lpspi, edma, and clock modules. However, after copying over the power module, I do see the errors. I ran out of time this evening to investigate more fully and I'll debug more tomorrow, but I wanted to mention it here in case that helps. Looking at the differences between the power module in 3.0.1 vs 4.0.1, the major difference seems to be that the 4.0.1 version sets the BIASEN bit (in PMC->REGSC), and the 3.0.1 version does not. Following the recommendations in the reference manual, we already have code in place to set BIASEN, so I'll try taking out our code and see if it makes a difference. It would be nice if the reference manual had more detail on exactly what BIASEN does.
The problem is definitely in the "power" SDK module. I made a copy of the 4.0.1 SDK, replaced the power module with the one from the 3.0.1 SDK, and it works. I'll update here if I figure out what specific changes I need to make in my code to allow this to work.