I'm using an S32K144 and trying to make the LPSPI work with DMA for transmit and receive.
I am not using any SDK.
In a function called at a 10 ms interval, I calculate a value to send and then kick off a DMA service request to transfer that value to the SPI. I then use channel linking to capture the data received in response to the value I am sending. The basic DMA configuration is good, because the value I want sent is being transmitted, and the linking is working because the correct value presented on MISO ends up in a variable that is only ever written by DMA.
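For context, the major-loop channel linking from the transmit channel to the receive channel looks roughly like this. This is a sketch, not my exact code: the bit positions are taken from the S32K1xx eDMA TCD_CSR description (MAJORELINK in bit 5, MAJORLINKCH in bits 11:8) and are worth double-checking against the reference manual:

```c
#include <assert.h>
#include <stdint.h>

/* Build an eDMA TCD CSR value that links to another channel when the
 * major loop completes: MAJORELINK enables the link, MAJORLINKCH
 * selects the channel whose START bit gets set. */
#define TCD_CSR_MAJORELINK      (1u << 5)
#define TCD_CSR_MAJORLINKCH(ch) (((uint16_t)(ch) & 0xFu) << 8)

static uint16_t tcd_csr_link_to(uint8_t linked_channel)
{
    return (uint16_t)(TCD_CSR_MAJORELINK | TCD_CSR_MAJORLINKCH(linked_channel));
}
```

With the TX descriptor on channel 0 linking to the RX capture on channel 1, the CSR value would be `tcd_csr_link_to(1)`.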
The source data is a uint16 value which I have forced to a long-aligned (32-bit-aligned) address. I have set both the source and destination data sizes to 16 bits in the transfer control descriptor.
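Concretely, the size fields live in TCD_ATTR (SSIZE in bits 10:8, DSIZE in bits 2:0), where the encoding 0b001 selects a 16-bit access. A sketch, assuming the S32K1xx eDMA TCD_ATTR layout:

```c
#include <assert.h>
#include <stdint.h>

/* TCD ATTR encoding for 16-bit source and destination transfers.
 * SSIZE occupies bits 10:8, DSIZE bits 2:0; 0b001 means 16-bit. */
#define TCD_ATTR_SIZE_16BIT     0x1u
#define TCD_ATTR(ssize, dsize)  ((uint16_t)(((ssize) << 8) | (dsize)))

static uint16_t attr_16bit(void)
{
    return TCD_ATTR(TCD_ATTR_SIZE_16BIT, TCD_ATTR_SIZE_16BIT);
}
```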
After a fashion, this is all working, but there are two problems I would like to understand and solve:
1. Although the frame size is set to 16 bits, I am getting two 16-bit transfers per DMA request. I don't understand what is causing the second transfer; is it possible that DMA requests are assumed to be 32 bits wide, so the peripheral performs two 16-bit transfers? You can see this behaviour in the attached SPI.png. Other than TCR[FRAMESZ], what governs the length of the data transferred per request?
2. If you look carefully at SPI2.png you can see the startup of the code. The edges at 50 ms are the first SPI packets. For the first 4 packets, the clock returns low and the enable line goes high after each packet is sent. From SPI packet 5 (90 ms) onwards, however, the clock stays high and the enable line remains low for the remainder of the 10 ms period until the next DMA kick-off; in effect, the last bit of that transfer is not terminated until I make the next DMA request to write to the TDR. I'd like to understand what is going on here, because there is no difference in the DMA kick-off in the code. If I write to the TCR every 10 ms immediately before my DMA service request, this does not happen and all SPI packets look like the correct ones at 50, 60, 70 and 80 ms (zoomed in, those look exactly like SPI.png). What is special about those first 4 packets? Is it something to do with the transmit FIFO? I shouldn't need to keep writing to the TCR, because I'm not adjusting PCS or bit-timing; I should be able to write it once and then just transmit data.
Kick-offs to the DMA are done using:
dma->ERQ |= 0x03; /* Enable hardware requests on channels 0 and 1 */
dma->TCD.CSR |= 0x01; /* Set the START bit to request DMA service */
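To make the two writes above concrete, here is the same sequence against a minimal mock of the two registers involved (the mock struct and names are mine, purely for illustration; real code points at the eDMA base address):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-in for the two eDMA registers the kick-off touches. */
typedef struct {
    uint32_t ERQ;       /* enable-request register, one bit per channel */
    uint16_t TCD0_CSR;  /* control/status word of channel 0's TCD */
} mock_edma_t;

#define DMA_ERQ_CH0_CH1 0x3u  /* bits 0 and 1: hardware requests on ch0/ch1 */
#define TCD_CSR_START   0x1u  /* bit 0: software service request */

static void kick_off(mock_edma_t *dma)
{
    dma->ERQ      |= DMA_ERQ_CH0_CH1;  /* arm hardware requests */
    dma->TCD0_CSR |= TCD_CSR_START;    /* start channel 0 by software */
}
```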
As I said, the DMA and channel linking are working (otherwise I wouldn't send/receive the expected SPI values), but any ideas on the two items above would be much appreciated.
****UPDATED****
Identified why two transfers take place: on the initial setup of the DMA, I incorrectly added a superfluous DMAMUX configuration, using one of the periodically triggered channels. With that configuration removed, I now get the result shown in SPI3.png and SPI4.png: a single send/receive of 16 bits, which is correct. However, the configuration now stops working after 6 writes/receives.
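For anyone hitting the same mistake: the corrected DMAMUX channel config sets ENBL (bit 7) and leaves TRIG (bit 6) clear, so the channel is purely request-driven rather than periodically triggered. A sketch assuming the S32K1xx DMAMUX CHCFG layout; the source number here is a placeholder, the real one comes from the DMAMUX request-source table in the reference manual:

```c
#include <assert.h>
#include <stdint.h>

#define DMAMUX_CHCFG_ENBL       (1u << 7)  /* channel enable */
#define DMAMUX_CHCFG_TRIG       (1u << 6)  /* periodic trigger: must stay clear */
#define EXAMPLE_LPSPI_TX_SOURCE 15u        /* placeholder request-source number */

/* Build a CHCFG value for a plain request-driven channel (no TRIG). */
static uint8_t chcfg_request_driven(uint8_t source)
{
    return (uint8_t)(DMAMUX_CHCFG_ENBL | (source & 0x3Fu));
}
```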
****UPDATED 2 ****
It appears that CFGR1[NOSTALL] must be set so that the module does not stall on a transmit FIFO underrun. Is this expected behaviour in this kind of configuration?
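For completeness, the fix amounts to setting one bit in CFGR1 before enabling the module. A sketch assuming NOSTALL is bit 3 of LPSPI CFGR1 on the S32K1xx (worth confirming against the reference manual), shown here as a pure function on the register value:

```c
#include <assert.h>
#include <stdint.h>

#define LPSPI_CFGR1_NOSTALL (1u << 3)  /* don't stall on TX FIFO underrun */

/* Return the CFGR1 value with NOSTALL set, leaving other bits intact. */
static uint32_t cfgr1_with_nostall(uint32_t cfgr1)
{
    return cfgr1 | LPSPI_CFGR1_NOSTALL;
}
```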