
UART gets FIFO underflow in DMA mode

Question asked by SCOTT MILLER on Oct 20, 2017
Latest reply on Oct 23, 2017 by SCOTT MILLER

I just lost a few hours to this quirk and I'm hoping someone can explain it to me so I don't get bitten by this again - or at least maybe I'll save someone else some trouble if they come looking for an answer.

 

I'm receiving packetized data on an MK22FN1M0's UART0 at 2 Mbps.  As discussed in previous threads, the IDLE interrupt is unusable in DMA mode (and not much use in general) because it can't be cleared safely.  To handle high-speed data safely with a minimum of interrupts while keeping latency low, I have to set up a DMA channel that writes incoming data continuously into a circular buffer.
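
For reference, the receive channel setup looks roughly like the sketch below.  This is not my actual code: I'm assuming a power-of-two buffer with destination modulo addressing (DMOD) to get the circular wrap, the channel number, buffer size, and DMAMUX source number are placeholders, and the register and macro names are the ones from the Kinetis SDK device headers.

#include "fsl_device_registers.h"

#define RX_DMA_CH   0u
#define RX_BUF_SIZE 1024u                 /* power of two so DMOD can do the wrap */

/* Destination buffer must be aligned to its size for modulo addressing */
static uint8_t rx_buf[RX_BUF_SIZE] __attribute__((aligned(RX_BUF_SIZE)));

void uart_rx_dma_init(void)
{
    /* Route UART0 receive requests to the channel (check the chip's DMAMUX source table) */
    DMAMUX->CHCFG[RX_DMA_CH] = 0;
    DMAMUX->CHCFG[RX_DMA_CH] = DMAMUX_CHCFG_SOURCE(2) | DMAMUX_CHCFG_ENBL_MASK;

    DMA0->TCD[RX_DMA_CH].SADDR         = (uint32_t)&UART0->D;
    DMA0->TCD[RX_DMA_CH].SOFF          = 0;      /* always read UART0_D            */
    DMA0->TCD[RX_DMA_CH].ATTR          = DMA_ATTR_SSIZE(0) | DMA_ATTR_DSIZE(0) |
                                         DMA_ATTR_DMOD(10); /* 8-bit, 2^10 B wrap  */
    DMA0->TCD[RX_DMA_CH].NBYTES_MLNO   = 1;      /* one byte per request           */
    DMA0->TCD[RX_DMA_CH].SLAST         = 0;
    DMA0->TCD[RX_DMA_CH].DADDR         = (uint32_t)rx_buf;
    DMA0->TCD[RX_DMA_CH].DOFF          = 1;
    DMA0->TCD[RX_DMA_CH].CITER_ELINKNO = 1;      /* rewritten for each packet      */
    DMA0->TCD[RX_DMA_CH].BITER_ELINKNO = 1;
    DMA0->TCD[RX_DMA_CH].DLAST_SGA     = 0;      /* DMOD handles the wrap          */
    DMA0->TCD[RX_DMA_CH].CSR           = 0;      /* INTMAJOR is enabled per packet */

    UART0->C5 |= UART_C5_RDMAS_MASK;             /* RDRF asserts a DMA request     */
    UART0->C2 |= UART_C2_RIE_MASK | UART_C2_RE_MASK;

    DMA0->SERQ = DMA_SERQ_SERQ(RX_DMA_CH);       /* start servicing requests       */
}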

 

The major loop counter is used only to generate 'done' signals for each received packet - thankfully with this protocol the application always knows how much data to expect - and the transfer runs continuously regardless.  The new major loop iteration count is set at the start of each packet (if the expected packet is not entirely in the buffer already) and has to take into account the number of bytes already received.

 

To do this safely, I followed instructions I found here: the code disables ERQ for the channel and waits for the ACTIVE flag to clear before reading DADDR to find the current position.  It then sets CITER and BITER, re-enables INTMAJOR, and finally sets ERQ again.
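
In other words, the re-arm step looks something like this (again a sketch - rd_pos and expected_len are placeholder names for the application's read index into the buffer and the packet length it expects; RX_DMA_CH, RX_BUF_SIZE, and rx_buf are as in the setup sketch above):

void set_packet_done_point(uint32_t rd_pos, uint32_t expected_len)
{
    /* Stop servicing requests and wait for any in-flight minor loop to finish */
    DMA0->CERQ = DMA_CERQ_CERQ(RX_DMA_CH);
    while (DMA0->TCD[RX_DMA_CH].CSR & DMA_CSR_ACTIVE_MASK) {
        /* each minor loop is a single byte, so this is brief */
    }

    /* Offset the channel will write to next, i.e. how much has been received */
    uint32_t dma_pos   = DMA0->TCD[RX_DMA_CH].DADDR - (uint32_t)rx_buf;
    uint32_t available = (dma_pos - rd_pos) & (RX_BUF_SIZE - 1u);

    if (available < expected_len) {
        uint16_t remaining = (uint16_t)(expected_len - available);

        /* Count down the rest of the packet, then raise the major-loop interrupt */
        DMA0->TCD[RX_DMA_CH].CITER_ELINKNO = remaining;
        DMA0->TCD[RX_DMA_CH].BITER_ELINKNO = remaining;
        DMA0->TCD[RX_DMA_CH].CSR          |= DMA_CSR_INTMAJOR_MASK;
    }
    /* else the whole packet is already in the buffer and no interrupt is needed */

    DMA0->SERQ = DMA_SERQ_SERQ(RX_DMA_CH);       /* resume servicing requests      */
}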

 

The problem with this is that there's apparently some kind of race condition in the DMA hardware.  If a new byte came into the UART FIFO before ERQ was re-enabled, it'd (at least sometimes) cause a FIFO underflow error.  As far as I know, DMA operation should never cause an underflow.

 

I tried disabling RIE in the UART so it wouldn't generate DMA requests, I tried disabling the request in the DMAMUX, and I even tried polling S1 for RDRF and reading out any pending bytes before restarting the DMA channel.  None of that worked.

 

What did finally work (so far, anyway - I'll need to do more testing) was to set HALT in DMA_CR before clearing ERQ, and then to clear HALT again before setting ERQ.
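
So the sequence that works is effectively this (same placeholders as above):

DMA0->CR  |= DMA_CR_HALT_MASK;               /* stall the eDMA engine first    */
DMA0->CERQ = DMA_CERQ_CERQ(RX_DMA_CH);
while (DMA0->TCD[RX_DMA_CH].CSR & DMA_CSR_ACTIVE_MASK) {
    /* an executing minor loop is still allowed to complete under HALT */
}

/* ... read DADDR and rewrite CITER/BITER/INTMAJOR exactly as before ... */

DMA0->CR  &= ~DMA_CR_HALT_MASK;              /* release the engine             */
DMA0->SERQ = DMA_SERQ_SERQ(RX_DMA_CH);       /* then re-enable the request     */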

 

What's going on here?  Is this behavior documented somewhere?

 

Thanks,

 

Scott
