We had some serious delays in our software and started digging:
fsl_lpuart_edma.c:
static void LPUART_SendEDMACallback(edma_handle_t *handle, void *param, bool transferDone, uint32_t tcds)
{
    [...]
    if (transferDone)
    {
        LPUART_TransferAbortSendEDMA(lpuartPrivateHandle->base, lpuartPrivateHandle->handle);

        /* Ensure all the data in the transmit buffer are sent out to bus. */
        while (0U == (lpuartPrivateHandle->base->STAT & LPUART_STAT_TC_MASK))
        {
        }
In the lpuart/edma_transfer example this while loop takes around 25μs, but that example runs at 115200 baud. We need 19200 baud and even lower rates. At 19200 baud the loop takes between 850 and 900μs, and with slower baud rates we even saw times above one millisecond.
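The numbers line up with the UART frame time: at 19200 baud a 10-bit frame (start + 8 data + stop) takes roughly 521μs, so the measured 850–900μs corresponds to about 1.6–1.7 frame times still draining out of the data and shift registers when the callback starts waiting for TC. A rough back-of-the-envelope calculation (10-bit frames and up to two characters in flight are my assumptions, not from the SDK):

#include <stdio.h>

/* Rough estimate of how long the TC busy-wait can block, assuming a
 * 10-bit frame (1 start + 8 data + 1 stop) and up to two characters
 * still in flight (data register + shift register) when the EDMA
 * callback runs. Purely illustrative numbers.                        */
static double worst_case_tc_wait_us(unsigned baud)
{
    const double frame_time_us = 10.0 * 1e6 / (double)baud;
    return 2.0 * frame_time_us;
}

int main(void)
{
    printf("115200 baud: up to ~%.0f us\n", worst_case_tc_wait_us(115200U)); /* ~174 us  */
    printf(" 19200 baud: up to ~%.0f us\n", worst_case_tc_wait_us(19200U));  /* ~1042 us */
    printf("  9600 baud: up to ~%.0f us\n", worst_case_tc_wait_us(9600U));   /* ~2083 us */
    return 0;
}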
The attached scope capture shows the edma_transfer example at 19200 baud, with a debug pin and with the UART switched to UART2 (because it's easier to probe): blue is the UART data, yellow is a debug pin toggled right before and after the while(). The statistics at the bottom show 850μs as the pulse width.
Tested on the i.MX 7ULP EVK with SDK v2.9.
The whole point of DMA is fire and forget: start the transfer and get notified when everything is done, so the application can do other things in the meantime. With this driver it can't, because it blocks for an eternity inside an IRQ handler.
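One way around the blocking wait (just a sketch of the idea, not a tested driver patch) would be to drop the busy-wait from the callback and let the application check the TC flag itself before it treats the transmission as finished, using the fsl_lpuart status API (LPUART_GetStatusFlags / kLPUART_TransmissionCompleteFlag, assuming those are available in your SDK version; tx_dma_done and uart_tx_idle are hypothetical application-side names):

#include <stdbool.h>
#include "fsl_lpuart.h"

/* Set from a modified EDMA callback instead of busy-waiting there. */
static volatile bool tx_dma_done = false;

/* Call from the main loop: true once the DMA has completed AND the last
 * bits have actually left the shift register (TC flag set).            */
static bool uart_tx_idle(LPUART_Type *base)
{
    if (!tx_dma_done)
    {
        return false;
    }
    return (LPUART_GetStatusFlags(base) & (uint32_t)kLPUART_TransmissionCompleteFlag) != 0U;
}

Alternatively, the transmission-complete interrupt (kLPUART_TransmissionCompleteInterruptEnable) could signal completion without any polling at all, but that needs changes deeper inside the driver.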
Solution:
This behavior is related to the baud rates requested by the other UARTs: the UART debug port runs at 115200 by default, so you need to change the debug UART's baud rate, or disable the debug UART, in order to run your UART at a lower baud rate than the debug port.
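In the SDK examples the debug console is normally set up through the board support files; assuming the usual layout (the BOARD_* names below come from typical SDK board.h/board.c files and pin_mux/clock init, so check yours), lowering its baud rate or skipping its init would look roughly like this:

/* board.h (typical SDK example layout): lower the debug console baud rate
 * so it no longer requests a higher rate than the application UART needs. */
#define BOARD_DEBUG_UART_BAUDRATE 19200U   /* was 115200U */

/* ...or simply do not bring the debug console up at all in main(): */
BOARD_InitPins();
BOARD_BootClockRUN();
/* BOARD_InitDebugConsole();   <-- left out to disable the debug UART */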