Can someone please explain the difference between using Interrupts vs. DMA?
I understand the concept of interrupts, but what is DMA?
DMA allows you to transfer large amounts of data through the system without getting the processor core involved. Rather than the CPU laboriously reading a byte, then writing a byte from one buffer to another, you give the DMA controller a pointer to the start of a source buffer, a pointer to the start of the destination buffer, and the number of bytes to move. The DMA hardware then handles the data transfer between the buffers while the CPU is free to do other things. The DMA controller can also do this with very little overhead, whereas an interrupt will, as its name states, interrupt the CPU in order to handle whatever work the interrupt requires.
This is a highly simplified explanation, but should get the idea across.
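To make that concrete, here is a minimal runnable sketch of the two approaches. The `dma_channel_t` struct and `dma_start` function are stand-ins for a real controller's memory-mapped registers (the names are made up), and `memcpy` plays the role of the hardware transfer engine so the example runs anywhere:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical DMA channel. On real hardware these fields would be
 * device registers at a fixed address; here they are plain memory so
 * the sketch can run anywhere. */
typedef struct {
    const uint8_t *src;   /* source buffer pointer        */
    uint8_t       *dst;   /* destination buffer pointer   */
    size_t         count; /* number of bytes to move      */
} dma_channel_t;

/* CPU-driven copy: the core executes one load and one store per byte. */
static void cpu_copy(uint8_t *dst, const uint8_t *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i];
}

/* DMA-driven copy: the CPU only programs three values and starts the
 * channel; the transfer itself is done by the controller (simulated
 * here with memcpy standing in for the hardware engine). */
static void dma_start(dma_channel_t *ch)
{
    memcpy(ch->dst, ch->src, ch->count);  /* the hardware does this part */
}
```

The point of the sketch is the programming model: with `cpu_copy` the core is busy for every byte, while with `dma_start` it writes three values and walks away.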
DMA sounds wonderful. So, why would anyone not use it if it were available?
Sounds like it has every advantage over traditional interrupts. Are there any disadvantages?
DMA is very efficient for transmission when a whole block of data is to be sent. It can however become a bit complicated when flow control (CTS or XOFF) forces the block to be suspended mid-transmission: the DMA transfer has to be paused and then released again once the cause of the flow control has been removed, and in the meantime more data may have been added to the output buffer, requiring a different DMA setup when restarting. How this is done, and what possibilities exist, depends on the processor - some can do it quite easily and others not so well and/or simply.
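A minimal model of that suspend/resume sequence might look like the following. On real hardware, suspending usually means disabling the channel and reading its remaining-count register, and resuming means re-arming it; the struct and function names here are illustrative stand-ins, not any particular vendor's API:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative model of a TX DMA channel that can be halted by flow
 * control and re-armed later. */
typedef struct {
    const uint8_t *src;       /* next byte the engine would send      */
    size_t         remaining; /* bytes still to transfer              */
    bool           enabled;   /* channel running?                     */
} tx_dma_t;

/* CTS de-asserted (or XOFF received): halt the channel. The remaining
 * count tells us exactly where the transfer stopped. */
static void dma_suspend(tx_dma_t *ch)
{
    ch->enabled = false;
}

/* Flow control released: re-arm from where the transfer stopped. If
 * more data was queued behind the original block in the meantime, the
 * count has to be extended to cover it - this is the "different DMA
 * setup" mentioned above. */
static void dma_resume(tx_dma_t *ch, size_t extra_bytes)
{
    ch->remaining += extra_bytes;
    ch->enabled = true;
}
```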
Again, the DMA capabilities of the processor need to be considered: some allow scatter-gather and may be suitable for working with the typical circular buffers used for serial interface output, while others may only be able to do linear buffer transfers, requiring a circular buffer to be handled as multiple DMA transfers.
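For illustration, here is how a driver for a linear-only DMA engine typically splits a wrapped circular buffer: one transfer from the read position to the end of storage, then a second from the start of storage to the write position. The struct and field names are made up for the example:

```c
#include <stddef.h>

/* A circular TX buffer. When the pending data wraps past the end of
 * the storage, a purely linear DMA engine needs two transfers:
 * tail..end of buffer, then start of buffer..head. */
typedef struct {
    size_t size;  /* total buffer capacity               */
    size_t head;  /* index where new data is written     */
    size_t tail;  /* index where the DMA reads from      */
} circ_buf_t;

/* Length of the first (linear) chunk that can go out in one DMA setup. */
static size_t first_linear_chunk(const circ_buf_t *b)
{
    if (b->head >= b->tail)
        return b->head - b->tail;  /* contiguous: one transfer suffices */
    return b->size - b->tail;      /* wrapped: only up to the buffer end */
}
```

After the first transfer completes (DMA-end interrupt), the driver sets `tail` past the chunk just sent and, if data remains, programs the second transfer the same way.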
DMA on reception can be tricky because the software doesn't know that anything has been received until a complete DMA block transfer has completed (signalled by a DMA-end interrupt, for example). In many UART reception cases the size of a data packet is not known in advance, so a fixed DMA transfer size is of little use: if one byte is received from a terminal input but the software doesn't know it is in the buffer, it won't be handled without polling the buffer, cancelling any DMA overhead savings.
DMA reception is very efficient when combined with break detection on the UART input (protocols that signal the end of a block with a break, i.e. a continuous '0'). The break-detection interrupt can then be used to read out the complete data packet that was transferred by DMA.
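As a sketch of that scheme: when the break (or idle-line) interrupt fires, the driver typically derives the packet length from the DMA channel's remaining-transfer count rather than from any UART register. The buffer size and names below are assumptions for illustration:

```c
#include <stddef.h>
#include <stdint.h>

#define RX_DMA_LEN 64u  /* size programmed into the RX DMA channel (assumed) */

/* On the break/idle interrupt, the only thing the DMA controller tells
 * us is how many transfers it has left to do; the number of bytes that
 * actually arrived is the difference. */
static size_t rx_packet_length(uint32_t dma_remaining_count)
{
    return RX_DMA_LEN - dma_remaining_count;
}
```

The interrupt handler then processes `rx_packet_length(...)` bytes from the start of the DMA buffer and re-arms the channel for the next packet.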
In some protocols, such as MODBUS RTU, DMA may not be practical due to inter-character gap requirements that can usually not be enforced by the UART itself. In this situation the individual character interrupts need to be used as the timing base for the software handling this layer.
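As an illustration of using character interrupts as the timing base: a common approach is to timestamp each received byte and declare the frame complete once the line has been quiet for 3.5 character times (the MODBUS RTU inter-frame gap). The baud rate (19200, 11-bit characters) and the function names below are assumptions for the example:

```c
#include <stdbool.h>
#include <stdint.h>

/* 11 bits per character at 19200 baud is roughly 573 us; MODBUS RTU
 * ends a frame after 3.5 character times of silence. */
#define CHAR_TIME_US  573u
#define FRAME_GAP_US  (CHAR_TIME_US * 7u / 2u)  /* 3.5 character times */

static uint32_t last_rx_us;  /* timestamp of the most recent byte */

/* Called from the UART receive interrupt for every character. */
static void on_rx_char(uint32_t now_us)
{
    last_rx_us = now_us;
}

/* Called periodically (e.g. from a timer interrupt): true once the
 * inter-character gap shows the current frame is complete. */
static bool frame_complete(uint32_t now_us)
{
    return (now_us - last_rx_us) >= FRAME_GAP_US;
}
```

This is exactly the work that DMA cannot take over here: every character must be seen individually so the gap timer can be restarted.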
Some additional information at http://www.utasker.com/docs/uTasker/uTaskerUART.PDF