LPC 1820 - GPIO - DMA - SRAM speed

Discussion created by lpcware Employee on Jun 15, 2016
Latest reply on Jun 18, 2016 by lpcware
Content originally posted in LPCWare by kellykan on Sat Mar 05 14:02:58 MST 2016

Hi Guys,

I've been working on a project for a while where we implemented a GPIO-to-memory DMA transfer to capture 2-bit parallel data coming from an ADC at 15.5 MHz. I noticed the captures would go out of sync a few kB in (say 12 kB). Configuring the Audio Clock to run the M3 core at 240 MHz during these transfers, to speed up the loading of the DMA descriptors, seemed to resolve the issue.

Now I'm looking at pushing this a bit further: we have a 22 MHz output from an ADC that I want to capture. I've tried two options so far: a single DMA channel with a linked list of descriptors to fill memory, and two DMA channels. In both cases the DMA is triggered by the SCT, which is set up to raise an event on both the rising and falling edges of the input clock. In the two-channel mode I trigger the first DMA channel on the first falling edge and the second channel on the second falling edge. In the latter scenario I can capture the first 40 bytes before losing a byte. With the single channel I lose a byte just as it loads the next descriptor; in this method I transfer about 4 kB at a time.

So my question is: if I have 8-bit parallel data read from a GPIO port, using the SCT as the trigger and DMA to move it into SRAM, how fast can I reliably clock the input data? Or is there a better way of doing this?