GPDMA library + HSADC

2,738 Views
lpcware
NXP Employee
Content originally posted in LPCWare by mch0 on Wed Sep 17 10:04:09 MST 2014
Hi,

I have just implemented a first DMA based read from the HS ADC (LPC4370).
While doing so I noticed that the current V2.12 of the GPDMA library does not support the HS ADC.
I have added the required entries for support.
See the attached files, the modifications are marked by "mch".
So far I can now set up the DMA transfer for reads reliably.
I have also added support for the other direction (DMA for descriptor updates), but have not tested it.
I don't need that for my project, but it should work, too.
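
For anyone wiring this up, here is a minimal, untested sketch of a one-shot read using the patched driver. The connection ID GPDMA_CONN_ADCHS is only a placeholder for whatever name the attached patch actually defines; the remaining calls are stock LPCOpen v2.12 GPDMA API.

#include "chip.h"

#define NUM_SAMPLES 1024

/* With FIFO packing enabled, each 32-bit word carries two HSADC samples */
static uint32_t adc_buf[NUM_SAMPLES];

void hsadc_dma_read_start(void)
{
    Chip_GPDMA_Init(LPC_GPDMA);

    /* GPDMA_CONN_ADCHS is the connection entry added by the patch (placeholder name) */
    uint8_t ch = Chip_GPDMA_GetFreeChannel(LPC_GPDMA, GPDMA_CONN_ADCHS);

    /* Peripheral-to-memory, DMA as flow controller: the channel stops
       by itself after NUM_SAMPLES transfers from the HSADC FIFO */
    Chip_GPDMA_Transfer(LPC_GPDMA, ch,
                        GPDMA_CONN_ADCHS,        /* source: HSADC FIFO */
                        (uint32_t) adc_buf,      /* destination: RAM buffer */
                        GPDMA_TRANSFERTYPE_P2M_CONTROLLER_DMA,
                        NUM_SAMPLES);
}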

Whoever can use it, have fun.

Mike

Original Attachment has been moved to: gpdma.zip

Labels (1)
4 Replies

1,897 Views
lpcware
NXP Employee
Content originally posted in LPCWare by heffalump on Sat Mar 21 22:18:44 MST 2015
Hi again Moenk,

I gather you want to collect a set number of samples when you receive a trigger via GPIO interrupt.

It is not clear to me why you don't use a single DMA descriptor to move the exact number of samples into memory. Afterwards you can shut off the ADC, update the DMA to point at the next memory location, and rearm the ADC ready for a software trigger. Is there not enough time? How often do these triggers occur?
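
Purely as an illustration, the per-trigger flow could look roughly like the untested sketch below, built on stock LPCOpen GPDMA calls. GPDMA_CONN_ADCHS is again a placeholder for the connection ID from Mike's patch, and the HSADC halt/rearm steps are only indicated as comments because the exact calls depend on your setup.

#include "chip.h"

#define SAMPLES_PER_BURST  1024
#define NUM_BURSTS         5

static uint32_t burst_buf[NUM_BURSTS][SAMPLES_PER_BURST];
static volatile uint32_t cur_burst;
static uint8_t dma_ch;   /* obtained once via Chip_GPDMA_GetFreeChannel() */

/* Call this from the GPIO pin-interrupt handler on each trigger */
void burst_trigger(void)
{
    /* One descriptor with the exact sample count: the DMA stops on its own
       at terminal count, so no halt descriptor is needed on the DMA side */
    Chip_GPDMA_Transfer(LPC_GPDMA, dma_ch,
                        GPDMA_CONN_ADCHS,                 /* placeholder connection ID */
                        (uint32_t) burst_buf[cur_burst],
                        GPDMA_TRANSFERTYPE_P2M_CONTROLLER_DMA,
                        SAMPLES_PER_BURST);

    /* ...software-trigger the HSADC here (descriptor table already armed)... */
}

/* Call this from DMA_IRQHandler when the channel reports terminal count */
void burst_complete(void)
{
    if (Chip_GPDMA_Interrupt(LPC_GPDMA, dma_ch) == SUCCESS) {
        /* ...halt/power down the HSADC and flush its FIFO here, then wait
           for the next GPIO trigger; the next burst lands in a fresh buffer */
        cur_burst = (cur_burst + 1) % NUM_BURSTS;
    }
}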

For example, have you looked at the Labtool code? When a trigger is received, the ADC acquisition is stopped by breaking the DMA LLI. Their case is more complicated because a circular buffer is used to allow acquiring pre-trigger samples.
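
For reference, the "break the LLI" idea boils down to something like the untested sketch below. It uses a hand-rolled struct mirroring the GPDMA LLI layout from the user manual, plus direct access to the channel registers as exposed by LPCOpen's LPC_GPDMA struct (per my reading); adapt the names to your own code.

#include "chip.h"

#define NUM_BLOCKS 4

/* Hardware LLI layout of the PL080-style GPDMA: source, destination,
   pointer to the next LLI (0 = end of chain), control word */
typedef struct dma_lli {
    uint32_t src;
    uint32_t dst;
    uint32_t next;
    uint32_t ctrl;
} dma_lli_t;

static dma_lli_t ring[NUM_BLOCKS] __attribute__((aligned(4)));

/* On trigger: let the block in flight finish, then stop the chain */
static void break_lli_chain(uint8_t ch)
{
    int i;

    /* Unhook the ring in memory so any descriptor the channel still loads
       is treated as the last one... */
    for (i = 0; i < NUM_BLOCKS; i++) {
        ring[i].next = 0;
    }

    /* ...and clear the pointer the channel has already latched, so the
       current block becomes the final transfer */
    LPC_GPDMA->CH[ch].LLI = 0;
}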

By the way, it might be worth running your interrupts from RAM if they are taking too long running from the LPC4370 external flash.
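
One way to do that with the GNU/LPCXpresso toolchain is sketched below; it is untested and assumes the managed linker script, where a .data.$<RAM bank> section is copied into that RAM bank by the startup code. The bank name RamLoc72 is just an example for the LPC4370, so adjust it to your memory map.

#include "chip.h"

/* Placing the handler in a RAM data section makes it execute from internal
   SRAM instead of the external SPIFI flash, avoiding flash fetch stalls */
__attribute__((section(".data.$RamLoc72"), long_call))
void DMA_IRQHandler(void)
{
    /* ...clear the channel's terminal-count flag and handle the buffer
       (channel 0 used here only as an example)... */
    Chip_GPDMA_Interrupt(LPC_GPDMA, 0);
}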

Good luck!
0 Kudos
Reply

1,897 Views
lpcware
NXP Employee
Content originally posted in LPCWare by MoenK on Fri Mar 20 09:03:25 MST 2015
My application involves sampling 1 channel in 5 bursts (each in its own memory block, i.e. one LLI). Each burst will be triggered by a GPIO interrupt. I want to process each sample separately as well, so I thought the easiest way would be to store the blocks in separate LLIs using DMA, rather than having the DMA store into one block of memory and then moving it to another location before the next burst is triggered.

My sampling rate is actually only 500 kHz, and since the CPU and DMA run at 204 MHz, I am surprised that it takes so long to write the halt descriptor into the ADC and halt it. It should have 408 clock cycles to do whatever it needs to do between samples. I tried 2 setups for my ADC.

1) ADCHS_clock at 500 kHz, descriptor match set to 1: the ADC took 5679 extra samples before it halted!
2) ADCHS_clock at 80 MHz, descriptor match set to 160: the ADC took only 159 extra samples before it halted.

From this I can see that the 'responsiveness' of the ADCHS depends on the ADCHS clock. However, the trade-off is that my samples look pretty bad when I use option 2: the values vary and take some time to 'settle down'.

When you say you would use the M0SUB core to "peek" at top speed, do you mean that it would count how many samples the HSADC is collecting? If so, what exactly would the M0SUB be peeking at?

MoenK
0 Kudos
Reply

1,897 Views
lpcware
NXP Employee
Content originally posted in LPCWare by mch0 on Fri Mar 20 01:01:58 MST 2015
Hi,

My very first question is:
If you want to stop the HSADC at TC, why don't you simply mark that block as the last one in the linked list?
Then the HSADC would continue to run a little bit, but without effect ...
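
As an untested illustration: with LPCOpen's scatter-gather API, marking the last block is just a matter of passing NULL as the next descriptor for the final entry; the connection ID GPDMA_CONN_ADCHS is again a placeholder from the patched driver.

#include "chip.h"

#define BLOCK_SAMPLES 1024

static uint32_t block_a[BLOCK_SAMPLES], block_b[BLOCK_SAMPLES];
static DMA_TransferDescriptor_t desc_a, desc_b;

void hsadc_two_block_read(uint8_t ch)
{
    /* Second (last) block: NULL next pointer marks the end of the chain,
       so the DMA raises terminal count and stops even if the HSADC runs on */
    Chip_GPDMA_PrepareDescriptor(LPC_GPDMA, &desc_b,
                                 GPDMA_CONN_ADCHS, (uint32_t) block_b,
                                 BLOCK_SAMPLES,
                                 GPDMA_TRANSFERTYPE_P2M_CONTROLLER_DMA,
                                 NULL);

    /* First block links to the second */
    Chip_GPDMA_PrepareDescriptor(LPC_GPDMA, &desc_a,
                                 GPDMA_CONN_ADCHS, (uint32_t) block_a,
                                 BLOCK_SAMPLES,
                                 GPDMA_TRANSFERTYPE_P2M_CONTROLLER_DMA,
                                 &desc_b);

    Chip_GPDMA_SGTransfer(LPC_GPDMA, ch, &desc_a,
                          GPDMA_TRANSFERTYPE_P2M_CONTROLLER_DMA);
}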

Generally speaking, I'd say stopping the HSADC precisely might indeed not be that easy; it probably depends mostly on the conditions and on the ratio of sampling rate to CPU clock. At 80 MSPS you'll get a compressed sample every 5 clocks ...

It might help to know more about your application.
In the past I have considered using the M0SUB core to "peek" at the HSADC at top speed and either stop the HSADC or modify the LLI on the fly, since this should still be faster than an ISR (on the M4). After all, the SUB core can run at full speed from its private memory; the bridge latencies will still be there, though.
But I have not needed that so far, so I have no experience with it.

Mike
0 Kudos
Reply

1,897 Views
lpcware
NXP Employee
Content originally posted in LPCWare by MoenK on Thu Mar 19 10:00:17 MST 2015
Hi Mike,

First of all, I am using your library and thanks for sharing your work!

I have something to ask regarding the ADC. I have set up the DMA to read the HSADC. After gathering my samples, I wanted to use the DMA_read terminal count to start a DMA_write that halts the ADC. However, it seems to take too much time to do that, so my DMA continues to read the still-running ADC and writes into my next LLI. Even if I disable the DMA_read channel, it still takes too much time, causing the same overflow into the next LLI.

Do you know of a way to stop the DMA_read or ADC descriptor table exactly on its Terminal Count?

Thanks a lot.

0 Kudos
Reply