Hi all.
I'm linking three DMA channels to run an ADC in ping-pong mode, as described in the app notes etc. The PDB triggers the ADC, and the ADC in turn triggers the first DMA channel. The first DMA reads the data out of the ADC and links to the second, which reloads the ADC source register. The second DMA, when its major loop completes, then links to a third that moves the data somewhere else for processing.
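In outline, the linking looks something like this. This is a simplified sketch rather than my actual code; the field and macro names are as in the standard Kinetis device headers (adjust for your header version), and the channel numbers, buffer names, and NUM_SAMPLES are placeholders:

```c
#include "MK10D10.h"   /* device header; use whichever variant matches your part */

#define NUM_SAMPLES 16u                    /* placeholder buffer length     */
static uint16_t adc_buffer[NUM_SAMPLES];   /* result destination            */
static uint32_t sc1a_table[NUM_SAMPLES];   /* next-channel values for SC1A  */

static void dma_link_setup(void)
{
    enum { CH_READ = 0, CH_RELOAD = 1, CH_MOVE = 2 };  /* arbitrary channels */

    /* Channel 0: triggered by ADC0 COCO; reads the result register and
     * channel-links to CH_RELOAD after every minor loop.                   */
    DMA0->TCD[CH_READ].SADDR = (uint32_t)&ADC0->R[0];
    DMA0->TCD[CH_READ].DADDR = (uint32_t)adc_buffer;
    DMA0->TCD[CH_READ].CITER_ELINKYES = DMA_CITER_ELINKYES_ELINK_MASK
                                      | DMA_CITER_ELINKYES_LINKCH(CH_RELOAD)
                                      | DMA_CITER_ELINKYES_CITER(NUM_SAMPLES);
    /* BITER must mirror CITER, including the link bits */
    DMA0->TCD[CH_READ].BITER_ELINKYES = DMA_BITER_ELINKYES_ELINK_MASK
                                      | DMA_BITER_ELINKYES_LINKCH(CH_RELOAD)
                                      | DMA_BITER_ELINKYES_BITER(NUM_SAMPLES);

    /* Channel 1: rewrites ADC0_SC1A with the next channel selection and
     * major-links to CH_MOVE when its major loop completes.                */
    DMA0->TCD[CH_RELOAD].SADDR = (uint32_t)sc1a_table;
    DMA0->TCD[CH_RELOAD].DADDR = (uint32_t)&ADC0->SC1[0];
    DMA0->TCD[CH_RELOAD].CSR   = DMA_CSR_MAJORELINK_MASK
                               | DMA_CSR_MAJORLINKCH(CH_MOVE);

    /* Channel 2 (CH_MOVE) just copies the finished buffer out; its TCD,
     * plus SOFF/ATTR/NBYTES/SLAST/DLAST on all channels, omitted here.     */
}
```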
All seems to be OK most of the time; the system runs quite happily. Sometimes, though, when I make changes to seemingly unrelated code, the DMAs fail to link consistently. For example, they will run as expected for thousands of ADC reads, through many minor and major loops, then all of a sudden the first DMA fires but fails to link to the second one. This in turn causes a sequence error within the PDB.
The ADC conversion time is short compared to the PDB trigger period, so there are no timing issues there.
I'm stumped. Can anyone suggest where this may be going wrong, or a way to effectively narrow it down, debug it, etc.?
I'm running a K10 at 46 MHz, with IAR and the latest MQX. I'm at home just now, but I'll add configuration info and code tomorrow when I'm back at work. I just wanted to get this out there ASAP, as sometimes just a little clue can help you see the wood for the trees.
Cheers in advance,
NDBill
Hi
Maybe there is another HW module that intercepts the DMA somehow. I had this problem too when streaming data over a UART, but I never solved it; I just switched the streaming pattern.
I once hit a problem where I reset the source address of a DMA channel by hand (with a processor instruction) while another DMA channel was still running. The write did not take effect and the DMA channel ran forever. Really hard to debug... But I guess that's not your problem.
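If you ever do need to rewrite a TCD field by hand, the safer pattern is roughly this (just a sketch using the standard Kinetis eDMA registers; 'ch' and 'new_src' are placeholders):

```c
#include "MK10D10.h"   /* device header; adjust for your part */

/* Safely rewrite a channel's source address: stop hardware requests and
 * wait out any in-flight minor loop before touching the TCD.            */
static void dma_set_saddr(uint8_t ch, const void *new_src)
{
    DMA0->CERQ = DMA_CERQ_CERQ(ch);        /* clear ERQ for this channel   */
    while (DMA0->TCD[ch].CSR & DMA_CSR_ACTIVE_MASK) {
        ;                                  /* channel still executing      */
    }
    DMA0->TCD[ch].SADDR = (uint32_t)new_src;   /* now safe to rewrite      */
    DMA0->SERQ = DMA_SERQ_SERQ(ch);        /* re-enable hardware requests  */
}
```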
As a quick update, and additional information...
After exhausting every avenue I could think of, I thought I would just try a release build (rather than a debug one). On first inspection, this seems to have cured the problem. Now I have to find out where the difference lies, which might be hard as officially we don't use _DEBUG, and all the settings are "meant" to be the same!
Ah, it would seem that the problem goes away when optimization is set to max, but exhibits itself when set to none. Can anyone suggest where I go with this now? Is it worth getting IAR involved at this point?
I was fairly confident of my settings; they've been running OK for a while now. I'm just not sure where to go next...
Cheers
Bill (pulling my hair out!!)
To me this sounds like a timing problem. When you use the PDB as a hardware trigger to start ADC conversions, there is no way to prevent a new conversion from starting before the ADC's COCO flag has been cleared. A quick way to tell whether this is a timing issue: increase the PDB channel delay, or look for error flags in the PDBx_CHnS register.
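For example, something along these lines would flag it quickly (just a sketch; register and macro names as in the standard Kinetis headers):

```c
#include "MK10D10.h"   /* device header; adjust for your part */

/* Returns nonzero if the PDB has logged a sequence error, i.e. a
 * pre-trigger fired while the previous conversion result was still
 * unread by the DMA.                                                 */
static int pdb_check_seq_error(void)
{
    uint32_t s = PDB0->CH[0].S;
    if (s & PDB_S_ERR_MASK) {
        PDB0->CH[0].S = s & ~PDB_S_ERR_MASK;   /* write 0 to clear ERR */
        return 1;
    }
    return 0;
}
```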
Ah, the mystery deepens.
I have a simple loop, for( I = 0; I < literal; ++I ). If the literal is 799* or 850*, everything works fine. If I change nothing else but make the literal 800, everything breaks. What's interesting is that, looking at the hex images, the 799 and 850 builds are identical bar one number, as expected. But comparing the 799/850 images to the 800 image, there are many, many changes. The list files confirm that code has been moved about, section to section, and even the assembly generated for the for loop is different.
*broadly arbitrary numbers, aside from the work/don't-work effect.
The quest continues...
Cheers for the reply, Aleguzman.
I agree with your thoughts regarding timing; it was the first thing I considered and then ruled out. When the original problem started the timing was indeed quite tight, but I've since slackened it off and that doesn't improve the situation. The ADC read now takes about 2 µs, and the PDB is set to trigger at 115 µs. So not even close.
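For the record, the numbers stack up like this (a sketch, assuming the PDB counts at the full 46 MHz bus clock with prescaler and multiplier at 1; macro names from the standard Kinetis headers):

```c
#include "MK10D10.h"   /* device header; adjust for your part */

#define PDB_CLK_MHZ  46u    /* assumed PDB count clock, in MHz */
#define PERIOD_US    115u   /* trigger period                  */

static void pdb_set_period(void)
{
    PDB0->MOD          = PDB_CLK_MHZ * PERIOD_US - 1u;  /* 5289 counts        */
    PDB0->CH[0].DLY[0] = 0u;     /* pre-trigger A at the start of each cycle  */
    PDB0->SC          |= PDB_SC_LDOK_MASK;   /* latch the new MOD/DLY values  */
}
```

At those numbers the conversion occupies under 2% of the period, so a straight overrun looks very unlikely.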
It's a really frustrating problem: if I change seemingly unrelated bits of code the problem comes and goes. When it's working, the PDB/ADC/DMA system works faultlessly, but when it's failing, it fails fairly (though not totally) predictably.
More ideas please.
Bill