
K26 ISR jitter much greater than expected

Question asked by Scott Barry on Jun 25, 2018
Latest reply on Jun 28, 2018 by xiangjun.rong

I've been developing firmware for a K26-series chip that requires highly repeatable timing for ADC sample intervals. I've tried using the ADC conversion-complete interrupt, then switched to the PIT timer interrupt, but I see the same issue with both approaches: the interrupt jitter is substantial and heavily influenced by code in the main control loop.


My PIT timer ISR has been reduced to setting a GPIO pin high, performing an ADC read, clearing the PIT interrupt status flag, and setting the same GPIO pin low (the resulting pulse lets me measure frequency and execution time on an oscilloscope).


What I see is that code in the "while(1)" loop in main.c can significantly alter the interrupt interval (measured rising edge to rising edge of the GPIO pulse) and the execution time of the ISR (measured as the pulse width). Both the jitter and the execution time vary by as much as 50 cycles.


The main.c while loop, below, is very simple: 

   while (1) {
      GPIO_TogglePinsOutput(GPIOE, 2 << 1U);
      for (y = 0; y < 10; y++) {
         __asm volatile ("nop\n\t");   /* busy-wait padding */
      }
   }

The ISR (set to interrupt at 1 MHz):

   GPIO_SetPinsOutput(GPIOE, 1 << 1U);
   adcVals = (uint32_t)((ADC_SMA->R[PDA_CHANNEL_GROUP] << 16) | ADC_PD->R[PDA_CHANNEL_GROUP]);
   PIT_ClearStatusFlags(PIT, kPIT_Chnl_0, kPIT_TimerFlag);
   GPIO_ClearPinsOutput(GPIOE, 1 << 1U);


Whether I use a loop with a counter or just write several "nop\n\t" instructions, changing the execution length of this while loop can produce either rock-solid interrupt intervals on my PIT ISR or anywhere between 10 and 50 cycles of jitter: one interrupt is clearly delayed on the scope, the next occurs early relative to its predecessor, so the average interrupt period is about right but the edge-to-edge timing varies a great deal.


I've also written the ISR and the main while loop in assembler in an attempt to eliminate the compiler as a potential source of error. Same thing, except now I find that simply adding a ".align" assembler directive can have the same influence on jitter and execution time. Similarly, switching between different optimization levels in Eclipse (Kinetis Design Studio with the GCC compiler and KSDK 2.0) influences jitter and execution time.


I've been reading about instruction cache misses causing this kind of issue, but there's so little going on here that it's hard to imagine much cache-line turnover. I'm really hopeful that someone knows what's causing this and can offer a suggestion or two; I'm not sure how much more pounding into my desk my forehead can take!


Also, as an important point: I'm not using DMA here because at a later stage I need to put some real-time intelligence into the ISR, so I can't afford to acquire a block of data and post-process it (I wish I could!).


If there's any additional information needed please let me know and I'll provide it right away.  Thanks in advance for any help!