I have a very simple program running on the M7 in the i.MX8MP using FreeRTOS from the NXP 2.10.0 SDK. I have GPT1 triggering an interrupt at 600 kHz, and inside the interrupt I set a GPIO. The problem is the huge variability in servicing the interrupt and setting the GPIO. In the capture below, the top channel is the GPT compare toggle output (which looks great). The bottom channel is the GPIO set in the interrupt handler. The latency fluctuates between 125 ns and 325 ns. Since the M7 runs at 800 MHz (0.8 cycles/ns), this 200 ns difference means sometimes 160 extra cycles are used. Something seems wrong.
Are there any tricks to reduce the variability? I actually don't care if the latency is high, but it has to be consistent. For my application it would be fine if the latency were always 325 ns.
Here is the code. I don't believe any interrupts besides the GPT are enabled. Adding NVIC_SetPriority(GPT_IRQ_ID, 1U); had no effect. The main() loop just sits in __WFI(). This is a pretty simple program, and I don't see what is causing the interrupt variability.
volatile uint32_t data[256]; /* 256 entries: the uint8_t index wraps 0..255, so data[255] must exist */
volatile uint8_t idx = 0;

/*******************************************************************************
 * Code
 ******************************************************************************/
void EXAMPLE_GPT_IRQHandler(void)
{
    EXAMPLE_LED_GPIO->DR = data[idx];
    GPT_ClearStatusFlags(EXAMPLE_GPT, kGPT_OutputCompare1Flag);
    idx++;
    __DSB(); /* ensure the flag write completes before exception return */
}
/*!
* @brief Main function
*/
int main(void)
{
    gpt_config_t gptConfig;
    gpio_pin_config_t led_config = {kGPIO_DigitalOutput, 0, kGPIO_NoIntmode};

    /* The M7 local caches are enabled by default; the smart subsystem
     * region (0x28000000 ~ 0x3FFFFFFF) must be set non-cacheable
     * before accessing it. */
    BOARD_InitMemory();
    BOARD_RdcInit();
    BOARD_InitBootPins();
    BOARD_BootClockRUN();

    CLOCK_SetRootMux(kCLOCK_RootGpt1, kCLOCK_GptRootmuxSysPll1Div2); /* GPT1 source: SYSTEM PLL1 DIV2, 400 MHz */
    CLOCK_SetRootDivider(kCLOCK_RootGpt1, 1U, 4U);                   /* Root clock: 400 MHz / 4 = 100 MHz */

    GPT_GetDefaultConfig(&gptConfig);
    GPT_Init(EXAMPLE_GPT, &gptConfig);
    GPT_SetClockDivider(EXAMPLE_GPT, 1);
    GPT_SetOutputOperationMode(EXAMPLE_GPT, kGPT_OutputCompare_Channel1, kGPT_OutputOperation_Toggle);
    GPT_SetOutputCompareValue(EXAMPLE_GPT, kGPT_OutputCompare_Channel1, 81);
    GPT_EnableInterrupts(EXAMPLE_GPT, kGPT_OutputCompare1InterruptEnable);
    //NVIC_SetPriority(GPT_IRQ_ID, 1U);
    EnableIRQ(GPT_IRQ_ID);
    GPT_StartTimer(EXAMPLE_GPT);

    GPIO_PinInit(EXAMPLE_LED_GPIO, EXAMPLE_LED_GPIO_PIN, &led_config);

    uint32_t val = 0;
    for (int i = 0; i < 255; i++)
    {
        data[i] = val;
        val = (val == 0) ? (1U << 20) : 0;
    }

    while (true)
    {
        __WFI();
    }
}
Hi Doug,
For FreeRTOS interrupt latencies on Cortex-M cores you can look at AN12078, Measuring Interrupt Latency,
and post the issue (if necessary) on the dedicated FreeRTOS forums:
https://forums.freertos.org/t/cortex-m0-interrupt-latency/1949
Also, for latency-critical applications you can consider Real-time Edge Software.
Best regards,
igor