Hey folks,
I need to measure how many milliseconds a block of code takes to run so that I can gauge its performance; is there an easy way to do this?
Thanks in advance.
The way I do this is to set up a general purpose timer (GPT1 or GPT2) to count up continuously in microseconds, free-running forever.
Then you bracket the code you want to measure: fetch the GPT1 counter value just before the code, fetch it again after the code has run, and subtract the initial value. The result is the number of microseconds the code took to run. With unsigned 32-bit values this is good for a little over 4000 seconds before the counter wraps.
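For illustration, here is a minimal sketch of that bracketing, assuming GPT1 has already been configured elsewhere (e.g. via the MCUXpresso SDK) to free-run at 1 MHz so that one tick equals one microsecond; the GPT1->CNT register comes from the device header, and the measure_block() wrapper name is just for the example:
#include "fsl_device_registers.h" // device header, brings in the GPT1 peripheral struct
void measure_block(void) // hypothetical wrapper around the code under test
{
    uint32_t start = GPT1->CNT; // timestamp before the code runs (assumes 1 MHz tick)
    // ... code block to be measured ...
    uint32_t elapsed_us = GPT1->CNT - start; // unsigned subtraction is wrap-safe
    uint32_t elapsed_ms = elapsed_us / 1000U; // convert to milliseconds if preferred
    (void)elapsed_ms; // use or log the result as needed
}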
Thank you guys, it completely solved my problem.
Thanks Mark.
For info, those using mcuXpresso IDE can use the structures defined in core_cm7.h for this.
CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk; // enable trace for DWT features
DWT->LAR = 0xc5acce55; // unlock access to DWT registers
DWT->CYCCNT = 0; // reset the cycle count value
DWT->CTRL |= DWT_CTRL_CYCCNTENA_Msk; // enable the cycle counter
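As a rough usage sketch once the counter is running (only DWT->CYCCNT and the CMSIS SystemCoreClock variable are standard here; the wrapper function name is mine):
#include "fsl_device_registers.h" // device header; pulls in core_cm7.h and SystemCoreClock
void measure_with_cyccnt(void) // hypothetical wrapper around the code under test
{
    uint32_t start = DWT->CYCCNT; // cycle count before the code runs
    // ... code block to be measured ...
    uint32_t cycles = DWT->CYCCNT - start; // elapsed CPU cycles
    uint32_t us = cycles / (SystemCoreClock / 1000000U); // convert cycles to microseconds
    (void)us; // use or log the result as needed
}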
-Nick
Hi
Typically the cycle counter in the Cortex-M7 is used for measuring execution time.
DEMCR |= DHCSR_TRCENA; // enable trace for DWT features
DWT_LAR = DWT_LAR_UNLOCK; // unlock access to DWT registers
DWT_CYCCNT = 0; // reset the cycle count value
DWT_CTRL |= DWT_CTRL_CYCCNTENA; // enable the cycle counter
... code to be measured
DWT_CYCCNT now contains the number of cycles the code required (and thus the time, with single-cycle resolution, up to about 7 s before the 32-bit counter wraps on a 600MHz i.MX RT part).
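For example (just the arithmetic, nothing uTasker-specific), a cycle delta converts to milliseconds like this, where cpu_hz would be 600000000 on a 600MHz part:
// Illustrative helper: convert a DWT cycle delta to milliseconds.
static uint32_t cycles_to_ms(uint32_t cycles, uint32_t cpu_hz)
{
    return cycles / (cpu_hz / 1000U); // divide by cycles-per-millisecond
}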
Regards
Mark
[uTasker project developer for Kinetis and i.MX RT]
Contact me by personal message or on the uTasker web site to discuss professional training, solutions to problems or rapid product development requirements
For professionals searching for faster, problem-free Kinetis and i.MX RT 10xx developments the uTasker project holds the key: https://www.utasker.com/iMX/RT1050.html