MCF5301x - 1ms resolution timer


MCF5301x - 1ms resolution timer

pheredia
Contributor I

Hi,

 

I'm working with an MCF5301x microprocessor running a 2.6.26 kernel.

The system timers have a resolution of 10 ms, as you can see in /proc/timer_list:

 

# cat /proc/timer_list

Timer List Version: v0.3

HRTIMER_MAX_CLOCK_BASES: 2

now at 1880234849000 nsecs

cpu: 0

clock 0:

  .index:      0

  .resolution: 10000000 nsecs

  .get_time:   ktime_get_real

active timers:

clock 1:

  .index:      1

  .resolution: 10000000 nsecs

  .get_time:   ktime_get

 

I need a timer tick with 1 ms resolution. How can I get such a timer? Is it possible?

 

Thanks in advance,

 

Patricia

TomE
Specialist II

This is more a general Linux software question than a ColdFire hardware question.

That is, unless you want to get at the hardware timers from your userspace code. To see how to do that, find the source for "devmem". With that you can program the spare CPU timers and, by polling from your user code, get timing down to better than a microsecond, give or take the 10 ms of Linux scheduling delay.

The tick rate is determined by the definition of "CONFIG_HZ" in (probably) arch/m68k/Kconfig. Change it to "1000", rebuild the kernel and see if that does what you want.

Otherwise, check whether your kernel has CONFIG_HIGH_RES_TIMERS enabled and read Documentation/timers, specifically hrtimers.txt and highres.txt. You should probably be using these.

Remember, "Linux isn't real-time". The kernel can decide to spend a lot of time somewhere other than your "time critical" programs. Just because you can measure time more finely doesn't mean your code will be scheduled in time to do the read.

Tom

rsdio
Contributor II

Linux is not the best choice for timing precision. You can write bare-metal firmware for the MCF5301x that controls a timer directly and executes timing-critical code in assembly at interrupt time. You can also use an RTOS and leverage its timer API for tighter thread control than Linux offers, although I do not know whether 1 ms is achievable. If you absolutely need Linux for other reasons, then you may need to write your own drivers to handle the precision timing.

Alternatively, you may need to redesign your code so that it does not use signals for timing and scheduling. There are many techniques on many operating systems (see CoreAudio and CoreMIDI on OS X) that allow sub-millisecond scheduling of audio/video events, but they rely heavily on data formatting and buffering to achieve their goals. In other words, the solution depends heavily on precisely what you need to do.

Often, it is the wrong approach to cause task switching every millisecond when there is not an event scheduled every millisecond. Some solutions dynamically reprogram the timer (with 1 ms precision) to only fire when the next event is needed, to avoid losing a large amount of processor time to frequent task switching.

pheredia
Contributor I

Thanks Tom!

I got a 1 ms timer!

I'm using a "build system" based on Arcturus Networks for the evaluation board uC57017EVM.

As you told me, "The tick rate is determined by the definition of CONFIG_HZ", so I checked the other configuration files for this platform and found it in arch/m68knommu/configs/uC53017EVM_deconfig.

Thanks very much!

pheredia
Contributor I

Thanks Tom for your fast response.

Yes, it is a software question. I'm writing an application that sends data over the network, and I need timers with 1 ms precision.

I'm using ITIMER_REAL and catching SIGALRM, so each tick drives my custom timers:

    /* _TimerCount requests the desired 1 ms interval */
    signal(SIGALRM, HandleSignalAlarm);  /* install the handler before arming */
    setitimer(ITIMER_REAL, &_TimerCount, NULL);

If I set ITIMER_REAL to 10 ms or more, my custom timers fire correctly at multiples of that value. But if I set ITIMER_REAL to 1 ms, SIGALRM is not generated every millisecond.

I changed the definition of "CONFIG_HZ" in arch/m68k/Kconfig from 100 to 1000 and nothing changed.

My kernel doesn't have the CONFIG_HIGH_RES_TIMERS option, so I can't enable it. Do you think I should try to patch my kernel? At the moment I don't know how to do that. Also, do you know whether I could use these high-resolution timers from my userspace code, or would I need a custom driver/module to access them?

Patricia

0 Kudos
1,216 Views
TomE
Specialist II

> I changed the definition of "CONFIG_HZ" in arch/m68k/Kconfig from 100 to 1000 and nothing changed.

That's not how you reconfigure the kernel. I pointed out that CONFIG_HZ is defined there so you'd know where to find it in the configuration program, not to suggest editing that file directly.

I don't know what "build system" you're using, but when building a kernel without such a wrapper you run "make menuconfig" [1], which brings up a program where you can select the options. In my 2.6 kernel, building for i.MX, there's "Kernel Features / High Resolution Timer Support" to enable that. Some builds let you select the "HZ" value there too; in others it looks like you have to edit the "Kconfig" file. Most modern systems run "tickless", which changes things somewhat.

"make config" or "make menuconfig" reads the ".config" file, lets you make changes and then rewrites it based on your changes. You then "make" [2] to build a kernel according to the current ".config" file. You can view ".config" to see what the kernel has been built with. It is worthwhile enabling "CONFIG_CONFIGFS_FS" if you have it, as then from a running kernel you can see what options it was built with (and rebuild it like that).

> read Documentation/timers, specifically hrtimers.txt and highres.txt.

And in the latter it refers you to:

http://www.linuxsymposium.org/2006/linuxsymposium_procv1.pdf

But since this is the Internet and that was nine years ago, the link is dead. I found a copy here:

https://www.kernel.org/doc/ols/2006/ols2006v1-pages-333-346.pdf

Take good note of what "rsdio" said though.

Note 1: In my case "make ARCH=arm CROSS_COMPILE=arm-cortexa8-linux-gnueabi- menuconfig"; otherwise it may default to building for x86.

Note 2: Ditto "make ARCH=arm CROSS_COMPILE=arm-cortexa8-linux-gnueabi-" or equivalent.

Tom

pheredia
Contributor I

Meanwhile, I wrote a simple application to test the timer resolution on my system using the function clock_getres(), and the results are:

CLOCK_REALTIME: 0 s, 10000000 ns

CLOCK_MONOTONIC: 0 s, 10000000 ns

CLOCK_PROCESS_CPUTIME_ID: 0 s, 1 ns

CLOCK_THREAD_CPUTIME_ID: 0 s, 1 ns

Then I wrote another simple application using CLOCK_PROCESS_CPUTIME_ID, because I thought I could take advantage of that clock's high resolution. Here is the code:

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <time.h>

#define NSEC_PER_SEC 1000000000L

#define timerdiff(a,b) ((float)((a)->tv_sec - (b)->tv_sec) + \
                        ((float)((a)->tv_nsec - (b)->tv_nsec))/NSEC_PER_SEC)

static struct timespec prev = { .tv_sec = 0, .tv_nsec = 0 };
static int count = 5;

void handler(int signo)
{
    struct timespec now;

    if (count >= 0)
    {
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &now);
        printf("[%d]Diff time:%lf\n", count, timerdiff(&now, &prev));
        prev = now;
        count--;
    }
    else
    {
        exit(0);
    }
}

int main(int argc, char **argv)
{
    timer_t t_id;
    struct itimerspec tim_spec;

    tim_spec.it_value.tv_sec = 0;
    tim_spec.it_value.tv_nsec = 1000000;    /* 1 ms */
    tim_spec.it_interval.tv_sec = 0;
    tim_spec.it_interval.tv_nsec = 1000000; /* 1 ms */

    signal(SIGALRM, handler);  /* install the handler before arming the timer */

    if (timer_create(CLOCK_PROCESS_CPUTIME_ID, NULL, &t_id))
        perror("timer_create");
    if (timer_settime(t_id, 0, &tim_spec, NULL))
        perror("timer_settime");

    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &prev);

    while (1)
        ;

    return 0;
}

But when I execute the application, every interval comes out as 10 ms, as you can see:

# ./TestTimers.flt

[5]Diff time:0.010000

[4]Diff time:0.010000

[3]Diff time:0.010000

[2]Diff time:0.010000

[1]Diff time:0.010000

[0]Diff time:0.010000

So I don't know whether I can obtain 1 ms resolution with this hardware (ColdFire MCF53017). Do I have to change the kernel?

Thanks,

Patricia
