Sub-jiffy (10 ms) interrupts

mendelbullex
Contributor III

Hi, I'm relatively new to microcontroller programming and I am trying to find a way to do sub-jiffy (10 ms) interrupts. We are currently running a Linux kernel on the M54451EVB and are attempting to create interrupts in the microsecond range. We need this in order to implement DMX, which requires 4 µs between signals. We have tried udelay and ndelay, but cannot get enough accuracy for what we need. We also want to be able to interrupt, not just delay. We are guessing that we just need to use one of the 4 PIT timers on the MCU, but are unsure how to go about this. Does anyone know how we can get sub-jiffy interrupts?
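
To illustrate the kind of callback we are after, here is a minimal sketch using the kernel's hrtimer API (our assumption being that the port provides a high-resolution clock source - without one, hrtimers fall back to jiffy resolution, which is exactly our problem):

    #include <linux/module.h>
    #include <linux/hrtimer.h>
    #include <linux/ktime.h>

    static struct hrtimer dmx_timer;

    static enum hrtimer_restart dmx_tick(struct hrtimer *t)
    {
        /* toggle the DMX GPIO / shift out the next bit here */
        hrtimer_forward_now(t, ktime_set(0, 4000));   /* re-arm 4 us out */
        return HRTIMER_RESTART;
    }

    static int __init dmx_init(void)
    {
        hrtimer_init(&dmx_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
        dmx_timer.function = dmx_tick;
        hrtimer_start(&dmx_timer, ktime_set(0, 4000), HRTIMER_MODE_REL);
        return 0;
    }

    static void __exit dmx_exit(void)
    {
        hrtimer_cancel(&dmx_timer);
    }

    module_init(dmx_init);
    module_exit(dmx_exit);
    MODULE_LICENSE("GPL");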

 

Thanks

7 Replies
FridgeFreezer
Senior Contributor I

You're right, you need to use one of the micro's timers (PIT or DMA) to give you the accuracy & speed you need without tying up the processor counting ticks at insane speeds.

 

I'm not very familiar with DMX (is it the stage lighting control protocol?), but if you're trying to run a communications protocol then I would look closely at whether one of the dedicated comms ports (UART, SPI, I2C) can do the job on its own. That will sort out all the timings properly, and you just have to put data in and out of it at the right time (it will interrupt you when it's ready). I'd be very surprised if you can't do it with a UART, as most protocols are designed to be easily implemented, even if they're a bit obfuscated on occasion.

 

Failing that, a 3rd party comms chip may be a better way to go.

 

If you really must bit-bang it, it's not always easy - we are bit-banging an obscure and quite rubbish FSK comms protocol using the PIT and GPT timers. Our PIT runs a very small loop at 12 kHz looking for the correct conditions to trigger (it's also doing a couple of other high-resolution things). Then the GPT is set going to interrupt at the correct intervals - it's clocked very fast so it's very accurate, but it costs no overhead until it fires the next interrupt (in our case, exactly 50 µs apart).
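
To give a flavour of the setup, here's a rough register-level sketch for one of the ColdFire PITs. The MCF_PIT0_* names are assumptions in the Freescale header style - check them and the bit positions against your reference manual:

    #include <stdint.h>

    /* PCSR bit positions per the MCF5445x manual (verify these!) */
    #define PCSR_PRE(x)  (((uint16_t)(x) & 0x0F) << 8)  /* clk = bus/2^PRE */
    #define PCSR_PIE     (1 << 3)   /* interrupt enable */
    #define PCSR_PIF     (1 << 2)   /* interrupt flag, write 1 to clear */
    #define PCSR_RLD     (1 << 1)   /* reload from PMR when count hits zero */
    #define PCSR_EN      (1 << 0)   /* timer enable */

    void pit0_start(uint16_t modulus, uint8_t prescale)
    {
        MCF_PIT0_PMR  = modulus;    /* period in prescaled ticks */
        MCF_PIT0_PCSR = PCSR_PRE(prescale) | PCSR_PIE | PCSR_PIF
                      | PCSR_RLD | PCSR_EN;
    }

    void pit0_isr(void)
    {
        MCF_PIT0_PCSR |= PCSR_PIF;  /* acknowledge, or it re-fires forever */
        /* ...the small high-rate polling loop goes here... */
    }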

 

Beware of the PIT timer accuracy pitfall - if you search "PIT" on this forum you'll find a big thread about it.

FridgeFreezer
Senior Contributor I

Just reading https://secure.wikimedia.org/wikipedia/en/wiki/DMX512

It's a standard RS-485-style protocol; you should be able to do it all with the hardware UART, no timers required.
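
If the driver will take the non-standard line rate, something along these lines should push out a whole DMX packet from userspace - the device path, BOTHER (arbitrary baud) support, and the break ioctls are assumptions to check against your kernel:

    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <asm/termbits.h>   /* struct termios2, BOTHER, TCGETS2/TCSETS2 */

    /* One DMX512 packet: break, mark-after-break, start code, slots.
     * usleep() will overshoot the minimum times, which DMX allows. */
    static void dmx_send(int fd, const unsigned char *slots, int nslots)
    {
        unsigned char start_code = 0;

        ioctl(fd, TIOCSBRK);   /* hold the line in break */
        usleep(100);           /* >= 92 us */
        ioctl(fd, TIOCCBRK);   /* release: mark-after-break */
        usleep(12);            /* >= 8 us */
        write(fd, &start_code, 1);
        write(fd, slots, nslots);
    }

    int main(void)
    {
        unsigned char levels[512] = {0};
        struct termios2 tio;
        int fd = open("/dev/ttyS0", O_WRONLY | O_NOCTTY);  /* assumed port */

        ioctl(fd, TCGETS2, &tio);
        tio.c_cflag &= ~(CBAUD | CSIZE | PARENB);
        tio.c_cflag |= BOTHER | CS8 | CSTOPB | CLOCAL;     /* 8N2 */
        tio.c_iflag = tio.c_oflag = tio.c_lflag = 0;       /* raw */
        tio.c_ispeed = tio.c_ospeed = 250000;              /* DMX line rate */
        ioctl(fd, TCSETS2, &tio);

        dmx_send(fd, levels, sizeof levels);
        close(fd);
        return 0;
    }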

mendelbullex
Contributor III

Thanks for all of the responses, guys. As far as using another interface goes: we have done this over GPIO in the past and are simply porting the whole project to a new system. We know how to do it through GPIO, and we will need sub-jiffy interrupts for other outputs as well, so the other interfaces are not options at the moment. It does appear that the PIT/eDMA timers are the way we need to go. I know the interrupts are very tough on the processor, but that is only while we transmit DMX frames; we can simply turn the timer off when we are not using it and proceed normally.

 

Again, thanks for the help.

mendelbullex
Contributor III

Using the eDMA timers allowed us to hold the interval between 3.99 µs and 4.01 µs, which is exactly the accuracy we need.
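
For anyone who finds this thread later, the DMA timer setup was roughly as below. Treat it as a sketch: the MCF_DTIM0_* names are assumptions in the Freescale header style, so check them against the MCF54455 reference manual and your BSP.

    #include <stdint.h>

    #define DTMR_RST     (1 << 0)   /* enable timer */
    #define DTMR_CLK_BUS (1 << 1)   /* clock source = internal bus clock */
    #define DTMR_FRR     (1 << 3)   /* restart counter on reference match */
    #define DTMR_ORRI    (1 << 4)   /* interrupt on reference match */
    #define DTER_REF     (1 << 1)   /* reference event flag, write 1 to clear */

    void dtim0_start_4us(uint32_t bus_hz)
    {
        MCF_DTIM0_DTRR = bus_hz / 250000 - 1;  /* 4 us worth of bus ticks */
        MCF_DTIM0_DTER = DTER_REF;             /* clear any stale event */
        MCF_DTIM0_DTMR = DTMR_ORRI | DTMR_FRR | DTMR_CLK_BUS | DTMR_RST;
    }

    void dtim0_isr(void)
    {
        MCF_DTIM0_DTER = DTER_REF;   /* acknowledge the reference event */
        /* ...clock out the next DMX bit/slot here... */
    }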

tchristoffel
Contributor I

In linux/arch/m68knommu/platform/coldfire/, look into pit.c. It appears to provide an interface to the PIT timers.

I have not tried this, so take it for what it's worth - it may or may not solve your problem. One note of minor concern: a 4 µs period works out to 250,000 interrupts per second, which is a lot to handle if you expect to do much else. Let me know if this helps.
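
If pit.c doesn't already export what you need, a small module could hook the PIT vector directly - a sketch only, with the vector number left as a placeholder to look up in the interrupt controller code in the same directory:

    #include <linux/module.h>
    #include <linux/interrupt.h>

    #define PIT1_IRQ 0   /* placeholder: substitute your BSP's PIT vector */

    static irqreturn_t dmx_pit_isr(int irq, void *dev_id)
    {
        /* acknowledge the PIT (write PIF in PCSR) and do minimal work */
        return IRQ_HANDLED;
    }

    static int __init dmx_pit_init(void)
    {
        return request_irq(PIT1_IRQ, dmx_pit_isr, 0, "dmx-pit", NULL);
    }

    static void __exit dmx_pit_exit(void)
    {
        free_irq(PIT1_IRQ, NULL);
    }

    module_init(dmx_pit_init);
    module_exit(dmx_pit_exit);
    MODULE_LICENSE("GPL");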

angelo_d
Senior Contributor I

Hello mendelbullex,

 

Do you really need Linux for this DMX implementation? I would simply try a bare-metal C program that uses one of the timers with an interrupt; depending on your system clock, a precision of 1 µs should also be possible.
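
On bare metal the whole thing is a few lines - m68k GCC lets you mark the handler so it returns with RTE. A sketch only: the vector slot and register name below are placeholders for illustration.

    #include <stdint.h>

    extern uint32_t vector_table[];   /* hypothetical RAM vector table */

    __attribute__((interrupt_handler))
    void timer_isr(void)
    {
        MCF_PIT0_PCSR |= (1 << 2);    /* clear PIF so the timer re-fires */
        /* ...the 4 us work goes here, with nothing else running... */
    }

    void install_timer_isr(void)
    {
        vector_table[64 + 4] = (uint32_t)timer_isr;  /* placeholder vector */
    }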

 

 

 

TomE
Specialist II

> We need this in order to implement DMX, which requires 4 µs between signals.

 

Unless you're using "Real Time Linux", you may not be able to get the required interrupt latency. We're using Linux and have had it lock out interrupts for many milliseconds - and that was a "real time" Linux, too.

 

I think you're trying to detect or generate the 8 µs MAB (Mark After Break) in the DMX512 protocol. That uses standard async data at 250 kbps. I don't see that you need to detect the "mark", as that's just there to resync the receiver after the Break, which is specified as 92 µs.

 

As far as I can tell (from the Wikipedia article), you only need a UART that can signal the "Break" as a framing error and then start receiving again after the 4 µs (old) or 8 µs (new) mark period. The harder bit might be generating the Break to specification. You may simply want to lock the CPU into a service routine with interrupts disabled and do all the timing in code or against a hardware timer - don't bother with interrupts. That would work for one channel, but not for multiple ones.
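
Something like this sketch, say - the counter address and the helper functions are made-up names, but it shows the shape of the idea:

    #include <stdint.h>

    extern uint32_t irq_disable(void);      /* hypothetical helpers */
    extern void irq_restore(uint32_t sr);
    extern void uart_tx_break(int on);
    #define TIMER_COUNT (*(volatile uint32_t *)0x40000000) /* placeholder
                            address of a free-running counter register */

    static void spin_ticks(uint32_t ticks)
    {
        uint32_t start = TIMER_COUNT;
        while ((uint32_t)(TIMER_COUNT - start) < ticks)
            ;   /* busy-wait; the subtraction is wrap-safe */
    }

    void dmx_break_and_mab(uint32_t ticks_per_us)
    {
        uint32_t sr = irq_disable();        /* nothing may preempt this */
        uart_tx_break(1);
        spin_ticks(100 * ticks_per_us);     /* >= 92 us break */
        uart_tx_break(0);
        spin_ticks(8 * ticks_per_us);       /* 8 us mark-after-break */
        irq_restore(sr);
        /* hand the start code + slots to the UART from here on */
    }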

 

Tom

 
