Kirk Humphries

Switch Debounce Software

Discussion created by Kirk Humphries Employee on Jan 26, 2006

This message contains an entire topic ported from a separate forum. The original message and all replies are in this single message. We have seeded this new forum with selected information that we expect will be of value to you as you search for answers to your questions.

 

Date: Mon Dec 26, 2005  2:40 pm

 

I am looking for a suggestion or software to do dynamic switch debounce that is more than a timed delay. I have a hardwired (const variable) time delay now, but the key response is slow and lethargic. The reason seems to be that the timed delay has to be set for the worst or slowest switch. I have tried things like catching the first closure and timing the second, which would be a bounce, and then excluding further bounces in a time period. But this inevitably leads to timed events within interrupt routines, which is not desirable, so my methodology is missing something... it doesn't work. Hardware debounce is not a viable option, unfortunately. Thanks, guys.

 


 

Date: Mon Dec 26, 2005  3:12 pm

 

>I am looking for a suggestion or software to do dynamic
>switch debounce that is more than a timed delay. I have a
>hardwired (const variable) time delay now, but the key
>response is slow and lethargic. The reason seems to be that
>the timed delay has to be set for the worst or slowest
>switch.

 

The usual approach is to test the keys at a regular interval (like 1 ms) and look for any state change. When a state change occurs, set a timer to the longest delay needed (which should be something like 20 - 50 ms). Every ms the timer is decremented and, if it ever reaches zero, you treat the event as occurring (i.e. the key is pressed or released, etc.).

 

Note that you do NOT have to wait for the longest period it takes the slowest key to debounce (for every key)... just for the longest period between state changes of any key.

 

Say you set it for 20 ms (a value most people will not notice). Most keys or switches will debounce within a millisecond or so, so most of the time your delay is around 22 ms.

 

But suppose you have one switch that is especially noisy. Maybe it bounces around for 75 ms. As long as no one bounce is longer than 20 ms., that switch will also debounce correctly - but it will take somewhere between 95 and 104 ms to fully debounce.
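The change-then-countdown scheme described above can be sketched roughly as follows. This is an illustrative sketch, not code from the thread: the 1 ms tick rate, the 20 ms constant, and all names are assumptions.

```c
#include <stdint.h>

#define DEBOUNCE_MS 20u  /* longest gap between bounces we must ride out */

typedef struct {
    uint8_t stable;    /* last accepted (debounced) state, 0 or 1 */
    uint8_t last_raw;  /* raw reading from the previous tick */
    uint8_t timer;     /* ms remaining before a change is accepted */
} debounce_t;

/* Call once per 1 ms tick with the raw switch reading.
   Any raw change (re)arms the timer, so every bounce restarts the
   countdown; only when the reading has been unchanged for DEBOUNCE_MS
   ticks does the stable state update. Returns the debounced state. */
uint8_t debounce_update(debounce_t *d, uint8_t raw)
{
    if (raw != d->last_raw) {
        d->last_raw = raw;
        d->timer = DEBOUNCE_MS;       /* restart on every state change */
    } else if (d->timer != 0 && --d->timer == 0) {
        d->stable = raw;              /* quiet for 20 ms: accept it */
    }
    return d->stable;
}
```

One `debounce_t` can be instantiated per key, with the scan loop calling `debounce_update()` for each of them every tick. This is also why the noisy 75 ms switch still works: each bounce restarts the countdown, and the state is accepted 20 ms after the last bounce.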

 

If I understand you correctly, what you have been doing is setting a timer on first state change, and then checking again when the timer runs out. You are correct that this is not only slow but error prone.

 


 

Date: Mon Dec 26, 2005  8:48 pm

 

My keypad driver uses a 1 ms debounce inside a state machine that checks for an 'up' event before allowing another debounced 'down' event (otherwise holding the key down counts as multiple strokes). Using a timer is the standard way of implementing this - an RTOS will simply run a keyboard scan task and buffer keys to a ring buffer that can be asynchronously managed - a foreground/background system can operate similarly by running a 'pseudo' task (a function called from a timer ISR). Both methods are basically nothing "more than a timed delay" except that they allow other meaningful work to be performed via multitasking.
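A rough sketch of such a press/release state machine follows. It is not the poster's actual driver; the names, the 1 ms scan assumption, and the debounce count are all made up for illustration.

```c
#include <stdint.h>

/* A key must be seen (debounced) released before another press can
   count, so holding a key down yields exactly one stroke. */
typedef enum { KEY_IDLE, KEY_PRESSED } key_state_t;

typedef struct {
    key_state_t state;
    uint8_t count;        /* consecutive identical raw samples */
} key_fsm_t;

#define DEBOUNCE_SAMPLES 1u   /* ~1 ms debounce at a 1 ms scan rate */

/* Call from the periodic scan; raw is 1 while the key is held.
   Returns 1 exactly once per press-and-release cycle. */
int key_scan(key_fsm_t *k, uint8_t raw)
{
    switch (k->state) {
    case KEY_IDLE:
        if (raw) {
            if (++k->count > DEBOUNCE_SAMPLES) {   /* debounced 'down' */
                k->count = 0;
                k->state = KEY_PRESSED;
                return 1;                          /* one stroke */
            }
        } else {
            k->count = 0;                          /* noise: start over */
        }
        break;
    case KEY_PRESSED:
        if (!raw) {
            if (++k->count > DEBOUNCE_SAMPLES) {   /* debounced 'up' */
                k->count = 0;
                k->state = KEY_IDLE;               /* re-armed */
            }
        } else {
            k->count = 0;
        }
        break;
    }
    return 0;
}
```

In an RTOS, the scan task would push each returned stroke into a ring buffer for the application to consume asynchronously; in a foreground/background system, the timer ISR would do the same.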

 

Perhaps your switch is something quite different, since I can't picture a keypad debounce requiring a period greater than a millisecond.

 


 

Posted: Mon Dec 26, 2005  3:40 pm

 

>I am looking for a suggestion or software to do dynamic
>switch debounce that is more than a timed delay. I have a
>hardwired (const variable) time delay now, but the key
>response is slow and lethargic. The reason seems to be that
>the timed delay has to be set for the worst or slowest
>switch. I have tried things like catching the first
>closure and timing the second, which would be a bounce, and
>then excluding further bounces in a time period. But this
>inevitably leads to timed events within interrupt routines,
>which is not desirable, so my methodology is missing
>something... it doesn't work. Hardware debounce is not a
>viable option, unfortunately. Thanks, guys.

 

The bounce time varies with the details of the contact but is usually in the 10 ms to 50 ms region.

 

Simplest Method

---------------

 

Uses one timer interrupt or one fairly consistent timing loop.

 

Sample the input at a period that is longer than half the worst case bounce time. Do no filtering, just use the current value. This method has NO noise immunity. If the input is read when a glitch is on the input, it is taken as the state of the button.

 

The effect of bounce is that, depending on the exact instant you sample the signal, you may recognize the new position one sample period later or earlier than you otherwise would.

 

You still get a response time of the worst case bounce time, which is often good enough.

 

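A minimal sketch of this simplest method, with hypothetical names: sample slowly (at a period longer than the bounce), take the raw value as-is, and report rising edges as presses.

```c
#include <stdint.h>

/* Previous sampled state, used to detect 0 -> 1 edges (presses). */
static uint8_t prev_sample;

/* Call at a fixed period longer than the worst-case bounce time
   (e.g. every 50 ms), passing the unfiltered pin reading. Because
   consecutive samples are further apart than the bounce lasts, the
   switch has always settled by the next sample and no filtering is
   needed. Returns 1 when a new press is recognized. */
int sample_button(uint8_t raw)
{
    int pressed = (raw && !prev_sample);  /* rising edge = new press */
    prev_sample = raw;
    return pressed;
}
```

Note the trade-off stated above: a single noise glitch that happens to land on the sampling instant is indistinguishable from a real press.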

 

 

More Complicated Method

-----------------------

 

Uses one timer interrupt or one fairly consistent timing loop, but more often than the simplest method, above. Has NO noise immunity. If there's a little glitch in the input signal, it will often be taken as a change in state.

 

Sample the input at a period that is the response time you require for button pushing.

 

If the signal has been in one state for a while, change as soon as you see the first sample of the other state. Lock out a transition back to the original state for the worst case bounce period.

 

This gives you faster response, but restricts the recognition of a full button cycle to twice the worst case bounce period.
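This accept-on-first-sample-then-lock-out scheme might be sketched as follows (the names, the 1 ms sample period, and the 20-tick lockout are assumptions for illustration):

```c
#include <stdint.h>

#define LOCKOUT_TICKS 20u  /* worst-case bounce, in sample periods */

static uint8_t state;      /* reported (debounced) state */
static uint8_t lockout;    /* ticks left before a change back is allowed */

/* Call once per sample period (e.g. 1 ms). The first sample of a new
   state is accepted immediately, then the input is ignored for
   LOCKOUT_TICKS so bounces cannot toggle the state back. Returns the
   reported state. */
uint8_t fast_debounce(uint8_t raw)
{
    if (lockout != 0) {
        lockout--;                  /* still in the lockout window */
    } else if (raw != state) {
        state = raw;                /* accept the change at once */
        lockout = LOCKOUT_TICKS;    /* then lock out the bounce */
    }
    return state;
}
```

Response to a press is one sample period, but as the text notes, a full press-plus-release cycle cannot complete faster than two lockout windows.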

 

For push buttons pushed by human fingers, this restriction is not a problem.

 

Think of game players. They can time the push of a button very accurately, but the release is less accurate, and the repetition rate is much slower than the desired button push response time.

 

 

Hardware Imitation Method

-------------------------

 

Uses one timer interrupt or one fairly consistent timing loop, but more often than the more complicated method, above. Also has bigger code. Can be tuned for some noise rejection.

 

Sample the input at a period of 1/3 to 1/4 of the required response time, or more often.

 

Feed the input into a simple IIR filter:

 

(Warning! Program written on the fly, and not tested.)

 

Code:

unsigned char AverageValue ;
unsigned char InputValue ;

if ( InputBit == 0 )
    InputValue = 0x00 ;
else
    InputValue = 0xFF ;

AverageValue += ( InputValue - AverageValue ) >> 2 ;

// Apply hysteresis to the reported value
if ( AverageValue > 0x90 )
    ReportedInput = 1 ;
else if ( AverageValue < 0x60 )
    ReportedInput = 0 ;



 

Tune the sample period, shift value and thresholds to get the desired noise rejection and response time.

 

Hope this helps,
