> But some environments are notoriously noisy. Many years ago I put a system using several Z80s and a PDP-11 in a steel mill. A motor the size of a house drawing thousands of amps drove the production line. It reversed direction every few seconds. The noise generated by that changeover coupled everywhere, and destroyed everything electronic unless carefully protected. We optocoupled all cabling simply to keep the smoke inside the ICs, where it belongs. All digital inputs still looked like hash and needed an astonishing amount of debounce and signal conditioning.
:-)
I had a similar experience with an elevator motor and a terminal. The electronics worked absolutely fine, but when someone operated the elevator it occasionally produced phantom keypresses on the capacitive keypad.
This was perhaps understandable, but what really confused the users was that these phantom keypresses sometimes hit the debug buttons, which weren't even fitted on the production keypad, and stuck the device in debug mode!
We learnt not to include the debug code in the production firmware, and we beefed up the transient suppression hardware and the debouncing firmware to fix it.
My favorite noise story is from just a couple years ago. Our controller would run fine for hours or days and then reset for no apparent reason. Looking at the debug output, I could tell that it wasn't a watchdog or other internal reset (e.g., system panic) and there had been no user input. The debug log basically said that someone pushed the reset button, which clearly wasn't happening.
The EE and I were standing around the machine and he happened to be in front of the UI when it reset. I mentioned that I had heard a soft click just before he said it reset, but we had no hardware in the region where I thought the noise came from.
Finally, we put two and two together and realized that the system included a propane heater with an automatic controller and the noise I heard was probably the propane igniter. The high voltage from the igniter was wirelessly coupling into one of the I/O lines going to the controller board. The reason that the problem had suddenly started happening after months of trouble-free development was that the customer had rerouted some of the wiring when they were in the machine fixing something else and moved it closer to the heater.
In 30 years of doing this, I can count on one hand the number of times I've had to deal with noise that was coupling in through the air!
I’ve worked with electron microscopes and in silicon fabs, and it’s super fun being on the teams hunting for sources of noise during construction and bringup. In fabs there are multiple teams because it’s so specialized; the HVAC team was the most interesting one because they’ve got tons of mechanical and electronic noise sources all over the buildings. They were also the long tail for problems with yield (which was expected and budgeted for). I think the EM startup I worked for failed in part due to not taking the issue seriously enough.
I can’t tell any specific stories, but poorly shielded USB ports were the bane of our existence in the 2000s. Every motherboard would come with them, and the second a random floor worker plugged something in it’d take down a section of the fab or all of the microscopes, even if it were on the other side of the building. For some god-forsaken reason all the SBC manufacturers used by tons of the bespoke equipment were also adding USB ports everywhere. We ended up gluing all of them shut over the several months it took to track down each machine, as floor workers kept circumventing the ban on USB devices (they had deadlines to meet, so short of gluing the ports shut we couldn’t really enforce the ban).
Only time I’ve ever run into this was:
- an AM radio station nearby coupling into a PBX
- when some genius thought it would be a good idea to run Ethernet down the elevator shaft, right next to the lines feeding its truck-sized motor.
>Many years ago I put a system using several Z80s and a PDP-11
Many years ago I wired up my own design for an 8080 system, but I was a self-taught beginner and not very good at stuff like a capacitive RC debounce circuit, so I couldn't get my single-step-the-CPU button to work.
I was reading the spec sheet for the processor and realized I could kluge it with signals. There was something like a "memory wait" pin and another called something like "halt", but one fired on the leading edge of the clock and the other on the trailing edge, so I was able to use an SPDT push button and a flip-flop to assert halt/memory-wait on the first bounce of a press, and then restart on the first bounce when the button was released.
> Years ago a pal and I installed a system for the Secret Service that had thousands of very expensive switches on panels in a control room. We battled with a unique set of bounce challenges because the uniformed officers were too lazy to stand up and press a button. They tossed rulers at the panels from across the room. Different impacts created quite an array of bouncing.
It's impossible to guess how users will use a system until they can be observed in the wild.
This probably induced a lot of Object Thrown Operation (OTO) once word spread to everyone -- not just the lazy -- that it was possible to activate the buttons from afar.
> One vendor told me reliability simply isn't important as users will subconsciously hit the button again and again till the channel changes.
Orthogonally to the point of this excellent article, I found it striking how this was probably true, once--and then TVs got smart enough that it took seconds to change channels, instead of milliseconds. And then it was no longer possible for input failures to be corrected subconsciously.
Lightswitches are like this for me now. Activating the switch still produces an audible and subtly-tactile click, but then some awful software has to think about it for a moment and close a relay, and then a whole power supply in an LED module somewhere has to start up.
It's slower enough, compared to incandescent, to juuuuust make me flinch back and almost hit the switch again, but nope, it worked the first time after all.
I don't have a term for the annoyance of that flinch, but I should.
It used to be fun and rewarding to flip through channels on analogue equipment. No buffering, no delay, just press, flash to the next channel.
What's truly been lost is the speed.
20 years ago, I could flip through all (40ish) analogue CATV channels *in under 20 seconds* and could tell you what show was on each channel.
Yes, it only took around 500ms to filter and decide whether each station was on a commercial, news, sports, nature, or something else worth watching.
To this day, with all the CDNs and YouTube evolutions, we still have not come close to receiving video variety anywhere near that speed.
Seriously. Analog stuff was wild. You could have telephones in adjacent rooms, call one from the other, and your voice would come out the telephone (having traveled electrically all the way across town and back) before the sound came down the hall. Analog cellphones were like that too -- ludicrously low latency.
Being able to interrupt each other without the delay-dance of "no, you go ahead" *pause* was huge. Digital cellular networks just enshittified that one day in about 2002 and apparently most folks just didn't care? I curse it every time I have to talk on the godforsaken phone.
>Digital cellular networks just enshittified that one day in about 2002 and apparently most folks just didn't care?
People cared - your comment reminded me of remarks my parents made about this problem. However, digital cell signals fixed a ton of the congestion issues analog was having and lowered the price so much that people could actually afford cell phones.
Puffer channel changes are near-instant. https://puffer.stanford.edu/
Well, that one was lost for a really reasonable increase in video quality, reception reliability, and number of channels.
I have thought about that for a while, and I wonder if it has to do with memory becoming bigger more than it becomes faster. For example, compared to 30 years ago, PCs have about 1000x more RAM, but it is only about 100x faster, with about 10x less latency. It is a general trend across all sorts of devices and types of storage.
It means, for instance, that storing an entire frame of video is nothing today, but in analog times it was hard: you simply didn't have enough storage to afford high latency. Now you can comfortably save several frames of video, which is nice since more data means better compression, better error correction, etc... at the cost of more latency. Had memory been expensive and speed plentiful, a more direct pathway would have been cheaper, and latency naturally lower.
And yet if manufacturers cared enough about UX it wouldn't take much for input failures to be subconsciously correctable again. All you need is some kind of immediate feedback - an arrow appearing on-screen for each button press, for instance (or a beep - but I'd be the first to turn that off, so for the love of all that is holy, don't make it always-on!).
What's crucial, though, is that mistakes or overshoots can be (a) detected (for example, if three presses were detected, show three arrows) and (b) corrected without having to wait for the channel change to complete.
Nobody cares enough to actually do it, but what would it take to have near-instantaneous channel changes again? Prestarting a second stream in the background? Realistically the linear array of channels is also dead, so it doesn't really matter. I guess the modern equivalent is having a snappy UI.
A horrible idea, as if our current TV features were not already bad enough. The modern equivalent to quick channel changes would be a learning model that guesses what you want to see next, prestarts that stream, and ties the "next channel" button to activating it. The actual reason this is a bad idea, above and beyond the fact that we don't want learning models in our TVs, is that the manufacturers would very quickly figure out that instead of having the agent work for their customers, they could sell preferential weights to the highest bidder.
Closing thought... oh shit, I just reinvented YouTube Shorts (or perhaps TikTok, but I have managed to avoid that platform so far)... an interface I despise with a passion.
There was some article from early Instagram times about this the other week - an innovation there was that they started the upload as soon as the picture was taken, so after the user filled out the caption and hit “submit”, the “upload” was instantaneous.
A workaround for IP-based TVs may be some sort of splash/loading screen that very quickly shows a recent-ish screenshot of the channel. It'd still take a long time for the picture to start moving, but at least the user would see something and could switch away quickly if they don't care about the content at all.
Of course this will be non-trivial on the server side - constantly decode each channel's stream, take a snapshot every few seconds, re-encode to JPEG, serve to clients... And since channels are dead, no one is going to do this.
It could simply be the most recent I-frame from the other stream in question. That would require neither decoding nor encoding on the server's part, merely buffering, and I suspect transport-stream libraries have very optimized functions for finding I-frames.
Furthermore, once a user starts flipping channels, since most flipping is just prev/next, you could start proactively sending them the frames for the adjacent channels of where they are, and reduce the show-delay to nearly nothing at all. When they calm down and haven't flipped for a while, stop this to save bandwidth.
I think it's irrelevant because TV is dead. But I do remember with rose-tinted glasses the days of analog cable, when changing channels was done in hardware and didn't require 1.5 s for the HEVC stream to buffer.
I've been given a lot of suggestions for debouncing switches over the years. I'm just doing hobby stuff, either I have an endstop switch for some CNC axis, or more recently, some simple press buttons to drive a decade counter or whatever. My goal for one project was just to have a bit counter that I could step up, down, reset, or set to an initial value, with no ICs (and no software debounce).
I got lots of different suggestions, none of which worked, until I found one that did:
- the switch is pulled high or low as needed
- the switch has a capacitor to ground
- the switch signal goes through a Schmitt trigger
I designed this into its own PCB which I had manufactured and soldered the SMD and through-hole and ICs to that, and treat it as its own standalone signal source. Once I did that, literally every obscure problem I was having disappeared and the downstream counter worked perfectly.
When you look at the various waveforms (I added a bunch of test points to the PCB to make this easy), my PCB produces perfect square waves. I found it interesting how many suggested hardware solutions I had to try (a simple RC filter did not work) and how many "experts" I had to ignore before I found a simple solution.
I've been using the perhaps-too-simple:
- Button triggers interrupt
- Interrupt starts a timer
- Next time interrupt fires, take no action if the timer is running. (Or use a state variable of some sort)
Of note, this won't work well if the bounce interval is close to the expected actuation speed, or if the timeout interval isn't near this region.
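Something like this, roughly (an Arduino-style sketch; the pin number, the 10 ms lockout, and the helper names are my assumptions, not a definitive implementation):

    // Rough sketch: accept the first edge, then ignore further edges
    // until a lockout period has passed.  Assumes an Arduino-style API.
    #define BUTTON_PIN 2        // assumption: an interrupt-capable pin
    #define LOCKOUT_MS 10       // assumption: bounce settles within 10 ms

    volatile bool press_pending = false;
    volatile unsigned long lockout_until = 0;

    void buttonISR() {
        unsigned long now = millis();            // reading millis() inside an ISR is fine
        if ((long)(now - lockout_until) >= 0) {  // "timer" not running
            press_pending = true;                // register the first edge only
            lockout_until = now + LOCKOUT_MS;    // start the lockout timer
        }                                        // edges during the lockout are bounce
    }

    void setup() {
        pinMode(BUTTON_PIN, INPUT_PULLUP);
        attachInterrupt(digitalPinToInterrupt(BUTTON_PIN), buttonISR, FALLING);
    }

    void loop() {
        if (press_pending) {
            press_pending = false;
            // act on the debounced press here
        }
    }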
> this won't work well if the bounce interval is close to the expected actuation speed

lol, or you could do what my TV does - say "to hell with user experience" and use an interrupt timer anyway.
If I hold down my volume-up button (the physical one on the TV!), I get a quick and fluid increase. But if I hammer the button 4 times per second, the volume still only goes up 1 tick per second.
Loosely related, my HiSense TV has a wifi remote that apparently sends separate key up and down events to the TV. If the wifi happens to go down while you're holding the volume up button, it never sees the "key up" so will just hold whatever button indefinitely, including the volume button, which is how I discovered it.
This is functionally identical to the capacitive approach. Pressing the button charges the cap whose voltage decays when released (starts the timer). If the button is pressed before it decays below the "release" threshold (timer expires), the cap is recharged (timer restarted).
This is an interesting take on debouncing, but I found the choice of TV remotes as an example a bit confusing. From my understanding, the issues with remote controls aren’t typically caused by bouncing in the mechanical sense but rather by the design of the IR communication. Most remotes transmit commands multiple times per second (e.g., 9–30 times) intentionally, and the receiver handles these signals based on timing choices.
If there are problems like double channel jumps or missed commands, it’s more about how the receiver interprets the repeated signals rather than a classic switch debounce issue. There’s definitely a similarity in handling timing and filtering inputs, but it seems like the core issue with remotes is different, as they must already handle repeated commands by design.
Then there's the "hold to repeat" mechanic, where if you hold it long enough it'll add virtual presses for you.
This is one of the best treatises on debounce, I've read it a number of times and probably will again.
One of the best things I've done to help with really bad debounce is spend time testing a number of buttons to find the designs that have, at the hardware/contact level, much less bounce. Some buttons wind up with tens of ms of bounce, and it's hard to correct for it and meet expectations all in software.
Just don't implement an SR debouncer, OK? And don't use MC140*-series chips; those don't work with the 3.3V used by modern micros. And when he says:
> Never run the cap directly to the input on a microprocessor, or to pretty much any I/O device. Few of these have any input hysteresis.
that's not true today; most small MCUs made in 2005 or later (such as the AVR and STM8 series) have input hysteresis, so feel free to connect the cap directly to the input.
And when he says:
> don't tie undebounced switches, even if Schmitt Triggered, to interrupt inputs on the CPU
that's also not correct for most modern CPUs; they no longer have a dedicated interrupt line, and interrupts share hardware (including the synchronizer) with GPIO. So feel free to tie an undebounced switch to an interrupt line.
What's wrong with the SR latch debouncer?
It needs an SPDT switch, and that rules out most buttons.
And if you do end up choosing an SPDT switch, then there are much simpler designs which have the switch toggle between Vcc and GND, like Don Lancaster's debouncer [0]. That design is especially useful if you have many switches, as you can wire all the VCCs and GNDs in parallel and use 8-channel buffers to debounce multiple switches.
The SR latch schematic only makes sense if you are working with TTL logic (popular in the 1970s/1980s), which did not have symmetric output drive, and there is absolutely no reason to use it in the 2000s.
[0] https://modwiggler.com/forum/viewtopic.php?p=275228&sid=52c0...
Ah, that is better. Thanks!
Agree. Helped me (a software guy) when I needed it. Automatic upvote.
The analysis is nice, although the graph style is very much 2005. The conclusion is that as long as you don't get a crappy switch, a 10 ms debounce interval should be sufficient.
I would not pay much attention to the rest of the text.
The hardware debouncer advice is pretty stale - most modern small MCUs have no problem with intermediate levels, nor with high-frequency glitches. Schmitt triggers are pretty common, so feel free to ignore the advice and connect the cap to the MCU input directly. Or skip the cap and do everything in firmware; the MCU will be fine, even with interrupts.
(Also, I don't get why the text makes firmware debouncing sound hard? There are some very simple and reliable examples, including the last one in the text, which only takes a few lines of code.)
> Also, I don't get why the text makes firmware debouncing sound hard?
The article links to Microchip's PIC12F629 which is presumably the type of chip the author was working with at the time.
This would usually have been programmed in assembly language. Your program could be no longer than 1024 instructions, and you only had 64 bytes of RAM available.
No floating point support, and if you want to multiply or divide integers? You'll need to do it in software, using up some of your precious 1024 instructions. You could get a C compiler for the chips, but it cost a week's wages - and between the chip's incredibly clunky support for indirect addressing and the fact there were only 64 bytes of RAM, languages that needed a stack came at a high price in size and performance too.
And while we PC programmers can just get the time as a 64-bit count of milliseconds and not have to worry about rollovers or whether the time changed while you were in the process of reading it - when you only have an 8-bit microcontroller that was an unimaginable luxury. You'd get an 8-bit clock and a 16-bit clock, and if you needed more than that you'd use interrupt handlers.
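The usual trick looked something like this (an AVR-flavored sketch just for illustration - the register names are for an ATmega-class part, not the PIC):

    #include <avr/interrupt.h>
    #include <stdint.h>

    // Extend the hardware 8-bit timer with a software counter that is
    // bumped every time the hardware counter overflows.
    volatile uint16_t timer_high = 0;

    ISR(TIMER0_OVF_vect) {        // fires every 256 hardware ticks
        timer_high++;
    }

    void timer_init(void) {
        TCCR0B = (1 << CS00);     // run timer0 from the CPU clock, no prescaler
        TIMSK0 = (1 << TOIE0);    // enable the overflow interrupt
        sei();
    }
    // The main code then combines timer_high with the hardware count
    // register to get a wider tick count.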
It's still a neat chip, though - and the entire instruction set could be defined on a single sheet of paper, so although it was assembly language programming it was a lot easier than x86 assembly programming.
You've read the article, right? None of the code the author gives needs a "64-bit count of milliseconds" or floating-point logic.
The last example (the one I mentioned in my comment) needs a single byte of RAM for state, and updating it involves one logical shift, one "or", and two or three compares and jumps. Easy to do even in assembly with 64 bytes of RAM.
Do you mean this code, from the article?
uint8_t DebouncePin(uint8_t pin) {
    static uint8_t debounced_state = LOW;
    static uint8_t candidate_state = 0;
    candidate_state = candidate_state << 1 | digitalRead(pin);
    if (candidate_state == 0xff)
        debounced_state = HIGH;
    else if (candidate_state == 0x00)
        debounced_state = LOW;
    return debounced_state;
}
That doesn't work if you've got more than one pin, as every pin's value is being appended to the same candidate_state variable. The fact that the author's correspondent, the author, and you all overlooked that bug might help you understand why some people find it takes a few attempts to get firmware debouncing right :)
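For what it's worth, a per-pin variant might look something like this (my sketch, not the article's; NUM_PINS and passing in the raw reading are assumptions):

    // Each input gets its own shift-register history and latched state,
    // so multiple pins no longer share one candidate_state variable.
    #define NUM_PINS 4   // assumption: number of inputs being polled

    uint8_t DebouncePinIndexed(uint8_t index, uint8_t raw) {
        static uint8_t debounced_state[NUM_PINS];  // latched outputs
        static uint8_t candidate_state[NUM_PINS];  // per-pin bit histories

        candidate_state[index] = (candidate_state[index] << 1) | (raw ? 1 : 0);
        if (candidate_state[index] == 0xff)
            debounced_state[index] = HIGH;
        else if (candidate_state[index] == 0x00)
            debounced_state[index] = LOW;
        return debounced_state[index];
    }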
I don't think anyone is overlooking anything, because it should be pretty clear this code is a template, meant to be modified to fit the project style.
In particular, that's not in assembly (we were talking about assembly), and it uses Arduino-style digitalRead and HIGH/LOW constants, which simply do not exist on the PIC12F629 or any other MCU with 64 bytes of RAM. Translating this to non-Arduino code would likely be done by replacing digitalRead with an appropriate macro and removing the "pin" argument.
But if you want to talk more generally about the atrocious state of firmware development, where people are just copy-pasting code from the internet without understanding what it does, then yeah... there seems to be something in firmware development that encourages sloppy thinking and wild experimenting instead of reading the manual. I've seen people struggle to initialize _GPIO_ without the helpers, despite this being like 2-3 register writes with very simple explanations in the datasheet.
That chip has a 200ns instruction cycle though. Whatever program you're running is so small that you can just do things linearly: i.e. once the input goes high you just keep checking if it's high in your main loop by counting clock rollovers. You don't need interrupts, because you know exactly the minimum and maximum number of instructions you'll run before you get back to your conditional.
EDIT: in fact with a 16-bit timer, a clock rollover happens exactly every 13 milliseconds, which is a pretty good debounce interval.
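Roughly like this - read_pin(), timer16_reset(), and timer16_overflowed() are placeholders for whatever the part actually provides, not real PIC registers:

    // Linear debounce: after the raw press, wait for one full rollover
    // of a free-running 16-bit timer (65536 ticks * 200 ns ~= 13 ms),
    // then confirm the input is still high.
    uint8_t wait_for_debounced_press(void) {
        while (!read_pin())
            ;                        // spin until the raw press arrives
        timer16_reset();             // start counting from zero
        while (!timer16_overflowed())
            ;                        // the program is tiny, so just spin
        return read_pin();           // still high ~13 ms later => real press
    }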
Sure! I'm not saying debouncing in software was impossible.
But a person working on such resource-constrained chips might have felt software debouncing was somewhat difficult, because the resource constraints made everything difficult.
This is basically the answer.
Note that a lot of the content that Jack posted on his site or in the newsletter was written years, if not decades, ago in one of his books or when he was writing for "Embedded Systems Programming" magazine. He was pretty good about only reposting content that was still relevant (he completely retired last year), but every so often you'd see something that was now completely unnecessary.
>No floating point support, and if you want to multiply or divide integers? You'll need to do it in software, using up some of your precious 1024 instructions.
Very much not true, as almost nobody ever used floating point in commercial embedded applications. What you use is fractional fixed-point integer math. I used to work in automotive EV motor control, and even though the MCUs/DSPs we used have had floating-point hardware for a long time now, we still never used it, for safety and code-portability reasons. All math was fractional integer. Maybe today's ECUs have started using floating point, but that was definitely not the case in the past, and every embedded dev worth his salt should be comfortable doing DSP math without floating point.
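For anyone unfamiliar, fractional fixed-point math looks roughly like this (a generic Q15 sketch of mine, not anything from an actual ECU codebase):

    #include <stdint.h>

    // Q15: values in [-1, 1) stored in an int16_t, scaled by 2^15.
    typedef int16_t q15_t;

    // Multiply two Q15 numbers: widen to 32 bits, then shift back down.
    static inline q15_t q15_mul(q15_t a, q15_t b) {
        int32_t product = (int32_t)a * (int32_t)b;   // Q30 intermediate
        return (q15_t)(product >> 15);               // back to Q15
    }

    // Example: 0.5 * 0.25 = 0.125
    // q15_mul(0x4000, 0x2000) == 0x1000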
Plenty of embedded microcontrollers in the 70s and later not only used floating point but used BASIC interpreters where math was floating point by default. Not all commercial embedded applications are avionics and ECUs. A lot of them are more like TV remote controls, fish finders, vending machines, inkjet printers, etc.
I agree that fixed point is great and that floating point has portability problems and adds subtle correctness concerns.
A lot of early (60s and 70s) embedded control was done with programmable calculators, incidentally, because the 8048 didn't ship until 01977: https://www.eejournal.com/article/a-history-of-early-microco... and so for a while using something like an HP9825 seemed like a reasonable idea for some applications. Which of course meant all your math was decimal floating point.
>Plenty of embedded microcontrollers in the 70s
Weren't those just PCs rather than microcontrollers?
No, though chips like the 80186 did blur the line. But what I mean is that different companies sold things like 8051s with BASIC in ROM. Parallax had a very popular product in this category based on a PIC, you've probably heard of it: the BASIC Stamp.
Intel 8052AH-BASIC. I loved the manual for that chip! Written with a sense of irreverence that was very unlike Intel.
Are you saying that it isn't true that there was not floating point support? That there actually was, but nobody used it? I don't see how that changes the thrust of the parent comment in any significant way, but I feel like I may be misunderstanding.
No. They're saying that instead of floating-point, fixed-point math was used instead. Floating point hardware added a lot of cost to the chip back then and it was slow to perform in software, so everyone used integer math.
The price of silicon has dropped so precipitously in the last 20 years that it's hard to imagine the lengths we had to go to in order to do very simple things.
>Are you saying that it isn't true that there was not floating point support?
NO, that's not what I meant. I said you didn't need it in the first place anyway since it wasn't widely used in commercial applications.
What's with the advice about interrupts and undebounced signals?
What does it mean that a flip-flop gets confused? What kind of undesired operation could that cause?
Because, quite honestly, if connecting directly means occasional transient failures, then having less hardware is a tempting tradeoff on small PCBs.
My test whenever I get handed someone else's code with a debounce routine is to hammer the buttons with rapid presses, gradually slowing down. That shows if the filter is too aggressive and misses legitimate presses. I also see strange behavior when they're implemented wrong like extra presses that didn't happen or getting stuck thinking the button is still held when it isn't.
What kind of line of work gives you the ability to discuss debounce routines as an everyday enough occurrence to speak with authority on the matter, if you don’t mind me asking?
Pretty much anything that involves direct conversations with hardware.
I build medical devices.
Be sure to read Jack’s mega treatise on low power hardware/software design if you haven’t yet.
https://www.ganssle.com/reports/ultra-low-power-design.html
One of the best practical EE essays I’ve ever read, and a masterwork on designing battery powered devices.
I saw this site in one of the comments on https://hackaday.com/2025/01/04/button-debouncing-with-smart...
I'm a big fan of debouncing in hardware with the MAX3218 chip. It will debounce by waiting 40ms for the signal to "settle" before passing it on. This saves your microprocessor interrupts for other things. It also will work with 12 or 24 volt inputs and happily output 3.3 or 5v logic to the microprocessor. It is pricey though at $6-10 each.
That chip is more expensive than having a dedicated microcontroller that polls all of its GPIOs, performing software debouncing continually, and sends an interrupt on any change.
Its price is its biggest drawback, but it also replaces any electronics used to run the switches at 12 or 24 V, which gets you above the noise floor if you are operating next to something noisy like a VFD. From the 6818 data sheet: "Robust switch inputs handle ±25V levels and are ±15kV ESD-protected" [1]
[1] https://www.analog.com/media/en/technical-documentation/data...
My thought is: This introduces latency that is not required (40ms could be a lot IMO depending on the use.) It's not required because you don't need latency on the first high/low signal; you only need to block subsequent ones in the bounce period; no reason to add latency to the initial push.
Also, (Again, depends on the use), there is a good chance you're handling button pushes using interrupts regardless of debouncing.
I guess I should rephrase. It saves all the interrupts except the one triggered after the 40ms delay. For every button press without hardware debouncing, you can get tens to hundreds of 1-to-0 and 0-to-1 transitions on the microcontroller pin. This is easily verified on an oscope, even with "good" $50+ Honeywell limit switches. Every single one of those transitions triggers an interrupt and robs CPU cycles from other things the microprocessor is doing. The code in the interrupt gets more complex because now it has to do flag checks and use timers (bit bashing) every time it's triggered, instead of just doing the action the button is supposed to trigger. None of this is to say one way is the "right" or "wrong" way to do it, but putting the debouncing complexity into hardware specifically designed to handle it, and focusing in firmware on the problem I am actually trying to solve, is my personal preferred way of doing it.
That seems like real overkill - it's a full-blown RS232 receiver _and_ transmitter, including two DC-DC converters (with inductor and capacitor) that you don't even use... Also, "R_IN absolute max voltage" is ±25V, so I really would not use this in a 24V system.
If you want slow and reliable input for industrial automation, it seems much safer to make one yourself - an input resistor, hefty diode/zener, voltage divider, maybe a schmitt trigger/debouncer made from opamp if you want to get real fancy.
Thanks for pointing that out. I realized I called out the wrong chip. I was actually trying to call out the MAX6818.
That's a neat chip, especially the MAX6816/MAX6817 version in a SOT23 package!
But yeah, very expensive for what it does. If my MCU was really short on interrupts, I'd go with an I2C bus expander with 5V-tolerant inputs and an INT output - sure, it needs explicit protection for 24V operation, but it also only needs 3 pins and 1 interrupt.
Edit: I meant to call out the MAX6818, not the MAX3218.
I read through a whole page, wondering when we are getting to legal stuff, before I went back and re-read the title. "Contact" not "Contract"....
I've actually never had much issue with switch bounce - I think many modern microcontrollers have built-in GPIO input circuitry which seems to help with this.
The Evoluent vertical mouse I use has a notoriously short lifespan because, after only a year or two of wear, the switches aren't debounced or aren't debounced for a long enough period. I demonstrated on an Arduino that additional debouncing logic could fix the issue transparently to the user, but never went further than that.
I've used that article several times in the past. I'll also note that many years ago, on projects which were not cost sensitive, we would often use the MC14490 IC for hardware debouncing (mentioned in part 2 of this article).
I just bought a horrible Lenovo keyboard with terrible debouncing. For some reason, it seems to only affect the vowels... is this sabotage?
By ancient Egyptians?
Wish I had known about this article last year when developers added debouncing to the Sensor Watch project. I had to learn a lot of this from scratch in order to review and merge in their changes.
I'm still running their code right now on my watch. It uses timers to allow the switch to settle before triggering input events. Dramatically improved its usability. Latency noticeably increased but without it interfaces which required holding down buttons simply didn't work reliably.
Debouncing doesn't need to delay a touchstart event, in general, so latency shouldn't really increase in a carefully designed system, especially if you can be animating in advance of the touchend event.