My gut is that there's a lot more to this, and that it's not broadly true.
I live in the UK and have worked from Australia a few times, and the latency is around 220ms. When typing over SSH, the latency is very noticeable. Not in terms of pressing a key and feeling lag in it showing up, but in a feeling of sluggishness when typing more than a few words at a time. I also notice this feeling in web apps that use React managed inputs for example.
My guess is that while 100-200ms isn't noticeable in one instance, adding that to every interaction in a system that has multiple interactions per second is noticeable.
A similar comparison is that I can't detect audio and visuals being incorrectly synced with much less than 200ms precision for individual beeps, however if I watch someone talking on a TV that's 30ms out I can tell something is wrong (even if I can't tell which direction).
If you read a little closer this is in his "counter-arguments" section stated as a claim made by other people - a claim he doesn't buy. If you look at some of his other articles about latency he points to user studies and research that shows the 100-200ms claim is not true.
So he's making the opposite argument you think he's making - he agrees with you.
The rule of thumb floating around when I last worked on physical input devices was to keep latency under 50ms so people wouldn't get frustrated. I don't have any sources, but it becomes a bit more obvious when you look at it as fractions of seconds and then compare that to how slowly seconds actually tick by on a clock.
I remember Dan posting that he typed at ~120wpm somewhere. 120wpm is 0.002 words per ms or 0.4 words per 200ms. Wouldn't one notice if they're almost halfway through a word and nothing has shown up on the screen?
For certain applications, even 50ms is quite a lot. When using MIDI devices and playing piano through the computer, I certainly notice anything above 20ms.
We may be slow to react to sudden events, but we are very good at noticing the lag for predictive events (keyboard typing, piano playing, ...).
Martin Molin of Wintergatan fame recently posted some experiments regarding latency on his YouTube channel[0]. From watching those videos it becomes quite obvious that your statement is definitely correct.
Yeah this rule of thumb was for low fidelity things like hand radio buttons.
> We may be slow to react to sudden events, but we are very good at noticing the lag for predictive events (keyboard typing, piano playing, ...).
It does make sense in my lay person’s understanding that the chain of eye -> brain -> hand for reactive events would be slower than brain -> eye + hand for proactive events.
> Wouldn't one notice if they're almost halfway through a word and nothing has shown up on the screen?
There is quite a delay between the eye detecting something and it being consciously processed to the point of being used in decision making. Away from built-in reactions (move hand away from burning sensation and so forth) that are not consciously controlled there is a sometimes surprising amount of latency in our view of the world. It is surprising because the brain does a rather good job of smoothing it out with guesses/predictions such that you generally don't notice.
Only when things fall outside the range of patterns you naturally cope with or expect because they are consequences of your own actions, do you really become aware of them. So he might not notice usually, but would if an error or something else unexpected occurred.
This is why 100ms, or even sometimes 200ms, is fine for many things, while other things cause irritation despite delays of 40ms or sometimes significantly lower. The brain smooths out what it can; the rest sets off warnings of potential trouble.
I work on VR systems, where latency is of paramount concern.
There is definitely a big difference between consciously noticeable latency and latency that has an effect on people.
Yes, to get to the point where a person can notice that latency is occurring and can call it out, you need to be in the 100ms range.
But if you want to make a VR system that doesn't make most people physically ill, you have to get "motion-to-photon" latency down to around 13ms (75hz). Even at that range, you still get a relatively large portion of people experiencing issues; I'd estimate 20%. At 90hz, you can get that down to about 10%. At 120hz (less than 9ms!), I haven't seen specific numbers, but my gut feeling from my own work with users is that it's low enough to be worth the extra work to hit that target. I suspect that we can't be fast enough on this issue, and that 240hz, maybe even 1000hz, will be necessary to get down to 1%.
Luckily, most VR systems today have asynchronous reprojection systems to decouple view movements from application updates (to a degree). It is an extrapolation system, so there are some artifacts, particularly around the edges of objects, and especially if there is a huge difference between application update rate and view update rate. But I've seen applications running updates at 30fps, which should be headache-inducing for the vast majority of people, yet because they're running on headsets with a 120hz refresh rate, all you really notice is that animated objects look a little choppy.
Oof. I play piano (as a hobby, but seriously). When practicing on an electric piano, anything more than 20ms of latency is VERY noticeable for me, possibly even 10ms. There's a reason people still use wired headphones for practicing on electric pianos. Bluetooth just does not work. The latency interrupts the emotional state of flow and it's extremely jarring.
Typing on a computer is a different story. The latency perception via the eyes is very different than through the ears.
I play a lot of video games. In Tekken e.g. Heihachi has a move called OTGF. The input sequence is listed as f,N,df+2. Each of those movements must be done in 16.6ms or it doesn't work. On an 8-way joystick, there are only 4 switches for the cardinal directions. That means the final movement of df+2 requires hitting the d switch, f switch, and 2 button within the same 16.6ms window to count as simultaneous.
There are players that can do OTGF move every time they try.
My audio interface for connecting my keyboard/guitar to my PC is USB-C. I tried USB2 devices and find they have too much latency. It's very noticeable. USB-C has 1ms latency or less.
It's a world of difference playing FPS shooters online between a 50ms ping and a 100ms ping. If you care about your aim, 100ms ping is barely playable.
> My audio interface for connecting my keyboard/guitar to my PC is USB-C. I tried USB2 devices and find they have too much latency. It's very noticeable. USB-C has 1ms latency or less.
Are you sure that's not just the USB 2.0 interface being shit? USB 2.0 is certainly fast enough to have sub-ms latency.
I've actually wondered how to go about testing USB interface latency; maybe I should test mine when I have a slow weekend...
>There are players that can do OTGF move every time they try.
That has nothing to do with latency. Human nerve conduction velocities just aren't quick enough for things like that to be done closed-loop under the control of the brain - it's just physically impossible.
Although the person doing the action may feel otherwise, that's just a well-trained sequence of actions performed open-loop.
I remember playing CS at the turn of the century on French or UK servers with 100ms or a bit more without problems, could it be that the problem is that now we are sending more stuff than before?
I have friends who still play (making it 20 years). They're at a level where playing on public servers gets them banned for cheating (some are/have been semi-professional).
Any latency, whether it's hardware, software or server, drops their performance significantly. It's at a point where they only play/practice under prime conditions, as their sensitivity is such that they will throw themselves off by practicing with more latency than they normally do.
In FPSes in particular, every millisecond of latency typically translates into peeker's advantage due to how lag prediction works. Or sometimes a disadvantage, in netcode that is poorly implemented or optimized for poor connections. It also depends on the movement velocity.
In games with really bad netcode it can get ridiculous, influencing the entire gameplay - for example in PUBG beta, which had about 200-300ms of actual client-to-client latency due to the very low server tickrate, you had to avoid running directly towards the corner to look around it, as enemies could see you running out in the open for a split second then rubberbanding back.
I play guitar, but I'm in the same boat (probably with a longer signal chain) -- 10ms is about the limit before it's noticeable. The idea that people don't notice 100ms is crazy talk; anyone playing 16th notes at a pretty reasonable 120bpm is hitting a note every 125ms.
Our brains are much more sensitive to phase/delay in audio sources than any other stimuli. There is a reason the lip sync setting on many home theater setups can be changed in increments of 1ms or better.
One place where latency has been felt very obviously for me is the MMO World of Warcraft. I’ve played it for many years from several different geographical locations, with reported latency between my computer and their servers ranging from as low as 20ms to as high as 240.
I can pretty reliably tell if I’m playing on a PST or EST (North America) server, with the latter having 50-60ms latency compared to a PST server with 20-30ms due to differences in distance. It’s not massive, but on EST servers everything is ever so slightly less responsive, and it’s particularly visible in combat where many abilities are being used rapidly and outcome is dependent on order of operations.
The difference starts becoming obvious when crossing 80-90ms of latency and is unmistakable at 120ms+.
I would agree that it would be difficult to tell the difference in a one-off test but when it’s something you’re actively doing for long stretches of time it’s easy to pick up on.
I played WoW on dialup well into the lifespan of WotLK. Average ping in the 320ms range. I learned to play around it to a degree, 'queueing' my spell casts by triggering them when my current one was close to completion, such that the command would hit the server about the time my previous spell completed. I could reliably chain-cast spells this way once I got used to it, using the order of operations to my advantage as best as possible, as you say.
Once I moved and got onto broadband, and found myself with a miraculous 100ms ping, I basically had to learn the game all over again as my instincts would cause me to try to do actions I couldn't yet do, since I wasn't quite done casting a spell, harvesting a node, etc.
Later on I changed servers and was below 80ms ping, and the difference was again palpable but not game changing.
> I also notice this feeling in web apps that use React managed inputs for example.
I can tell when there's an extra render in a React app, on an input or just when something renders slightly later than expected like an update is being done in a useEffect after the first render, and I agree that it's really annoying. However, the input latency of a React app should be less than 16ms. There's nothing intrinsic to React (or any JS framework) that means there has to be any appreciable lag. Obviously there are badly written ones that do things very poorly, but if you find one like that you should vote with your feet and move to a competitor.
> Obviously there are badly written ones that do things very poorly, but if you find one like that you should vote with your feet and move to a competitor.
What if there are no well written ones? No competitor to move to? I for one haven't seen a single React app that wasn't annoyingly slow, under the standards we're discussing here.
> There's nothing intrinsic to React (or any JS framework) that means there has to be any appreciable lag.
And yet, for common SPA frameworks, it seems it's always there. So if it's not the framework and the language, then something common in the overall process of frontend dev is a likely candidate.
> What if there are no well written ones? No competitor to move to? I for one haven't seen a single React app that wasn't annoyingly slow, under the standards we're discussing here.
If that's the case then React is a huge problem.
I've seen lots of apps that nail 60fps that are written in React though, so I don't think it is the case.
That does not show you can distinguish < 20 ms latency. Latency is specifically the time between you taking an action and getting a result. I would doubt that most people would be able to tell the difference between 8ms and 16ms of latency between a button and light or speaker.
There are two other ways scrolling on a 60Hz vs 120Hz phone would allow you to tell the difference: the increased smoothness of the scroll (not related to latency at all) and how far the content lags behind your finger. The content lagging behind where you initially touched does allow you to experience the latency, but that is more of a visual only experience than a temporal one.
In the case of this particular interaction it's just easy to notice because you have a direct view of the pen position and what it is drawing, so any latency means the pen is moving but there is nothing under it.
It's different where you don't have the same senses for both. "Pressing" a key is feel + sound but the reaction is a change in the image; you don't see the keyboard and screen at the same time, so it feels more tolerable.
> "Pressing" a key is feel + sound but the reaction is a change in the image; you don't see the keyboard and screen at the same time, so it feels more tolerable.
It's not really more tolerable, unless you never experienced low latency here, and your brain is used to the effect of a key press being perceptually immediate.
The author is rebutting this. "This line of argument is like saying that you wouldn’t notice a flight being delayed by an hour because the duration of the flight is six hours."
I was messing around with CSS animations the other day and even at 100ms you can definitely see the animation happening. One of these days I'll get around to making a site to A/B test how fast they are noticeable.
Maybe there are circumstances where 100ms+ latency is not noticeable, but I suspect they are few and far between if you're actually paying attention.
> even at 100ms you can definitely see the animation happening
Seeing it happening is very different from noticing lag. Here's a gif I like to reference when trying to determine how noticeable a lag spike of a given number of milliseconds is. For me it becomes unnoticeable at a 30ms lag spike. But 100ms is very plainly obvious.
Edit: Also, movies are typically filmed at 24FPS, which is around 41ms per frame. There's something called the "Soap Opera Effect", which is what you get when watching an older movie on a newer TV that automatically interpolates frames and boosts it to 120FPS (8.3ms per frame). People have noticed this so much that it has its own name. That's a difference of roughly 33ms per frame, and people may not immediately be able to tell what's wrong, but they know something is off.
For me it's noticeable down to around 20ms, although of course it becomes less noticeable the lower you go. Surely the existence of high FPS screens is clear evidence that humans can see latency spikes? 60 FPS is ~16ms. 120 FPS is ~8ms.
The linked comment is poorly written — that is actually the claim which the author of that comment is arguing _against_. He has the correct answer which is that you need to be somewhere around 10ms to be imperceptible but explains where some of the different claims are coming from.
There are a number of confounds for measuring things like this because we're often not talking about exactly the same thing: for example, the times needed to perceive motion, size, color, orientation, and recognize something are all different (to a first approximation, the relative latencies are roughly what you'd expect if the most important selection criteria was a relatively weak frugivorous ape trying not to get eaten) and there are measurable differences between how you respond to things you're predicting will happen, continuous motion, or truly unexpected events. This also varies considerably for novel and trained behavior — e.g. if you're waiting to shoot when you see a pixel change, your body is primed and presumably quite well practiced vs. something random happening in a video. All of that means that something like extra keyboard latency is especially noticeable because you're initiating the action, it's extremely practiced, and you thus have a well-honed expectation for when the response should be visible.
Games are a bit different – how the latency is measured depends on the game, and typically doesn't account for the rendering or local state update required to render correct data, may not account for the server's computation of the new state, won't account for input latency, etc. I agree that it's very noticeable in games, but a quoted 20ms network latency on a game could probably be anywhere from 40 to 200ms of effective latency, so I don't think they're a great example to use here.
What absolutely everyone is measuring today in games is motion to photon latency, with the help of a high-speed camera. This is the only thing that actually matters, along with motion-to-photon over network (with RTT measured separately), so you can tell how well both client-side and server-side parts are optimized, how high the server tick rate is etc. For the server part, different actions such as movement or attacking could even have wildly different latencies, depending on how it's implemented in the code.
The most responsive games achieve sub 30 ms motion to photon locally, with proper hardware.
> "It’s not a coincidence that the quickest keyboard measured also has the shortest key travel distance by a large margin."
High-end keyboards often have user changeable key switches and there are hundreds of different switches available, each tuned for a different mix of specified parameters including actuation depth, pressure (force) and many more. Switches made for competitive gamers are generally linear (no click), with light force and very shallow actuation depth. For example, some users love the feel and sound of clicky Cherry Blue switches but they're obviously a poor choice from a latency perspective. On my personal keyboards, I even have different switches installed on certain individual keys (Enter, Spacebar, etc) which have different actuation depth and force. Keyboard geeks and pro gamers can be very picky and precise about their preferred switches and the first step in getting serious about keyboards is swappable switches and key caps.
I appreciate the rigorous approach to testing the entire latency pathway but I think the key switch aspect of the chain is already very well specified, characterized and studied and is in the control of users who would value objective latency tests. Personally, I'd be far more interested in tests that measure the latency of the rest of the path after electrical switch actuation. Otherwise, a keyboard manufacturer may have the shortest post-actuation latency but it's invisible to these tests due to being masked by the subjective choice of a non-linear, full travel switch the manufacturer may have chosen due to a "feel" preference.
Sounds like the wooting keyboard is something you’d appreciate: hot swappable switches, but no need, as you can configure the actuation points with their software: https://next.wooting.io/wooting-60he
The Corsair K65 achieved the fastest latency of 0.1ms. By comparison the Apple Magic Keyboard with TouchID had a latency of about 27ms, both wired and bluetooth. Pretty wild that the Apple keyboard is 270x slower!
Now I personally use the Logitech G915 TKL, low-profile. The 1.3ms latency is excellent and I love the key feel.
Typing in Vim/Sublime feels instant compared to your run of the mill IDEs. It's painful having to work in those behemoths, esp. considering the fact that I'm literally waiting for them to put text into a buffer.
That difference is less than 150ms, and I hate it.
EDIT:
Here's a video depicting latency. The difference between 10ms and 1ms is monumental.
150 ms is the time it takes for a person to see the input and then do something (like pressing a key or blinking). That's a two way communication with processing (thinking) time included. The actual input reaction, as the time it takes for your brain to register something, is faster.
In addition to that, the reaction time does not actually matter. You would be able to see a sub-reaction-time delay because your brain has a way of timing and synchronizing events. Look at it this way. You send a letter on March 1. You receive a reply on March 10. It doesn't matter how much later you actually read the reply, on March 11, 15, or in April - you still would know that it took 10 days to get the reply.
Reaction time is a different measurement than perception time. The linked article goes into this.
You can absolutely tell the difference between 150ms and 1.3ms. Hell, people can easily tell the difference between a 60 FPS framerate and 30 FPS. That's a difference of only 16 ms per frame.
Reaction time is the time it takes to react to a (randomly generated) stimulus. So it measures the delay of our inputs and the delay of our outputs.
You are not reacting to a stimulus. You are producing an event (a keypress) and waiting to see the result.
The brain sends the message; it hits the fingers some milliseconds later, but the brain already knows what to expect and is already watching. So the net effect is "okay, I knew I pressed the key, why has nothing changed?"
I understand that key travel time is included in the latency measurements to facilitate the camera-based measurement, but wouldn't it make more sense to measure latency purely in terms of electrical signals? For example, measuring the time between the first connection of the circuit in the keyswitch to the time at which the USB packet including the keypress is sent across the wire? This seems like it would be equally possible to test with a second logic analyzer, without relying on a high FPS camera. Many people who use "special" mechanical keyboards are well aware of the actuation points on their keyboards, and understand that there are tradeoffs between travel time, physical feedback, and so on.
Put another way, unless you think gently resting your fingertips on the top of a key should count as a "press", then it doesn't make sense to include the key travel time in latency.
Yeah, I think so. Relevant quote from the article below. How are the keys being pressed, manually? We could have skipped all this by just noting that key travel time dominates in his experiment. If you really want to know the fastest keyboards, look at what the winners of typing competitions use.
>A major source of latency is key travel time. It’s not a coincidence that the quickest keyboard measured also has the shortest key travel distance by a large margin. The video setup I’m using to measure end-to-end latency is a 240 fps camera, which means that frames are 4ms apart. When videoing “normal" keypresses and typing, it takes 4-8 frames for a key to become fully depressed. Most switches will start firing before the key is fully depressed, but the key travel time is still significant and can easily add 10ms of delay (or more, depending on the switch mechanism). Contrast this to the Apple "magic" keyboard measured, where the key travel is so short that it can’t be captured with a 240 fps camera, indicating that the key travel time is < 4ms.
Yeah, this is a pretty big issue that disproportionately affects mechanical keyboards, failing to account for the fact that the "ready" position in the context of gaming on a mechanical keyboard likely involves the key being slightly depressed, just above the actuation point (think resting your fingers on WASD).
The author is concerned about perception of latency. You perceive the time from when you make the decision to press a button to when you see the result. From this perspective mechanical vs electrical is irrelevant.
That's assuming that the user always starts from a fully released key. On a medium-weight mechanical keyboard, for latency-sensitive actions you'd likely be hovering the key just above its actuation point (one of the reasons I actually prefer tactile keys for gaming: the ideal keyboard holds the weight of my resting fingers just above the actuation point).
> A common response to this is that "real" gamers will preload keys so that they don't have to pay the key travel cost, but if you go around with a high speed camera and look at how people actually use their keyboards, the fraction of keypresses that are significantly preloaded is basically zero even when you look at gamers. It's possible you'd see something different if you look at high-level competitive gamers, but even then, just for example, people who use a standard wasd or esdf layout will typically not preload a key when going from back to forward. Also, the idea that it's fine that keys have a bunch of useless travel because you can pre-depress the key before really pressing the key is just absurd. That's like saying latency on modern computers is fine because some people build gaming boxes that, when run with unusually well optimized software, get 50ms response time. Normal, non-hardcore-gaming users simply aren't going to do this. Since that's the vast majority of the market, even if all "serious" gamers did this, that would still be a rounding error.
That's the best-case scenario, not the average case. It only regularly happens with the gun/mouse in shooters, since in games that have abilities to cast you don't know exactly which one you might cast next at a given moment.
I notice very significant latency with my Moonlander Mk1, but that is most likely caused by the software and fancy layered keymaps I have created.
When I use the Moonlander, mainly for development, I end up typing a lot slower but producing the same output. I also have a lot fewer keyboard shortcut contortions, so my hands are happier.
Still it feels like we are trying to maximize the wrong thing, especially when writing software. It still feels very low level and mechanical, when the concepts and patterns are pretty well established. We just don't have a great interface for calling those up and making minor adjustments to suit our specific need.
I haven't used copilot, but from seeing devs on Youtube who have it enabled, I'm not sure it is the answer. The visual distraction from seeing possible but often slightly wrong solutions appear and then change seems like trading one chore for another.
> I haven't used copilot, but from seeing devs on Youtube who have it enabled, I'm not sure it is the answer. The visual distraction from seeing possible but often slightly wrong solutions appear and then change seems like trading one chore for another.
This feels reminiscent of my experience with most search systems, whether it's the start menu in Windows or Google search:
- Type the first two or three letters of what I want
- What I want appears first in the list
- "Oh good", I think, as my brain processes that, but I still type the next letter as I was already in the process of doing so
- Search results change thanks to that letter at the same moment I hit enter
> Search results change thanks to that letter at the same moment I hit enter
This happens so often, everywhere - browsers, spotlight, mobile keyboards, etc.
When Spotlight is behaving itself, it tends to work really well. But it has bugs (or inconsistent wrong behaviors) that cause it to sometimes suggest web searches instead of the app I was trying to launch. And there seems to be no way to actually, completely disable web search from Spotlight.
What does work really well is the JetBrains IDE open-anything search. Not only does it do a good job with normal typing, but if you remember to just type some of the letters of the thing you want to open, in sequence, it amazingly chooses the right one. "ores" -> OrderRefundService.
> I notice very significant latency with my Moonlander Mk1, but that is most likely caused by the software and fancy layered keymaps I have created.
I'm not noticing all that much latency with my Ergodox EZ, but I'm also running QMK master[1] (rather than ZSA's fork via Oryx). It would be useful to note the firmware + revision in these tests.
40ms sounds like a lot of latency for twitchy games like StarCraft II (for which I've extensively customized my layout[2]); while I'm definitely enjoying the improved ergonomics in the game, I'm starting to wonder how much it's actually impacting my performance.
> Is ZSA's build known to have latency issues compared to QMK?
ZSA is using a patched QMK - they even let you download the exact source for each firmware build they make for you. At the time when I switched to vanilla QMK (that was already some years ago), ZSA were veeery far behind master; running the latest QMK release fixed a couple of issues for me (like hotplugging the halves), so I guess there could be other improvements? No idea really.
> Would a faster microprocessor help
In the MCU world, latency and clock speed can have a very linear relationship - until they suddenly don't. The microcontroller's job is very simple really: scan the key matrix at a certain frequency, perform key debouncing, compare the current state with the previous, and craft a USB HID packet with key press/release events.
So having twice the clock speed could theoretically let you scan twice as often, so it might let you cut the latency in half. Except we have those pesky physics getting in our way! For simplicity let's assume we don't have split halves (where there's an extra serial connection slowing things down); I'm no EE so I only grasp these concepts at the surface level, but signals take time to propagate, and long traces on the PCB (and cables too) have a tiny bit of their own capacitance. (Capacitors are like really fast, really tiny batteries - but they still take a tiny amount of time to charge and discharge, which does all sorts of interesting things to high-frequency signals.)
On top of that, the electrical connection that the pieces of metal make inside the switch is never perfect at the exact instant the switch is supposed to (de)register: a couple of electrons might start jumping across the air gap even before contact is made, and the physical connection is subject to normal wear, amplifying the "edge case" effect over its lifetime - which all together means we have to actually spend a certain amount of time "looking" at the state of the switch, to let it settle and make sure we got it right.
We end up spending so much time letting physics do its job that in a trivial firmware, the MCU is actually spending a significant amount of time... just sleeping. Which means we were later able to cram in all sorts of madness like per-key RGB lighting or status displays, and never decreased the poll rate.
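To make that concrete, the loop described above looks roughly like this (a simplified sketch only; the matrix size, pin helpers and report function are made-up placeholders, not any real firmware's API):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define ROWS 5
    #define COLS 14

    /* Hypothetical hardware helpers provided elsewhere in the firmware. */
    extern void select_column(uint8_t col);                  /* drive one column line     */
    extern uint8_t read_rows(void);                          /* sample the row pins       */
    extern void send_hid_report(const uint16_t state[ROWS]); /* queue a USB HID report    */

    static uint16_t prev[ROWS];

    /* Called in a loop, as fast as the scan rate allows. */
    void keyboard_task(void)
    {
        uint16_t curr[ROWS] = {0};

        /* 1. Scan the key matrix one column at a time. */
        for (uint8_t c = 0; c < COLS; c++) {
            select_column(c);                 /* the signals need a moment to settle */
            uint8_t rows = read_rows();
            for (uint8_t r = 0; r < ROWS; r++)
                if (rows & (1u << r))
                    curr[r] |= (uint16_t)(1u << c);   /* key at (r, c) is down */
        }

        /* 2. Debouncing of the raw readings would happen here. */

        /* 3. Compare with the previous scan; only build and send a
              USB HID report when something actually changed. */
        if (memcmp(curr, prev, sizeof curr) != 0) {
            send_hid_report(curr);
            memcpy(prev, curr, sizeof curr);
        }
    }

As described above, the waiting baked into steps 1 and 2 is where the firmware's time actually goes; the comparison and the USB packet are cheap by comparison.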
Where would these 40ms come from then? Well I wouldn't get near the problem without an oscilloscope, and unfortunately I don't have one.
> I'm just starting to get into custom keyboards.
Then I recommend studying the original ErgoDox firmware & build instructions! It's extremely straightforward compared to a beast like QMK, which actually uses a whole RTOS.
This post inspired me a few years ago to start a very impractical learning side project. Most keyboards don't particularly prioritize latency: developing a keyboard is easy using a USB stack from the manufacturer, but that stack might not prioritize latency.
I'm working on making my own FPGA-based USB 2.0 HID device that will let me minimize any latency in the stack. The PCB layout is mostly done, and I'm working on a DECA FPGA board to prove out the USB HID software now. I started this pre-COVID though, when MachXO2s were inexpensive and available, so I have no idea who I will need to fight these days to get parts when I get to that point.
There is a lot of wiggle room in how you implement your debounce algorithm to optimize latency, too. I'm excited to control the whole stack to try to make this as fast as possible. The Logitech Lightspeed products came out after I started this project though, and are far more practical for most people. I have one of those at home and will try to benchmark and compare them when I get there.
I have written USB device firmware for AVR, and read docs for a bunch of microcontroller families with different internal HID device units that I wanted to port to.
If you do take suggestions for your next version, there are some hardware features that I would like:
* Let the MCU replace the next unsent input report, atomically. Many USB devices can only queue reports, not replace them.
* Allow the microcontroller to know when an input report has been received. (ACK)
The first would make it possible to get the lowest latency with reports, such as keyboard reports, that contain only the state of momentary switches.
The second would make it possible to ensure that an event has been received, even if the host would poll at a slower rate than what the firmware works at.
The situation is especially complex for mice, where the reports have inputs that are relative to the previous report.
I'm not familiar with that definition; I typically see debouncing used for any means of filtering out the state while it is changing due to the mechanical action.
I've seen simple BSP debounce example code that affects latency for both press and release. For example, you can require that the IO hasn't changed in X ms before accepting it as settled and reporting the event up. That approach incurs latency on both press and release. In fact, the first answer I see on Google does this:
https://www.beningo.com/7-steps-to-create-a-reusable-debounc...
You could report the event right away when there is an activation, and just not allow a deactivation event to be reported until the debounce time has expired. I suspect this is what you mean by debounce only applying on deactivation, but I'll bet some of the keyboards tested on that list are not doing this.
I mean, sure, if you're looking for a general-purpose way of doing things then that example is fine. If you have a normally-open switch and a latency-sensitive application then there's one pretty clear implementation.
I've seen implementations where the CPU gets the pulse, waits for the debounce interval, checks whether the pulse is still present, and only then sends the "button is on" signal, which obviously is terrible for latency.
Proper debounce sends the signal immediately, then ignores the state for a few milliseconds.
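As a rough illustration, that "report first, then ignore" approach looks something like this (a sketch only; the timer and report functions are hypothetical names, not any particular firmware's API):

    #include <stdbool.h>
    #include <stdint.h>

    #define DEBOUNCE_MS 5

    extern uint32_t millis(void);               /* hypothetical millisecond tick counter */
    extern void report_key(int key, bool down); /* hypothetical: queue a HID key event   */

    struct key_state {
        bool     down;          /* last reported state          */
        uint32_t locked_until;  /* ignore edges until this time */
    };

    /* Call this every scan with the raw (possibly bouncing) switch reading. */
    void update_key(struct key_state *k, int key, bool raw_down)
    {
        uint32_t now = millis();

        /* Inside the debounce window: the contact may still be bouncing, ignore it. */
        if (now < k->locked_until)
            return;

        /* Edge detected: report it immediately (no added latency), then lock the
           key briefly so the bounce can't register extra presses or releases. */
        if (raw_down != k->down) {
            k->down = raw_down;
            report_key(key, raw_down);
            k->locked_until = now + DEBOUNCE_MS;
        }
    }

The press itself is reported on the first detected edge; the debounce window only suppresses the spurious transitions that follow it.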
> Debouncing is about preventing inappropriate deactivation, and is unrelated to time to initial activation.
It's both. Contacts generate noise on both press and release.
I thought it's about preventing inappropriate reactivation? As in, the key slightly bouncing back and forth when you press it, thereby registering two (or more) activations per press
I suppose we should really talk about changes in state rather than activation/deactivation as it's the same problem. But the basic point is that detecting an edge on an "armed" switch is all that's necessary to fire the related event and confirm the state change definitively.
The debouncing logic is about how we determine when to re-arm the switch for the next transition - i.e. how to reject the "false" reversions to the prior state. So it shouldn't have any impact on the "physical action to key-down event" latency in systems with a reasonable steady state. I guess for cases where "pound on the same key again and again as quickly as possible" is in scope, it does?
If the test is between the keyboard and screen "For desktop results, results are measured from when the key started moving until the screen finished updating", then the classic "Transatlantic ping faster than sending a pixel to the screen?" should also be considered:
It is not uncommon to see desktop LCD monitors take 10+ 240 Hz frames to show a change on the screen. The Sony HMZ averaged around 18 frames, or 70+ total milliseconds.
This whole thing seems so technical and hardware focused. Scan rates, debounce algorithms, USB packets, etc. and then it comes out halfway through the blog post that he's actually largely measuring how long it takes his finger to depress the key!
He says that 16-32 ms of the latency ("4-8 frames") is attributed to his finger speed. In other words, he starts measuring from physical contact with the key rather than switch contact.
You'll see that the histogram of the keyboards they've tested suggests a bi-modal latency distribution. About half of those tested show essentially no latency (0-3ms) and the other half show 10-15ms latency. There are outliers too, of course, but note that the finger speed measurements dominate these numbers.
It seems this post is really about key travel, but in disguise.
Unless I'm misunderstanding something, I see a possible issue with the experiment setup.
> The start-of-input was measured by pressing two keys at once -- one key on the keyboard and a button that was also connected to the logic analyzer.
In order to get an accurate measurement, wouldn't you need to directly connect the analyzer to circuitry on the underside of a specific key, as well as the USB output, and measure the diff? The article addresses this by mentioning it's only possible to get average latencies with their setup, but I wonder if this is why the keyboard with the shortest measured latency is also one with an extremely low travel distance.
If the major source of latency is key travel distance, does this really impact the user experience? My feeling is that a keyboard with longer travel distance feels more "solid" than one where the key hardly goes down at all, even if this adds some extra latency. Otherwise we could just type on virtual keyboards with zero travel distance, but then there is no tactile feedback when the key is pressed.
This post is why I believe the Apple Magic Keyboard is actually the best gaming keyboard currently available. I noticed it when I tried gaming on traditional mechanical keyboard, and then on an AMK. The Apple keyboard feels distinctly snappier, lets me react faster and take actions in quicker succession.
Now when I watch a real-time strategy gaming streamer or something, and hear them clacking furiously at their fancy mechanical keyboard, I wince. They're losing 10-20ms to key travel on every action they take!
Another great deep dive into latency of typing is this one [1] which considers input latency from the keyboard, the keyboard interface, the editor/IDE, and even the window manager.
Pavel found that Windows Aero added 16.6ms of extra latency (but this has probably been fixed by now).
It’s not quite ‘latency’ but one keyboard attribute I’ve come to notice is the key “spring-back” time. I’m not sure what that’s called. I recently switched from an older Apple wireless keyboard to a Logitech MX Keys keyboard. The feel is nice but the depressed keys feel like they “bounce back” too slowly for me leading to frequent typos.
It would be interesting to see concrete tests. Gamers have long preferred wired keyboards and mice supposedly for latency reasons, with the only wireless models they’ll accept being proprietary 2.4Ghz RF protocols designed specifically for lowering latency (e.g. Logitech’s Lightspeed lineup).
I have a Lightspeed mouse as part of my gaming setup and it does feel very responsive, easily good enough for my needs, but I’m not a chart topping FPS player who’d be pushing these devices to their limits.
The big problem I have with bluetooth that wires resolve is the seemingly-saturated 2.4Ghz in proximity to my mac. My mice (logitech MX Master3 and others) have a tendency to fritz out when they're on the other side of my desk from my laptop. Ditto keyboards. It's silly-frustrating.
What's funny is my bluetooth headphones work fine the whole time. They generally only freak out when I'm close to a running microwave.
It's a bit wild how variable experiences are between users with any flavor of wireless (Bluetooth, WiFi, RF). Over the past 18 years or so they've all been mostly trouble-free for me, save for software weirdness like how Windows 10/11 will cut off the first 0.3-1.5s of audio played to a Bluetooth device, but I've encountered several people over the years that wireless has been nothing but trouble for.
I randomly purchased a Logitech G305 (with Lightspeed) to replace my wired B100 (I wanted a mouse with a similar shape). I figured I'd sacrifice latency for losing the wire.
Turns out that this mouse feels exactly the same as using a wired mouse to me (not gaming). Lightspeed is absolutely not a gimmick!
(I too would like to see tests done with wireless devices.)
The author assumes that humans can't feel the difference between 100 and 200 ms latency, which is simply not true. When I had to move from gameport MIDI interfaces to USB ones back in the day, I could immediately feel the added latency, which was just a few milliseconds more, not a hundred or worse, but to a musical ear it was clearly noticeable. They also only measure USB and wireless devices (that is, USB and USB+wireless ones, since the dongles use USB), which of course are the main limitations wrt latency. Those keyboards can't beat, for example, any 30-year-old PS/2-based keyboard, because the PS/2 port was essentially a UART directly connected to the system bus, without any of the overhead a complex protocol such as USB would add.
It seems to me that very low latencies could be obtained with USB too, but that would likely require a total rewrite of the drivers, and engineering keyboards to take advantage of low-latency drivers just like USB music keyboards do.
Edit: please disregard any attribution to the author of the false assumptions about human latency perception; that was a citation from other sources they were also refuting. Thanks to all HNers who corrected me.
No, the author specifically questions that assertion.
> "[...] it’s common to hear people claim that you can’t notice 50ms or 100ms of latency because human reaction time is 200ms. This doesn’t actually make sense because there are independent quantities. This line of argument is like saying that you wouldn’t notice a flight being delayed by an hour because the duration of the flight is six hours."
Separately, I'm very much in the "can notice sub-50ms latencies," but a lifetime of work in digital audio and video will do that.
> The author assumes that humans can't feel the difference between 100 and 200 ms latency
The author does not believe that, the section you’re referring to is titled “counter-arguments to common arguments that latency doesn’t matter”. If you read the paragraphs that follow you will see a rebuttal to the argument that 100ms is insignificant.
The author certainly doesn’t assume that. ”Humans can’t notice 100ms or 200ms latency” is given as an example of a common claim, and the author gives a counter-argument.