Hacker News

UI latency has gotten horrendous, both desktop and web.

That is what people are experiencing: 500 ms to 3000 ms delays (or more) for basic UI interactions, frozen UIs, jerky/laggy autocomplete and UI renders. Like the classic ‘button is a different button by the time you see the old button and click on it’.

On incredibly lightly loaded and overpowered hardware.

Everyone has been focused on some core algorithm, and completely ignoring the user experience. IMO.




I remember learning that you get about 100ms for a basic UI interaction before a user perceives it as slow. And you get about 1s for a "full page navigation"; even if it's a SPA, users are a bit more understanding if you're loading something that feels like a new page.

Getting under 100ms really shouldn't be hard for most things. At the very least it should be easy to get the ripple (or whatever button animation) to trigger within the first 100ms.


It is mental just how different the video game and (web/desktop front-end) realms are.

In the former, one can have a complicated and dynamic three-dimensional scene with millions of polygons, gigabytes of textures, sprites, and other assets being rasterised/path-traced, as well as real-time spatial audio to enhance the experience, and on top of that a real-time 2D UI which reflects the state of the aforementioned 3D scene, all composited and presented to the monitor in ~10 ms. And this happens correctly and repeatedly, frame after frame after frame, allowing gamers to play photorealistic games at 4K resolution at hundreds of frames a second.

In the latter, we have 'wew bubble animation and jelly-like scroll, let's make it 300 ms long'. 300 ms is rubbish enough ping to make for miserable experiences in multiplayer games, but somehow this is OK in UIs.


Agree it's like two separate worlds. Games and the web aren't 1:1, though, in whether visual responsiveness blocks a task.

Games need ultra-responsiveness because rendering frames slower essentially blocks further user interaction, since players are expected to provide a constant stream of input. Being 'engaged' essentially requires constant feedback loops between input and output.

On the web, the task of reading a webpage doesn't require constant engagement like a game does. UI (should) behave in more predictable ways, where animation is only there to draw association, not provide novel info. Similarly, UI animations typically don't (or shouldn't) block main-thread responsiveness and should be interruptible, so even low frame rates don't break the experience in the same way.

But still, your point stands; it's crazy what we've come to accept.


I also expect my everyday tools to be responsive e.g. if a "desktop" application lags while typing I'm uninstalling that shit (if there is an alternative sigh).


I find VS Code unusable for this reason - typing is like communicating with an app in a parallel world.


It’s a good thing we don’t talk about Eclipse, hah.

How can a UI framework be abused so heavily that it’s that frustrating to try to use?


> It is mental just how different the video game and (web/desktop front-end) realms are.

There is absolutely nothing mental about it, and I'm saying this as someone who's worked on a couple of game projects myself.

Somehow people making these comparisons are never willing to put their money where their mouth is and give random web pages the same level and amount of access to system resources as they give to those "photorealistic games at 4K resolution at hundreds of frames a second".


Usually these issues are caused by doing work in the UI thread (easy, when it’s ‘cheap’) synchronously.

All UI frameworks end up doing UI interactions on a single thread, or everything becomes impossible to manage, and if you're not careful it's easy to not properly set up an async callout or the like with the correct callbacks.

It is easy to make a responsive, <100 ms UI; it's often harder to keep it that way.
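The "work on the UI thread" failure mode can be sketched without any real framework. Below is a minimal, hypothetical toy frame loop in JavaScript (all names invented, synthetic "cost units" standing in for milliseconds): one monolithic callback blows the frame budget, while the same work split into small chunks that yield back to the loop never does.

```javascript
// Toy frame loop: run queued tasks until the frame budget is spent.
// "Cost units" stand in for milliseconds; 16 units ~ one 60 Hz frame.
const FRAME_BUDGET = 16;

function runFrame(queue) {
  let spent = 0;
  while (queue.length > 0 && spent < FRAME_BUDGET) {
    spent += queue.shift()(); // each task returns its cost
  }
  return spent;
}

// One monolithic task: 100 units of work in a single synchronous callback.
// The loop has no choice but to run it to completion before painting.
const blockedFrame = runFrame([() => 100]);

// The same 100 units split into 2-unit chunks that yield between frames,
// so no single frame exceeds its budget.
const chunkedQueue = Array.from({ length: 50 }, () => () => 2);
const frames = [];
while (chunkedQueue.length > 0) frames.push(runFrame(chunkedQueue));

console.log(blockedFrame);        // 100
console.log(Math.max(...frames)); // 16
```

In real code the chunking is what things like `setTimeout`, `requestIdleCallback`, or moving the work to a worker buy you; the sketch just shows why the synchronous version janks.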


The only time I see delays like that is when something like a network data fetch or database lookup has to happen. I've written a ton of GUIs in JavaScript/Python and they're all indistinguishable from the C++ Qt apps I've written: basically faster than a human can hope to perform another operation, short of queuing up keyboard commands via a "keyboard only" interop (say, in Emacs). When I have seen slowness like that, the latency was always, as I said before, a data fetch in some format over a slow database/network connection.


No doubt I'm picking an easy target here, but it takes like 700 ms to switch back and forth from the Chat tab to the Calendar tab in the Teams desktop client on my boring work-issued laptop (i.e. commodity hardware). This is repeatable, first time, every time. It doesn't even bother to animate anything or give feedback that a UI interaction has occurred until 500+ ms after the click.

Some things do run very quickly, for sure, but so many of the high touch pieces of code out there from big name corps have some of the worst performance. Hundreds of millions of people use Teams, and many of them use it a lot throughout the day. You must just be getting lucky in what apps you use on a regular basis.


I think they’re referring to actual implementation, which in their experience would require some sort of massive architectural stupidity to produce a slow UI on desktop.

And you’re referring to your (and most other people’s) daily experience, which is of a major software firm producing daily used software with a super slow UI on desktop.

With a little cynicism, these two views are quite compatible.


I wonder to what degree people are actually experiencing this.

I'm on a 5-year-old Intel MacBook, and I think in my daily experience, core software (browsers, emacs, keynote, music) is pretty snappy.

I do routinely work with some extremely frustratingly slow software, but it's pretty stark how much its performance stands out as, well... exceptionally bad.


> button is a different button by the time you see the old button and click on it

This is somewhat exacerbated by, ironically, UI mostly being async these days. Back then, if an app was slow, it would usually block the UI thread outright, so you couldn't click anything until processing was done. But these days the UI is usually "responsive" in the sense that you can interact with it, and lag instead manifests as the UI getting out of sync with the actual state (and constantly trying to catch up with it, causing the problem you describe).


I think this problem is a bit overblown, as we tend to get very angry at the relatively few offending apps and fail to notice the cases where it just works. So it might just be human bias amplifying things.

Like, there are websites that absolutely suck, but that's mostly due to some idiotic management decision to add 4 different tracking bullshit libraries and download 6 ads per click. Thinking about the regular software I use... could it be better? Certainly. But it is very far from unusable.


UI lag is a problem, for sure -- and so are some of the dynamic layouts that are meant to "solve" it[1].

One thing I've noticed lately is mouse lag. Like, say, in Ye Olde Start Menu on Windows: Move the mouse to the Start menu, and press the mouse button down. Nothing happens. Release the button, and then: Something happens.

The menu is triggered on button-up events, not button-down events. This adds a measurable delay to every interaction.

Same with Chrome when clicking on a link: Nothing happens until the button is released. This adds a delay.

I mean: Go ahead and try it right now. I'll wait.

And sure, it might be a small delay: After all, it can't be more than a few milliseconds per click, right[2]? But even though the delay is small, it is something that is immediately obvious when opening the "Applications" menu in XFCE 4, wherein: The menu appears seemingly-instantly when the mouse button is first pushed down.

[1]: It took me more than three tries to pick a streaming device from Plex on my phone yesterday. I'd press the "cast" button, and a dynamic list of candidate devices would show up. I'd try to select the appropriate candidate and before my thumb could move the fraction of an inch to push the button, the list had changed as more candidate devices were discovered -- so the wrong device was selected.

So I then had to dismantle the stream that I'd just started on the wrong device, and try (and fail) again.

[2]: But even small delays add up. Let's say this seemingly-unoptimized mouse operation costs an average of 3ms per click. And that 100 million people experience this delay, 100 times each, per day.

That's nearly an entire year (347 days) of lost human time for the group per day, or 347 years lost per year.

Which is 4.4 human lifetimes lost per year within the group of 100 million, just because we're doing mouse events lazily.
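The arithmetic in [2] checks out. A quick sanity check in JavaScript (the ~79-year lifespan used for the "lifetimes" figure is my assumption; it isn't stated above):

```javascript
// Back-of-the-envelope check of the "lost human time" figures above.
const msPerClick = 3;       // assumed per-click delay
const clicksPerDay = 100;   // clicks per person per day
const people = 100e6;       // affected users

const msLostPerDay = msPerClick * clicksPerDay * people; // 3e10 ms
const daysLostPerDay = msLostPerDay / 1000 / 86400;      // ms -> s -> days

// Days lost per day equals years lost per year (same ratio of units).
const yearsLostPerYear = daysLostPerDay;

// Assumed average lifespan of ~79 years for the "lifetimes" figure.
const lifetimesPerYear = yearsLostPerYear / 79;

console.log(Math.round(daysLostPerDay));   // 347
console.log(lifetimesPerYear.toFixed(1));  // 4.4
```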


> Same with Chrome when clicking on a link: Nothing happens until the button is released. This adds a delay.

UI buttons have activated on the mouse-up event since forever ago. There's a reason: to give the user the choice of backing out of an action even after the user has pressed the corresponding button. They can just move the pointer off the button, release, and the action will not be committed.

John Carmack made this same point recently. John Carmack is wrong. Maybe the trigger on your gun in Doom needs to respond on mouse down, for the immediacy and realism -- that's how real gun triggers work, no takesie-backsies once it's pulled. But especially for potentially destructive UI actions such as deleting or even saving over a file -- or launching the nuclear missiles -- you want to give the user every opportunity to back out and only commence the action once they've exhausted all those opportunities.

It's been that way since the 1984 Macintosh -- since a time people remember as having much snappier UIs than now.

Besides which, worrying about the few milliseconds lost each time a button waits for the release is pennywise and pound-foolish; there are much larger, more egregious sources of lag (round trips to the server, frickin' Electron, etc.) we should work on first.
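The back-out behaviour described above amounts to a tiny state machine: pressing down "arms" the button, moving the pointer off disarms it, and only an armed release activates. A hypothetical DOM-free sketch in JavaScript (all names invented):

```javascript
// Sketch of the mouse-up activation convention: a click only commits if
// the pointer is still over the button when the mouse is released, so the
// user can back out by dragging off before letting go.
function makeButton(onClick) {
  let armed = false; // pressed down on this button, not yet released
  return {
    pointerDown()  { armed = true; },
    pointerLeave() { armed = false; }, // moving off disarms the press
    pointerUp() {
      if (armed) onClick();
      armed = false;
    },
  };
}

let fired = 0;
const btn = makeButton(() => fired++);

// Normal click: down then up over the button -> activates.
btn.pointerDown();
btn.pointerUp();

// Backing out: down, drag off, then release -> no activation.
btn.pointerDown();
btn.pointerLeave();
btn.pointerUp();

console.log(fired); // 1
```

An ondown-activated button is the same machine with `pointerDown` calling `onClick` directly, which is exactly what removes the chance to back out.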


The mouse event thing is a bit misleading.

Mouse interactions have (at least) 5 different events: hover, down, up, click and double click. The interactions you describe all happen “onclick”, which requires a down and up event to happen consecutively.

I get your point, that small delays add up, but mouse events aren't a great example, IMO. Each of the events has a purpose, and you want to make sure that a button is only activated with an onclick, not just an ondown.


I appreciate the clarity in nomenclature.

> Each of the events has a purpose and you want to make sure that a button is only activated with an onclick, not just an ondown.

That's stated as if it is a rule, but why is that a rule?

And if it is a rule, then why does Chrome -- for example -- handle this inconsistently?

Clicking on a different Chrome tab sure seems to happen with ondown, but clicking on the "reply" button below the text box I'm typing into works with onclick.


Doesn't Windows do the same when clicking a window titlebar?

Maybe it's so if you're going to start dragging the window/tab you can see what is in it.


Clicking anywhere in a background window (including the titlebar) in Windows 10 responds by immediately raising and focusing the clicked-on window when the mouse button is first pushed down.

The inconsistency is bizarre, since some here say that clicking-and-releasing before a resultant thing is allowed to happen is a hard-and-fast rule of GUI implementation that has been in place for decades, but that just doesn't seem to be the case at all.


There are no hard and fast rules, only conventions that are context dependent.

Buttons normally only respond to the “on click” event. This lets you move off the button if you change your mind mid-click.

Window focus could be (I haven’t tested) an “on down” event because you might want to see what’s behind it while doing something else before you release the button (like drag it around). But focus used to be “on hover”, where just moving your mouse over a window brought it to the foreground. “On up” wouldn’t make sense because if you wanted to do something like move the window around, you couldn’t as you’ve now released the mouse.

It all depends on what you’re trying to do and the OS. Each OS has a design language that “tries” to bring some consistency to event handling. But ultimately, it’s up to the application to handle many things.


I occasionally click a button only to change my mind mid-click. So I can move the mouse pointer off it and then let go of the button, in effect avoiding the operation. This ability to back out is good for command-like events. Changing tabs, not so much perhaps, but it was probably done that way traditionally for consistency.

X Windows was the odd one out in the old days; it would show a context menu on mouse down. Made it feel a bit unrefined.


Which, if any, GUIs have ever activated anything decisive on mousebutton-down events? IIRC, everyone uses mousebutton-up events as the imperative factor.


XFCE 4 is that way for most things that I've tried right now: The menu panels and desktop context menus work on button-down.

Switching Chrome tabs works on button-down. Activating the three-dot menu in Chrome also works on button-down (which is good) but then it needlessly fades from 0% to 100% opacity (which is a crime against nature).

I didn't even have to go digging in the memory hole to find this. It's right here in front of me.


Sure, many menus open on mousebutton-down events, but nothing is activated or started until the mousebutton-up event.

> Switching Chrome tabs works on button-down.

True; the same in Firefox. That is interesting. Perhaps it’s because simply selecting a tab is considered a safe and reversible operation. I did use the word “decisive” for a reason.


In fact, specifically for context menus this is a feature: In KDE ones, you can button-down to open the context menu and then release above the menu entry to activate it, making the entire process a single-click affair. In standard Windows ones it takes two full clicks, which is one of those tiny inconveniences that drives me crazy when using it. Of course, the Win way also works in KDE.


Windows does this for top-level menus for the same reason.

Or at least it did in classic Win32 UI. You can still see this in action if you open, say, Disk Management. But in Win11 Notepad (which is modern XAML), holding the mouse button down will open the drop-down submenus, but you won't actually be able to activate an item by releasing the mouse button while hovering over it, so it seems that someone partially copied the design without understanding its purpose.


Same in macOS: a long press can substitute for right click in some things I’ve tried. So obviously you can’t use mouse down for that unless you have a “time out”, which defeats the purpose. Button presses are more complicated than what people expect, especially on embedded systems.


Don't games do this all the time?


I’m not sure games count as GUIs as pertaining to this discussion. GUIs in games would qualify, sure.


> The menu is triggered on button-up events, not button-down events.

I just tried this on Firefox and can confirm similar behaviour. Some of the things I clicked on appear to have some alternate long-press function despite having a pointer device with multiple interaction modes.

It seems we have condemned our desktops to large latencies on the off-chance someone might try and interact with them using touch.


Mouse click considered complete only on mouse-up rather than on mouse-down was a thing before web browsers even existed, much less before touch UIs. Windows worked that way for as long as I can remember, and IIRC so did macOS.



