The Decline of Usability: Revisited (datagubbe.se)
229 points by myth2018 on Nov 15, 2023 | 189 comments



Way back in 2008 or so, shortly after the "ribbon" arrived in the MS Word interface, I was teaching a CS class that had a unit on UI, so we spent a bunch of time deconstructing it and comparing it to the menu-based interfaces we were all (then) still used to. I was careful to point out that unfamiliarity with the new interface wasn't necessarily the same as it being bad, especially if with a little additional training and practice we could become more efficient and/or it aided in discoverability for newer users. Although I didn't use the same words as the OP I was unironically suggesting that MS must have done a bunch of research to show that these interfaces would be an improvement in usability.

Joke's on me! Fifteen years later and I still can't fucking find anything I want to do in the ribbon. I use various MS products at least a couple times a week (though day-to-day is more Libreoffice when it can't just be vim) and despite regular use I still find that I routinely have to google "how do I X in Microsoft Word/Excel/etc" for values of X that are very standard tasks that I am deeply confident I would be able to find on my own in a menu. And once I do find out how to do something, even if it's something I'm going to be doing a bunch, there's no indication of a shortcut or keyboard equivalent that I could train myself on, to become an actual power user of the interface. It's all completely maddening.


> Fifteen years later and I still can't fucking find anything

Of course people can't. That ribbon was a shite idea 15 years ago, and still is.

A textual menu is, by sheer necessity, hierarchical. Hierarchy imposes order. Text menus also aren't subject to some design guru's ever-shifting idea of what X means in the form of a picture, making them, also by necessity, descriptive.

Descriptiveness and order mean discoverability.


The ribbon is just as hierarchical as a menu. Obviously it's organized in tabs, but in each tab you also have categories.


https://en.wikipedia.org/wiki/Ribbon_(computing)#/media/File...

Okay, please show me the categories in this picture, and the logic of their relation to one another.

For example, the ribbon in the picture is under the "Home" top-level category. It has a subcategory called "Editing". But there is another top-level category named "Insert". Why is that not under "Editing"? Is inserting something in the document not editing? Why is "Editing" not a top-level category? What is "Design" (another top-level category), and how does it relate to Editing, Inserting and Drawing? Why is "Find" under "Editing"?

Also, the entire Ribbon is RIDDLED with scrollbars and drop down menus. Some buttons are themselves text-labeled, others are not, there are different sizes and layouts.

Also, to the left of the "Home" top-level tab is another giant button that has only a logo and apparently leads to further functionality. And above the top-level bar is another bar with what seems to be save, back, forward, and what I can only assume is another dropdown menu.

So please, do tell, where and how, and by what principle is this organized?


The principle by which this is organized, and mind I'm not saying it's a good one, is available space. Screen space is extremely valuable, so the ribbon contents are constrained in two ways. The first is that the ribbon cannot be so large that it fills up the entire screen, because then you cannot see the thing you are working on. The second is that, once a size for the ribbon was chosen, not using some of those pixels was a crime against usability, so any empty space had to be filled, whether the functions are appropriate in that location or not.

This, then, leads to a massive overuse of tabs (because the available space is vastly insufficient), and incomprehensible icons (because they take up less space). Don't worry though: there will be tool tips, like little mini-menus, so eventually you can figure out what all this stuff is.

A menu bar doesn't have to deal with any of these problems: it only takes up space while you are actively engaging with it, instead of permanently obscuring your content. It can therefore take as much space as it needs, so functions can have a clear, textual label. It also has a natural hierarchical structure, so things don't need to be placed in non-obvious locations. It's a vastly superior solution to the problem the ribbon tries to solve, with its only flaws being that it isn't 'hip' and doesn't work well on a phone.

Menu bars were the single greatest thing that ever happened to user interfaces: they made functionality discoverable in a way that earlier applications (which relied on countless magic key combos) could only dream of. Before the menu bar, an application needed a manual, and you _had_ to read it. After the menu bar, you could just look in the menu, and it taught you what functions were available, how to access them, and even how to access them from the keyboard if you needed them frequently. Getting rid of it was the stupidest thing designers could have done.


> Screen space is extremely valuable, so the ribbon contents are constrained in two ways.

With the menu I had more screen space.


And good luck with all that clutter on the title bar to find that only one pixel-wide point you can grab to drag the window around.


Visual Studio (not VSCode) is a case study in how to sprawl across different menus and require menu hunts to find what you want.


I've had to work with VS for the first time in my career recently. I love how they don't have a scrollable tab bar (like the JetBrains IDEs or even bloody Notepad++), but you can have it spawn row after row of tabs and crowd out the code itself.


VS is an expert tool. Not a word processor for the masses.


Not really an excuse to be scatterbrained like that.

For either program.


Professional tools can optimize for efficiency at a cost to discoverability and new users. After all, they'll be used for many hours every week by largely the same cohort of expert users.


I've always kinda liked the ribbon, since it came out in 2007.

Traditionally, you got menus, plus the most important features doubled as rows of small icons. Icons lack text, so if you can't find the one you want you need to hover over them one by one; they also aren't organized into clear named categories, and they all have the same importance. Menus are complete and organized, but not visual (it's just text, although some programs add small icons), and all features are given the same importance as well.

The ribbon is exactly like a menu (complete and well organized by named categories), but in addition it's visual (you get large icons added to the text), and buttons are sized differently depending on their importance. Also it's collapsible.

The only drawback I can find is that if you know exactly what you want, it takes 2 clicks with the ribbon vs 1 click on an icon.


If you hold down <Alt> (I tend to think of it as Meta due to Emacs-induced brain damage), all of the targets on the ribbon should get associated keyboard characters, so you can quickly drill through the layers of the ribbon. After you’ve seen the shortcut, you can use it again without having to wait for any screen transitions. This used to work back in Office 2013, and I assume it still does…


>hold down <Alt> (I tend to think of it as Meta due to Emacs-induced brain damage)

When I hear ALT I reach to press ESC due to TECO-induced brain damage.

Not even joking at all. I'll finish the way you finish a TECO command; ALT ALT

https://texteditors.org/cgi-bin/wiki.pl?TECO $$


>I was careful to point out that unfamiliarity with the new interface wasn't necessarily the same as it being bad, especially if with a little additional training and practice we could become more efficient and/or it aided in discoverability for newer users.

I mean I'd argue it would be bad even if this was true. A new design comes at the cost of all the accumulated muscle memory and skills in navigating the old interface. A minuscule increase in discoverability for new users is irrelevant because users spend a lot more time at the higher end of the learning curve when it comes to professional software. There is a reason why we don't create a new interface to control cars every 10 years.


I think this kind of user abuse is intentional. If people put the time, energy, and emotional effort into understanding a piece of software, they're less likely to jump ship.


Wouldn’t that suggest it is accidental? Making it harder to understand a piece of software seems to cause people to look for easier to understand alternatives.


If the software has a small marketshare and is struggling to get new users, then this would be true. When the software is already hugely dominant, the same dynamic does not apply. Trying to lock-in customers when they're not customers yet is a bad idea, but when they're already your customers, it makes business sense.


I think MS Office products provide superb usability for those who make and/or influence these decisions: those who are installing the software. The end user isn't really in that loop.


> there's no indication of a shortcut or keyboard equivalent

What do you mean? Besides the Ctrl+X-style shortcut in the tooltip of a ribbon button, once you type the Alt-H prefix to activate the Home ribbon tab, every button shows an AL, AC, AR, AJ... tooltip that allows you to drill down further, e.g. to Align paragraph Left, which you can eventually remember as the sequence Alt-HAL.

So it has both great key nav concepts: shortcuts and key chords.

(the main issue here is that it's hard to change those)


I've come to the realisation that to "use the ribbon" you're just meant to search for stuff. Typing "insert symbol" in the Word ribbon is much faster than trying to find the button.

And ok sure, I admit that's a valid paradigm. When there are a lot of options, you need to support a search function. Witness the Start Menu, or the options dialog in IDEA. Option search is vital once you reach a critical mass of options.
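
As a rough illustration of why option search scales so well (just a sketch of my own; the command names and the matching rule are made up, not how any particular product does it): a flat list of command names plus a cheap substring/subsequence match already makes hundreds of options reachable from a few keystrokes.

    def is_subsequence(query: str, text: str) -> bool:
        # True if the characters of `query` appear in `text` in order (fuzzy match).
        it = iter(text.lower())
        return all(ch in it for ch in query.lower())

    def search_commands(query: str, commands: list[str]) -> list[str]:
        # Exact substring hits first, looser subsequence hits after them.
        q = query.lower()
        substring_hits = [c for c in commands if q in c.lower()]
        fuzzy_hits = [c for c in commands
                      if c not in substring_hits and is_subsequence(q, c)]
        return substring_hits + fuzzy_hits

    commands = ["Insert Symbol", "Insert Table", "Align Paragraph Left", "Save As PDF"]
    print(search_commands("sym", commands))    # ['Insert Symbol'] (substring match)
    print(search_commands("aleft", commands))  # ['Align Paragraph Left'] (subsequence match)

Something of that shape, presumably with much better ranking, is what a command palette or the ribbon's search box is doing at heart.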


> I've come to the realisation that to "use the ribbon" you're just meant to search for stuff.

I think the idea is that you are supposed to customize the location of things you use frequently that aren’t already in the most prominent locations and search for things you use infrequently.


> Joke's on me! Fifteen years later and I still can't fucking find anything I want to do in the ribbon.

Yep. Despite having to use it mostly daily for the entire time, the ribbon is still a major impediment to actually accomplishing stuff. It incorporates almost every usability sin in the book.

I consider it one of the worst UI elements to have been invented.


Imo the ribbon was a good answer to a bad question. The question being "How do we make sure users can find all those features we put in successive new releases"?

They came up with a solution that makes it easier for users to find new functionality, at the cost of degrading existing workflows.

And the irony of the whole ribbon debacle is that they already came up with the correct solution (as proven by history) in Office XP - just put in a search bar for features.


> The question being "How do we make sure users can find all those features we put in successive new releases"?

I don't agree it was a good answer to that question. The ribbon makes things harder to find, not easier.


Resonates.

In vscode I just cmd-shift-p for almost anything.

macOS also has a menu bar search feature in the help menu. Used it a couple of times on Apple's own Numbers. Worked quite well.


Microsoft Office features a search box at the top of each application where one may search for a feature (for example, "tables") and a drop-down selection is presented as the user types.


Office has a similar search bar (though it should move to the center of the screen instead of being in the corner), and it's great, but the huge benefit of ribbons is progressive key chords with visual feedback that, when memorized, provide much faster and guaranteed access to a given function (a search bar can have multiple matches, so you can't be certain that typing a word and hitting enter will result in the right thing).

(The Mac's menu bar search is worse as it's not typo-resistant, doesn't allow arrowing up/down right away when the text box is focused, and includes various "Help" junk instead of showing nothing if it can't find anything.)


That was fun to read, as I did not expect the second paragraph to go that way. And the curse word is used in just the right place: it doesn't jump out at me but sneaks up within the sentence to give it panache.


> In discussions like these, there's usually at least one person who shows up to demand data or research, but curiously never presents anything to back up their own claims about modern UI superiority.

It's easy to make fun of particularly bad modern examples, but I personally think people really take for granted that the baseline of usability for many areas has hugely increased over the last 20 years.

My recollection of some things that made older GUIs less usable, especially for non-techy people: vital functionality had to be hunted for in menus or (much worse) right-click context menus, annoying modals everywhere, scary technical language, having to read the manual, lack of undo/redo/autosave, complex install processes, information overload sometimes, confusing file systems (like c:\ and weird file extensions), sharing and collaborating on files was cumbersome.

This is really obvious if you've ever had to help friends/family use something over the phone, as you realise how much knowledge you have to build up to confidently use some software.

It's really amazing what non-techy people can easily do on their phones and computers now, like real-time collaborative editing of a document (with automatic saving and unlimited undo) over a video call. Things like this used to be a nightmare to setup and use, and now it's taken for granted.


The menubars of yesteryear could get out of hand sure, but I’m not convinced that the patterns that came to replace them are clear wins.

For instance, menubars at least have the benefit of standardized behavior between programs and platforms. To me that’s better than what we frequently have today, where it’s not unusual for functions to get buried 3 modals deep with zero commonality between programs, aside from maybe involving a hamburger menu (which doesn’t benefit from being a standard, because it’s a junk drawer with contents varying dramatically between apps).


Calling the 'hamburger' a 'junk drawer' is just so apt!

Often one clicks it out of hope rather than expectation.


I mentally "tch" (the annoyance sound) whenever I go to GitHub or other sites to login, and then open KeePassXC, and the login button hides behind their junk drawer just because my window resized (tiling window manager).

There is still screen space to put it outside, ffs. Right next to the "Sign Up" button. Your 32x32px logo surely doesn't need all that padding, does it?


AutoCAD is a great example of menu bar explosion. You have tens of thousands of potential buttons if you loaded up everything.

Button spam ends where CLIs begin. When something truly becomes that complicated, it's time to embrace text commands.


AutoCAD does have a command line, though. The most common commands have names one or two characters long, and you can use Space instead of Enter, so it's really comfy: you can dedicate one of your hands entirely to the mouse, and use the keyboard with the other one.

Once activated, these commands start asking for arguments, which you can select with similar short abbreviations, or pick a point with the mouse.

You also have customizable toolbars, which can be placed both horizontally and vertically around the drawing area, and you can pick which buttons you want exposed and which you don't need (often).

The menus are mostly to help you explore the available functionality while you are still a beginner. Also, having a fixed location for a piece of functionality makes it easier to refer to it in a book, for example.

This was twenty years ago, though. I remember there was a push to make it more Windows-looking with every new release (It's Unix native, IIRC). Maybe, by now, it's a joke. I don't know.


> The menus are mostly to help you explore the available functionality while you are still a beginner.

Yes, this is one of the most valuable parts of a proper menu system: it's basically an index of everything a program can do, and it can even be made searchable (like the menus of any app using the native menubar on macOS are).

Menus are also an amazing hook point for accessibility utilities and make things like implementing a HUD UI that works across apps (like Ubuntu’s Unity did) trivial.


Yes, like having a command palette (in VS Code, IntelliJ, and now in Chrome) really helps.


Or in Emacs for 40 years.


> the baseline of usability for many areas has hugely increased over the last 20 years.

I genuinely think the opposite has happened. Most of the things you cite as being problems with UI of the 2000s are really better applied to software of the late 80s/early 90s, although in all eras there have been both fantastic and terrible UIs.

But from a usability perspective, I think things peaked in the early 2000s.

> It's really amazing what non-techy people can do easily on their phones and computers now

It truly is! But that's about functionality improving, not about usability improving.


It's also about usability improving, because touchscreen models of interaction are less complicated to reason about than mouse-based interactions.

On edit: really, HN? After years of articles showing computer-illiterate people using iPads and phones for tasks they were not using computers for, and probably after seeing plenty of one-year-olds able to use iPads, etc.,

you're downvoting someone for pointing out that touchscreens are easier to understand than using a mouse.


A few years ago, my then employer, due to a certain client's requirement, had everyone take a certification course on application security. One of the things that stuck in my head was the course advocating for cryptic error messages. The reason given was that the message shouldn't highlight a particular kink in the armor (if you will) that an able hacker could then try to exploit. Too vague and cryptic was best.

I admit, it makes a lot of sense. Annoying as a user, but necessary for application safety and security.


That's just security-by-obscurity and doesn't actually buy you anything except a speed bump for a hacker. It was a bogus argument from proprietary software vendors against open source a couple of decades ago, and it is a bogus argument for web services, too.

The presence of an error at all is a tell for the hacker as they search the surface area of the service's API; making the wording unclear is simply anti-user (sometimes quite literally, when these errors are used as part of anti-fraud measures and shut down accounts without informing the user of what they even did wrong).


I mean showing the exact path to the configuration file likely isn't a good idea, so there is likely a mix of user friendliness and avoiding information leakage.


The messaging to the user should be actionable, so the exact filename going to them doesn't make any sense, but giving them a clear sentence of what exactly went wrong and what should be done to fix it if possible, or a UUID (or "Guru Meditation" value if you're feeling old school) to give to the helpdesk that can then be used to look up all of the relevant information on the other side is reasonable.

We were talking about obfuscating what the user did wrong and giving them a misleading message to somehow improve security. Saying that giving them information they can't actually use (like the path to a configuration file in a proprietary web service) is what we're discussing is moving the goalposts.

I think this is probably taking the advice about not letting people on the sign-in window know if a username/email exists in the service or not (to determine whether or not it is worth spending the time trying a list of potential passwords for another user and access data they shouldn't) and expanding it without understanding the nuance. Before they have signed in you don't know who or what is accessing the login path and therefore there's much less confidence that it's a legitimate user. Once the login is successful and that auth token is being used, though, the confidence is much higher and obfuscating details of the relationship between the company and that particular user is pretty strictly anti-user since the user can no longer be certain if the implicit contract of services provided will continue as expected or not. (Couple that with network effects, migration costs, etc, and the relationship becomes even more lopsided.)


You can't avoid showing the exact path to the configuration file. When your software opens the file, that's visible.


Yeah, I love the Linux "File not found" without a file name and path that was searched.

It does a lot more to prevent fixing bugs than to prevent exploiting them.


> I mean showing the exact path to the configuration file likely isn't a good idea

Why? This is not self-evident to me.


Now I should say: in web-facing applications, not user-run applications on a computer. Maybe that makes more sense.


There's a difference between "hey user, don't do that" and something going wrong under the hood.


And you don't think anti-fraud teams use error logs triggered by users as a signal for potentially banning them?

These teams are incentivized to eliminate fraudulent accounts that cost the company money and are pressured/punished when their tools produce false-negatives (fraudulent users that are considered okay), but get no such pushback on false-positives (okay users that get flagged as fraudulent), and accounts that are triggering errors in the backend service(s) can look a lot like someone actively trying to hack it. Basically any sort of anomalous behavior that correlates with negatives for the business get flagged by these tools, and doing so unjustly is not an explicit goal, but it isn't really punished within the corporation.

(The false-positives do get negative feedback in the rare instances when it blows up on social media, so these teams often include a whitelist of high profile accounts to just skip over but still impact the regular users capriciously, only "solving" the false-positive problem insofar as it impacts the business.)


If the user is authorized to take the action, and the error in question isn't related to authorization or authentication, then the error message should be clear and actionable.

The action may be "contact support" with a trace ID, but it should still be clear. When it's something the user can fix, and the application already knows the user is permitted to do what they're trying to do, the application should provide as much information as it can to help them complete the task.
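
A minimal sketch of the pattern being described (my own illustration, not any particular product's code; the handler shape, messages and logger name are all made up): keep the full technical detail in the server logs, return something clear and actionable to the user, and hand them a trace ID they can quote to support.

    import logging
    import uuid

    logger = logging.getLogger("app")

    def handle_request(do_work):
        try:
            return {"status": 200, "body": do_work()}
        except PermissionError:
            # Authorization-related failures stay deliberately vague, per the thread above.
            return {"status": 403, "body": {"message": "You are not allowed to do that."}}
        except Exception:
            trace_id = str(uuid.uuid4())
            # Stack trace, file paths, etc. go to the server logs only.
            logger.exception("request failed, trace_id=%s", trace_id)
            return {"status": 500, "body": {
                "message": "Saving your changes failed. Please try again, "
                           "or contact support and quote this ID.",
                "trace_id": trace_id,
            }}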


> vital functionality had to be hunted for in menus or (much worse) right-click context menus

Those are bad, but magic touch-sensitive areas are even worse. The article mentions scrollbars as a clear example.

> scary technical language

This could be bad, but frankly a technical error or code that you can google is a lot less bad than a generic "oopsy whoopsy something broke".


> This could be bad, but frankly a technical error or code that you can google is a lot less bad than a generic "oopsy whoopsy something broke".

Technical language where it's appropriate is fine. For errors, it's easy enough to show a friendly error message at the top with more technical details underneath if it could be useful.

Actually, I forgot about double-clicking, which was everywhere on Windows. People would be confused about whether single or double-click would do something different and when to do which.


> For errors, it's easy enough to show a friendly error message at the top with more technical details underneath if it could be useful.

What exactly is the point of an error message being "friendly"?

Does a sad cartoon face with an "Oops" in a speech bubble make users feel better? No. Not really. Because the user wants the thing to function, and if it doesn't, the user may want someone to help him, and if all that someone gets to work with is a little cartoon picture, that's not going to result in a happier user.

If there is an error in my app, I make it EASY for the user to see, copy and send it. Yes, I have written apps where an exception results in the stack trace being displayed in the actual app.

You know what these "unfriendly" errors caused? Countless very happy users, whose problems were quickly solved by knowledgeable staff who got actual error messages and stack traces to work with.

> People would be confused about whether single or double-click would do something different and when to do which

And did double clicks go away as a result? No. No they did not.

And btw. neither did menu bars. We still have them. Only now, we have them IN COMBINATION with ribbons etc., and they are sometimes themselves hidden behind some element.


I mean friendly in the sense "Use human-readable language. Error messages should be plainspoken using legible and readable text (many writing apps can give you feedback on a message's readability). Avoid technical jargon and use language familiar to your users instead. The Web's most common error message, the 404 page, violates this guideline. Hide or minimize the use of obscure error codes or abbreviations; show them for technical diagnostic purposes only.":

https://www.nngroup.com/articles/error-message-guidelines/

> And did double clicks go away as a result? No. No they did not.

Double click has largely gone away, yes. I see double click in desktop file managers still but it's mostly gone elsewhere. It never became a thing on touch screens because it's not discoverable (which is why it's not great on desktop either) and it's very rare to see in web interfaces. Related:

https://blog.codinghorror.com/double-click-must-die/

https://www.nngroup.com/articles/when-to-open-web-based-appl... ("Double-Click")


> Avoid technical jargon and use language familiar to your users instead.

The problem with that: Such messages are rarely helpful in diagnosing and actually fixing the problem, which is the point of error messages in the first place.

The 404 error message is short, precise, and (unless the server misuses HTTP codes) tells me EXACTLY what is wrong: 404, resource not found. The thing that the URL I entered denotes isn't there.

That is a helpful error message. "Oh dear, oh dear, it didn't work." however, is a crap error message, and the only thing it accomplishes, is enraging both users and technical support staff.

> Double click has largely gone away, yes. I see double click in desktop file managers still but it's mostly gone elsewhere.

Please do tell, where else was it used to begin with, where it no longer is?

> It never became a thing on touch screens because it's not discoverable

a) It never became a thing on touchscreens because there is no mouse on touchscreens

b) There were, and still are, apps that use double taps

> which is why it's not great on desktop either

Please, do tell, how would you differentiate, in a file browser, between marking an item, and opening an item?


> It never became a thing on touch screens because it's not discoverable

Is that why? Discoverability on touch screens has been deprioritized in most other respects, why would this one thing be special?


Besides hidden scrollbars, do you have examples of lack of discoverability on touch screen? Any specific apps?

I can't relate to this, to be honest: I feel confident on my phone that main features are right in front of me via a tap and less common features are behind a visible menu icon. Think how much worse it would be if double tap and long tap were required for common workflows. Some apps do require gestures but they either onboard you or the gestures are optional.


> I can't relate to this, to be honest: I feel confident on my phone that main features are right in front of me via a tap and less common features are behind a visible menu icon. Think how much worse it would be if double tap and long tap were required for common workflows. Some apps do require gestures but they either onboard you or the gestures are optional.

Not my experience at all. Snapchat famously made their UI deliberately undiscoverable so that only people who made an effort would find all their features; I don't know if Instagram deliberately copied that or not but a whole lot of its functionality requires blindly tapping somewhere that's not obvious to tap or dragging somewhere that's not obvious to drag (e.g. going backward or forward in a story, messaging someone).

My favourite example is pasting a phone number into the dialer on Android: you have to long press on the blank white input, and it will only even show the context menu if you've got a phone number on the clipboard already. A couple of versions back it was worse: you had to long press in the correct half of the featureless white rectangle, otherwise it would just silently not work.


> My recollection of some things that made older GUIs less usable, especially for non-techy people

It's of course entirely possible that usability has increased in some areas and decreased in others. That's more or less my assessment of the situation summarized in a single sentence.

Also, there's a difference between "usability" and "easy to get started with". They're not the same thing at all! "Having to read the manual" can make an application MORE usable if it allows for more advanced UI paradigms, and for applications where the users will spend a lot of time this can be a good trade-off (of course, for many other types of applications it's a bad trade-off).

It's hard to directly compare things to 20 years ago; "real-time collaborative editing of a document (with automatic saving and unlimited undo) over a video call" wasn't done because we lacked the internet speeds and computing power to make that viable, not because UI paradigms improved.


> It's hard to directly compare things to 20 years ago; "real-time collaborative editing of a document (with automatic saving and unlimited undo) over a video call" wasn't done because we lacked the internet speeds and computing power to make that viable, not because UI paradigms improved.

But that's not true. 2003, 20 years ago, happens to mark the launch of both the first modern collaborative real-time editors (SubEthaEdit and MoonEdit) and a major video calling service (iChat AV).

Sure, hardware progress since then has helped. Video calling got much better due to internet, camera, and codec improvements, along with webcams becoming a ubiquitous component of computers rather than an external addon. Collaborative editing really took off once it started to work in web browsers, originally with EtherPad, but more importantly with Google Docs. That's partly due to the UI paradigm of the URL, but the ability for collaborative editing to work in web browsers can also be ascribed to faster CPUs, as well as browsers getting more efficient (which is not hardware but also not a matter of UI paradigms).

Still.

Ignoring the calling part (which really isn't essential, IMO), collaborative editing isn't necessarily very resource intensive (depending on the algorithm). The technical conditions for it to work were satisfied as soon as Internet access became a ubiquitous feature of computers in the late 90s. Yet it wasn't until 2010, when Google Docs gained collaborative editing, that it even started to take off. I'd say that's definitely an example of UI paradigms improving over time.

By the way, a form of collaborative editing was shown off in 1968 in the Mother of All Demos, with video calling! It used analog video.
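
To make the "depending on the algorithm" point concrete, here's a toy state-based CRDT (my own sketch, nothing like what SubEthaEdit or Google Docs actually ship): a last-writer-wins map whose merge is a few comparisons per key, so the core sync primitive itself costs almost nothing.

    from dataclasses import dataclass, field

    @dataclass
    class LWWMap:
        peer_id: str
        clock: int = 0
        state: dict = field(default_factory=dict)  # key -> (timestamp, peer_id, value)

        def set(self, key, value):
            self.clock += 1
            self.state[key] = (self.clock, self.peer_id, value)

        def merge(self, other):
            # Keep the entry with the highest (timestamp, peer_id) per key.
            # Merging is idempotent, commutative and associative, so peers can
            # exchange state in any order and still converge.
            for key, entry in other.state.items():
                if key not in self.state or entry > self.state[key]:
                    self.state[key] = entry
            self.clock = max(self.clock, other.clock)

    a, b = LWWMap("alice"), LWWMap("bob")
    a.set("title", "Draft")
    b.set("title", "Final draft")
    a.merge(b); b.merge(a)
    assert a.state == b.state  # both peers converge on the same state

Real text editing needs a cleverer structure (OT or a sequence CRDT), but nothing about it was beyond late-90s bandwidth or CPUs.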


And AT&T had their (in)famous Picturephone way back in the 60s. But none of that is relevant or important here. Most people had nowhere near the bandwidth to make video chat usable.

Internet connections of the late 90s were absolutely not enough to support collaborative editing: the latency was just far too high, and more often than not connections were just not reliable enough, never mind that a huge section of people were on dial-up and paying by the minute (and many didn't have internet at all).

This really was a matter of actually having a critical mass of people with the hardware to do $stuff. Yes, all of this was "technically possible" a long time ago but a bunch of people in a research project isn't a market. After that things kind of sorted themselves out.


The thing with complex software is that yes, you have to explore it for some time before you could use it to accomplish your tasks. But after you did explore it, you'll have memorized where the actions you need are located. And probably their corresponding hotkeys too.

Technology requires education and getting used to before it can be used to achieve one's goals. It's perfectly normal. For some reason, no one questions the need to take driving lessons before being able to operate a car.


> but I personally think people really take for granted that the baseline of usability for many areas has hugely increased over the last 20 years.

I disagree.

What has happened, is that products that could get away with it, have hidden more and more functionality away, usually by shoving it into some god-awful hamburger-menu, or simply removed it, to not "confuse the users".

That's not improving, that's removing. And it's frustrating for every power user, and for every normal user the moment they leave the "happy path" that the UI gurus "designing" such things had in mind.

> scary technical language, having to read the manual,

Yes, people should try and understand the products they use. If they refuse to do so, then the obvious solution are simplified products, which, granted, are easier to understand, with the tradeoff of lacking functionality.

Which is completely fine by the way. Right tool for the job and all that. What isn't fine, is if ALL products start doing that, because then it gets really hard for the people who do read manuals to do their job...because suddenly there are no manuals any more, the language used is over-simplified and non-descriptive, and interface elements are god-knows-where.

And sadly, that's exactly where we are now in the software world. Because yes, things like a massive WYSIWYG editor is complex. There is no way around that complexity, not if you want it to have all these bells and whistles it's marketed for having. Yes, things like an email client that people organise their online identity around is complex. Yes, applications that people manage their financial lives with have to be complex. No, an error message saying "Oh noes! Something's not working. Maybe try again later!" under the picture of some sad cartoon character is not helpful for tech support to debug problems.

My favorite example, and personal beef of mine? Firewall management software. Because, I think we both can agree that people whose job it is to manage corporate firewalls do read the manuals, and are not scared by technical language. But all my reading and being tech-savvy gets me absolutely nothing, if the only way to configure that thing is by running a click-marathon through what I can only assume was a modern-art design study gone horribly wrong, and the only documentation available is an old YouTube video from 2 versions ago.

> as you realise how much knowledge you have to build up to confidently use some software.

Here is the thing though: In ye 'olde days, all these scary, techy interfaces looked the same. Once you figured out the menu bar in one application, it was easy to transfer that knowledge elsewhere. Want to save something? "File" menu. Want to change what you're working on? "Edit" menu. Want to change settings? Pretty sure there is a "Settings" or "Preferences" hidden somewhere under "Edit".

Nowadays, we have a hamburger button, or a 3-dot button, or a title bar, or a sidebar, or maybe it's one of the phone's function buttons, or it's a 2-3-4-finger swipe (the directionality of which is anyone's guess). The settings may or may not be under Account, or in the App-Settings, or there might not be any settings at all. The account can be reached helpfully via that gear-icon, or by clicking the profile picture, or by hovering over the profile picture and waiting 0.5 sec for the dropdown menu to fly in. Or it's not there at all, because that would be scary to the users, and instead I can change account settings via the non-mobile website on my PC. Or via some other app. Oh, and how wonderful those interfaces are where certain elements only become visible when I flip the phone by 90°...

And this continues ad nauseam. There are no rules, no continuity, no obvious way how something works. Knowing how one app works, doesn't teach me anything about the next one, other than how great those ugly textual menu-bars were in comparison.


My least favorite feature of modern UIs: undiscoverable settings and options that only appear if you have enabled some other non-obvious option, are in some unspecified state, or, more typically, the stars are aligned.

My actually least favorite feature of modern UIs: it doesn't matter if you actually learn to use it, because in two weeks it will change under you with an OTA update. And good luck finding documentation (third party, of course) for the new redesign, because you are actually a guinea pig in an A/B test and most users do not have the same experience.


> Yes, people should try and understand the products they use. If they refuse to do so, then the obvious solution are simplified products, which, granted, are easier to understand, with the tradeoff of lacking functionality.

I was talking about being forced to read a manual because the UI wasn't obvious enough when it could have been made obvious without making the UI worse e.g. making features more discoverable by taking them out of right-click menus, preferring single click to double click, removing _unnecessary_ jargon, optimising the UI for common workflows, informative tooltips, better onboarding on the first-run, allowing undoing mistakes (vs warning modals or no undo at all), avoiding modals, auto saving/restoring state so you can continue where you left off.

I really think people take for granted how much better UIs do this stuff now because best practices have improved.

"Read the manual" and (as another comment suggests) "no one questions the need to take driving lessons before being able to operate a car" were common sentiments at the time that blamed the user and excused software for being better.

> Because yes, things like a massive WYSIWYG editor is complex. There is no way around that complexity

Obviously I'm not saying to dumb down complex professional software and stick all menu options in hamburger menus. I'm talking about software that casual non-techy people would often use. Complex software is a whole other design challenge.

I would argue that e.g. VS Code has a lot of UI improvements over Visual Studio though.

It would make more sense if you considered some of the better UIs of today, rather than the worst.


> really think people take for granted how much better UIs do this stuff now because best practices have improved.

We had these things figured out in the late 90s already.

    - Menus were single click, and hierarchically organised
    - The hierarchy was similar between programs
    - Everything was text, no one had to guess what "tiny arrow inside a box" means
    - Everything looked the same, because everything used the same widgets
    - Common functions were placed in menus that were labeled the same across applications
    - Menu-Items were directly labeled with their shortcuts if they had any
    - Hover-Tooltips have been a basic widget of GUI libs for ages
Sure, they didn't win any design awards, but they worked. In all their boring, obvious blandness, they worked, and they were easy to use. And that's the most important thing.

> removing _unnecessary_ jargon

Please tell me what is "_unnecessary_ jargon" in things like "File", "Edit", "New", "Preferences", "Save As...", "Tools", "Layout" or "Help"?

> I'm talking about software that casual non-techy people would often use.

As am I. These problems plague many many popular apps as well. Where is the account setting? Behind my profile picture? In a sub-hamburger? In the App-Settings? Where is the sub hamburger? Topright, mid-bottom, only visible when the display is in landscape mode? Do I have to swipe the menu in? If so, from which direction and with how many fingers?

That professional complex software is plagued by this as well is a personal gripe for me as a tech person, but these problems are universal.


I gave examples of specific best practice trends that are more common now compared to before. I didn't say everything is better so I'm not sure where your rant is targeted and it would help if you could interpret more charitably.

I agree menus, predictable locations for features, and keyboard shortcuts are great. You're again comparing the best of then to the worst of now though.

Standards for things like this will likely settle in over time. When apps now need to have support for touch + trackpads + mouse + keyboard, mobile/tablet/desktop screens, and multi-platform (Mac, Windows, iOS, Android, Linux), it's really shaken things up and made it much harder to agree and standardise on common patterns.


> You're again comparing the best of then to the worst of now though.

No, I am comparing what was common then to what's common now.

Granted, the menu structure of apps back then was in large part common, because there was only the desktop, and GUI programming was still in its infancy, so everyone used the same widgets.

And no, I am not saying that everything back then was great. There absolutely were apps with shite interfaces. Only it was a lot harder to justify them.

> Standards for things like this will likely settle in over time.

Some will, but since we made the transition from building interfaces to designing them, agreeing on standards became much much harder.

Because now interfaces are the playground of marketing, brand recognition, product identity and design fads. Even explaining how an app works can be hard, because the person asking me could be in another group in some goddamn A/B test and have a different one, even though we use the same app.

> When apps now need to have support for touch + trackpads + mouse + keyboard, mobile/tablet/desktop screens, and multi-platform

That certainly makes it more difficult, and requires careful consideration for different platforms.

So riddle me this: Why was the common reaction to all these different requirements to suddenly pretend everyone is using a phone for everything all the time?

Because that's how many apps and websites now look: Like something designed for a small phone screen and navigation by thumb. I open a website on a 4K monitor with a laser-precision pointing device, and I get an information density comparable to the intergalactic void.


> So riddle me this: Why was the common reaction to all these different requirements to suddenly pretend everyone is using a phone for everything all the time?

It's very resource intensive and challenging to design multiple interfaces; that's the obvious answer here. Designing a website with just a few interactions that's accessible, supports mobile and desktop screens, and works in all browsers is already challenging and time consuming enough. Asking for almost a separate interface for different screen sizes and more costs significant resources that many won't have.

> Because that's how many apps and websites now look: Like something designed for a small phone screen and navigation by thumb. I open a website on a 4K monitor with a laser-precision pointing device, and I get an information density comparable to the intergalactic void.

What websites and apps in particular? I agree lazy mobile ports exist and are bad but I'm not experiencing this often with my daily apps and websites.


> It's very resource intensive and challenging to design multiple interfaces

Sure is. Wanna know why? Because there is no universal standard on how to do it.

And arriving at such a standard is really really difficult, in no small part because of all the "amazing" design ideas I mentioned before, which are not only fast-and-ever-changing, but also don't work well together.

In ye'olde days, the problem was approached from the technical side: Technically, designing graphical interfaces is hard, so here is a constrained system that gives you these well defined methods for doing so. That's what design gets to play with, so that's the standard. It didn't look cool, and didn't lend itself easily to branding etc. but it worked. In fact it worked so well, that we STILL have ALL the UX elements from those days in active use today, and they STILL WORK.

Nowadays, we are in a situation where we try and approach the problem from the other side: Technical limitations basically went away, so everyone is free to design his own stuff, leading to a Cambrian explosion of conflicting principles and ideas, including ideas on how to standardize and categorize all of it.

If we were able to get this under control and design universal widgets that can automatically adapt to whatever device they are used on, that would be wonderful. And there are amazingly intelligent people who are trying to do that. But all that effort flies out the window the very next time some design studio or company wants a nice juicy piece of personal brand recognition and "designs" the next shiny thing.


Lovely rant.


>the baseline of usability for many areas has hugely increased over the last 20 years

But the baseline for functionality has decreased at the same time; usability has come at the cost of removing useful options in favor of lowest common denominator, not 80/20, more like 50 and "a very bloated 10...GiB".

OK-Cancel is gone; changes now are instantaneous. Cancel was a useful thing to do.

You like that things have gone they way they've gone? That's great! I wouldn't dream of taking that away from you.

But why can't I have mine work the way I like? Why do you support smashing everything into very small smooth round cookie cutters with one cookie recipe?


Slack is not some backwater cottage industry. It's a big company with thousands of employees and millions of users - many of whom are paying good money for their software.

...and therein lies the root cause underlying the majority of these useless changes: Too many people for the amount of required work.

In much the same way that "design by committee" often leads to horribly complex abominations, I've noticed that the bigger the development team (and perhaps the company behind the product), the worse the software turns out to be; and usability is no exception.

At the other extreme, products of a single developer very rarely make hostile UI changes, because they mostly can't afford to. Then sometimes they get bought out by a bigger company, and the useless changes start showing up shortly afterwards.


Slack gets a lot of flak, but in my experience they're far from having the worst web app UI. For instance, have you tried the abomination that is the AWS management console? Not sure about the pricing for Slack and for AWS, but I bet companies tend to pay a lot more for AWS, so you'd think they would be able to come up with an acceptable interface - but no, it's a Rube Goldberg-esque monstrosity where even the simplest tasks require you to traverse at least five slow-loading pages...


I saw this happen many times:

1. We have to add this feature, we need an icon, menu item, anything, for it.

2. It doesn't fit well in any of our menus, we should stop a moment and think about the UI, maybe like this and that.

3. Great, but it takes too long; we need this tomorrow, or this Friday, or by whatever very demanding deadline.

4. Ok, so we add an icon there or into the options and people will figure out. We'll fix it later.

5. Later never happens because we're again at step 1 very soon.


Firebase seems a case in point. You have to do all this weird Google Cloud shit to get something basic working. I posit that using Firebase is more complicated at the outset than installing an ORM and using Postgres. And being NoSQL, it gets worse from that point on. So its raison d'être of making it easy to build apps is lost.


> What is the "Archive" icon even supposed to depict? The lower part of a printer, with a sheet of paper sticking out?

It's a storage box (also known as a "banker's box"), used for long-term storage of documents or other office materials. [1]

I guarantee everyone in a computer/IT setting has seen one.

[1] https://www.officedepot.com/a/products/6275549/Office-Depot-...


I've seen plenty of cardboard moving boxes in my life, and I've seen the archive icon uncountable times, but it never occurred to me that's what that icon is supposed to be. A thin wide rectangle on top of a square could be a bajillion things. If they wanted to depict a box, they should have shown 3 dimensions.


A hole punched for the handle would've made it a lot more obvious.


That and the disbelief at the little wrench-and-screwdriver icon both seemed weirdly misplaced. Anyone can point out dozens of weak icons, I snicker at every floppy disk and phone too. But it'd take a real visual poet to find a perfectly unambiguous tiny picture to convey the meaning of "archive" - what, a little library? Scrapbook? I'd say there's an unwritten law of iconry, it shouldn't be completely wrong, but it can't really be completely right.


I always thought it was a printer, but representing a document to be stored in one of those boxes.


I was confused by the author's ignorance about this, too. How do you work in the industry for 25+ years and not know what an archive box looks like?


Because an "archive box" is a thing that is not necessarily a common occurrence in office culture around the world.

The author is Swedish. Google's translation of "archive box" is "arkivlåda". That is not a thing over here, and is not even in the dictionary. An image search for the Swedish word resulted in many boxes that looked very different.

And even if I see it as a box, it looks also like a box of printer paper to me. (It's the "Fresh paper" icon! :) )

Either way, you are diverting away from the main issue of the paragraph: Would you recognise what the icon represents even without the text?


I've worked in America for 20 years and I never knew about this either. I just thought that was a regular cardboard box, and it never occurred to me the archive icon is trying to depict such a thing. I always just thought it was an ugly file folder or something.

I never had to use such a box in real life.


> I never had to use such a box in real life.

Okay but that type of box was all over the national and global news this year: https://www.google.com/search?&q=trump+bathroom+documents&tb...


I pretty actively try to stay away from orange man escapades. I'm sure we'll get more than enough when he's president / evil dictator for life again /sigh


There are those types of boxes everywhere! Whether it's Chinatown[1], a garage[2], or a secured area in Mar-a-Lago.

[1] https://www.cbsnews.com/news/biden-classified-documents-hous...

[2] https://www.politico.com/news/2023/01/12/additional-document...

But it seems a bit too off-topic for this thread, dunnit?


I can't wait to learn what you think this thread is about.


This man uses the archive button


I can second this: While I do know that kind of box as depicted, I've never owned one and I have never used one. (However, plenty of rectangular cardboard boxes.) It's a very cultural thing, and it really stresses your imagination if it isn't an item that is consistent with your world, but rather manifests a competing concept. If there are also alternative interpretations available, thanks to the drawing style, this doesn't help particularly.


He knows what a cardboard box looks like. He didn't know that the thin rectangle on a square is supposed to be a cardboard box, or that a cardboard box is supposed to represent "archive". Cardboard boxes are just as often used for moving things.


I have never seen a box like this anywhere except in document storage closets in American movies.

They are not a thing in Germany, as far as I can tell.


Interesting—what do you use for document storage in Germany? I'm always on the lookout for new storage ideas for managing all my crap.


Binders.


So many, many rants could come from this and my blood is boiling in fervent agreement. But two thoughts: as hard as it is to use the contrast-less, contextless obscure icons and "is it checked or toggled or active", try working with someone and having to explain "click that thing, I don't know what it is, no, the other one, to the left, arghghhg."

But really, maybe these aren't our interfaces anymore - the lack of scrollbars on macOS is because of the touch, pinch and swipe touch interfaces, and the people that are now very happily consuming content (and creating, although less so) without having to worry about that extra bit of information. But the rest of us suffer, and the "increase contrast" and "reduce motion" checkboxes don't do nearly enough to reconfigure the UI.

I never really appreciated the Mac modding community[1] (Kaleidoscope?) and the powerful frameworks they had to build UIs, 90% of which I'd happily switch to today if I could.

[1] https://botsin.space/@osxthemes/111416576638238992


> lack of scrollbars on macOS

Do you know that you can tell your Mac to "always show scroll bars" on System Settings -> Appearance page?

Anyway, while I tend to agree that Snow Leopard UI was the best and it went downhill since then, the latest macOS is still very usable and I do not suffer on Mac at all (unlike Windows 11, where I had to install a 3rd party app to at least partially revert what they did with UI)


Sadly the UI isn't even the worst part of Windows these days. It's the ads. It's beyond belief that I should get ads in software that I paid for, yet here we are. I got so sick of it that I switched to daily driving Linux. It's not perfect (I'm a gamer and games can have issues even though they mostly work great), but at least I don't have ads on my desktop.


Microsoft did the right thing in putting ads on the Windows desktop. They can make more money that way, and increase shareholder value.

For every user like you that gets angry and switches to Linux, there's countless others that don't, and MS makes more profit showing them annoying ads, so it's a net gain for them.


I think that you hit on one of the major things that leads to modern UIs being horrible: products seem to want to have a single UI for all form factors and types of users.

However, every form factor has a unique set of constraints, and a casual user has a different set of needs from a non-casual user. It's simply impossible for one UI to serve all of these things adequately.

So we end up with user interfaces that suck in every use case, just in different ways for different use cases. We need to go back to the days when user interfaces were actually tailored to the use case.


Oh, and the (lack of) window borders: This is a concrete and practical problem - I often get into fights with multiple open browsers. I will have 2 or 3 browser windows open, each containing an ongoing selection of related open tabs concerning a given separate task/subject. As I try to resize, position and minimize-maximize those 3 windows, I will have trouble figuring out the right area to get hold of "window N". (because of the modern minimalist way of drawing it, especially in dark mode).

This problem is made worse by the modern violation of the title bar itself. Originally, the title bar would be exactly that, with identical design across multiple applications. Nowadays, the titlebar is a free-for-all anything-goes, with everything-but-the-actual-titlehandle stuffed into it. And worse, if it gets cramped for space, the title-handle (by which you move and handle the window..) is the first to lose and go! So you find yourself begging your application window "is there any place I can interact with you to move you?"

Surely we live in the age of the instant geniuses.


The right area for resizing via the right window border is ~33% of the window's area at the right edge (grabbed with Alt or some other modifier key), not a tiny window border. Similarly, minimization is much better done with a shortcut or mouse gesture rather than with a tiny button in the corner (though it's not easy to find out about these better UI actions).


That's fine, but I'd still prefer to have real window borders. Not only because they make manipulating the window easier, but because the visual separation itself makes the entire system easier to visually parse and use.


Visual border and resize border are two different UI things. Regarding the latter, precision-hunting for a tiny line instead of a thick area doesn't make manipulation easier; that's just a persistent mistake of previous generations of UI geniuses.


I don't necessarily disagree with most points made here, but this has a close resemblance to the arguments so many seemed to have when gnome 3 was released because the "start button" (or "panels") was replaced. It's ok to miss it, especially if your parents' first computer was running Windows 95, but that doesn't mean the OS UX can't / shouldn't be improved. I personally consider the improvements significant.

Experimentation is important and only change is constant. That said, change for the sake of change is a real problem and adding design trends and ego to the mix can make for a lot of waste and annoyance.

Otherwise minor nits...

> What is the "Archive" icon even supposed to depict? The lower part of a printer, with a sheet of paper sticking out?

Storage Box. Seems obvious to me, but reasonable that it wouldn't be obvious to everyone. I probably wouldn't get it without the text, but I also accept that some concepts don't transfer to iconography easily.

> "Sweep" is most likely a broom - but would you be able to determine that without the text?

I can't imagine what else that would be.


Due to my age and work experience, I'm also familiar with archiving that involved storing physical documents in cardboard boxes. My current job really is paper-free, and graduate employees joining will never deal with archive boxes.

Similarly, the icons on my mobile for making a call, answering a call and hanging up are all based on the shape of the handset for a corded phone.

The Save button icon in many applications is based on a 3.5" floppy disk.

I sometimes need to save data as a PDF, which in many cases involves "printing" to PDF. The icon is based on a paper printer, but the action I am using it for doesn't include any external device or paper.

This makes me wonder about what icons we could use in the future, where so many actions are done via a touch interface on a phone/tablet. There are fewer physical objects involved in common actions, so the skeuomorphic approach to icons for buttons is becoming less valid over time, especially for younger people.


> what icons we could use in the future

All icons will depict a smartphone. At least for the short remaining period until you just tell the AI what you want.


> that doesn't mean the OS UX can't / shouldn't be improved

Of course, OS UX can be improved. But not all changes qualify as "improvement". See some issues caused by flat UI here: https://uxcritique-blog.tumblr.com/


> I can't imagine what else that would be.

My first thought was that it looked like a vintage oil can, or a douche bulb.

BTW, Are "Archive", "Junk" and "Sweep" supposed to be verbs or nouns? And what does the arrow in-between "Junk" and "Sweep" do? Does it belong to "Junk" or with "Sweep" or neither?


My problem here is, rather, what does "Sweep" even mean? (I'm not an Outlook user.) I can imagine some housekeeping or clean-up task, but there must be a more precise concept and wording for this, for sure.


Okay let's say you recognize it as a broom. What does "sweep" even mean for this context? Is there a separate dustpan button after? Why not just the trash iconography?


I had the same thought - sweep it under the rug? Hide this until it starts to smell?


>> "Sweep" is most likely a broom - but would you be able to determine that without the text?

>I can't imagine what else that would be.

It could be a hand holding a fencing foil. The scale of the various parts is wrong, but it's also wrong if you assume it's a broom.


Pedantically, it would have to be an epee, not a foil. The handguard on a foil is less curved.


> I can't imagine what else that would be.

If I saw that icon without the label, my immediate guess would be a paintbrush symbol, indicating either a paint tool or something to do with styles and colors. My second guess would probably be a bell, indicating alarm or sound settings.

I suppose I would eventually guess a broom, but not in the first 3 tries.


> I can't imagine what else that would be.

A hand bell.


This is evident in physical products. We have perfected 'crap for the masses', unfortunately. Standard problems:

- I just bought a (bulk) phone charging station. Even though I attempted to pay more to get a non-shitty product, it has no on/off button. Instead, I can unplug it manually from the socket each morning :-/.

- music volume: When I stream music from my android phone to one of our not-cheap bluetooth speakers, the volume can be adjusted in something like 16 possible increments. In practice, there are 3-4 possibilities: (A) so low you can't hear anything, and the frequency distribution is off. (B) almost but not quite audible. (C) audible, but a bit too loud. (D) so loud you fear structural damage to your kitchen. In practice, I toggle back and forth between B and C, and wish just that interval had analog subdivisions.

- two relatively expensive bluetooth speakers (sigh) in our house have no 'tangible' power buttons, only icon-marked areas where you can hover your finger for the same effect. I lack words to describe how annoying it is to have to visually ascertain your finger is near where the almost-invisible icon is. I have marked them with nail-polish, but it is not enough. They are both cylinder-shaped, so there are 360 choices.

As I see it, there should be good money to be made in just offering physical knobs on your gadget, and possibly analog controls.

As a caveat, precisely the physical knobs have often been the failure points in expensive Hi-Fi gear I've owned over the years, in particular the channel/input selector, and the volume controls. Sigh.


Glomming onto your comment to add my gripes with the Google home speakers I now deeply regret installing in every room in my house:

1. Touching the speakers adjusts their volume. They are convex hulls with no handles, and there are no visible markers indicating where they are sensitive to this input, so moving them while in operation is dangerous: they are as likely to hit 100% volume as 0%.

2. Alternatively, volume can be adjusted verbally: "Hey Google, set volume to 10 percent." However, I ended up covering my newborn's ears and running out of a room recently when the speaker didn't understand the "percent." That's how I discovered that you can also say "set volume to N," with N being 0-10, and the speaker will multiply by ten to get a percent. This decision is baffling for a voice interface, which could have consistency and user safety by treating all inputs as percentages. As it stands, 1% = too quiet to hear, 1 = quiet, 10% = quiet, and 10 = so loud the speaker cannot hear you say "hey Google, stop."
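
To make the inconsistency concrete, here's a hypothetical sketch of the two interpretations as described above (the parsing logic and behaviour are illustrative guesses, not Google's actual code):

  # Hypothetical parser, illustrating the mismatch described above:
  # "10 percent" is taken literally, but a bare "10" is treated as a
  # 0-10 scale and multiplied by ten -- ten times louder.
  def interpret_volume(utterance: str) -> int:
      words = utterance.lower().replace("set volume to", "").split()
      number = int(words[0])
      if "percent" in words:
          return number      # literal percentage
      return number * 10     # bare 0-10 scale, rescaled to a percentage

  print(interpret_volume("set volume to 10 percent"))  # 10
  print(interpret_volume("set volume to 10"))          # 100 -- wakes the newborn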


Here is a tip for your hi-fi gear: get a can of De-Oxit D5. The common mode of failure for those 'knobs' is the potentiometers getting gunk and oxidization on their carbon traces. They work by sliding metal fingers over a carbon trace, so the resistance changes as the wiper moves around the circle (or along a line, for slide pots). Also, since they rely on physical contact between two surfaces, it helps tremendously to have a non-interfering lubricant between them. De-Oxit D5 is an aerosol contact cleaner which flushes the gunk out and leaves behind a compound that prevents corrosion and lubricates electrical contacts. It is in the bag of every tech who works regularly with hi-fi gear.

Note this applies to analog controls. Digital controls use rotary encoders which only indicate which direction the dial has been moved.
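
To put a crude model on why a dirty track sounds bad (a first-order sketch with illustrative numbers, not a real circuit simulation): the pot is a voltage divider, and gunk between wiper and track adds a fluctuating contact resistance in series with the output, so the level jumps around as you turn the knob.

  import random

  # First-order model of a linear-taper volume pot feeding a load.
  # position is the wiper location in [0, 1]; r_contact models gunk and
  # oxidation between wiper and carbon track (ohms). Illustrative only.
  def pot_output(v_in, position, r_contact, r_load=100_000.0):
      v_wiper = v_in * position                       # ideal voltage divider
      return v_wiper * r_load / (r_load + r_contact)  # dirty contact eats part of the signal

  # Clean pot: steady output. Dirty pot: contact resistance jumps around
  # as the wiper moves, which is the crackle and dropout you hear.
  for _ in range(3):
      clean = pot_output(1.0, 0.5, r_contact=0.0)
      dirty = pot_output(1.0, 0.5, r_contact=random.uniform(0, 80_000))
      print(round(clean, 3), round(dirty, 3))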


Ah, you've encountered a non-logarithmically-scaled volume control in the wild. All too common I'm afraid. https://dcordero.me/posts/logarithmic_volume_control.html
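
If I read the linked post right, the gist is: perceived loudness is roughly logarithmic, so a slider (or a 16-step Bluetooth control) should map its position to gain exponentially rather than linearly. A minimal sketch; the 60 dB range is an arbitrary illustration, not a value from the post:

  # Map a linear control position in [0, 1] to an amplitude gain in [0, 1].
  # Perceived loudness tracks decibels (a log scale), so gain should rise
  # exponentially with position instead of linearly.
  def slider_to_gain(position, dyn_range_db=60.0):
      if position <= 0.0:
          return 0.0                            # treat the bottom of the range as mute
      db = (position - 1.0) * dyn_range_db      # position 1.0 -> 0 dB, position 0 -> -60 dB
      return 10.0 ** (db / 20.0)                # decibels to amplitude

  # With 16 steps, a linear-in-amplitude mapping crams most of the perceived
  # change into the bottom few steps; this spreads it evenly across the range.
  for step in range(17):
      print(step, round(slider_to_gain(step / 16), 4))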


Criticism of GUI design trends like this is always attacked for being out of touch -- "you just want everything to cater to your weird taste", "most of the users aren't experts", "iOS enabled my papa to send emails, he never even worked out how to turn on the Macintosh II SE", and so on -- this comes from a well-intentioned place but doesn't really hold up under scrutiny.

The problems described here are problems for less-experienced/non-technical/casual users, too. In fact, it seems reasonable to assume that friction arising from the design of the UI will have a greater impact on non-expert users. Highly proficient and experienced users are able to work around problems more easily, recognise the (changed) patterns more easily, etc. They are also more able to articulate what the problems are and are more likely to dedicate time to doing so.

When your non-technical family member faces some inscrutable change to their workflow, they seek your help to get their task done, not to analyse the design or intent behind the UI -- and you know that, so you use your expertise to adapt their workflow to the new design and they thank you for it.

When your professional family member faces some inscrutable change to their workflow, they tell you how shit software is these days and find a solution on their own.

So on one hand you hear "help me get my thing done" and on the other you hear "software used to be better". It's not surprising that people treat the cases differently -- even though both are problems stemming from UI design decisions.

editendum: I guess it's worth pointing out explicitly that usability is not universal. Some software is complicated because it's for doing complicated work.


Is the author really holding up mIRC as an example of good usability?

Aside from the standard window controls, I can't make sense of what a single thing on that screen does.


This sort of reminds me of the grumps who complain that the masses haven’t embraced Vim, RSS feeds, arch linux, fediverse, etc. Like it’s fine that you like them, but I’m surprised you’re surprised other people don’t quite get the appeal.


As someone who uses all those techs, what makes me sadder is when I can't continue to use this tech because some other company forces me to use their shitty software that isn't interoperable in any way, and I'm stuck with it because of work/friends.

However, I am aware enough to understand why others don’t get the appeal. At least with email I can pick my client and they can pick theirs which is more “user friendly”. Slack etc forces me to use workflows I hate.


That's what standards are for. It makes sense for open source projects but not necessarily for commercial ones, especially if they are successful.


Which is why they can, and should, be legally mandated. The EU has taken appropriate measures in the hardware space, maybe next we take it to the digital realm.


In particular, the icons are really bad. I get that modern flat icons can be a little abstract, but they're at least visually distinct at a glance. Here you have three icons in a row that feature tiny text (11pt? 10pt?) above equally tiny icons.

It’s weird, because in the abstract I agree with a lot of the criticisms and principles in the post, but this example is so bad.


On the sort of low DPI monitor the interface was designed for, 10pt text is quite legible. Heck, even 8pt text is legible.


> I get that modern flat icons can be a little abstract, but they’re at least visually distinct at a glance.

They are not.


“The mIRC interface was in no way perfect”

The examples taken from mIRC are almost exactly the standard window/document controls that are missing from slack and discord.


The toolbar buttons weren't standard controls. And many of them don't seem like actions that would be taken very frequently, so they probably didn't really need to be in the toolbar.

On the other hand, the use of a toolbar, in itself, was a standard. And so was filling up toolbars with as many icons as would fit, because why not.


The issue is really that "Usability" in the sense of "helping people achieve their goals" isn't really a priority for most of these vendors. When they talk about "usability" they really mean: engagement, retention, distinctive branding, and optimising for whatever KPIs are important for their business.

Then there are the economic pressures of supporting multiple platforms, so you get shit like Electron and PWA where for a fraction of the price you can present the same semi-usable interface on different platforms - where it looks like your signature style but doesn't look like any other apps on the platform.


> optimising for whatever KPIs are important for their business.

Or, for their promo packet. "Our usability research showed that our UI redesign was not worth the cost to the users" is a sentence I've never heard in fifteen years of software development. But I have seen dozens of decks that include a slide of hand picked quotes from a small sample of test users extolling the virtues of the presenter's Q3 project.


For a long time, I've contemplated building a Qt competitor, a cross-platform library for building GUI's.

I'm still considering it, but if I did do it, it would need to address accessibility and usability first. And this post is a good example of why.

I think I'd take usability guidelines from 20-30 years ago instead of modern ones.


I have been praying for a GUI library that at least supports a skeuomorphic style for UI elements; this, however, requires some creative designers without preconceptions ready to take on a challenge. The challenge is to create a forward-looking skeuomorphic design with subdivisions adapted to different devices and workflows (separated e.g. by screen size and by expected expertise of the user).

This is hard to do when most designers are stuck in the non sequitur that skeuomorphism means a dated 90s feel (a baffling lack of imagination for a seemingly creative profession) or that a single design instance should support all devices (I will not use a word processor on my phone unless absolutely necessary, and I sure as hell don't want to touch-click gigantic rectangles on my widescreen, Microsoft).

And of course, as you say, it is important to read the rich HCI research conducted over many decades since the 60s, and not dismiss it as "old".


Well, I'll say that I won't just support skeuomorphic design; it will be the only option. :)

The rest of your post lays out my plan perfectly.

Yes, it will be a challenge, but I think you are absolutely right that UI's need to be designed for their audience, like blog posts and other writing.

Now the question is: if you need to use such a library in a commercial context, would you happily pay for it? I can't justify the time unless I could make money.


He touches upon the tragedy of modern simplified lobotomy-crayon full-screen UIs: they are horrible to use for any use case they weren't specifically built for. In Slack and Teams, I often end up screenshotting something, because the information I want to consult cannot be displayed at the same time I am in another editing view.

This includes scrolling (not in slack/teams, that single thing actually works): Often you cannot scroll back up in history to see earlier discussion as you are typing.

Which segues to my pet peeve: In the dumbed-down modern online discussion forums, the UI and search for finding earlier discussion is horrible. There is no calendar/timeline navigation, but just an endless contextless scrolling, where you can scroll back for 20 minutes to reach "7 months ago".

This is annoying on technical forums, where a feature discussion from 2 years ago may be very relevant to figure out the hows and whys.


I was nodding along, until the author presented mIRC as an example of what should be.

Maybe if you grew up with this app, it's efficient or whatever. But at a glance, I have no clue at all what any of those icons do. It's just a chat app. Perhaps you should be able to control it, or at least learn its affordances, with chat?


Microsoft have outdone themselves on New Outlook where the transition animations actually delay state change.

For example, where I'm used to clearing a folder by pressing CTRL+A followed by Del, it's now necessary to wait after the first step because selecting all messages takes a while to animate.
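
The underlying anti-pattern is letting the animation gate the state change. As a hedged, generic sketch (toy code, nothing to do with Outlook's actual internals): apply the model change synchronously and treat the animation as a purely cosmetic, interruptible side effect, so Ctrl+A immediately followed by Del acts on the already-updated selection.

  class MessageList:
      # Toy model: state changes are synchronous, animations are cosmetic.
      def __init__(self, messages):
          self.messages = list(messages)
          self.selected = set()

      def select_all(self):
          self.selected = set(self.messages)  # state updates immediately
          self.start_animation("highlight")   # visual only; never awaited

      def delete_selected(self):
          # Safe to call right after select_all(): it reads the model,
          # not whatever the animation has gotten around to drawing.
          self.messages = [m for m in self.messages if m not in self.selected]
          self.selected.clear()
          self.start_animation("collapse")

      def start_animation(self, name):
          # Stub: a real UI would schedule this on the render loop and
          # cancel it if the underlying state changes again.
          pass

  inbox = MessageList(["a", "b", "c"])
  inbox.select_all()       # Ctrl+A
  inbox.delete_selected()  # Del, no waiting for the highlight to finish
  print(inbox.messages)    # []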


I actually think slack is both an example of poor design and good design.

While the multi-tenant UI with zero shared elements is a monstrosity, I think they do a really good job at discoverability. One of the things application-level menus aren't great at is connecting which element must be focused to take the action. In something like Word, it's all one document, so most things are applicable most of the time; that breaks down in a chat app like Slack, where there is a notion of chats, files, links, users, etc., and the localized contextual controls, while not always great, aren't terrible either.


Slack, assuming you're referring to the new design, is exceptionally poor. Putting a sidebar on my sidebar so I can get notified while I'm being notified isn't the discoverability I'm looking for. It's as if they took a Discord screenshot and copied it without even attempting to understand how it works.


Would absolutely rather use Discord than Slack


I'm continuously surprised that Discord hasn't made a play in the Slack/Teams/etc space. They are dramatically more competent at it than their competition.


>Another example is why the Mac, Atari and Amiga all put the menu bar at the top of the screen: it's an oft-used target and should be easy to move the pointer to. This is an adaptation of Fitt's law.

This oft-repeated argument is so dumb. Perhaps it's that different people's brains work differently, but so what. I have more than one window open; I'm looking at one of them, and that's my context. Why would I want to roll the mouse out of my context to go to the top of the screen (shared context), only to discover that the other window has the focus, and thus the menu? If I clicked on the menu in the context of my window, it would be given the focus and I would not be interrupted/distracted; my attention is the premium here.

It's just such a flawed argument. I'm fine with you having it the way you prefer it; I'm not fine with you forcing me to have it the way you prefer it; and even more aggrieved to be given a nonsensical reason.
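
For reference, the rule being cited is Fitts's law (Shannon form): MT = a + b * log2(D/W + 1). A screen-edge target is quick to hit because the cursor stops at the edge, which makes its effective depth essentially unbounded; whether that outweighs leaving your window's context is exactly what's being disputed above. A rough illustration with made-up constants:

  import math

  # Fitts's law, Shannon formulation. a and b are device/user constants;
  # the values here are invented purely to compare the two cases.
  def fitts_mt(distance_px, width_px, a=0.1, b=0.1):
      return a + b * math.log2(distance_px / width_px + 1)

  # In-window menu bar: a ~24 px strip you can overshoot.
  print(round(fitts_mt(distance_px=600, width_px=24), 3))

  # Screen-edge menu bar: the cursor pins against the edge, so the
  # effective target depth is huge -- a flick of the mouse is enough.
  print(round(fitts_mt(distance_px=600, width_px=2000), 3))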


Uber is also completely fucking unusable now. Just ads and upsells


I hear Chinese supper apps are so full of features that people frequently show others how to do things or what they've discovered.

My guess would be that in the new UI swindle, finding a button gives a sense of accomplishment. Recognizing 2 or 3 gives a sense of knowing the application and makes a person curious what the other buttons do. If they can't take what they've learned to other applications, the lock-in is stronger.

Empower the user, but only slightly, while simultaneously making them feel slightly dumb.

Of course there is also the new culture where the mission is less important than not hurting people's feelings. You can't just tell people anymore that their shit sucks and that they need to start over. Don't do that often enough and the ugly bloated flat-icon monstrosity is going to ship. No one can stop it now.

In the end everyone got used to the new slashdot design and all worked out... (lol)


>I hear Chinese supper apps are so full of features

Do you mean "super apps"? I legitimately can't tell if that's a typo or not.


The discoverability aspect of usability ("what can this system do?") has fallen the furthest.

I'm looking at you Google Assistant/Alexa/Siri/ChatGPT.


Interesting reading. We need more innovation in UI/UX. Personally, my mental health is affected by bad experiences, and since we get flooded by content from so many different sources, we need new ways of consuming it.


I guess I always look at UI as like movies. Sure, we know a good movie when we see it, and "everyone" knows what a good movie is. Yet that's totally wrong as one person's "Dumb and Dumber" is another person's "Titanic" - and you should look at that comment from both perspectives.

People still make bad movies that should be good. And make good movies that should be bad. If it was easy, everyone would make good movies.


>I guess I always look at UI as like movies.

That is a big part of the problem; designers are tasked with creating something to be used by a vast array of people: those with diminishing eyesight, unsteady hands, professionals who need adaptable elements for advanced workflows, newbies who need to be onboarded quickly and smoothly, dilettantes who want to make use of basic software functionalities, hackers who want to extend or modify the software... an endless list of situations, people and workflows; and all that designers see is aesthetics.

UI is much more than aesthetics, and a UI designer is much more than an artist expressing themselves or their brand; or at least, it should be.


People might disagree on what's a great movie, but I don't think anyone disagrees on the very worst movies. No one actually thinks Gigli was a good movie.


I would be willing to bet any amount that at least one person unironically loves Gigli. Art is subjective, that's the whole point.


If you are a developer, you really should observe your users struggle with your software. It will make it very clear that all that clever stuff you thought of just confuses the hell out of everybody. And since I am a developer, I have never felt as much shame as when I realised what horrific workflow I had accidentally foisted upon my unsuspecting customers.


> What is the "Archive" icon even supposed to depict?

It looks like a labelled archive box to me.

<https://www.google.com/search?q=archive+box&tbm=isch>


It would've been a lot more clear with old-style coloured isometric icons. Making everything vague, monochrome silhouettes definitely wasn't an improvement.


Design has nothing to do with usability. Companies do not value designers. At least that has been my experience. When I was young and gullible I was hireable. Now I am too stubborn to compromise on my values to be a part of the bureaucracies and pandering.


There’s a whole book dedicated to turning gullible obedient designers happily producing malicious dark patterns into designers who own their responsibility to society and the end users.

It’s called Ruined by Design, by Mike Monteiro. He also has a talk on the same topic. Both are highly recommended.

In short, a designer has a responsibility to use the craft for good and refuse to weaponize it. For this to be a real possibility, we need a union standing behind the designer.


Cloud and SaaS. It is all about constantly mutating interfaces, underlying concepts and resources.


Mac classic UI/UX: Tog on Interface (1992) by Bruce Tognazzini, ISBN 0201608421


"One of the two "slacks" I'm a member of recently got a UI update - and the other didn't."

This is because major changes at basically all business software companies are rolled out on a team-by-team basis, rather than per user account.

Also note from the screenshots that he is on free-tier Slack teams in both instances. When you do not pay for a business communication tool, you should not really have expectations of stability or coherence. You're getting a free sample of a product that is meant to be paid for. What he was experiencing was a beta or gradual rollout where free teams were being used as test subjects to see how the redesign impacts user engagement. If you don't pay for a product, you are the product.
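
Mechanically, a team-by-team rollout like this is usually just a feature flag keyed on the workspace rather than the individual user: hash the team ID into a bucket and compare against a rollout percentage. A generic sketch (purely illustrative; the feature name and team IDs are made up, and this is not Slack's actual system):

  import hashlib

  # Deterministically assign a whole team to a rollout bucket. Keying on
  # the team ID (not the user ID) is what lets one of your workspaces get
  # the redesign while another workspace you belong to doesn't.
  def in_rollout(team_id, feature, rollout_percent):
      digest = hashlib.sha256(f"{feature}:{team_id}".encode()).hexdigest()
      return int(digest, 16) % 100 < rollout_percent

  print(in_rollout("T-EXAMPLE-1", "new_unified_ui", 25))
  print(in_rollout("T-EXAMPLE-2", "new_unified_ui", 25))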


In this case, you cannot escape the enshittification by paying.

Still not as bad as the "slacksticka" which is their icon now


You can't escape it, but if you pay for an enterprise plan you typically have huge amounts of advance notice (monthly or quarterly roadmap updates, plus an account team at your service and the ability to demand face time / feedback with the product managers). This makes a big difference to your process of change management in handling redesigns, and influence over even relatively minor UI updates.


As someone who pays for Slack: you could only push the absolutely godawful recent update out to December. This is also the only UI update I'm aware of that had its own change.org petition (with thousands of signers!).


The author has some valid points, but let's not forget that people used to criticize those old UIs at the time as well, ever since the first GUIs appeared to replace command-line interfaces.


My pet peeve nowadays is when apps or websites just crop the names of stuff (like files for example) and add “…”, so that you have to hover every element to see the full name in a tooltip.

Information density has decreased even as screens have gotten bigger and LCDs have higher and higher resolutions.

I don't know why designers feel the need to use such large margins, padding and whitespace everywhere.
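
One small mitigation for the filename gripe, for the cases where truncation is genuinely unavoidable: elide the middle instead of the tail, so the parts that usually distinguish files (the prefix and the extension/suffix) both stay visible. A minimal sketch:

  # Truncate in the middle, keeping both ends of the name visible.
  def middle_truncate(name, max_len=24):
      if len(name) <= max_len:
          return name
      keep = max_len - 1            # reserve one character for the ellipsis
      head = (keep + 1) // 2
      tail = keep - head
      return name[:head] + "…" + name[-tail:]

  print(middle_truncate("quarterly-report-2023-final-v2.xlsx"))
  # -> 'quarterly-re…nal-v2.xlsx'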


> Information density has decreased even as screens have gotten bigger and LCDs have higher and higher resolutions.

Right, and here's an especially egregious example: https://investor.vanguard.com/investment-products/mutual-fun...

When viewed on a laptop, there is so little information visible (without scrolling) that you might as well be viewing the page on a phone.


Biggest example of this is to go to any news site or blog, and scroll through in a browser (without adblock if you're feeling frisky). How long did you have to scroll, and how dense was the text?

Then open a TUI browser like Lynx and go to the same page. Now look at how much text there really was. Usually it will be about a page or so without the need to scroll at all.

Doing this opened my eyes to just how monumentally bad modern web design has gotten at conveying information.
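
If you want to put a number on it rather than eyeball it in Lynx, a crude proxy is the ratio of visible text to the HTML shipped. A rough sketch using only the standard library (it counts all visible text as "content", so it still flatters pages full of chrome):

  from html.parser import HTMLParser
  from urllib.request import urlopen

  class TextExtractor(HTMLParser):
      # Collect visible text, skipping script and style blocks.
      def __init__(self):
          super().__init__()
          self.chunks, self._skip = [], 0
      def handle_starttag(self, tag, attrs):
          if tag in ("script", "style"):
              self._skip += 1
      def handle_endtag(self, tag):
          if tag in ("script", "style") and self._skip:
              self._skip -= 1
      def handle_data(self, data):
          if not self._skip and data.strip():
              self.chunks.append(data.strip())

  def text_density(url):
      html = urlopen(url).read().decode("utf-8", errors="replace")
      parser = TextExtractor()
      parser.feed(html)
      return len(" ".join(parser.chunks)) / max(len(html), 1)

  # Tends to be a small fraction on a modern news site, far higher on a plain page.
  print(text_density("https://example.com"))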


That site is like setting browser zoom level to 500%, but without nice large text sizes. Ugh.


> I don't know why designers feel the need to use such large margins, padding and whitespace everywhere.

Probably because they are designing for mobile first?


Yes, mobile-first design has greatly accelerated the decline of usability in interfaces, but that decline also began before mobile-first really became a thing.

All I know is that usability keeps getting worse as time goes on. Where is the floor?


> Where is the floor?

You could try to cram it into an Apple Watch?


That's still good compared to the chat-only interfaces of the future.


And I thought Next-Next-Finish was bad...


mIRC is very powerful, but most '90s Windows power-users needed some explanation of how to use it, whereas Slack's design language makes it install-and-play for even basic users.

Mobile design isn't just a response to the limitations of its hardware; as technologies become more prevalent, intuitiveness to increasingly inexperienced users has to be prioritised over everything else and this is a major trend that dates right back to the beginning of the computer industry.

Usability is much better today than it was in the past; it's just no longer targeted at us.


I currently ship a web app which, among many things, contains an 'excel grid clone'. My earlier team collaborators, who by now have jumped ship to other, greener pastures, styled the grid in 'that modern way', so on a huge ultra-HD monitor you could view maybe 9 columns at a time, if you squeezed them a bit :-/. Soon after the CSS guru was gone, I changed that grid CSS to instead honor the spirit of Excel, and it is now possible to view more than 9 columns of data, and more than 7 rows :-/. I still try, in vain, to fathom the mindset that would make you turn 'excel' into a bottleship-scrolling-microscope-hell.


I've seen at least half a dozen websites do a redesign with these horrible features... I really do wish for the web the way it was 15 years ago.


Yep, god forbid you can see more than the first 10 characters of a filename - it's a homage to the old 8.3 limitation! And your files were created 'about 7 months ago', not on April 17, 2023. If you need more details than that, you are probably a lowly work-slave.


> Information density has decreased even as screens have gotten bigger and LCDs have higher and higher resolutions.

I hate this trend. This is the largest reason why the reddit redesign sucked (besides the worse performance).


It's very sad. I agree that the aesthetic/artistic sense of UX designers is absolutely trumping usability in very many modern applications.

I always point to Wikipedia as something that has great UX, though I think some of their recent choices have worsened usability - like making their table of contents a hamburger dropdown.


> aesthetics/artistic sense of UX designers is absoutely trumping usability in very many modern applications

Indeed. And I get totally stupefied by how much freedom and authority they are given relative to the other people who are supposed to provide input for the project.

My stock broker did that. They had a web app that didn't just work -- it was acknowledged as very good. Fully customizable, you could open many sub-windows and monitor an arbitrary number of quotes depending on your strategy. The screen could turn into a mess, but it was your mess, and that's ok as long as it made sense to you, the user. I used their web app for more than a decade.

Last year they replaced it by something they have the guts to call "the enhanced experience". That fuming pile of sh*t enforces a certain geometry for the screen that makes it absolutely clear that the kids who designed that garbage never traded stocks in their lives.

But what scares me most is: what the hell did the managers who approved that crap have in their minds?


Part of it is that UX/UI designers are hired based on their portfolio. This imposes a bias towards those who can wow their interviewers with flashy mockups/workflows in Figma.

If it comes down to a choice between someone whose design looks like it came out of Win98 but has the information density of a terminal, and someone whose design looks cool but has the information density of a picture book, most teams pick the latter. Those same people go on to design the applications that get shipped.

The OP is right in that we lost the ability to design something like mIRC - the people doing the hiring value form over function.


> This imposes a bias towards those who can wow their interviews with flashy mockups/workflows on figma.

Sounds a hell of a lot like leetcode-optimized interviewees...


I think a major part of the problem with UX design is that a lot of people enter the field thinking it's art-adjacent, because they want to do something similar to art.

They often lose sight of the goal of usability in pursuit of their own aesthetic sensibilities. A lot of UX designers hate that the most usable designs are actually some of the ugliest/plainest.


Yup.

Don Norman got a lot of blowback from designers when he published The Psychology of Everyday Things (later retitled The Design of Everyday Things).

He even wrote a sort of “let me explain” follow-up book, called Emotional Design.

But I am actually going through exactly this, right now. I just got off a Zoom call, with our designer, because the first batch of test users couldn’t figure out that there are action buttons.

Many designers want users to admire the UX; not use it.


Does anyone have a better idea for interviewing designers then?


> Like making their table of contents a hamburger dropdown

I still go hunting for the TOC, get confused, and take a second to remember they hid it. Like, daily. It’s a couple extra seconds but it’s all the damn time.


The trend seems to be to make everything into appliances - masses of individual ad-hoc tasks, all with their own idiosyncratic logic and no way of interoperating or composing them to accomplish anything but the exact task intended.

There's an App for That. Except when there isn't, it Just Can't Be Done.

P.S. Gnome 3 is way ahead of Blender in usability. Blender is nowadays very usable, but it's still not great. Having a billion buttons shoved away in menu hierarchies is not power; it's more akin to having a billion idiosyncratic Apps.

Edit: Want to add that Blender usability keeps on getting better. The increased use of the node interface is making Blender a lot more usable and powerful. Being able to connect any two ports is something that would require a combinatorial explosion of buttons and menus.



