The Decline of Usability (datagubbe.se)
996 points by arexxbifs on April 17, 2020 | 695 comments



Ubuntu got worse at 18.04. Logging in on desktop now requires "swiping up" with the mouse to get the password box. The "swiping" thing is to avoid problems with unwanted activation when the device is in your pocket. It's totally inappropriate to desktops.

Then there's icon mania. I've recently converted from Blender 2.79 to Blender 2.82. Lots of new icons. They dim, they change color, they disappear as modes change, and there are at least seven toolbars of icons. Some are resizable. Many icons were moved as part of a redesign of rendering. You can't Google an icon. "Where did ??? go" is a big part of using Blender 2.82. Blender fanboys argue for using keyboard shortcuts instead. The keyboard shortcut guide is 13 pages.

Recently I was using "The Gimp", the GNU replacement for Photoshop, and I couldn't find most of the icons in the toolbar. Turns out that if the toolbar is in the main window (which is optional), and it's too tall to fit, you can't get at the remaining icons. You have to resize the toolbar to give it more space. Can't scroll. There's no visual indication of overflow. It just looks like half the icons are missing.

(I have no idea what's happening in Windows 10 land. I have one remaining Windows machine, running Windows 7.)


> Logging in on desktop now requires "swiping up" with the mouse to get the password box. The "swiping" thing is to avoid problems with unwanted activation when the device is in your pocket. It's totally inappropriate to desktops.

Anecdote: The first time this happened, I had no idea why it wasn't working and naturally started clicking on things and pressing buttons to try to get it to do the thing. I thereby discovered that you can get the password prompt by pressing Enter.

Having used it this way for two years now, your description of this behavior is the first time I'm learning that it is also possible to do it by dragging the mouse upwards. The discoverability of this behavior apparently does not exist -- I assume if pressing Enter hadn't worked I would have had to use a different device to look it up on the internet.


> Anecdote: The first time this happened, I had no idea why it wasn't working and naturally started clicking on things and pressing buttons to try to get it to do the thing. I thereby discovered that you can get the password prompt by pressing Enter.

One more anecdote. I ran into this screen for the first time when I got out my laptop to demonstrate something to a student. I didn't know what to do and started doing random things, trying to figure out what had happened, until the student interrupted my attempts, took the mouse from my hand and swiped with it. I felt old and stupid. It's like 20 years ago, when I taught my parents to use a standard UI. Only now it's me who needs the help.

I asked; she had never seen Ubuntu before, and nevertheless she managed it better than me. I think I'm growing old and just can't keep up with the pace of change.


I think it's the familiarity which is actually hurting you there. If you come up to a device which, as far as you know, is alien technology, you don't know if it should behave like a Mac or an iPhone, so the thing that works on an iPhone feels like more of a valid possibility. If you come up to it knowing exactly what kind of device it is and that it isn't at all like an iPhone, it doesn't.

Because any hope of guessing it comes from knowing that phones do that, so the less like a phone you know it to be, the less likely you are to try that. Notice that even on phones it isn't intuitive if you've never done it before. If you tap the lock screen on Android it shows a message that says "Swipe up to unlock" -- because how else would you know that for the first time? But at least there it serves a purpose.


I still remember the first time my young niece sat at my traditional desktop computer a few years ago. She must have been about five or six years old. She immediately started using her hands to try and interact with the screen and was utterly confused as to why nothing was happening.


Like Scotty in that iconic scene..."Hello, computer...........Hello, computer......" someone hands him the mouse, which he holds like a microphone "Hello, computer...."


For those unfamiliar, from the second greatest Star Trek film: https://m.youtube.com/watch?v=xaVgRj2e5_s


> I think I'm growing old and just can't keep up with the pace of change

On the other hand, when my iPhone would suddenly connect with a caller but neither party could hear the other, and redialing didn't help and turning it off/on didn't work, I remembered the ancient trick of the "cold boot", which resolved the problem.


You put an iPhone in the freezer and booted it, and that fixed it? Wow. I thought that was just for saving spinning disk drives and preserving the contents of RAM for an attack.


A cold boot means shutting off all power to the device. The normal "off" button on the iPhone puts it in standby mode, not off. This is so it can still listen for calls.


I tried to power off an iPhone today for 5 minutes, failed, and just gave up. Why, Apple, why would you make this simple action so damn obscure?


My iPhone 10 locked up hard a few days ago and I couldn’t get it to turn off using the standard method.

After some digging, I found a new trick that I guess is implemented at a lower level: press and release volume up, then volume down, then press and hold the main button until it powers off.


Wait, is it not press-and-hold power like on Android?


No, on the newer iPhones press and hold simply brings up Siri. Gotta press and hold main button and either volume button simultaneously. If that doesn’t work you should be able to use the trick I describe above.


you have to press the "power" button and a volume button afaik


In my experience Ubuntu shows a little animation of up arrows, and maybe even says "swipe to start" if you look long enough at the login screen.

It is pretty confusing the first time, and annoying every time after that. I didn't consciously know about the "Enter" trick before now.


Windows 10 has the same mechanism; she might have learnt it there first and just applied it here.


I’ve been actively using computers since CP/M. On Windows, which I use daily, I just randomly click, press keys and shake the mouse until I get a password prompt to log in. Since there is a slight delay before anything happens after each action, I’ve never been patient enough to figure out what exactly it is that works.


The period between wake-up-from-sleep and getting a usable desktop has been a UX nightmare for a long time. There are so many awful things and question marks that happen in the flow:

1. What do you need to do to invoke a wakeup? Press a key? Are there any keys that don't wake the machine? Move the mouse? Click a mouse button?

2. Multiple monitors: During the wakeup sequence, you first have one display turn on, then you think you can log in but surprise! Another display turns on, and the main display briefly flickers off. For some reason, after 25+ years, display driver writers still can't figure out how to turn a second display on without blanking and horsing around with the first display.

3. Once the displays are on, some systems require some kind of extra input to actually get a login prompt to display. Is it a mouse click? A drag? Keyboard action? Who knows?

4. Some systems allow you to type your password as soon as the computer wakes. But there is some random delay between when you invoke wakeup and when it can accept input. What this usually means is I start typing my password, the computer only recognizes the last N characters and rejects it, I wait, then type it again.

These are some irritating bugs that affect everyone who uses a PC every time they log in. Yet OS vendors choose to spend their development time making more hamburger menus and adding Angry Birds to the Start menu.


Windows 10 allows a normal mouse click as well, unless Microsoft changed that afterwards.


It still does. When you do a mouse click, the lock screen moves up very fast - the animation is also a hint for the future. If you have a touch-enabled device and touch-press the lockscreen, it jumps up a little and falls back down, suggesting the swipe movement. On top of that, just typing on your keyboard works.

So MS has managed to make an interface that works just the same on desktops, laptops and touch-enabled devices, and the UX isn't bad on any of them.


Generally yes, except I seem to run into an annoying bug: if you start typing your password too quickly after the initial unlock keypress, the password input control sometimes decides, when it initialises, to select everything you've managed to type so far, so your next key press overwrites everything. Plus, at home I've kept the full lock screen because I actually like those random landscape pictures, but at work they've set the computers to boot directly into the password entry, which of course rather plays havoc with your muscle memory.


It hasn't changed.


Windows 10 has the same stupid UX, they likely learned it there. It also supports hitting Enter, or any other key. I don't know if Ubuntu supports other keys too, I use Xubuntu to avoid these exact pointless changes.


I don't know about Ubuntu but on Windows you can also just click with the mouse to open it, no need to swipe up.


I’m 20 and it took me literally minutes to realise I needed to swipe up. I thought the computer had frozen.


It's not your fault.

It's the shitty ergonomics that have been pervading software UI design for several decades now.

From the huge number of responses to this article, it's clear the software industry has a very major problem. The question I ask is: why haven't user complaints been successful in stopping these irresponsible cowboys?

Seems nothing can stop them.


> The question I ask is: why haven't user complaints been successful in stopping these irresponsible cowboys?

Most users don't complain, because technology is magic to them and they have no point of reference; they assume things must be the way they are for a reason. Of the remaining group that does complain, many do it ineffectively (e.g. complaining to friends or on discussion boards that aren't frequented by the relevant devs). And the rest just aren't listened to. It's easy to dismiss a few voices as "not experts", especially today, when everyone puts telemetry in their software (see below), and doubly so when users' opinions are in opposition to the business model.

Finally, the software market isn't all that competitive (especially on the SaaS end), so users are most often put in a "take it or leave it" situation, where there's no way to "vote with your wallet" because there's no option on the market that you could vote for.

The problem with telemetry is that measuring the right things and interpreting the results is hard, and it's way too easy to use the data to justify whatever the vendor (in particular, their UX or marketing team) already thinks. A common example is a feature that's been hidden increasingly deeply in the app on each UI update, and finally removed on the grounds that "telemetry says people aren't using it". Another common example is "low use" of features that are critical for end users but used only every few sessions.
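To make that failure mode concrete, here's a toy Python calculation (all numbers invented purely for illustration): a feature every user depends on, but only touches once every ~20 sessions, looks "rarely used" next to a habitual feature if you rank features by raw event counts.

    users = 10_000
    sessions_per_user = 200

    # A habitual feature: fires every session for every user.
    autosave_events = users * sessions_per_user

    # Critical to everyone's workflow, but only needed about once
    # every 20 sessions.
    export_events = users * (sessions_per_user // 20)

    print(f"autosave events: {autosave_events:>12,}")
    print(f"export events:   {export_events:>12,}")
    print(f"ratio: {autosave_events / export_events:.0f}x")
    # By raw counts "export" looks 20x less important, yet removing it
    # would break a workflow that 100% of users rely on.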


I would like to add the use of "taps" as an engagement metric to your list of misuses of telemetry. There used to be a rule of thumb in UI design that important actions should be as few clicks away as possible. Measuring engagement through taps encourages the opposite.

I also don't like things measuring "dwell time" when scrolling, as it encourages attention-grabbing gimmicks and rewards things that are confusing as well as things that are useful.


An organizational problem seems to be that UX decisions are owned by the UX team, who tend to be extremely tribal in their opinions.

As in, if you are not a UX professional, your opinion is inconsequential.

See: replies on most Chrome UX feature requests over the last decade


> Seems nothing can stop them.

Of course. People complain, and the developers say "fork it and change it yourself" or "use what we give you".

The people who can't just suffer through it. The ones who know enough use something else.

I log in at a Linux tty, and startx starts dwm. No fancy login screen for me.


I don't care how "user friendly" you think you are, not being able to log in is an absolute fail. Even getty has a better user interface; imagine that: a regression all the way back into the 70s.

This is why I don't touch GUIs from the major binary distros or gnome3 with a 10 foot pole. If I can avoid it I don't ever install anything from those projects.


I have avoided gnome as much as I can since the gnome2 days. The entire project is rife with UX decisions that leave a bad taste in my mouth.

[0] is the example that always comes to mind. I guess this made sense to somebody at the time, but it adds overhead to a process that was simple before, and it isn't enabled just for "Enterprise" deployments; it's just dumped on the user to figure out how to configure screensaver hack settings by creating or modifying a theme.

[0] https://wiki.gnome.org/Attic/GnomeScreensaver/FrequentlyAske...


The problem with gnome is that they don't seem to validate their ideas at all. As a result, gnome users are directly subjected to the unfiltered whims of gnome's UI "designers".

Instead of spending the substantial donations they received[1] on who knows what, the GNOME foundation should have spent some of it conducting proper focus groups.

1: https://www.gnome.org/news/2018/05/anonymous-donor-pledges-1...


I don't know if that's true. GNOME 3 is just a dark enlightenment experiment, like an attempt at neomonarchy or the Empire from Elite. Instead of feature creep and user accommodation to Windows idioms, they slash and burn despotically to ease developer burden and try to flesh out their own vision. Do you use Dropbox? Too bad. :3 There's extensions that break but whatever we don't care fuck the tray.

I think it's an interesting and worthwhile experimental path; I just wish it wasn't the "default" as much as it is. But I also feel that way about Ubuntu. And Windows. xD


This page isn't new enough to include more recent failures.

One of my least favourite was when it was not possible to configure the screensaver timeout to never turn off the display. IIRC you had a choice of several fixed times, from 5 minutes to 4 hours, but no "Never" option.

Not useful for systems which display information and are infrequently interacted with. That use case was completely ignored, and for no good reason.


> This page isn't new enough to include more recent failures.

oh no doubt, I have another comment from 4+ years ago about the same topic https://news.ycombinator.com/item?id=10883631 and even then it was ancient history IIRC

man, just looking at that page again reminded me that Windows Registry for Linux^W^W^W^W gconf exists.


Between this and https://stopthemingmy.app/ and the various other systemd/freedesktop “anti-hacker” initiatives, I’ve been finding Linux to be more and more becoming the opposite of the operating system I’ve used for the last 20 years.


Man, there is not enough space in this comment box, or time, for all the criticism that link deserves.

>Icon Themes can change icon metaphors, leading to interfaces with icons that don’t express what the developer intended.

Icons were never sufficient metaphors to start with, which is why we have text labels.

>Changing an app’s icon denies the developer the possibility to control their brand.

What does this even actually mean?

>User Help and Documentation are similarly useless if UI elements on your system are different from the ones described in the documentation.

This is only true if the user is completing an action that is solely based on clicking an icon with no text which we have already established is bad.

>The problem we’re facing is the expectation that apps can be arbitrarily restyled without manual work, which is and has always been an illusion.

Why has this worked generally fine in lots of ecosystems including gnome?

>If you like to tinker with your own system, that’s fine with us.

Earlier discussion seemed to suggest that lots of GNOME developers were in fact not fine with this, because it hurt GNOME's "brand identity".

>Changing third-party apps without any QA is reckless, and would be unacceptable on any other platform.

Reckless?

> we urge you to find ways to do this without taking away our agency

Your agency?

> Just because our apps use GTK that does not mean we’re ok with them being changed from under us.

Nobody cares if you are OK with it.


Still, I try to avoid GUI stuff and freedesktop stuff as much as possible (I do use a window manager, but not GNOME or KDE or whatever), and to write better programs when possible, but a few things can't be avoided. I don't use any desktop environment at all. The web browser is one thing that isn't so good. Really, many things should be moved out of the web browser and made to work with simply nc, curl, etc. And some things work better as local programs, others as remote ones, with whatever protocol is applicable, e.g. IRC, NNTP, SMTP, etc., please.


It's really just the popular distros that are following Red Hat. If you build an OS from scratch or use something like Alpine or Gentoo, it's not so bad.


A neat idea that I hadn’t thought of until your comment:

Because it’s now possible to run multiple VMs at once (containers, etc) perhaps it’s time to run a simple, minimal, admin friendly hacker vm inside Ubuntu desktop?

Let Ubuntu configure all that it needs to get a good functional machine out of the box (working sleep mode for laptops, WiFi management, GPU support, systemd if that’s what it wants.) I then deploy the minimal VM I actually want to poke around with inside that installation.

This is pretty much what many people do in macOS. Apple’s OS supports the bare metal; Vagrant / VirtualBox gives me my tractably scrutable dev environment.

It’s not a particularly groundbreaking concept but it might cheer me up a bit when battling with the volatility of user-facing Linux distributions.


> Because it’s now possible to run multiple VMs at once (containers, etc) perhaps it’s time to run a simple, minimal, admin friendly hacker vm inside Ubuntu desktop?

> Let Ubuntu configure all that it needs to get a good functional machine out of the box (working sleep mode for laptops, WiFi management, GPU support, systemd if that’s what it wants.) I then deploy the minimal VM I actually want to poke around with inside that installation.

If there's anyone like me here they might be happy to know that KDE Neon exists and is something like:

- Stable Ubuntu base.

- Infinitely (almost) customizable KDE on top.

- And (IMO unlike Kubuntu) sane defaults.


Thanks for the tip about KDE Neon. I use Kubuntu, but it definitely misses the mark.


Just being pedantic: containers aren't VMs. Containers use the native Linux kernel namespacing and isolation facilities to run multiple applications in a somewhat segregated way.


sounds a little bit like https://www.qubes-os.org/intro/


Gnome and KDE are both worse than useless - They poison other, useful projects.

There is never going to be a unified GUI for Linux; that requires a dictator. KDE tried to provide the carrot of development-ease, Gnome tried to generate some reality distortion, but nobody cared. Carrots don't work. As far as I'm concerned, the experiment is over and it is time to embrace the chaos.

Now, this is easy for me to say, I'm mostly a command-line person anyway, and have spent most of my working life dealing with horrible UI. But it does have a lot of implications for Linux that I think a lot of people are not ready to accept.


Honestly the problem with all software is people trying to “innovate” too much. They made this thing called a book once upon a time and those have worked for centuries. Same thing with UIs: the stacking window managers from Windows 95 and XP work well, so why change them?


"Honestly the problem with all software is people trying to “innovate” too much."

You are spot on, and your 'book analogy' is perfect. If it works perfectly don't change it — that is unless an innovation arrives that offers a significant improvement and that's just as easy to use.

Unfortunately, most so-called UI improvements over the last 20 or so years are not improvements at all, in fact many have been quite regressive. They've annoyed millions of users who've collectively wasted millions of hours relearning what they already knew (and in the end nothing was added by way of new productivity)—and that doesn't include the developer's 'lost' time developing these so-called improvements. It's time that would otherwise have been much better spent fixing bugs, providing security improvements and or developing software for altogether new applications that we've not seen before.

The question I keep asking over and over again is what exactly are the causes behind all this useless 'over innovation'. Why is it done on such a huge scale and with such utter predictability?

Is it marketing's wish for something new? Are developers deliberately trying to find work for themselves or to keep their jobs or what?

It seems to me that many a PhD could be earned researching the psychological underpinnings of why so many are prepared to waste so much money and human effort continuing to develop software that basically adds nothing to improve or advance the human condition.

In fact, it's such an enormous problem that it should be at the core of Computer Science research.


> Why is ('over innovation') done on such a huge scale and with such utter predictability?

Promotion & NIH management syndrome.

New shiny gets a promotion. Fixing a niche bug in a decades-old stable system does not.

And by the time all the new bugs you've introduced are found, you'll have a new job somewhere else.

So essentially, project managers' bosses not pushing back with a hard "Why should we change this?"


GNOME, MATE, Pantheon, XFCE, KDE, Deepin, UKUI, LXQt etc. created an unmaintainable mess of competing forks _while all using stacking window managers_. It's maddening how similar they all are to each other. Someone should build a dating site where understaffed Linux projects can find a matching project to merge with.


Well, it’s great to experiment. And GNOME 2, for example, worked really great. I guess I am thinking more in the realm of things like “force touch” gestures, multi-touch swipes and such. They could be useful as an added bonus for power users, but I think the traditional paradigm for the OS should work by default: 1) on desktop, single/double click, drag and drop, tooltips, mouse wheel; 2) on mobile, quick tap, tap and hold, basic swipe gestures (on mobile these work well but are sometimes not intuitive).

I’m probably missing some stuff, but I think people ought to at least be able to “feel” their way around a UI. Lately there’s been so much push for minimalism, like omitting scroll bars and such, that it gets confusing.

But, again that experimentation will root out what works and doesn’t. And new devices like VR of course have yet to be discovered paradigms.


Before codexes – the kind of book we use today – scrolls had worked for centuries.

> the stacking window managers from Windows 95 and XP work well, so why change them?

To get something that works better.


> To get something that works better.

Despite all evidence to the contrary.


"Experiments are bad. We've tried them once, didn't work"


> embrace the chaos

Well said. Is your machine shop stocked by a single brand of tools all in the same color, or is it a mix of bits and pieces accumulated, rebuilt, repainted, hacked, begged-borrowed-and-stolen over the course of your development as an engineer?

A free software Unix workstation is exactly the same. It’s supposed to look untidy. It’s a tool shed.

Apologies if I’ve touched a nerve with the Festool crowd with my analogy.


"There is never going to be a unified GUI for Linux; that requires a dictator."

Agreed, but I can never get to the bottom of the reason why developers do not provide alternative UI interfaces (shells) so that the user can select what he/she wants. This would save the user much time relearning the new UI (not to mention a lot of unnecessary cursing and swearing).

For example, Microsoft substantially changes the UI with every new version of Windows—often seemingly without good reason or user wishes. This has been so annoying that in recent times we've seen Ivo Beltchev's remarkable program Classic Shell used by millions to overcome the problem of MS's novel UIs.

Classic Shell demonstrates that it's not that difficult to have multiple UIs which can be selected at the user's will or desire (in fact, given what it is, it has turned out to be one of the most reliable programs I've ever come across—I've never had it fault).

It seems to me that if developers feel that they have an absolute need to tinker or stuff around with the UI then they should also have at least one fallback position which ought to be the basic IBM CUA (Common User Access) standard as everyone already knows how to use it. If you can't remember what the CUA looks like then just think Windows 2000 (it's pretty close).


> Agreed, but I can never get to the bottom of the reason why developers do not provide alternative UI interfaces (shells) so that the user can select what he/she wants.

It's because everybody wants you to use their thing and not some other thing. If people have a choice then some people will choose something else.

This is especially true when the choice is to continue using the traditional interface everybody is already familiar with, because that's what most everybody wants in any case where the traditional interface is not literally on fire. Even in that case, what people generally want is for you to take the traditional interface, address the "on fire" parts and leave everything else the way it is.

Good change is good, but good change is hard, especially in stable systems that have already been optimized for years. Change for the sake of change is much more common, but then you have to force feed it to people to get anyone to use it because they rightfully don't want it.


> embrace the chaos

Linux was all about chaos and herding cats until just a few years ago.

It's the "standardisation at all costs" brigade who have killed the goose that laid the golden eggs. It's now far worse than Windows in many aspects. Freedesktop and GNOME deserve the lion's share of the blame, but RedHat, Debian and many others enabled them to achieve this.


Linux GUIs have always been worse than Windows as far as I remember (going back to the mid 90s).


That's very subjective, and not at all related to the point I was making. It wasn't about the niceness of GUIs.

Over the last decade, we have experienced a sharp loss of control and had certain entities become almost absolute dictators over how Linux systems are permitted to be run and used.

Linux started out quite clunky and unpolished. It could be made polished if you wanted that. But nothing was mandatory. Now that's changed. A modern mainstream Linux distribution gives you about the same control over your system that Windows provides. In some cases, even less. Given its roots in personal freedom, ultimate flexibility, and use as glue that could slot into all sorts of diverse uses, I find the current state of Linux to be a nauseating turn-off.

And I say that as someone who has used Linux for 24 years, and used to be an absolute Linux fanatic.


People have had similar complaints for a very long time: https://www.itworld.com/article/2795788/dumbing-down-linux.h...


True. In that article though, they thought that the commercialisation wouldn't seriously affect true free distributions like Debian. Shame that did not turn out to be the case. The fear of even slight differences has effectively forced or coerced everyone to toe the RedHat line, even when severely detrimental. What we lost was any semblance of true independence.


I'd be fine with a "UI dictatorship" or standardization if it gave us better UI and UX. GNOME's "dictatorship" has only brought bad experiences and inappropriate interfaces.


I actually like Gnome 3 a lot. I feel like it’s the first DE I’ve tried that I could interact with with the appropriate mix of 90% keyboard 10% touchpad on a laptop.


"This is why I don't touch GUIs from the major binary distros or gnome3 with a 10 foot pole"

Exactly, but what perplexes me is why these issues that are so obvious to us are not obvious to them. Why do they think so differently from normal users?


This behaviour has been removed in the next GNOME release.


I dislike it too, but doesn’t Windows 10 do the same thing?


Windows 10 is even worse. It swipes up for you when you press a key, but it won't pass that key on to the password box. So you have to press a key and then type your password. At least with gnome you can just type your password and it works as expected...


It also requires you to wait until the animation has finished. I regularly lose the first character or two of my password, because I start typing too soon, while it’s still animating away


iOS has been this way for a couple versions, and I can't imagine how it ever passed testing.

Animations that block user input are the sort of stupidity that becomes evil in its own right. There was even a calculator bug caused by this. This is a massive failure at the management level; somebody actually codes these things, but the fact that it's not caught anywhere before shipping shows that the wrong people are in charge.


It's the sin of losing the user input.

Don't make your user repeat something twice :)


I do not have this experience on the lock screen (on Ubuntu 18.04 at least) — just typing the password will put the full password into the box once the animation is gone.

I do not log-in frequently enough to remember how it behaves on log-in (even my laptop has ~90 days of uptime).

FWIW, moving away from GNOME 2.x to either Unity or GNOME 3 was a hard move to swallow, though in all honesty, Unity was better (though pretty buggy and laggy until the end)!


When Ubuntu removed GNOME 2, I looked at Unity, got really unhappy with it and installed GNOME 3. ... Within a few minutes, I started to think that Unity wasn't THAT bad after all, and went back to it.

Now that it is gone, I'm using Xfce, which seems to be the last decent desktop environment.


Another one to add:

Besides waiting for the animation, in Windows 10, if you type the password fast enough, the first character gets selected and the second character you type replaces it.

This happens frequently.


It's not just the logon screen. In many apps if you type CTRL+O followed by a file name to bring up the File Open dialog and populate the name field then the application frequently loses the first few characters of the file name. Or type CTRL+T followed by text. This opens a new tab but the text appears in the old tab if you don't pause.

These things used to work reliably. I think most of the problems are caused by introducing asynchronicity into apps without thinking about how it affects keyboard input. Keyboard input should always behave like the app is single-threaded.
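A minimal sketch of that principle in Tkinter (illustrative only, not how any of the apps above are actually written): keystrokes that arrive before the target field exists get buffered and replayed instead of silently dropped.

    import tkinter as tk

    class App(tk.Tk):
        def __init__(self):
            super().__init__()
            self.pending = []   # keys typed while the "dialog" is still loading
            self.entry = None
            self.bind("<Key>", self.on_key)
            # Simulate an asynchronously constructed dialog: the entry
            # field only appears one second after the window opens.
            self.after(1000, self.create_entry)

        def on_key(self, event):
            if self.entry is None and event.char:
                self.pending.append(event.char)   # buffer, don't drop
                return "break"

        def create_entry(self):
            self.entry = tk.Entry(self)
            self.entry.pack(padx=10, pady=10)
            self.entry.focus_set()
            for ch in self.pending:   # replay buffered keys in order
                self.entry.insert(tk.END, ch)
            self.pending.clear()

    App().mainloop()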


Application developers have ceased to care about input focus for keyboard entry.

Here's an example I encounter whenever I use Microsoft Teams at work. I go to "Add Contact", and the entire screen becomes a modal entry box into which I have to enter a name. There's a single entry field on the screen. It's not in focus, even though that's the sole action that I can perform at this time. I have to explicitly select the entry field with the mouse and then type. It's such a basic usability failure, I really do wonder what both the application developers and the testers are actually doing here. This used to be a staple of good interface design for efficient and intuitive use.
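The fix for this particular class of bug is usually a single call. A minimal sketch in Tkinter, with a hypothetical "Add Contact" dialog (not Teams' actual code): the lone entry field gets keyboard focus the moment the modal opens.

    import tkinter as tk

    result = {}

    root = tk.Tk()
    root.withdraw()   # no main window needed for this demo

    dialog = tk.Toplevel(root)
    dialog.title("Add Contact")
    tk.Label(dialog, text="Name:").pack(padx=10, pady=(10, 0))
    entry = tk.Entry(dialog, width=40)
    entry.pack(padx=10, pady=10)

    def submit(event=None):
        result["name"] = entry.get()   # read before the widget is destroyed
        dialog.destroy()

    entry.bind("<Return>", submit)
    entry.focus_set()    # the one line this complaint is about
    dialog.grab_set()    # make the dialog modal
    root.wait_window(dialog)

    print("Entered:", result.get("name", ""))
    root.destroy()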


It is the same in Chrome or Firefox: an empty window with only a URL bar, and you need to type in the URL bar to write something. Writing software is hard. Using your brain while writing software is even harder.


Learn this habit: press backspace a couple times first, to wake up the screen and show the password input.


Lol, you've just described such a beautiful regression...


On my Windows 10, if I press a key nothing happens. My theory is that something else has taken the focus away from the "login app". If I press Alt-Tab, then suddenly another key will wake up the login app and give me a password prompt.


I always just press a CTRL key on both Windows 10 and Ubuntu.


Windows seems to respond to almost any input to display the password box.


Probably, but Windows is terrible at everything UX so that doesn't mean much.


Windows 10 with Windows Hello does face recognition logon without any need to swipe or press key combinations. On a touchscreen, it responds to a swipe upwards. On a machine with a keyboard it responds to apparently any mouse click or keypress.

Presumably GNOME copied it from Windows, because otherwise how did the same idea arrive in multiple distinct projects simultaneously? Windows has always had Ctrl+Alt+Del to log on; Ubuntu hasn't had a precedent of having to do something to get to the logon prompt, IIRC.


Windows hello is such a nice experience (at least just for local login), but I will never use it because I don't trust Microsoft with my data.


"the info that identifies your face, iris, or fingerprint never leaves your device." - https://support.microsoft.com/en-us/help/4468253/windows-10-...


> "the info that identifies your face, iris, or fingerprint never leaves your device." - https://support.microsoft.com/en-us/help/4468253/windows-10-....

Of course not.


Of course not what?


> I thereby discovered that you can get the password prompt by pressing Enter.

This is the same with Win10. I was super annoyed that they added one more step to a very frequent action, for no benefit on a PC. Hopefully I don’t have to use Win10 too much, but this is symptomatic of the mobilification of computers.


You can just start typing the password and it will input it correctly, the Enter is not actually needed (at least on 19.10)


Gimp is a great example. If you look up screenshots from the late 90s of gimp 1.0 you think

"Hey wow, that looks pretty great! I know where the buttons are, I can quickly scan them and it's clear what they do! It isn't a grey on grey tiny soup, they are distinct and clear, this is great. When is this version shipping? It fixes everything!"

Apparently almost everyone agrees, but somehow we're still going the wrong way. What's going on here? Why aren't we in control of this?


What's crazy is Gimp has always felt user-hostile to me. Loads of other people share this complaint. With every new release, there are people recommending giving it a second/third/fifty-third shot saying "They finally made it easy to use this time!"

At this point it feels like a prank that's been going on for a quarter century.


"What's crazy is Gimp has always felt user-hostile to me."

You're not wrong, I wish you were but you're not. By any measure GIMP is a dog of a program. I wish it weren't so as I stopped upgrading my various Adobe products some years ago, but alas it is. It would take me as long as this 'The Decline of Usability' essay to give an authoritative explanation but I'll attempt to illustrate with a few examples:

1. The way the controls work is awkward: increasing or decreasing, say, 'saturation' is not as intuitive as it is in Photoshop, and the dynamics (increase/decrease etc.) just aren't as smooth as they ought to be. Sliders stick and don't respond immediately, which is very distracting when you're trying to watch some attribute in your picture trend one way or the other.

2. Most previews are unacceptably slow; they really are a pain to use.

3. The latest versions have the 'Fade' function removed altogether. I use 'Fade' all the time and I don't like being told by some arrogant GIMP programmer that the "function never worked properly in the first place, and anyway you should use the proper/correct XYZ method". You see this type of shitty arrogance from programmers all the time[1].

4. GIMP won't let you set your favourite image format as your default type; you're forced into GIMP's own XCF format and then have to export your image into, say, your required .JPG format from there. (I understand the reason for this, but there ought to be an option to override it; if the GIMP developers were smart they'd provide options for various durations, for instance 'this session only'.)

5. As others have mentioned, there are icon and menu issues; menu items aren't arranged logically or consistently.

Essentially, GIMP's operational ergonomics are terrible and there's been precious little effort from GIMP's developers to correct them. (GIMP's so tedious to use that I still use my ancient copy of Photoshop for most of the work; I only use GIMP for some special function that's not in Photoshop.)

[1] The trouble is most programmers program for themselves—not end users, so they don't see any reason to think like end users do. (I said almost the same thing several days ago in my response to Microsoft's enforcing single spaces between sentences in MS Office https://news.ycombinator.com/item?id=22858129 .) It doesn't seem to matter whether it's commercial software such as Microsoft's Office, or open software such as the GIMP or LibreOffice, etc., they do things their way, not the way users want or are already familiar with.

Commercial software is often tempered by commercial reality (keeping compatibility etc.) but even then that's not always so (take Windows Metro UI or Windows 10 for instance, any reasonable user would have to agree they're first-class stuff-ups). That said, GIMP is about the worst out there.

"At this point it feels like a prank that's been going on for a quarter century."

Right again! GIMP developers seem not only to be hostile towards ordinary users, but there's been a long-standing bloody-mindedness among them that's persisted for decades; effectively they don't even consider ordinary users within their schema. Nothing says this better than the lack of information about future versions, milestones, etc. All we ever get are vague comments that don't change much from one decade to the next.

Perhaps it would be best for all concerned if GIMP's developers actually embedded this message in its installation program:

"GIMP is our play toy—it's prototype software for our own use and experimenting—it's NOT for normal user use. You may use it as is but please do not expect it to work like other imaging software and do not bother us with feedback for we'll just ignore you. You have been warned".


I've only hacked on GIMP a small bit, so I'm by no means an authority, but the sad truth is that GIMP is an extremely small project driven mostly by volunteers. There is a desire to correct these issues but there are many, many other issues to prioritize them against. It's been a very infrequent occurrence for them to have the resources to work with UI/UX designers. I'm not trying to dismiss your complaints, but I think you would see some better results if you didn't wait for someone else to fix it for you. My only suggestion is that GIMP actually has very complete Python/Scheme scripting interfaces that can be used to make a lot of little UI tweaks, although the APIs are not well-documented.
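As a taste of what such a tweak looks like, here is a Python-Fu sketch written from memory against the GIMP 2.x gimpfu API, so treat the exact names and the menu path as assumptions to check against your install. It registers a one-click "Scale to 50%" menu entry:

    from gimpfu import *

    def scale_to_half(image, drawable):
        # Scale the whole image to half its current size in one step.
        pdb.gimp_image_scale(image, image.width // 2, image.height // 2)
        pdb.gimp_displays_flush()

    register(
        "python-fu-scale-to-half",      # procedure name
        "Scale image to 50%",           # blurb
        "Scale the current image to half its width and height",
        "example", "example", "2020",   # author, copyright, date
        "<Image>/Image/Scale to 50%",   # menu location
        "*",                            # works on any image type
        [],                             # no extra input parameters
        [],                             # no return values
        scale_to_half)

    main()

Dropped into the user's plug-ins folder (and made executable), it shows up as a regular menu item, which is exactly the kind of small workflow fix that doesn't require waiting for upstream.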

>they do things their way, not the way users want or are already familiar with

In my experience, a program that has never had a feature removed (or unintentionally broken) is an exception, not the rule. It takes a lot of effort to keep things working over the years, and if there is no will to maintain that, then those things will disappear.


If you read my post underneath then you'll appreciate I'm quite in sympathy with your views. What I said above I did with my imaging hat on.

Users show precious little allegiance to any app when it balks them or when they cannot find an easy way to do what they want (run a help desk for a week and you'll get that message loud and clear).


it’s almost as if good design is not free


Tragically, with few exceptions, the evidence confirms that's true. I say this with great reluctance as I'm a diehard open/free software advocate (ideologically, I'd almost align myself with RMS [Stallman] on such matters, that's pretty hard-line).

As I see it, there are great swathes of poor and substandard software on the market that shouldn't be there except for the fact that there's either no suitable alternative, or if reasonably good alternatives do exist then they're just too expensive for ordinary people to use (i.e.: such software isn't in widespread use). I base this (a) on my own long experience where I encounter serious bugs and limitations in both commercial and open source software as day-to-day occurrences; and (b), data I've gathered from a multitude of other reports of users' similar experiences.

(Note: I cannot possibly cover this huge subject or do it reasonable justice here as just too involved, even if I gave a précised list of headings/topics it wouldn't be very helpful so I can only make a few general points.)

1. The software profession has been in a chronic crisis for decades. This isn't just my opinion; many consider it fact. For starters, I'd suggest you read the report in the September 1994 edition of Scientific American titled Software's Chronic Crisis, Wayt Gibbs, pp 86-95: https://www.researchgate.net/publication/247573088_Software'... [PDF]. (If this link doesn't work, then a search will find many more references to it.)

1.1 In short, this article is now nearly 26 years old but it's still essentially the quintessential summary on problems with software and the software industry generally (right, not much has changed in the high-level sense since then, that's the relevant point here). In essence, it says or strongly implies:

(a) 'Software engineering' really isn't yet a true engineering profession such as chemical, civil and electrical engineering are and the reasons for this are:

(b) As a profession, 'software engineering' is immature; it 'equates' [my word] to where the chemical profession was ca. 1800 [article's ref.] (unlike most other engineering professions, at best it's only existed about a third to a quarter of the time of the others).

(c) As such, it hasn't yet developed mandatory standards and consistent procedures and methodologies for doing things—even basic things, which by now, ought to be procedural. For instance, all qualified civil engineers would be able to calculate/analyze static loadings on trusses and specify the correct grades of steel, etc. for any given job or circumstance. Essentially, such calculations would be consistent across the profession due to a multitude of country and international legally-mandated standards which, to ensure safety, are enforceable at law. Such standards have been in place for many decades. Whilst the 'Software Profession' does have standards, essentially none are legally enforceable. Can you imagine Microsoft being fined for, say, not following the W3C HTML standard in Windows/Internet Explorer to the letter? Right, in this regard, software standards and regulations are an almighty joke!

(d) Unlike other engineering professions, software engineers aren't required by law to be qualified to a certain educational standard [that their employers may require it is irrelevant], nor are they actually licensed to practice as such. When 'Software engineering' eventually becomes a true profession then these requirements will almost certainly be prerequisites for all practitioners.

(e) With no agreed work procedures or mandated work methodologies across the profession 'software engineers' are essentially 'undisciplined'. As such, the SciAm article posits that software programmers work more akin the way of artists than that of professional engineers.

(As a person who has worked in both IT/software and in an engineering profession for years, I have to agree with Wayt Gibbs' assessment. There are practices that are generally acceptable in software engineering which, if I attempted to equate them to an equivalent circumstance with my engineering hat on, would likely land me in court (even if no one was killed or injured by what I'd done). Here, the rules, the structure—the whole ethos—is different, and both ethics and law play much stronger roles than they do in software-land.)

2. You may well argue that even though Computing Science is not as old as the other engineering professions, it, nevertheless, is based on solid mathematics and engineering foundations. I fully agree with this statement. However, without enforceable standards and licensed/qualified software practitioners, the industry is nothing other than just 'Wild West' engineering—as we've already seen, in software just about anything goes—thus the quality or standard of software at best is only that of the programmer or his/her employer.

3. As a result, the quality of product across the industry is hugely variable. For example, take bloatware: compare the biggest bloatware O/S program ever written, MS Windows, with the tiny, fast and highly efficient KolibriOS, built in assembler https://kolibrios.org/en/ (here I'm referring to methodology rather than functions — we can debate this later).

4. The commercial software industry hides behind the fact that its software is compiled, thus its source code is hidden from public view and external scrutiny. Its argument is that this is necessary to protect its so-called intellectual property. Others would argue that in the past loss of IP was never really the main issue, as manufacturing processes were essentially open—even up until very recent times. Sure, it could be argued that some manufacturing had secrets [such as Coca Cola's formula, which really is only secret from the public, not its competitors], but industrial secrets are normally concerned with (and applied to) the actual manufacturing process rather than the content or parts of the finished product. That's why up until the very recent past most manufacturers were only too happy to provide users with detailed handbooks and schematics; for protection from copies they always relied on copyright and patent law (and for many, many decades this protection process worked just fine). It's a farce to say that commercial 'open source' isn't viable if it's open. Tragically, this is one of the biggest con jobs the software industry has gotten away with—it's conned millions into believing this nonsense. The truer reason is that the industry couldn't believe its luck when it found that compilers hid code well — a fact that it then used opportunistically to its advantage. (Likely the only real damage that would be done by opening its source is the embarrassment it'd suffer when others saw the terrible standard of its lousy, buggy code.)

4.1 'Software engineering' won't become a true profession until this 'hiding under compilation' nexus is broken. There are too many things that can go wrong with closed software—at one end we've unintentional bugs that cannot be checked by third parties, at the other we've security, spyware and privacy issues that can be and which are regularly abused; and there's also the easy possibility of major corruption—for instance, the Volkswagen scandal.

5. Back to your comment about 'good design not being free'. I'm very cognizant of the major resource problems that free and open source software developers face. That said, we shouldn't pretend that they don't exist, nor should we deliberately hide them. I accept that what we do about it is an extremely difficult problem to solve. My own suggestion to up the standard of open software is a sort of halfway house where cooperatives of programmers would be paid reasonably acceptable remuneration for their contribution to these major open projects. In turn, there would be a small nominal fee (say $5 to $20) levied on large-scale open software programs such as GIMP, LibreOffice, ReactOS etc. to ensure that development could go ahead at a reasonable pace (the projects otherwise would be revenue neutral—there would be no profits given to third parties).

Let me finish by saying that whilst commercial software has the edge over much free/open software (for example, MS Office still has the edge over LibreOffice), that edge is small and I believe the latter can catch up if the 'funding/resource' paradigm is changed just marginally. Much commercial software such as MS Office is really a horrible, bloated, spaghetti-code-like mess, and with better funding it wouldn't take a huge effort for dedicated open software programmers to beat their sloppy, secretive counterparts at their own game. After all, for many commercial programmers, programming is just a job; on the other hand open software aficionados are usually doing it for the love of it—and that's a true strategic advantage.

I firmly believe that for open software to really take off it has to be as good as and preferably better than its commercial equivalent. Moreover, I believe this is both possible and necessary. We never want a repeat of what happened in Munich where Microsoft was able to oust Linux and LibreOffice. With Munich, had it been possible to actually demonstrate that the open code was substantially and technically superior to that of Microsoft's products, then in any ensuing legal battle Microsoft would have had to lose. Unfortunately that was not possible, so the political decision held.

One thing is for certain, we urgently need to raise the standard of software generally and it seems highly unlikely that we can do so with the way the industry is currently structured.


> When 'Software engineering' eventually becomes a true profession then these requirements will almost certainly be prerequisites for all practitioners.

This wouldn't work. Most software isn't life-and-death. That's a big difference from bridge engineering, nuclear engineering, and aeronautical engineering.

If you're hiring someone to do Python scripting, there's little point insisting they have a grounding in formal methods and critical-systems software development. You could hire a formal methods PhD for the job, but what's the point? The barrier-to-entry is low for software work. Overall this is probably a good thing. Perhaps more software should be regulated the way avionics software is, but this approach certainly can't be applied to all software work.

If your country insisted you become a chartered software engineer before you could even build a blog, your country would simply be removing itself from the global software-development marketplace.

> compare the biggest bloatware O/S program ever written, MS Windows, with the tiny, fast and highly efficient KolibriOS

I broadly agree, but in defence of Windows, Kolibri is doing only a fraction of what Windows does. Does Kolibri even implement ASLR? One can build a bare-bones web-server in a few lines, but that doesn't put Apache out of a job.

> My own suggestion to up the standard of open software is a sort of halfway house where cooperatives of programmers would be paid reasonably acceptable remuneration for their contribution to these major open projects. In turn, there would be a small nominal fee (say $5 to $20) levied on large-scale open software programs such as GIMP

This doesn't work. It means a company can't adopt the software at scale without implementing licence-tracking, which is just the kind of hassle Free and Open Source software avoids. If I can't fork the software without payment or uncertainty, it's not Free even in the loosest possible sense.

The way things are currently is far from ideal, but we still have excellent Free and Open Source software like the Linux kernel and PostgreSQL.

> open software aficionados are usually doing it for the love of it—and that's a true strategic advantage.

Agree that this can be an advantage. Some FOSS projects are known for their focus on technical excellence. That said, the same can be said of some commercial software companies, like id Software.

> One thing is for certain, we urgently need to raise the standard of software generally and it seems highly unlikely that we can do so with the way the industry is currently structured.

Software firms today are doing a good job of making money. If the market rewards regressions in UI design, and using 50x the memory you really need (thanks Electron), what good would it do to regulate things?

Apparently most people don't care about bloat, and they prefer a pretty UI over a good one. That doesn't strike me as the sort of thing you can tackle with regulation.


Krita has matured nicely over the years, and last time I used it I found it quite easy to use.

UI is hard. It got replaced by "UX", but nobody agrees what that really is, so it boils down to whatever impracticality designers dream up. When it was still UI, there was real research, data backing up claims of improvement, and laid-down rules to enforce some consistency. That became "unfashionable" and was removed.


It was a hard, structured science: Hick's law, conservation of complexity, GOMS analysis, Fitts's law ... we've tossed those decades of hard work in the garbage can because somebody in marketing didn't like the colors.
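For anyone who hasn't met them, these are the textbook forms of two of those laws; the a and b constants are fitted empirically, and the numbers below are made up purely for illustration.

    import math

    def fitts_movement_time(a, b, distance, width):
        """Fitts's law (Shannon formulation): time to hit a target grows
        with distance to the target and shrinks with target width."""
        return a + b * math.log2(distance / width + 1)

    def hicks_decision_time(a, b, n_choices):
        """Hick's law: decision time grows with the log of the number of
        equally likely choices."""
        return a + b * math.log2(n_choices + 1)

    # A big toolbar button near the pointer vs. a small icon far away:
    print(fitts_movement_time(0.1, 0.15, distance=100, width=64))
    print(fitts_movement_time(0.1, 0.15, distance=600, width=16))

    # A menu with 5 entries vs. a toolbar soup with 70 icons:
    print(hicks_decision_time(0.2, 0.15, 5))
    print(hicks_decision_time(0.2, 0.15, 70))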

It was like during the VCR wars of the 80s, when consumers wanted the most features yet the fewest buttons. Then they complained about how you basically had to play Rachmaninoff on their sleek minimal interface to set the clock.

We need to be like other industries; "that's too bad". Seatbelts are inconvenient? "that's too bad". You don't want to stay home during a pandemic because the weather's nice? "that's too bad" ... you want a bunch of incompatible UX goals that leads to trash? "That's too bad".

Sometimes the opinion of an uninformed public shouldn't matter. We don't go to a doctor and pass around ballots to the other people in the waiting room to democratically decide on a diagnosis. Knowing what to not listen to is important.


The uninformed public is pretty vocal about what is going on in most apps.

The UX propellerheads come back with statistics from user telemetry that always agree with them.

UX is the problem — designing “experiences” geared around an 80/20 approach is substituted for the harder task of building tools that work.


Having rarely seen a VCR that wasn't flashing "12:00", I came to the conclusion that a clock was simply feature bloat.


One of the arguments given in the Supreme Court by Mr. Rogers, to permit the VCR to be legally owned (I know how crazy this sounds, but it's real), was time-shifting programming ... which requires a functioning clock.

Fred Rogers, 1984: "I have always felt that with the advent of all of this new technology that allows people to tape the 'Neighborhood' off-the-air ... they then become much more active in the programming of their family’s television life. Very frankly, I am opposed to people being programmed by others. My whole approach in broadcasting has always been ‘You are an important person just the way you are. You can make healthy decisions’ ... I just feel that anything that allows a person to be more active in the control of his or her life, in a healthy way, is important."

see https://en.wikipedia.org/wiki/Sony_Corp._of_America_v._Unive....

There are definitely non-crazy ways of doing this ... but it requires what, at first blush, would appear to be a complicated interface.


Just when I thought I couldn’t be thankful enough to Fred Rogers for saving public television in America as he transformed what it could be, now I find out that he’s also pivotal in standing up for fair use rights and inadvertently supported the analog hole before the digital one was even invented to be closed. He truly was a person who stood up for his beliefs on behalf of the fellow person and an example of goodness in the world without pandering or compromise.

https://en.wikipedia.org/wiki/Analog_hole


"https://en.wikipedia.org/wiki/Sony_Corp._of_America_v._Unive.... "

Ha, reading that link now one feels a delicious sense of irony. Imagine how Sony would react today seeing that it has become one of the biggest purveyors of video/movie content. ;-)


Time shifting doesn't require a clock, if you are home to start recording, but a clock helps with programmatic recording.


That's an extreme nitpick. The whole argument for time shifting is to be able to record something while you're not at home.


Let’s hear them out in the spirit of debate. I’m curious how this hypothetical VCR is programmed, what the remote and interface might look like. I might even like it, or at least want parts of it as concepts to integrate with other things that already exist. Could shake loose some ideas.

Honestly I don’t know why VCRs are so hard to program, but all of those buttons can't help. I might be getting old, but the Roku remote seems about right as far as complexity in the device goes, and I can see how a nice interface with relative timekeeping could do what you need without a clock per se. Inertial guidance for timekeeping? A self-winding DVR?


I remember what setting the time on a VCR was like and it's interesting to think of all the assumed knowledge you actually need in order to have it seem intuitive.

Two things off the top of my head: 1) knowing that a blinking number indicates some kind of selection, and more generally 2) seeing the UI as a glimpse into a larger abstract space that can be navigated. Or in other words, having used computers for many years, what my parents saw as just a blinking word, I would see as a menu where up/down/left/right had meaning.

There's also some more abstract thinking involved there - for me it's very spatial so I think of it as being able to keep track of your place in this 'abstract map'. You had to learn some non-obvious things like "if the down button stops working, it probably means I'm at the 'bottom' of my available choices" or "if I start seeing the same choices again, it means I have 'wrapped around' and in a logical sense I'm back to where I've been before".

I actually remember thinking something like this as a child when we got a VCR. I think I remember that realization that "this is a menu I can explore". The exploratory skills you pick up when you have to figure out how to use something technical generalize really well to other technical things.

TL;DR: I think VCRs were hard to program because the limited UI of buttons and a tiny screen meant that you actually needed a fairly built-up mental model of the process to keep track of what you were doing.


I really like how you brought to the fore this concept of intuition as it relates to UI/UX in technology products. There’s a certain cachet in being able to operate technical devices. There’s similar social capital to be gained in creating useful results using technology. If only the embedded intuition of operating the device worked with the goal of creating useful results with the device.

The biggest “what were they thinking” part for me is why they crammed a whole GUI of config options and menus into a tiny clock display, when almost every use case for a VCR already involves a perfectly workable display much better suited to a GUI: the TV. Later VCRs had on-screen rather than on-device GUIs, but by then institutional momentum was too far along to redesign the remote when they moved the GUI out of the device and onto the screen. Truly a missed opportunity.

I don’t know anyone involved in any VCR product. If I did, I’d be asking them a lot of questions. But I have a hard time thinking they meant to make it so hard. They were probably clapping each other on the back and congratulating each other. They were inventing future ways of using content, and for that they deserve praise. They just sucked at understanding how hard it is for non-experts to put themselves in the mind of experts: someone whose inner mental world has jarringly different contours, and whose mental model of reality may have little to no correspondence with their own.


This is a great observation! The blinking-indicates-editable-via-buttons mode is a mental model you either have or you do not. It is certainly not axiomatic and needs some experimentation to learn. Digital wristwatches with those standard three buttons also relied on this mental model.


Not really, that's more of a cliché. I set the clock on mine to tape programming when I wasn't there, and if anything it was probably easier than setting clock radios and watches now.


I was a child during the heyday of VCR, but I don't think any of my family was aware of timed recording. The whole concept just didn't exist in our lives until Tivo. Non-obvious features plus buying things second hand just meant you never learned everything your stuff did back before you could find manuals online.


My family could afford most things new, we had the manuals. I think my parents read them too, as they knew how to use the timed recording feature. My grandma knew how to use hers!

Many people would keep the manuals near the TV, so they could remind themselves how to use the rarely used features.

The Panasonic VCR we had included a barcode reader in the remote. The printed TV guide had barcodes for each program. This interface was very easy to use -- scan, and press "transmit".

Edit to add a link to an advert: https://www.youtube.com/watch?v=tSGUbE1v5RA -- the sheet of timecodes was necessary if you didn't have a TV guide with them printed, as shown here: http://www.champagnecomedy.com/panasonic-barcodes-saving-you...


"Non-obvious features plus buying things second hand just meant you never learned everything your stuff did back before you could find manuals online."

That's the outrageous point. You shouldn't need a manual to operate an ordinary domestic appliance! If you do, then you automatically know its design is substandard!

(The only reason you should need a manual is for some info or maintenance function that's not normally associated with its user-operated functions.)


Anecdote: a little while after the start of this covid thing, I broke the band on my wristwatch. My first impulse was to run to the store, or order a new one from Amazon. Then I realized: I'm working from home, there are clocks all around me, and I don't need to be anywhere at any particular time (and my computer warns me about upcoming Zoom meetings).

So now my wristwatch is sitting on the desk.


I think of programming the Betamax (VHS's rival) every time I scroll through the interface on 1990s/2000s printers. I built the nested file structure in my head rather than reading it on the screen. It has made navigating the digital world so much more natural (for me).

*The hard part of programming the Beta (and early VHS) for me was getting the family to leave the tuner on the channel I/we wanted to record.


Speaking of feature bloat...

I hate that ovens and microwaves have clocks on them. I don't need two devices in my kitchen to tell time. It's ridiculous since they're usually right next to each other, and most of the time they have different displays. Just because there is an LCD/whatever doesn't mean it always has to display something!

After the latest power outage, at least, my microwave stopped showing the time. The oven still flashed, so I set that time and now have only one clock in my kitchen.

Even my vehicle has two clocks in it, one on the instrument cluster and one on the infotainment system. So stupid!!!


> The oven still flashed,

What's even crazier: increasingly often I've started to encounter ovens that don't work until you set the clock. I.e., if the clock was reset and is blinking, the heater won't turn on. Took me a while to figure that out the first time I saw it.


A lot of ovens have a delayed bake feature that uses the time. I’ve never seen a microwave with that feature, though, and it’s also the less essential device.


If they couldn't make it easier to set I think the clock should have been less prominent. It's necessary if you're doing a scheduled recording.

It's too bad time sync over power lines didn't catch on widely (or broadcast over the radio). It would still be saving everyone from changing their digital clocks during DST.


They tried compromises like VCR Plus+[0]. It was basically a 6 digit code that would be printed next to the show name in places like TV Guide. You would enter the code into your VCR instead of a time, and it would figure out how to record it. I think it still required a working clock, though.

[0] https://en.wikipedia.org/wiki/Video_recorder_scheduling_code


Do you not have a radio clock?

They're common in Europe, on a midrange bedside clock for example, and typical office/school clocks.

I remember we were foiled by one at school, when someone set the clock 15 minutes forward when the teacher wasn't looking. The hands could only move forward, so a few minutes later they started spinning 11½ hours further forward to set the correct time.

https://en.wikipedia.org/wiki/Radio_clock


Interesting. I think I've seen those things, but I've never bought one. I was expecting this tech would be built into microwaves, ovens, and cars by now.


Time over analog TV signals was supported in EIA-608, the standard for sending closed captions and other data in one line of the vertical retrace interval. PBS stations used to send it. Few things used that data.

In the 1990s I encountered a hotel TV with that feature. It had a built-in clock with hands (not on screen), which was also the alarm clock for the room. No one had set it up, and I spent about ten minutes with the remote getting it to find a station with time info and set the clock. Then the "alarm set" function on the remote would work and I could set my wake-up time.


Time codes inside of analog terrestrial NTSC sounds really easy and obvious.

Given that nobody did it, it would appear that even though legally people like Mr. Rogers were making the case for time-shift programming, the industry must have assumed it was a minor use case.


The main reason I mentioned it is that I know I've seen various implementations--they're just not widely adopted. I guess nobody has the business interest to make it all work?

https://en.wikipedia.org/wiki/Extended_Data_Services (NTSC) looks like a 2008 standard and most PBS stations provide "autoclock" time data

https://en.wikipedia.org/wiki/Radio_Data_System (FM radio) I figured this had an implementation considering text has been around for years. Amazingly, I don't think I've ever seen a car stereo use it to set the time!

https://en.wikipedia.org/wiki/Broadband_over_power_lines I know this has been around but has had a lot of hurdles. I figured the current time might be a simpler thing.

The only reliable time-setting tech I've seen integrated is GPS--I'm not 100% sure how time zones work with it, but it does know your location.


https://en.wikipedia.org/wiki/Extended_Data_Services

Autoclock setting was done for VCRs. It just happened much later than the case in question.


> the industry must have assumed it was a minor use case.

You mean the same industry that was trying to make time-shifting (and VCRs in general) illegal?


The problem was that everyone did it once, then lost power at some point, and setting it again fell into the "minor task never important enough to be worth the time" bucket.

If they'd included a backup battery to retain the clock, I suspect it'd have been less of a thing.


In the days before power strips were ubiquitous, my VCR got unplugged whenever I played the Sega. There was no way anyone was setting the clock daily.


I still think that's part of the bad UI =) That setup was bad unless you included a battery or time was likely to set itself.


> If they couldn't make it easier to set I think the clock should have been less prominent. It's necessary if you're doing a scheduled recording.

On the contrary, the clock needs to be super obvious precisely because it's a pain to set. Otherwise you wouldn't notice until your recordings were messed up.


I think context is key. It's only necessary if you have a scheduled recording. So it should only be obvious if you're setting up a scheduled recording or have one queued up. In those cases it should force you to set it, or alert you in an obvious manner that the time is not set.


"Sometimes the opinion of an uninformed public shouldn't matter."

Correct, that's the 2000+ year old axiom of ignoring the lowest common denominator and seeking the best advice available.

That said, if you're designing software for use by users who are 'lowest common denominator' then, a priori, you have to make it to their measure. If they cannot understand what you've done then you've wasted your time.


Hamburger menus are symptomatic of this for me. I spent /way/ too long not understanding this completely new element everybody was suddenly jamming in everywhere.


Agreed, 1000%. I have the traditional Firefox menu bar turned on, but I can't get rid of the lousy hamburger menu. Plus I have ten icons up there, most of which I have no idea what they do. I should probably get rid of them. (When was the last time you used the "Home" icon in a web browser? What is "home", anyway?)

(I just now cleaned it up, although there are some icons you can't get rid of.)


My impression is that modern UX is data-driven alright, it just follows radically different paradigms and goals.

It's not at all anymore about presenting consistent mental models, it's solely about the ease or difficulty with which particular isolated tasks can be performed.

It's also not automatically the goal to make all tasks as easy as possible. Instead, discoverability and "friction" are often deliberately tuned to optimize some higher-level goals, such as retention or conversion rates.

This is why we have dialogs where the highlighted default choice is neither the safe one nor the one expected by the user, but instead the one the company would like the user to take. (E.g. "select all" buttons in GDPR prompts, or "go back" buttons when I want to cancel a subscription.)

You can see that quite often in browsers as well, often even with good intentions: Chrome, for a time, still used to allow installing unsigned extensions but made the process deliberately obscure and in both Chrome and Firefox, options are often deliberately placed into easy or hard to discover locations. (E.g. a toggle on the browser chrome, vs the "settings" screen, vs "about:config", vs group policies)


Data-driven UX seems to put all users in a single bucket.

I will readily admit that in collective number of clicks and screen time, 37-year-old men with advanced degrees in computer science are a super small minority.

But who is the majority then? Who spends the most time on, say, Reddit and YouTube? Children! Yes, people who we know are dramatically cognitively different from adults.

Why does YouTube keep recommending videos I've watched? That's what a child wants! Why does Reddit's redesign look like Nickelodeon?

There isn't one user and one interface that's right for everyone when we're talking about 5 year olds, 50 year olds, and 95 year olds.

We can make them adaptable to the screen; we should also do the work to make them adaptable, at fundamental interaction levels, to the person using the screen.

And not in a clever way, but in a dumb one.

For instance, here's how you could ask YouTube: "We have a few interfaces. Please tell us what you like to watch:

* Cartoons and video games

* Lectures and tutorials

* Other "

And that's it. No more "learning", that's all you need to set the interface and algorithms.

Let's take Wikipedia, it could be broken up into children, public, and scholar. Some articles I'm sure are correct but are way too wonky and academic for me to understand and that's ok. There's nothing to fix, I'm sure it's a great tool for professionals. However, there should be a general public version.


> Let's take Wikipedia, it could be broken up into children, public, and scholar.

"Simple English" does a pretty good job. Obviously it's a mix of children/public but for science/mathematical topics where I'm looking just to verify my basic understanding of something, swapping over to Simple English usually gives me what I was looking for if the main article is immediately going down into technical rabbit holes.


> here's how you could ask YouTube: "We have a few interfaces. Please tell us what you like to watch: [...]

This proposal quickly falls apart because your categories are ill-defined based on your preconceptions. I watch a ton of lectures about video games on Youtube (e.g. speed run breakdowns or game lore theories). Do I choose the "Cartoons and video games" bucket or the "Lectures and tutorials" bucket?


Yeah, it was off the cuff. If you ask a 9-year-old online if they're an adult, some will say "yes". I mean, I guess it's their loss. Maybe a more direct approach is better.

"We've found adults and teens like different parts of youtube and use it differently. We want to make it the best for you. You can switch at any time, but tell us what best describes you:

* I'm an adult

* I'm not an adult.

"

youtube has this "for kids" app which came out after I first started pointing this difference in earnest around 2013, (https://play.google.com/store/apps/details?id=com.google.and...) but it's not right and they clearly still cater their main interface to the habits of children who watch the same video hundreds of times - the insane repetition is a part of learning nuance and subtly in the context of content they don't have to actually pay attention to. It's all about learning the meta, super important. They know what happens, it's the silence in between they're excited about - that's the nature of play.

This app instead silos the kids into a Playskool interface, great for people under 7 or so, but like our playground reform, we've made it completely unappealing for the 8-22 or so demographic (when I was a kid and there were ziplines into a bank of tires, you bet there were 20-year-olds lining up to have a good time on those; we all have a need for play: freedom to err wrapped in relative safety).

Instead, it's data-driven UX for adults and data-driven UX for children - it's about separating the data, not a PTA-acceptable UX for overprotective parents.


The best thing a parent could do is download a set of approved videos and use a local playlist.

The easiest thing to do is just allow them on YouTube with no filter.

The middle ground is the Play app. Weird stuff sometimes gets through, but usually it's more like someone dressed as a pretend princess. The good thing is it's never really a murder scene or something equally horrible (which could pop up on youtube.com).

What would you do as a parent?

I would avoid YouTube, unless you set up the videos yourself, until age 7 or 11. After that it depends on the child.


The one big thing "For Kids" has going for it is the pro-active identity. Rather than feeling like they are missing out by not being an adult, they instead feel like they're picking the thing that's special for them.


> Let's take Wikipedia, it could be broken up into children, public, and scholar. Some articles I'm sure are correct but are way too wonky and academic for me to understand and that's ok. There's nothing to fix, I'm sure it's a great tool for professionals. However, there should be a general public version.

It kinda has this for specific subjects:

https://en.wikipedia.org/wiki/Introduction_to_quantum_mechan...

https://en.wikipedia.org/wiki/Category:Introductory_articles



options are often deliberately placed into easy or hard to discover locations. (E.g. a toggle on the browser chrome, vs the "settings" screen, vs "about:config", vs group policies)

Yes, and Mozilla has become much worse about this. Turning off "Pocket Integration", or "Shared Bookmarks", or "Mozilla Telemetry", or "Auto update" becomes harder in each release.


I mean, at least for the "go back" case, it seems like good sense for any non-reversible action (delete, overwrite, buy, send, etc.) to highlight the option that ensures people who are just mashing their way through prompts without looking at what's going on won't be screwing themselves over by doing something irrevocable they didn't mean to do.

Native macOS apps get to be a bit clever about this, in that there are two kinds of button highlight state per dialog (the "default action" button, which is filled with the OS accent color; and, separately, the button the tab-selection starts off highlighting, which has an outline ring around it). This means there are two keys you can mash, for different results: mashing Enter presses the default-action (i.e. colored) button, which Apple HIG suggests be the "Confirm" option for dangerous cases; while mashing Space selects the initially-selected (i.e. outlined) button, which Apple HIG suggests be the "Cancel" option for dangerous cases. I believe that, in cases where the action isn't irrevocable, Apple HIG suggests the default-action and initially-selected attributes be placed on the same button, so that either mash sequence will activate the button.

I really wish that kind of thinking was put into other systems.
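
For what it's worth, most desktop toolkits at least have the single "default button" half of this. Here is a minimal sketch (assuming GTK 3 via PyGObject, with a made-up delete prompt) of making the safe choice the one Enter activates; it doesn't reproduce the separate Space-vs-Enter distinction, just the safe default:

    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    def confirm_delete(parent=None):
        # Destructive action: make "Cancel" the default so mashing Enter is safe.
        dialog = Gtk.MessageDialog(
            transient_for=parent,
            message_type=Gtk.MessageType.WARNING,
            buttons=Gtk.ButtonsType.NONE,
            text="Delete 42 files permanently?",
        )
        dialog.add_button("Cancel", Gtk.ResponseType.CANCEL)
        dialog.add_button("Delete", Gtk.ResponseType.OK)
        dialog.set_default_response(Gtk.ResponseType.CANCEL)  # Enter == safe choice
        response = dialog.run()
        dialog.destroy()
        return response == Gtk.ResponseType.OK

    if __name__ == "__main__":
        print("deleted" if confirm_delete() else "kept")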


This distinction is used by all CUA-derived GUI toolkits. Unfortunately, by default Windows uses the same outline style for both default and focused buttons, so there is no visual distinction. (There is an alternative button style on Windows that distinguishes between these two states, but it tends to be used to mark buttons as two-state, and anyway it looks distinctly ugly and non-native.)


Windows distinguishes default and focused, it's just a bit subtle. A button in focus has a dotted rectangle around the contents (immediately adjacent to the actual border, which is why it's kinda hard to see). A button that's the default has a thick blue outer border in Win10, and used to have a black border in the classic Win9x theme.

What is different in Win32, however, is that if any button is focused, it is also made the default for as long as focus is on it (or, alternatively - Enter always activates the focused button). Thus, there's no visual state for "focused, not default", because there's no such thing.

The distinction still matters, though, because if you tab away from a button to some other widget that's not a button, the "default" button marker returns back to where it originally was - focus only overrides it temporarily.

This can be conveniently explored in the standard Win32 print dialog (opened e.g. from Notepad), since it has plenty of buttons and other things on it. Just tab through the whole thing once.


And even that concept (the "defaultness" of a button) is IMO wrong: it introduces "modes" of operation -- you have to look to see what the default is before you can press Enter. Which also means you can't have a general expectation of what the Enter key is going to do and when.

There were computer keyboards which distinguished between the key that finishes the current field and the key that, for example, performs the desired action behind the whole dialog. Just as today it is common to expect that Esc will cancel the dialog (or the entry form), there was a key that one knew would "proceed" (GO), independently of which field the cursor was in at the moment. In these operating systems, Enter always did just the unsurprising "finish entering the current input field, skip to the next", and GO signaled the end of that process and the wish to use everything that had been entered up to that point. It's particularly convenient when entering a lot of numerical data on the numeric keypad, where Enter also just moves to the next field.

I think that concept was right, and better than what we have today. Filling in what are basically "forms" (dialogs) in any order and proceeding from any point is a basic task and could have remained less surprising.
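
A toy sketch of that dispatch model (not any real toolkit; the key names and field layout are made up for illustration), just to show how Enter and GO stay unambiguous:

    # Enter only finishes the current field and moves focus on; a dedicated GO
    # key submits the whole form from wherever the cursor happens to be.
    def handle_key(form, key):
        fields = form["fields"]
        if key == "ENTER":                        # never submits, never surprises
            form["focus"] = (form["focus"] + 1) % len(fields)
            return "editing"
        if key == "GO":                           # submit from any field, any time
            return "submitted: " + repr(fields)
        if key == "ESC":                          # cancel from any field, any time
            return "cancelled"
        fields[form["focus"]] += key              # ordinary character: append to field
        return "editing"

    form = {"fields": ["", "", ""], "focus": 0}
    for key in ["4", "2", "ENTER", "7", "GO"]:    # type "42", next field, type "7", submit
        state = handle_key(form, key)
    print(state)                                  # submitted: ['42', '7', '']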


> It's not at all anymore about presenting consistent mental models, it's solely about the ease or difficulty with which particular isolated tasks can be performed.

IOW, following metrics optimizes for local maxima instead of looking at the big picture in a non-zero-sum game. Each task is made easier by itself, but in doing so it creates a model in conflict with everything else, making everyone miserable. Nash would be sad.


Correct. I remember traditional UI, watching users do things behind 1-way mirrors and grinding irritating and inconclusive statistics to try to get to a better interface. This used to be a speciality.

Now all you have to do is stick a bone through your beard and pronounce yourself a "UX Guru" and off you go.


UX to me is finding a compromise between designers who want it as sparse as possible and users who want "homer cars".


Gimp is probably the only software I use somewhat regularly in which I can absolutely never figure out how to do something new, and if I don't do something for more than a month, I have to Google again for how to do it. The level of stupidity in its design surely deserves an award for unusability.


GIMP has had so much powerful functionality while at the same time having an absolutely awful UI for going on 15 years or more now.

And it still hasn't been fixed.

I'm not a big believer in conspiracies, but if there's one I'd not dismiss out of hand, it's that Adobe or some other company has been ensuring that GIMP has never been improved or become a viable replacement for some PS users.

There is obviously a large potential market for a lower cost option for light users of Photoshop who don't want a monthly subscription to Creative Cloud.

Maybe they secretly paid off open source devs to obfuscate the code so much that any potential volunteers would have too much trouble finding a way to re-architect the UI without years of unpaid work.

When I see so many great improvements to complex software released to the community on GitHub, along with the potential for some startup to fork GIMP, fix its UI, and charge some sort of support fee like a lot of companies do with OSS, I just find it very strange that GIMP's UI is still in such bad shape after two decades of constant complaining by users.

It wouldn't surprise me if Microsoft did or does something similar with the OpenOffice code base. So many compatibility and usability problems just seem to languish for decades, while you'd think some company could find a way to make money fixing some of the biggest issues that keep away light users of Office 365 who don't want to pay for subscriptions.


Part of that also comes down to people knowing competitors exist, and to those competitors staying alive. For instance, the other day I downloaded from the Mac App Store a fork of GIMP from way back called Seashore (it's since been updated and barely any GIMP code survives). I've not had much chance to use it yet, but so far it's what it claims to be, simple to use, which is a breath of fresh air after using GIMP. But who knows about it? It's been around for years and I'd not heard of it.

I read an interview with the maintainer[1] and it sounds like he's put in a lot of work but as he says it's a "labour of love". I wish someone was paying him, even surreptitiously!

[1] https://libregraphicsworld.org/blog/entry/meet-seashore-free...


Good designers don't work for free. It's kind of strange that programmers have a culture of working for free (or at least of employers agreeing to open-source contributions), but we do, so gnarly algorithms get open-sourced all the time.


We developers write something for free because we need it. Even if we build something awful we still use it and maybe open source it. Then maybe we improve the UI but it's not our job so we're not good at it.

A designer that can't code will never start a software project so I guess that it's uncommon for them to get involved in one for free.

Then there are developers and designers involved in open source because their companies pay them for that. Gnome's designers are listed at https://wiki.gnome.org/Design#Team_Members

Two of them work at Red Hat, one at Purism, I didn't find any immediate affiliation for the other two.


In addition: If you know a tool well enough that you can design an intuitive UI for it, you don't need it.


> Good designers don't work for free.

Is there any company employing them? Because I find the user interfaces from the '80s, '90s, and even '00s much more usable than today's crap. Remember the Help button? Remember buttons? Why does Windows 10 look the same as, yet behave worse than, Windows 1.0?


I must be the rare person here who finds that Gimp becomes better and nicer with each release.

Yes, I had to explicitly set the way I want the icons to look in settings. It wasn't hard, and one of the bundled sets worked for me.

Maybe it's because I'm a long-time user and I know my way around, and where in the settings to look.

One of the problems of shipping UIs is setting good defaults. Maybe Gimp does not do a great job here; I should try a clean installation.


I'm dating myself, but I really liked late 90s GIMP, where almost everything was available via right click menu. GIMP was simpler then, though.


I thought you were exaggerating but holy moly it's 100% true.

What happened? search for images: "gimp 1.0" vs "gimp 2020". Wow.


Hah, I see that I'll be in for a surprise myself once I upgrade to Ubuntu 20.04.


You can change the UI skin in the options. I've been using GIMP for years and I don't have any major complaints.


This is about usability, so I don't think referring to a setting buried in the options (that you have to know about first) is a valid point.

> I've been using GIMP for years

I think usability to users experienced in the software and to new users are two different things. I believe an important part of usability is discoverability which is probably better judged by new users than by experienced users.


>You can change the UI skin in the options.

Holy cow! There's even the "classic" theme right there. Wish I knew this a year ago.


Yup. Edit -> Preferences -> UI -> Icons -> Legacy. Done.


sure, you can make the icon bar more sensible with some effort, but not as sensible as it was in 1998: https://scorpioncity.com/images/linux/shotgimp.png


Long time Adobe user. I tried Gimp. Shut it down, and merrily went back to my Adobe subscription plan.


I haven't used Gimp in perhaps a decade, but I can't imagine a worse UI than Adobe. At home, I have a free PDF reader, but at work (...when I was going to work...), I had to use Adobe's PDF editor. (Not that I edit the PDFs, I mostly just read them; occasionally a highlighter would be nice.) Ugly huge icons that I don't need and which take up lots of space. And next to nothing in the menu, except ways to turn on icon bars at the top and/or side, hopelessly emsmallening the page I actually want to read.


Long time emacs user. I tried vim. Shut it down, and merrily went back to my motor memory.


I have only used GIMP a handful of times in my life. I recently had to download it to do something beyond MS Paint's abilities. I had a hard time understanding why many things behaved the way they did. I don't remember it being this hard the last time I used it.


Whenever UI/UX is brought up, GIMP inevitably enters the discussion.

I've used it a number of times and do not find it any harder than any other piece of software — doing complex operations where you are not sure what you want to do (or especially, what it's called) is hard, but that's hard in an IDE as well.

I do not do much, but I do not do little with it either — I am perfectly happy with layers, selection tools, simple painting tools and the rudimentary colour correction I may want to do. And one can claim that the hamburger-menu-like approach started with Gimp, fwiw (right click on your image to get to a full menu, though you still had that menu at the top of your image window).

Two things have always been a requirement for proper Gimp use: "virtual desktops" — a dedicated one for Gimp — and IMO, "sloppy focus" (window under the pointer gets the focus right away), but I've been using those since at least 2001 or something when I first saw them on Sun workstations, so I probably never had trouble with extra clicks required to focus between toolbars and other dialogs.

For creating original artwork, I find any graphical approach too limiting — I _do_ want an easy approach of UIs, but I frequently think in terms of proportions and spatial relationships when drawing ("I want this to be roughly twice the size of this other thing and to the left") — I always try to imagine this combined tool that would fit my workflow, but then I remember that I am probably an outlier: I may have been spoiled having done mathematical drawings as a primary schooler in Metafont and later Metapost (for colour, or rather, grayscale :)), and being a developer since 7th grade, where it's hard for you to come to grips with how suboptimal doing precise drawings in any software is (I've done some minor uni work in AutoCAD too).


> Apparently almost everyone agrees

I very much do not.

Gimp used to be horrendous to use. It still has some usability issues, but it's become something I can use without risking my mental health.


The UI changes in Blender 2.80 were exactly to get rid of the awful non-standard non-discoverable UI that has plagued the program from the start. It actually fixes a whole ton of issues that the article complains about! For instance, mouse button assignments are now sane (select on left button, not right) and Ctrl+S brings up an actual save file dialog with pretty unsurprising layout and behaviour (instead of that abomination that replaced the application window contents when pressing F12 and required an enter keypress to save - there was no button). There are many, many more of these changes and they were absolutely necessary.

The unfortunate side effect of this is that grumpy old users that were trained to accept the previous highly idiosyncratic UI started to complain because they have to relearn stuff. But it's worth it. And it opens up blender for more users.


As I recall history, using icons was a main way the rest of the industry tried to copy the usability of the Mac UI, as it conquered a lot of mindshare in the '80s and '90s.

But the Mac almost never had just icons in the ui. There would usually be an icon and a text. With little space you'd revert to text only.

Apple had a team of usability experts. Others... did not. So they just copied something that looked cool and was easy to implement.

That it cut down on internationalization efforts surely didn't hurt either.


The old interfaces (I guess we can say that now, talking about a quarter century ago) tended to have several modes.

The menu bar was always just text.

The toolbars offered several options: large/medium/small/no icons, text/no text.

(Not all of those options were always available.)

This let you progress as a user of a system. When you first experienced it you could use the large icons with text, because it made the things you were searching for stand out. As you learned the icons you could start shrinking them, and eventually remove the text. This opened up the toolbar to fit many more actions (often less frequently used ones). And the tooltip from hovering remained throughout, so in the worst case of an ambiguous (or unknown) icon, you could hover over it and learn what it did. Additionally, you'd often get the shortcut for the action when hovering over the button (or viewing it in the menu bar).

Many contemporary applications don't provide their users with this notion of progression.
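
As a rough illustration of how the classic toolkits exposed that progression, here's a minimal sketch (assuming GTK 3 via PyGObject; the icon names and labels are just examples) where the same toolbar can be flipped between icons-with-text and compact icon-only modes, with tooltips kept throughout:

    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    win = Gtk.Window(title="Toolbar demo")
    toolbar = Gtk.Toolbar()

    # Beginner mode: large icons with text labels.
    toolbar.set_style(Gtk.ToolbarStyle.BOTH)
    toolbar.set_icon_size(Gtk.IconSize.LARGE_TOOLBAR)

    for icon_name, label in [("document-open", "Open"), ("document-save", "Save")]:
        item = Gtk.ToolButton(icon_name=icon_name, label=label)
        item.set_tooltip_text(label)   # hover help stays available in every mode
        toolbar.insert(item, -1)

    # Expert mode would be the same toolbar, just reconfigured:
    # toolbar.set_style(Gtk.ToolbarStyle.ICONS)
    # toolbar.set_icon_size(Gtk.IconSize.SMALL_TOOLBAR)

    box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL)
    box.pack_start(toolbar, False, False, 0)
    win.add(box)
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()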


It really depends on the period. On Windows, Microsoft eventually unified menu bars and tool bars - the widget was called "cool bar" or "rebar", and it was basically a generic container for certain specific kinds of children that organized them into floatable and dockable "bands", with automatic layout: https://docs.microsoft.com/en-us/windows/win32/controls/reba...

The widgets that can be placed on that are buttons with icon & text (either of which could be hidden) that can be regular, toggle, or drop-down; and text and combo boxes. Well, and custom widgets, of course, but the point is that they were different from regular widgets in that they were toolbar-aware. IIRC this all first came as part of IE "common controls" library, even before it was shipped as part of the OS.

So then a top-level menu is just a band with a bunch of buttons with drop-downs that have hidden icons. A regular Win9x-style toolbar is a band with a bunch of buttons with icons but hidden text, and an occasional text box or combo box. And so on.

But the real nifty thing about these is that they could be customized easily, and it was traditional for Win32 apps to expose that to the user. At first it was just about showing/hiding stuff, but Office especially gradually ramped things up to the point where the user could, essentially, construct arbitrary toolbars and menus out of all commands available in the app, assign arbitrary shortcuts to them etc. So if you wanted icons in your main menu, or text-only toolbars, you could have that, too! This wasn't something that regular users would do, but I do recall it not being uncommon for power users of a specific app to really tailor it to themselves.

Visual Studio has it to this day, and takes it to 11 by further allowing to customize context menus through the same UI: https://docs.microsoft.com/en-us/visualstudio/ide/how-to-cus...


I get the point about extreme customisability, but I still think the "rebar" is ugly as fuck and inconsistent (a non-uniform look, a salad of different types of elements). As a Mac user I didn't have to put up with it and was disgusted when I first saw it.


What was particularly non-uniform about it? Most toolbars looked very similar in most apps, because there were certain UI conventions, similar to main menus. Once customized, sure, it's no longer uniform, but that's the whole point.


Oh, the joy of using an application that acknowledges your subtle interactions, such as hovering over an unknown button, like a friendly old-time barber who knows just how to cut your hair. No explanations needed, he knows just what you need.


> Others... did not.

While there is no disputing Windows copied heavily from the Mac UI, the actual feel of that interface was also strongly influenced by IBM Common User Access (CUA).

https://en.wikipedia.org/wiki/IBM_Common_User_Access

Not only did Windows try to follow those CUA rules, Microsoft encouraged Windows applications to also follow those rules.

That meant that from a user perspective, the Windows experience was fairly consistent irrespective of which application was being used.


The "Slide up" is gone in next LTS due in a few days.

https://www.omgubuntu.co.uk/2019/10/ubuntu-20-04-release-fea...


About half way through the article: “The new lock screen is easier to use, no longer requiring you to ‘slide up’ to reveal the password field (which now sits atop a blurred version of your desktop wallpaper):”

Great! That UI was horrific. Hitting a key like spacebar didn’t unlock.

Aside: I completely broke the Ubuntu login screen yesterday. I did `apt remove evolution*`. Unfortunately the login screen depends on evolution-data-server, so I couldn’t login from lock screen, and after reboot it dropped me to TTY!! Gnome is just getting crazy - it would be like Windows login depending on MS Outlook! Gnome is a big ball of interdependencies, becoming more like Windows. I get it, but I don’t like it. Edit: FYI: fixed in TTY3 (ctrl-alt-F3) by `apt install --reinstall ubuntu-desktop` from memory.


apt automatically removing dependencies by default is such a trap. I much prefer pacman's behaviour of refusing to remove a package that breaks dependencies unless explicitly told to do so.


apt will not automatically remove dependencies either.

I suspect the OP already had the ubuntu-desktop package removed for some other reason, and that there was no direct dependency on evolution-data-server for gdm: apt will only remove dependencies which no other still-installed package depends on. That might still mean a packaging bug (but at least on 18.04, attempting to remove evolution-data-server prompts me that it's going to remove gdm3 too — sure, it's short and easily missed; attempting to remove evolution does not attempt to remove evolution-data-server, since a bunch of other stuff, like gnome-calendar, depends on it).

In any case, apt will prompt you about all the packages you are about to remove (unless you pass "-y(es)" to it).


And it is a GNOME-ism; Ubuntu went out of their way to modify it.


That swiping up thing makes me so angry. What an absolute waste of effort. No way to disable it. If this is what they're doing in the most visible bits of the system, what on earth is happening in the rest?


Just hit the Escape key. You can also just start typing your password. I often type my password and press enter before the monitor is even awake.


...just to then realize that the computer was still logged in and the focus was on a chat window, and it was only the screen that had been in power saving mode. :-}


And this is why I bring my machine out of sleep by tapping the shift key.


Your post needs a trigger warning. My heart is racing.


Good thing no one on HN would ever reuse a password, right?


Good news, it is gone in the next version.


That slide-up thing has to be some ultra-clownish design. Who proposed this, who reviewed and approved this, and on what basis? Is there no easy option to get rid of that irritant?


I remember when I was playing around with building an HTPC for my car in the mid-2000s and got around to trying to put a frontend skin on the touchscreen. I found all the existing skins completely awful because they were wall-to-wall arbitrary icons, in a setting where I needed at-a-glance functionality.

Eventually made my own, and the key element? No icons at all. Just text - potentially small text - on the buttons. Turns out, being something you spend your entire life reading, text works great - within a sparse set, you can resolve exactly what a word is from letter shapes even if you can't directly read it, and if you don't know what something is you can just read it.

No one ever had any problem using it, even if they'd never seen it before, because every button said exactly what it did.


Yes, but now you have to write gobs of internationalization code. ;)

Actually, that sounds completely sane.


"You can't have the same UX for both handheld touchscreen devices and M+K laptops/desktops" seems like an absolutely 101 level no brainer to me. How are big projects/companies still attempting to make stuff like this happen?


> How are big projects/companies still attempting to make stuff like this happen?

"None of us are as stupid as all of us."


Cost saving?


> You can't Google an icon.

I've been complaining about that since 1983, when an Apple evangelist came to my workplace to show off the Mac's icons. Nobody was able to guess what the box of Kleenex icon was, much to the frustration of the evangelist. Of course, there wasn't a Google then, but how do you look up a picture in a dictionary?

We've reverted to hieroglyphic languages. (Ironically, hieroglyphs evolved over time into phonetic meanings.)


> Ironically, hieroglyphs evolved over time into phonetic meanings.

No, they didn't. The original creators of hieroglyphs knew how to use them to spell phonetically, but they didn't do it that often; they were satisfied with the old system (just as the Chinese are now). It was the job of other people, who actually didn't know how to use hieroglyphs properly, to build a functioning alphabet on top of them. The two systems coexisted for some time, and then hieroglyphs went into decline along with the whole culture that supported them.


The Egyptian and Mayan ones did.


What I wrote was about Egyptian hieroglyphs. I can only repeat that they did not _evolve_ into alphabetic writing. The Semitic alphabet was a fork; alphabetic use by the Egyptians themselves was very marginal. The Mayan system AFAIK was a mixed logo-syllabic one to begin with.


But today's hieroglyphs are corporate logos crossed with app icons (which change every so often to stay "fresh").


>Blender fanboys argue for using keyboard shortcuts instead. The keyboard shortcut guide is 13 pages.

Well to be fair Blender is a professional tool. It is expected that users read the manual and learn the shortcuts, etc. Discoverability is something that should not be optimised for in a tool for professionals like Blender.


That is such a lame, played-out excuse. A real favorite of apologists for shitty UI design.


You are absolutely right. It doesn't hold water once you look at other professional programs. They all use the same math underneath and live or die to a huge degree based on their interface. Blender being free and open source opens up possibilities for people with no access to professional tools, and for researchers, but compared to the commercial tools out there its interface is a mess of inconsistency.


It's really not. Take a tool like after effects. The interface is not obvious, the icons are unlabeled (until you mouseover and get a tooltip). You've got to learn it as you would learn to use any other tool.


Icon mania... and pretty much every single icon is only possible to understand after you have learned what it means.


That's deliberate. The saying "a picture is worth a thousand words" doesn't apply when the picture is 16x16 pixels.

Every single icon that makes sense to you now (the floppy disk, the binoculars...) does so because you learned it a long time ago; it's funny that you can now find YT videos explaining where that weird blue icon for the "save" function comes from.

The images are just a mnemonic device - in the sense that the sign is partly related to the meaning (the binoculars could very well mean "zoom in" in an alternative world). Certainly a stronger connection works better because it helps to remember, but they are not meant to help with "understanding" what the button does.

It is the same deal as with keyboard shortcuts. ctrl+S is Save, but you know that ctrl+V is Paste and it has absolutely nothing to do with spelling.


Actually the floppy disk made perfect sense when I first saw it: "we're going to do something involving the floppy disk, and there's a put-onto arrow superimposed over it" (the load-file icon had a take-off-of arrow). It makes less sense now, but only because computer storage no longer has a single, iconic (heh) form factor.

ZXCV is positional, [C]opy having a nice mnemonic is more of a happy coincidence than a design decision in its own right.


> ZXCV is positional, [C]opy having a nice mnemonic is more of a happy coincidence...

ZXCV is actually half positional/mnemonic, half graphical-as-letters, a bit like old-fashioned smileys: X for cut looks like opened scissors, and V for insert looks like the downward-pointing “insert here” arrow-like mark editors and teachers use(d?) to scribble onto others’ texts.


Even worse: you have to try out each icon. Then the next version replaces many of them, and the entire design, again. Rinse and repeat.


Ubuntu 19 is terrible. I was forced to use KDE (Plasma) instead because they broke so many basic UI concepts (to give you a tiny taste: changing tabs went from Ctrl+Tab to Meta+Tab, the keyboard layout switcher takes 2 seconds or more because it pops up a stupid little window to show the language selected, and many other things just like this I thankfully erased from my mind and now only the frustration remains).


I use the keyboard layout switcher only with the keyboard, which is pretty much instant, so I wonder how it was broken in any of the Ubuntu 19.x releases — that sounds worrying if it carries over to 20.04? (I am on 18.04.)

"Any Ubuntu 19.x" because non-LTS Ubuntu releases come out every six months, so there was Ubuntu 19.04 and Ubuntu 19.10, but never an "Ubuntu 19": they are never as polished or as stable as LTS releases, and are only supported for 9 months, forcing you to update regularly.

If you are looking for a more stable experience and you are not chasing the latest features, you would probably be better served with LTS releases which come out every 2 years (they have hardware updates with newer kernels included every once in a while, so you do not have to worry about using them on latest laptops either).

If you want the most stable route, go with LTS point (.1) release. Eg. I only update my servers when 18.04.1 or 20.04.1 is out.


The language switcher was broken in 18 already: https://bugs.launchpad.net/ubuntu/+source/gnome-shell/+bug/1...

People who switch between several languages just can't use Ubuntu because of this; read some of the testimonials on the bug linked above.

This is just one of the pain points though. There were many.


That's quite interesting — I do not get that behaviour at all, both on newly installed and upgraded systems from older LTSes (3 laptops and a desktop). They are all on 18.04, and I do have at least 2 layouts on all of them (Serbian Cyrillic, Latin and sometimes US English) which I switch between hundreds of times a day.

Of the non-standard settings, I've got "make Caps lock an additional Ctrl" and "each window keeps its own layout" on. The rest, including keyboard switching shortcut (Meta+Space) is using the defaults.

I simply press the shortcut and start typing, and it works as expected — if I find the time, I'll debug further and report on the LP bug, but I just wanted to report that I do not experience any of the problems mentioned.


I just noticed that the above bug refers to switching input methods vs. just keyboard layouts.

Input methods are a separate concept from keyboard layouts (XKB), and I only ever use XKB stuff. Input methods load separate libraries which interpret keycodes, and are commonly generic enough to allow "external" complex input definitions (think Arabic or CJK) — not sure how fast they used to be before, but perhaps combining IM and layout selection is the culprit.


One tip for Blender at least... It's very hard to discover, even if you know it exists, but you can drag out from the side of the main icon bar and turn on text labels as well.


One nice thing about recent Ubuntu is that even though they hide the password box, you can start typing on the unlock screen and your text will be entered into the password box.


Why hide the password box, then, unless you think that users learning to type and have magically it go somewhere invisible is a good UI design…

(Interesting aside: I complained to someone on the Mac Safari team that it was difficult to search open tabs, and he told me that apparently this feature already exists! You go into the tab overview, and…just start typing. A little search bar will pop into appearance in the top right corner. Why it couldn't just be there and have keyboard focus from the start, I have no idea…)


Worse, this creates a bad habit. What if the UI changes and now it's the username that pops up first, not a password box?

So it's a hard-to-discover feature, and a misfeature unless you elect to keep this behavior forever.


Because this only happens on the lock screen when there's no user to be entered/selected. There's no slide-up on the initial login.


Wow, I would’ve never guessed that. Apple has a terrible habit of burying obscure UX features.

It’s a shame really because these undocumented features mean 99% of people WON’T ever use them. Isn’t that counterproductive to engineers?

Why aren’t employees speaking out against this?


macOS hides the password field to encourage people to use Touch ID instead, but Ubuntu probably doesn't even support fingerprint login...


Every mainstream Linux distro supports fingerprint login natively just by virtue of using PAM.


Funnily enough, so does every Mac by virtue of using PAM ;)


Ubuntu does support fingerprint login if you have supported hardware.


Unless slack steals the focus. Happened to me a few weeks back. Then slack gets your password and enter key, and the login screen doesn't.


The irony being that there's a vocal group of GNOME users who complain that newer versions flat-out prevent windows from taking focus, and who install an extension to bring the behavior back.

Can't please everyone I suppose.


It wouldn't be a problem if it were an option which could be configured, I think.


Steam remote streaming did something similar, while on the road with my laptop it presented me with my Kubuntu login screen... so I logged in, fine... and now my computer at home is sitting unlocked and unsecured.


Blender icons are also microscopic on a 1920 x 1080 monitor of usual size. We don't all have astronaut vision.


Windows 10 is the same: there is lag between when the screen with the input box appears and when it accepts input, so if you start typing too quickly, it fails. And don't get me started on their auto-update policy (I agree with automatic updates; I do not agree with them forcefully hijacking my computer with no means to stop it. It's actually what brought me back to Linux as my daily driver. No regrets!)


In Ubuntu 19 I can just type my password and hit enter to log in. In windows 10, I have to hit any key before typing my password and hitting enter to log in.


> Logging in on desktop now requires "swiping up"

No, it required pressing some button to show the password box, just like Windows. If you don't have a keyboard, you can also swipe up instead.


Whenever these discussions come up, I don't know how much of the response is just confusing familiarity with usability. I kid you not, I knew people who bemoaned Python because of tab spacing, and this wasn't 2003, this was maybe a few years ago (2016 or so), which I'd argue is just being unfamiliar with it. C isn't really that intuitive--I still remember learning it at 14 and finding the syntax confusing vs. BASIC--it's just familiar to those who use it.

I do feel that some modern changes are annoying and unnecessary though and the instances I see that increase as I get older. I just always try to check myself and analyze how much of that anguish is just from being used to something or not.

I will finally say the examples here of inconsistency between Evince and mpv are just inexcusable. You can't both break expectations AND be inconsistent; it's like the worst of both worlds.


The author completely failed to mention discoverability. Which is a huge part of usability because it allows _new_ people to sit down at your applications and gradually work up their expertise until they are keyboard wizards (or whatever).

So, no, just having a bunch of magic unlabeled buttons and saying "use the keyboard" isn't good usability. The part that kills me, is that a lot of these applications don't even really have shortcut key guides. So, you don't even know the magic key sequence for something until you discover it by accident.

Worse are the crappy programs that break global keyboard shortcuts by using them for their own purposes. Firefox, for example, seems to manage this on a pretty regular basis. Want to switch tabs? Ctrl-Tab... oh wait, it doesn't work anymore; cross your fingers and hope someone notices and fixes it (which they did).


> a lot of these applications don't even really have shortcut key guides

Not even Windows itself has, anywhere that I could find, a list of all the keyboard shortcuts. I find multiple lists, each with a different random subset of whatever the actual set is.

Sometimes I'll hit wrong keys, and something interesting happens. I don't know what I did, so it's "oh well".


tab spacing isn't just "used to it". It's the "hidden white space has meaning" problem. Copy and paste some code indented with spaces into some that's indented with tabs and if you're unlikely they'll look like the indentation matches but the won't actually match. If you're even more unlucky the code won't just crash it will appear to run but the result will not be what you expected.

You try to get around it by using an editor that shows tabs or an editor that re-indents the code but plenty of editors (notepad, nano, vi, emacs) don't show tabs as different from spaces by default.

https://wiki.c2.com/?SyntacticallySignificantWhitespaceConsi...
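
A minimal sketch of the pitfall (assuming Python 3; the snippets are contrived): two strings of source that render identically in many editors, where the indentation of one quietly mixes in a tab.

    # Both snippets *look* like:
    #   if True:
    #       x = 1
    #       y = 2
    src_spaces = "if True:\n    x = 1\n    y = 2\n"   # both lines indented with four spaces
    src_mixed  = "if True:\n    x = 1\n\ty = 2\n"     # second line indented with a tab

    compile(src_spaces, "<pasted>", "exec")           # compiles fine

    try:
        compile(src_mixed, "<pasted>", "exec")
    except TabError as err:
        # Python 3 refuses indentation whose meaning depends on the tab width;
        # editors that don't visualise tabs give you no hint why.
        print("rejected:", err)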


I've always guessed that the infinite feature creep plaguing modern UIs is caused by salaried visual designers who aren't being given new projects to work on. Once the visual templates are fleshed out, the icon sets are put together, and the palettes completed, there's not all that much to do. Unless you start playing around with NEW icon sets, palettes, mouse-aware buttons, etc., etc... I would be so much happier with a consistent, self-explanatory UI than with anything "sleek" or "dynamic". Those properties, excepting very specific situations, really detract from UX. Visuals improve with user familiarity, not with constant changes. Imagine if the Department of Transportation decided to redesign stop signs to be yellow and triangular with a sleek-looking matte finish... Traffic accident deaths would go up 10x in the first week.


Is there a sane mode or config for vanilla Ubuntu 18.04? I'm considering upgrading from my trusty old 16.04 LTS (both home and office laptops) and I dread the usual pointless UI changes that come with all the reasonable bugfixes/improvements.


20.04 is due out in a few days. I hope it's good. 18.04 has been total crap for me. The lock screen randomly won't authenticate, and I am forced to reboot. My USB dock suddenly started randomly disconnecting and reconnecting, after it worked fine for months.

The "Save/Open" button in the file dialog boxes is in the title bar, which is the dumbest thing I have ever seen. Dialog boxes get tied to windows, so when I try to move the dialog out of the way to see my work, it drags the whole damn window. (Some of this is mentioned in the TFA.) I think a lot of these decisions were Gnome-driven, but still... stick with 16.04.


I think Xubuntu is a good alternative. XFCE doesn't have as much eye candy, but it certainly surprises me far less and usually is very pleasant to work with.


I used Xubuntu for a number of years and it's a great lightweight environment overall. The main problem that I experienced with it is that its handling of hot-plugging multiple displays (especially in between sleep states) has always been poor and crashy.

And I like the cohesiveness and integration of GNOME, although I had to do a hell of a lot of customization to mold it into something I could tolerate.


I've also noticed that XFCE's handling of hotplug monitors leaves much to be desired, and it also cheerfully ignores my preference of not suspending when I close the laptop lid (so I can use only the external monitor and an external keyboard when I have it plugged into my TV). Close the lid under XFCE and "BEEP!" it goes into suspend mode instead of doing nothing.

Fortunately, Cinnamon is just an apt-get away, handles both monitor hotplugging and closing the laptop lid sanely, and works the way I expect a desktop to work. I've settled on Xubuntu+Cinnamon as my go-to when setting up a desktop or laptop.


Can you elaborate on Xubuntu+Cinnamon? That sounds interesting.

I also just noticed that there's an Ubuntu Cinnamon which might be right up my alley as well.


The setup I did was to just install Xubuntu, then:

apt install cinnamon

Once installed, I logged out, and then picked Cinnamon from the session selection menu (the little gear near the upper right corner). It comes right up, though it won't pick up your preferences from XFCE.

I hadn't realized there was now a Cinnamon spin - is that still in testing? It's not on Ubuntu's list of flavors.


I'm on 19.10, and I'm quite disappointed. Exposing all windows as thumbnails is a big regression compared to Unity. It feels heavyweight, and just doesn't work anymore for window navigation. Pressing the app icon on the side bar brings up small, equal-sized window previews on top of each other without spatial information. Consequently I now have tens of open terminal sessions and browser windows. The global menu is gone, wasting precious vertical space. Or it would, if apps such as Firefox didn't change to use stupid hamburger menus. The place where the global menu used to be is now used for displaying the time, which is absurd. What's not gone is the chopped display of the focussed app's name on the left. I frequently make errors on window decorations (might be me being used to Unity, but it's still the case after two months). Search isn't as useful as it used to be, but has a pompous animation. Apps are beginning to use snaps, which I have no use for - all I want is a driver for running the same old programs I've run for the last twenty years. It's not that there's been a boom of new desktop apps for Linux lately.

Installation was rock-solid, though. I only had to install synaptics over libinput (libinput was causing me physical pain because I had to press the touchpad hard all the time, and because of the lack of kinetic scroll).

Seriously considering alternatives.


I was in the same boat and decided to install KDE Plasma. Very, very happy with the result: the dark UI theme is very beautiful and all shortcuts and UI elements work the way "old" UIs did (unlike freaking Ubuntu 18 and 19, which changed everything for no reason).

https://kde.org/plasma-desktop

It was quite easy to install and everything worked out-of-the-box. I just customized some widgets, the dark theme, icon colors and now it looks amazing. Best Linux desktop I've used so far.


I've been on the brink of installing KDE many times over the years. What held me back was that screenshots always displayed bulky window decorations which I have no use for on a 13/14" notebook. Also, for space efficiency, I want global menus, period. I think Ubuntu pre-18 with Unity got a lot of things right - a lean UI getting out of your way and yielding space to actual apps. I can do without the collection of desktop apps for either Gnome or KDE - on Gnome the file manager is particularly anemic, on KDE it seems too Windows-y for my taste. And I don't use Gnome's mail app or video player but rather "best-of-breed" apps like Thunderbird and VLC anyway. So I guess I'll be looking at lightweight DEs/non-"DE"s going forward (but I'll give KDE a try for sure).

It's sad because Gnome has worked very well for me so far, and I've actually seen Ubuntu becoming the mainstream choice for freelance dev teams at many places over the last couple of years. I do feel guilty criticizing other people's hard work, given to me for free, without offering constructive criticism, but as far as I'm concerned Gnome shoots for a future and an audience of desktop users that just isn't there, when IMHO the goal should be to preserve what precious free desktop apps we have and not make app development any more difficult or Gnome-centric.


You can create a Unity-like look and feel in KDE Plasma 5 relatively easily[1]. And although the look and feel is only part of the experience, KDE Plasma 5 is so customizable that I'm sure it could be adapted to fit most workflows.

Despite its bad reputation in its early days, KDE Plasma 5 nowadays is very lightweight. As in, the resource usage is pretty much on par with Xfce.

[1]: https://userbase.kde.org/Plasma/How_to_create_a_Unity-like_l...


Just to make it clear: after I posted this, I did some research and it turns out I am actually using Cinnamon... I installed KDE, and now I have options on the login page to select which "session" I want: Ubuntu, Unity, Plasma, Cinnamon, Wayland... I have Cinnamon by default, and it is indeed my favourite... Plasma seems very cool, but too different from what I am used to... I might play with it later when I have more time though.


Do not upgrade.

Switch.

A lot of Ubuntu software is now (version 19.xx) only available as "snaps". They make some sense for IoT machinery (the user does not control updates, so they are deploy-and-forget) but I do not want to lose control.

Final straw for me. I am test driving Arch now....


Did you install Arch from scratch or are you using Manjaro, etc.? I love Arch but I've never installed it from scratch. It's on my bucket list. I'm currently using Endeavour OS, which is easy-install Arch Linux with way less bloat than Manjaro. It's awesome. I'll never install a *buntu type system again.


Ha ha.

From scratch I think. My aptitude is no use with pacman


I highly recommend trying out other flavours of Ubuntu: Xubuntu (XFCE), Lubuntu (LXDE), Kubuntu (KDE), Ubuntu MATE, Ubuntu Budgie. Next week all should get 20.04 LTS release.


I don’t understand why you need different distributions for different DEs.

The only distro I’ve used past my teenage years is Ubuntu. I alternate stints of maybe 2 years with Windows, 2 years with Ubuntu. The first thing I do after installing the most recent LTS Ubuntu is “apt install spectrwm”.

Spectrwm is not even particularly good — everyone tells me to use xmonad instead — but I know how to get it in usable shape in about half an hour. This after many moons of exclusively using Windows.


They are not different distributions but more like a different edition or "spin" of Ubuntu. They just install a different set of packages.


For KDE you're better off with KDE Neon. Still based on Ubuntu, but that's the official KDE distro if you can speak of one.


No, or at least I haven't found one. I was recently forced to upgrade from 14.04 and tried the default configuration in 18.04 (clean install) for a while before I gave up and installed Unity -- which, thankfully, is in the package manager. That at least got me back to the point where I could configure things critical to my workflow like having a 3x3 workspace switcher. The system has fought me every step of the way though -- especially with things like global hotkeys which have about half a dozen different places they can be configured and it's completely inconsistent what works where.

I've literally spent weeks trying to get back to the level of usability I had on my 14.04 setup -- compiling old/patched versions of software from source because the "improved" versions removed features I depend on or otherwise fucked up the interface (I cannot understand why anyone thought removing typeahead from Nautilus was a good idea!), trying every damned thing I can think of to debug the global hotkey problems (still can't get IME switching to work right reliably... it works for a while after I fiddle with it then just stops working and I have no clue why), and just generally having a bad time.


I've always considered Debian to be the sane, vanilla Ubuntu :-)


On the GNOME login screen you can press [Enter] or [Space], or click (on the latest GNOME, 3.34) or swipe up (on the previous versions) to get to the password entry box, or you can just start typing your password. It's extremely easy and discoverable (because there are so many options, nearly anything you do will take you to the password box). I really don't think there's an issue with it.

This is with stock GNOME (on Arch); I think Ubuntu may ship a skinned / modified / older version of it (which can create UI problems).


So what you're saying is that there's no way to discover it by visual inspection alone; and so my elderly family members could never discover it for fear of breaking something by pressing the wrong button. That's bad UI.


GIMP's UI/UX has been garbage since long before 18.04. Not contradicting anything you're saying, just noting that it might not be the representative example you're looking for.


Ubuntu shell was a disaster and I can't imagine it's gotten better - Mate on Ubuntu is the answer for that.

And Gimp is a mess. Enabling single window mode makes it better.


I had the hidden window problem the first time I used Win7.

It was on one of those tiny netbooks with 1024x600. I think I was trying to add a user, and for the life of me I couldn't figure it out. Turns out the updated add-user control panel at the time put the add button on the lower right of a window with a minimum height > 600px and about 400px of whitespace above it, and no resize/scroll bar, so there wasn't any visual indication that there was more to the window.

But, there is a flip side too. I have ~6kx5k of desktop resolution (portrait mode 5k monitors) and very few applications know how to handle that in any reasonable way. Web pages are frequently the worst, nothing like a column of vertical text consuming 10% of the horizontal resolution of the browser that manages to scroll for a page or two. I guess no one reads newspapers anymore, so the idea of having multiple vertical columns of text is foreign.


Ubuntu got worse because of GNOME


> Ubuntu got worse at 18.04 ..

Give Lubuntu a try.

https://lubuntu.net/


The official site is https://lubuntu.me/ btw.


Are they really mixing themes like this?

https://lubuntu.me/wp-content/uploads/2017/09/video.png

from the front page


LOL nOmegalol.

LXDE is dead, and jankier than Xfce.

Use Xfce (if you like jank) or KDE or Cinnamon.


Lubuntu is no longer on LXDE, having switched to LXQT on version 19.04. LXQT is pretty good from my perspective, minimal but intuitive.


> Logging in on desktop now requires "swiping up"

That's not how it works here. From boot I am presented with a list of users, I click or press enter and type the password. When it's locked/suspended, all I need to do is to start typing.


I recently came across Glimpse [0] which is a fork of Gimp. They state usability as one of the reasons for the fork.

[0] https://glimpse-editor.org/


> Ubuntu got worse at 18.04. Logging in on desktop now requires "swiping up" with the mouse to get the password box.

You can also just start typing the password if you want to unlock the machine.


> Ubuntu got worse at 18.04. Logging in on desktop now requires "swiping up" with the mouse to get the password box.

I don’t know about you, but I just start typing on my keyboard.


Took a while to figure out I could do that. And I only figured it out by accident.


The best thing about Ubuntu desktop usability might be mnemonics, where Alt + an underlined letter is your keyboard shortcut, but it seems they're dying :-(.


> Ubuntu got worse at 18.04. Logging in on desktop now requires "swiping up" with the mouse to get the password box.

Have you tried pressing a key?


Is there any indication that pressing a key is an option? I've been swiping up with the mouse the whole time as well.


> Logging in on desktop now requires "swiping up" with the mouse to get the password box.

What? I just type my password without swiping anything. I think I've upgraded through pretty much every version of Ubuntu for the last few years, I haven't customized it to speak of, and I've always been able to do this on both my desktop and my laptop.


Moreover, specifically I’m running 19.04 and I click anywhere on the screen to open the password. Maybe they changed it over time.


>now requires "swiping up" with the mouse to get the password box.

Oh, that has to be some ultra clownish way. I would punch through the display within a week. Who proposed this, who reviewed & approved this, and on what grounds? Is there no easy option to get rid of that irritant? If none, I have to stay with 16.04 for lots more time.


> (I have no idea what's happening in Windows 10 land. I have one remaining Windows machine, running Windows 7.)

I use a laptop with an external monitor. When the monitor is not connected, Adobe Reader windows have the title bar off the screen. The only way to maximize: Alt + Space, then X.


> Ubuntu got worse at 18.04.

Only if you use the default UI, which I think is an important distinction to make: I use Window Maker and had no regressions.

The ability to choose your own UI is an important strength of Unix, and one which distinguishes it from macOS and Windows.


> The ability to choose your own UI is an important strength of Unix, and one which distinguishes it from macOS and Windows.

It's a strength as well as a curse for Linux distros, not limited to just Ubuntu, since the developer of a GUI program must test whether their app works on your unique setup; if it doesn't run on one distro or your specific configuration out of the box, then that is already a usability issue.

The Linux ecosystem doesn't give the app developer any guarantee that a KDE, GNOME or Xfce built application will consistently work on your setup if the user changes the DE, display manager, etc., so it is harder for the app developer to support accessibility features in whatever DE the user uses, and it's harder for them to track down the source of an issue. This could be anywhere in the Linux desktop stack.

The inability to "choose your on UI" on Windows and macOS guarantees that GUI programs will be consistent with the accessibility and look and feel features in the OS which makes it easier for app developers to test on one type of desktop OS, rather than X * Y * Z configurations of one Linux distro.


> The inability to "choose your on UI" on Windows and macOS guarantees that GUI programs will be consistent with the accessibility and look and feel features in the OS...

Except when it doesn't, which unfortunately is more and more often, to the extent that nowadays it feels it's most of the time. Which is the whole thesis of the web page that this discussion is about to begin with; just go back to it and contemplate that picture of the six window title bars again.

Personally, I blame Microsoft. After successfully having introduced and enforced the CUA guidelines, they themselves were the first to violate them on a large scale in a widely-used application (suite): MS Office’s “themed” interface from around the turn of the century. Sure, there were a bunch of MP3 players and stuff, probably inspired by that “kool” (=utterly unnecessarily weird) Kai thing — but AFAICS those were marginal, and Office was what “legitimised” applying non-standard frippery instead of the simple unadorned standard UI elements.


Ubuntu is a hodgepodge. I was always accidentally maximizing windows at the screen edge. WTF. (Use dconf to turn off edge-tiling.)
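
For anyone hunting for that switch: on stock GNOME the setting lives under org.gnome.mutter, so something along these lines should work (a sketch, not gospel -- some older GNOME/Ubuntu releases exposed a duplicate of the key under org.gnome.shell.overrides, so the exact schema can vary):

  gsettings set org.gnome.mutter edge-tiling false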

(also lots of phone home crap you have to hunt to turn off)

By comparison arch linux + gnome is relatively unencumbered


I can't find the icons in GIMP on a recent Ubuntu because they all look the same these days: every icon is just 'light grey geometric shape on dark grey' with no visual distinctiveness at all.


Space bar (or possibly any key) works.

The up-swipe to log in induces ux rage for me. I haven't yet tried to hunt down a way to shut it off because I just hit the space bar and forget it ever happened.


Press escape at the "lock screen" and you'll get the login UI. It works the same way on Windows too.


I’m on 19.04 and haven’t experienced the swiping up. Could it have to do with me running i3 instead of Gnome?


UI design lately sounds more and more like a Monty Python sketch.


18.04, <enter> works for me to get the password box up.


At the Ubuntu log in screen just start typing your password and it will automatically scroll up.


Yes, let's train users to type their passwords in with no visual indicator of where it's being input, and only faith that it will go well. Great idea. /s


> It’s totally inappropriate to desktops.

I don’t agree. It’s important for the user to know a login UI is the real thing. For example, Windows NT used to have you hit Ctrl+Alt+Del to make the credential dialog appear so that any fake lookalike was impossible.


Ctrl+Alt+Del cannot be caught by any program, and is therefore reasonable to identify the login UI. Swiping up can be detected by any program, does not improve security as a result, and is ridiculous to have on a desktop UI.


Unlike the Ubuntu mystery experience, Windows actually tells you to press Ctrl+Alt+Delete:

https://troubleshooter.xyz/wp-content/uploads/2018/08/Enable...


That's a bit different. I can fake swipe-up on a GUI but I can't fake Ctrl-Alt-Del.


Actually, the idea behind Ctrl-Alt-Del is the other way around. An application can cause the same effect as the user pressing Ctrl-Alt-Del (although doing that is somewhat complex due to interactions with UAC), but there is no way an application can prevent that effect (essentially switching virtual desktops) from happening when the user presses Ctrl-Alt-Del.


The implementation may be bad but it seems like the same idea to me, “user must interact with UI before entering credentials”.


Except you don't have to. Just start typing your password.


Yes, that’s why it’s a bad implementation of a good idea.


Even if you had to, it's still a bad idea. Ctrl+Alt+Delete works, because no normal Win32 app can intercept this - so if you do it, and you see a login box, you know that this is the real thing.

But any app can go fullscreen and draw a fake login screen that you can swipe up to show a fake login form.


It's not only that every app has a different style these days, but some of them change their style or add new features via auto-update every few weeks. Even office 365 (desktop version) does this.

It's not just a usability nightmare, it's an accessibility one too (although the two go hand in hand most of the time). Imagine teaching some elderly neighbour how to write a Word document, and after weeks of practice they get it into their muscle memory that the thing they want a lot is the 5th button from the left... then Microsoft adds another one in the next update so it's now the 6th.

This would be one place where free software could really shine - you could convert a lot of people with "every application works the same, and we promise we won't change the UI more than once every two years unless we really have to."


If you think resume-driven development is bad for developers (and it is), consider the career incentives for UI and product people. If there's a "maintain" incentive, I'm not sure what it is. "Didn't change anything about functional and satisfactory interfaces" may be a real value-add in some cases, but it's not a sizzling narrative for selling yourself on the job market.


With apologies to Warren G. Bennis:

The UI/UX department of the future will have only two employees, a hipster and a dog. The hipster will be there to feed the dog. The dog will be there to bite the hipster if they change the UI/UX.


I think this reads better if the dog's job is "to bite the hipster if they change the UI/UX", but yes.


Suggestion accepted! Changed!

(Bite me.)


Sorry but now I must know what the original version was!


Closer to Bennis's original: "The dog will be there to keep the hipster from touching the UI/UX".


I think a lot of companies have too many designers with not much to do. Google has a team of people responsible for changing up the art or making whimsical games on the home page every day.

I also read 10 years ago that Amazon hired a big name to take over design for the shopping portal and Bezos wouldn't let this person do much of anything.


Too true. As an Amazon user for 22 years now, I was amazed by the decade-plus run some of their pages had with the UX appearing nearly unchanged (e.g. the login flow).


Very true. Most people get measured by how much churn they create. The more the better. Even if it’s 100% correct for the business, you are digging your own grave if you leave things the way they are.


I met these two mobile devs at a conference.

They survived several rounds of cost-cutting at a large company by constantly convincing management to let them rewrite their mobile app in some new framework because of X.

They started out with the native implementations, then they did Xamarin, tried PhoneGap, went back to native, then to React Native, and are now on to Flutter. The pair of them keep their jobs by constantly rewriting an app.


Now and then frameworks make backwards-incompatible changes, so you might as well rewrite the app. If you continue with the old version of the framework you are stuck with either security holes, bugs, or Google and Apple changing something that deprioritizes your app into oblivion.

And if the UI/UX does not significantly change every 6 months, your users will give you one-star reviews and "old looking and ugly" comments. I don't envy app developers.


Yea if a framework makes breaking changes to the point where you can't upgrade your app, that seems like it could be a recurring problem so after the first rewrite you might as well learn native code and write it there.


This is a lesson that consistently gets lost in the salivation over the latest SPA framework sexiness.


Disagreed: the vast majority of business problems with tech are solved with some form of CRUD, which frameworks are perfect for. If your framework all of a sudden doesn't support your business needs, either you didn't plan ahead well or your business simply outgrew it, which can happen once a business becomes complex enough.


Virtually nobody in software is incentivized to look at long term software costs. Startups need to move fast, project managers move on, devs need to keep their skills up to date to keep flipping jobs, and managers would need to become technical.


Most, not all; if you have the right skills, you can find a comfortable job maintaining legacy code and (actually) improving it. Then again, most developers seem more interested in chasing new and shiny rather than polishing a stable system.


" comfortable job maintaining legacy code "

That's a very dangerous career path though. If that legacy system gets replaced you are usually out of a job and job search is hard with outdated skills.


To give you an idea of how reluctant they are to replace things, some equipment in the manufacturing industry is over a century old and still in continuous use. They will certainly be extremely unwilling to get on the "upgrade treadmill" that seems to be getting faster these days.

Also, problem solving and creative thinking are never outdated skills. ;-)


Guide me, brother: how does one get into this?


Apply to big non-tech companies, which probably run on older tech.


This comment needs more upvotes. It's endemic to the software industry and far beyond. From employees who need to meet some arbitrary (and possibly unwritten) metric, to executives who are compelled to leave their mark on anything they touch, the number of people-years we waste on unnecessary churn has got to be staggering.


I think we're in a strange bubble. Rapid iteration was seen as a potential source of improvements (never going wrong, since you can always adjust next week, versus a potential big fail every N years) and as something that would yield a better understanding of users by throwing every possible solution at them.

It will have to pop and rebalance itself, because it leads to fatigue and a false sense of progress.


> Rapid iteration was seen as a potential source of improvements (never going wrong, since you can always adjust next week, versus a potential big fail every N years) and as something that would yield a better understanding of users by throwing every possible solution at them.

> It will have to pop and rebalance itself, because it leads to fatigue and a false sense of progress.

Totally agree, it's ended up turning into a stream of pointless side-grades and regressions, forever.

Progress comes from thoughtfulness, vision, and luck. You can't replace any of that with A/B testing and little experiments.


It's doubly odd because, like many others (I suppose), I firmly believed that a faster pace and smaller changes would lead to global improvements (same goes for the AJAX web...).

I think we just blew past some social limit. People prefer stability; stability allows for more complex but riskier constructions, and society enjoys the ones that work even better. I like the notion of seasonality these days.


> It's doubly odd because, like many others (I suppose), I firmly believed that a faster pace and smaller changes would lead to global improvements (same goes for the AJAX web...).

A late response, but a response none the less.

Faster response time and changes can result in global improvements, but they don't ensure it unless they consider global implications. See also: normalization of deviance [0]. It's very easy to make small changes based only on local considerations. These can, individually or in aggregate, produce a globally worse system.

A non-programming example, but I think illustrative:

A friend works in aircraft maintenance as an engineer. They use an adhesive to apply patches to aircraft versus bolts and other joining mechanisms (it's less harmful, longterm, to the airframe and less disruptive to the operation of the aircraft being lightweight and not intruding into the airflow). When the aircraft come in for maintenance the worker is supposed to use a disposable plastic tool and some chemicals to remove the adhesive, it is a slow tedious process. So, naturally, the worker makes a local change to improve their flow: a metal tool that quickly scrapes off the old adhesive. Job done, they move on.

The decision was made only with local consideration, but the consequence was an extra 2-4 weeks of maintenance on every aircraft this happened to. Why? The metal-on-metal scraping resulted in damage to the aircraft that required repair.

Unless you're making changes with the whole system in mind, the consequences can go either way with regard to the global optimum, while moving towards a local optimum.

[0] https://en.wikipedia.org/wiki/Normalization_of_deviance


Thanks, once again, lots to learn from other industries "issues" :)


> faster pace and smaller changes would lead to global improvements

They need to account for how that changes behavior.

It is like Agile development. Sure, adaptability is good. But it has led non-technicals to treat change as free so they now feel ok writing specs on the fly in meetings, creating half baked tickets, or changing a button's color every 5 minutes. By making change seemingly cheap, demand for it soared.

Smaller changes at a faster pace mean that no thought is being put into them and they are probably being viewed by fewer people before being deployed. In large companies that have a lot of people, that can easily mean that one developer has no idea what others are doing.

It also requires that meaningful feedback be received about them. A friend of mine has a startup and they are endlessly making small tweaks to where things are located. They aren't checking in any meaningful way how that is changing things but rather just guessing based on weekly user numbers.


Bull. Smaller changes at a rapid pace allow for feedback. Meaningful feedback, since you get to see and play with the feature. It’s easier to know what you want once it’s concrete.

If you’re deploying continuously to a QA server, there is no way half-assed features get out in prod.

Also, no developer needs to know what everyone is doing, and it's impossible to ask for that, since people need space in their heads for real-life problems and responsibilities, not keeping track of the minutiae of their peers' day to day.


Are y’all deploying broken shit to prod? Why would the app be unstable?


I think they mean UI stability, not reliability.


Rapid iteration as a potential source of improvements is great for startups and new things in general. Word, which you teach your grandma, is NOT a startup. A startup needs to change fast or it will die, and startups depend on early adopters who want something new. Word needs to think about its huge audience that doesn't want anything to change. Remember the ribbon redesign and the pushback it received?

These two types of products need completely different approaches to product development. But also, don't teach your grandma Notion - Word is a much more sensible option for her.


> This would be one place where free software could really shine - you could convert a lot of people with "every application works the same, and we promise we won't change the UI more than once every two years unless we really have to."

This is one reason I use Xfce. My brother once told me that my computer has looked basically the same since I used Gnome 2, and he's right.

I'm not entirely immune. After switching fully to GTK 3 it took me a while to find a decent style that had scrollbars, or at least not terrible ones. The one I'm using right now (ClassicLooks) has a few issues (e.g., I can't tab through GUIs because it won't highlight the active option), but overall it's acceptable. I'll fix the highlighting issue eventually...
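
In case it's useful to anyone else on Xfce: switching the GTK theme can be done from the command line as well as from the Appearance dialog. A rough sketch, assuming the theme (ClassicLooks here) is installed under ~/.themes or /usr/share/themes:

  # set the GTK theme via Xfce's xsettings channel
  xfconf-query -c xsettings -p /Net/ThemeName -s "ClassicLooks"

  # or preview a theme in a single GTK 3 app without touching global settings
  GTK_THEME=ClassicLooks gedit

(gedit is just a stand-in for whatever GTK 3 app you want to test with.)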



Every time I see a screenshot of someone theming Linux to look like Windows, the most obvious thing that stands out as not being quite right is the font rendering. Even when using Microsoft's own TTFs, the font renderers that Linux use seem to put the pixels in a slightly different place than the MS one.

Other differences I could see from the screenshots: the comboboxes have the dropdown arrow next to instead of inside the edit control, the table headers should have a thick left and top border too, the up/down control's buttons should also have a 3D border, the scrollbar buttons shouldn't become disabled even at the end of travel, and non-top-mounted tabs have their shadow in the wrong direction and side tabs should have text rotated accordingly. I've used this UI for over two decades so it's pretty easy to see when something doesn't look quite right.


If the font is not a pixel font, it will render differently depending on your settings and the choices of the rendering engine. Subpixel rendering for the different LCD-screens, aliasing for the pixels to look smooth on low res screens, hinting for those small details to be visible in vector fonts.

Vector fonts are made of lines that can sometimes be smaller than a pixel wide. Font engines solve this in different ways. It depends both on the quality of the font and on the settings of the engine. (I'm no font expert though, so names might be off. But I have been playing on and off with pixel fonts and vector graphics for 30 years.)


This is for XFCE, which has one of the simplest tools for setting the font rendering right, with instant visual feedback. At least it did when I used it. Even in multi-monitor setups with different resolutions/orientations.


Yeah, I began with DOS and later W98 back in the day, but at least Chicago95 is far better than most of the GTK2/3 or QT5 themes. You can always fix the CSS yourself or submit an issue on GitHub.


This makes me soooo happy. Now if I can just find a GNOME 1.0 total conversion. For whatever reason I always liked my 48px tall Panel and the hodgepodge of icon styles. I've managed to get XFCE's panel sort of looking like GNOME 1 but it doesn't quite behave the same.


> the thing they want a lot is the 5th button from the left

I agree it's normal and expected that people get used to buttons being in certain places, and moving them around too often is bad usability.

That said, the fact some of my elderly relatives use and understand technology this way, by memorizing how many centimeters from the edge of the screen they should look for a button, makes life needlessly tough for them.

They'd be better off understanding what a button is conceptually, what forms it comes in (e.g. standard button vs. underlined link with no outline, etc), and how buttons might be grouped.

I know it's a lot to ask of elderly users, but it pays dividends.

After many years of Q&A with me, my mom understands her iPad conceptually and as a result gets much more from it than my aunt does, who only understands procedurally that if she presses her finger on 'the button in the corner' then 'x' should happen.

If there is no button in the corner, my aunt is lost.


I've had the exact opposite experience with my mom - it took some time to explain things like icons, menus, toolbars, and context menus to her (in Windows) - but once she did, she felt empowered, because it all worked the same in every app. In many cases, this meant that she'd find a more tedious way to perform some task - e.g. using the main menu where a toolbar had the same action exposed more directly - but it was far more important for her to be confident that she could find it.

Then I got her an iPad - the very first one, back when they were released. She absolutely loves the form factor, but hates the UI, and finds it inconsistent, having to learn every app on its own. You could say that this is partly because of having to re-learn - except that it's been several years of nearly exclusively using iPad and iPhone for all her needs, and she still finds it terribly inconsistent.


Alas, too many people are either incapable of this kind of understanding, or it would take even longer than just giving them directions every time something changes.


The way people react to computers is similar to how many react to math.

A lot of capable people are mathphobic, and it's strange to watch (as someone who took naturally to math). The expressions on the page (or the attempt to produce those expressions from a model in their mind) causes them to seize up. It's like watching an anxiety attack happen. Something about math (the subject, their experience when taking the courses, whatever) has left them with a severe discomfort or level of fear when dealing with it.

Computers elicit the same response from many people, regardless of background, education, or level of experience with computers. They develop an understanding by rote, or with a rudimentary (but likely totally wrong) mental model. As soon as something is slightly different, the fear or discomfort rises and their mind blanks. They cannot figure out the next step. At an extreme, a color changes and they think it must mean something, but really it's just that that control is now "transparent" (pulling in the background color but blurred) and they happen to have a bright red object behind it, when normally it's a more neutral gray or blue. For some it's that things are no longer in the right place or that they display differently (think of the changes in the Windows start menu over the decades). They'll have different thresholds, but once they hit theirs they cannot proceed without great difficulty.


We definitely should see if we can develop math teaching which doesn't elicit that response, because it has to be learned. Humans aren't evolved to be good at arithmetic or symbolic thinking, but we're not evolved to be morbidly afraid of it, either.

Of course, any attempt to improve math education gets screamed at ("New Math! Common Core! Blah!") and the whole thing just becomes political.


I think we just have to accept that we are all different. Some have it easy, others don't. This goes double for subjects that we are not trained in or not interested in. It might take a lot more time to learn, or a teacher who understands the problem and can guide around it. Only the individual can say if the added time and effort is worth it. This insight can take many years though; it's hard to get a teenager to believe they will ever need math (or whatever other subject) in their life.


Part of the problem is cultural - it's socially acceptable to "not be a mathsy person" in many circles, and almost a badge of honour in some.

Another problem is that the discovery learning approach can lead to some horrible failure modes for math. I've seen cases where an hour of proper instruction can go further than a whole term of trying to see if they can find it out for themselves. Case in point: the rule of three. If 10 apples cost $20, then 5 apples cost ___? You can spend ages thinking about what it "means", or you can learn that you write down a 2x2 table, multiply across the diagonal and divide by the number in the corner.
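
Spelling out the worked example above (the mechanics only; whether this is the right way to teach it is the debate):

  10 apples | $20
   5 apples |   x        x = 5 * 20 / 10 = $10

Multiply across the diagonal (5 * 20), then divide by the number in the remaining corner (10).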


Common Core seems just fine overall to me. I wouldn't be surprised if the generation that grows up with it has much better math comprehension and number sense.


This is a key insight. It's why I hate when we refer to "non-technical people" -- I don't believe possessing the technical skills is a meaningful differentiator in users. The more important categories are those you mention; the non-fearers will eventually figure out a solution, becoming technical along the way if required. The fearful stop trying to do that.


it seems like most people can get pretty good at discrete or fuzzy (for lack of a better word) types of reasoning, but rarely both. I'm not sure if it's just avoidant behavior, relying on whichever mode developed faster to the exclusion of the other, or something more innate.

given enough time to read and debug, I suspect I could eventually figure out how any piece of code worked. but if you give me a book and ask me to pick a good topic for an essay, I might not finish before the end of time.


Sometimes they'll change their style every time you go to them! Aggressive A/B testing is truly awful.


“The electric light did not come from the continuous improvement of candles.” - Oren Harari


“the 5th button from the left”

Exactly. That’s how I work in applications I use a lot. I don’t look much at the symbol and certainly not at the text. Same for buttons. It’s highly confusing when they shift icons around. Office 365 has become pretty bad that way. Every two months something gets shifted around. No new functionality. Just change for change’s sake.


> and we promise we won't change the UI more than once every two years unless we really have to

You get that for free by running a stable/LTS distribution.


It is quite sad how often I end up having to help older relatives with their computers on account of unintuitive UI. One memorable, recent example was my mother, who could not figure out how to get her GMail sidebar to un-collapse itself.

Here's a screenshot of a collapsed sidebar:

https://storage.googleapis.com/support-forums-api/attachment...

and a screenshot of an un-collapsed sidebar:

https://techcrunch.com/wp-content/uploads/2019/02/RC-Convers...

It took me some time to realise that it is the hamburger-menu-like icon in the upper-left corner. It has a tooltip that says "Main Menu", but it is not a menu. It controls collapsing of the sidebar. Confusingly, it is positioned in the top panel, separated by a line that would make one think it is not related to the sidebar, and closer in affinity to the logo, search box, etc.


I just started a new job and they use Gmail for email; it has probably been well over a year since I last logged into Gmail on the web.

That side hamburger button throws me for a loop every. Single. Time. For some reason I keep thinking it’ll bring up the other G Suite apps, but instead the whole page shifts awkwardly and then the sidebar disappears. “That was not what I was expecting” is my reaction every time.


You may want to try this one: https://simpl.fyi/


The white-background screenshots give me borderline anxiety. It looks like it's perpetually half-loaded.


Amazing, thanks for this


Oh that looks great, thanks!


Google’s UI has been cycling between bad and worse. That goes across the board - websites, Android, apps, everything.

I have never come across a Google product and thought “this is a well designed and clear to use program”. Except perhaps Google Search and Gmail of old - between circa 2005-2008.


You don't have to be elderly to be baffled by the "amazing disappearing user interface". I've accidentally turned off UI bits in everything from browsers to dev environments to PDF viewers to games. Invariably the affordance of "please unhide this UI element" is more minimal/obscure than it should be because the goal of hiding the UI element runs counter to "leave a nice big affordance behind so I know how to get the damn thing back".


I was confused about what you were talking about until I saw your screenshot. Fascinating. Coming from Android, it is completely normal that the hamburger on the left is the drawer and the three dots on the right are the menu. Pretty much every app does it, so you can't help but learn it, but if you're coming from the desktop and it suddenly appears, it makes a lot less sense.


But doesn't the burger menu usually open/close a drawer? On gmail all it's doing is expanding/collapsing the menu that's always on the screen no matter what. Feels like Google is breaking their own convention here.


It can do either. Small screens close it as there usually isn't enough space for it. Large screens do have the space so it minimizes rather than closes, and it's especially imperative that it does so, because if it closes on a larger screen, it's unintuitive to swipe from the side of the screen like it is on mobile.


Thunderbird (a desktop email client) has a similar issue: some keystroke (I don't know what, but I've accidentally hit it a few times) collapses the folder view on the left. The only way I've found to restore it is to mouse-drag the dividing line back to the right. But it's not clear there is a dividing line; you have to guess that it's there, and be excruciatingly careful about positioning the mouse cursor.


I've heard that GMail has a "basic HTML mode", but I'm not sure whether that would help in this case.

I don't use GMail so I could be considered a "fresh user", and I couldn't guess the right button from the first screenshot either --- I thought it would be the big colourful "+", because that's usually the symbol for "expand".


It does, but that mode has its share of other issues. I accidentally lose the draft I’m working on, every time I use it.


What's bothered me most is how if you collapse Gmail's sidebar, the "no Hangouts contacts" section in the bottom half doesn't resize or adapt.

It probably doesn't hurt usability directly, but it seems very unprofessional


YouTube does that too, but on smaller screens. It still confused me to death the first couple of times.


Part of this is because the sensors are winning.

Sensors (about 70% of the population) use an application by mapping: a click here does this. Based on literature and my experience with my husband, maps are made separately for each application no matter what the similarities are.

Most computer programmers are intuitives: we want things to work the same way in one application as they work in another. That makes it easier for us to learn new things.

But we're only 30% of the population. Blame whatever trend you want: phones, touch screens, microcomputers, Eternal September . . . we've been increasingly outnumbered and out-spent by the 70% as time goes on.


Where do these terms and figures come from? I’ve never heard these terms before, but it's an interesting framework with which to think about different people's experiences with UIs. It strikes me that most people probably exist in both categories to some extent: even hardcore sensors will probably know and expect some consistency from apps, e.g. some keyboard shortcuts (Ctrl+C, Ctrl+V etc.), whilst some intuitives will have strong mental maps of certain pieces of software (games, where creating such a map is often a path to mastery, or in my case Adobe Illustrator, which is a UI train wreck, but after 20 years with it I find it very hard to be as productive with anything else).



Huh I never really thought about that. I try my best to make certain keybindings do similar things when I get the ability. (The fact that Ctrl+Backspace isn't back-delete-word everywhere is maddening.) But I never assume any two applications will behave the same way.

I get so confused whenever my coworkers try to "feel" their way around some cli tool by trying different commands and options when I just jump straight to the manual.


Not all programmers are intuitives; there are a great many who are sensors who lack a mental model of how their device operates or how the software they're interfacing with at a DSL or API level operates.

They have a mental map of commands and structures for everything they make, and make "new" things by following examples of new maps. Ie, copying code from SO.


I don't think programming by rote is actually possible past a relatively low threshold. Code is relatively delicate, so just "add strings to files; invoke" is not going to work for most strings.


It's not by rote, per se, so much as it's adaptive pattern matching.


I wonder if "sensors" tend to work more in legacy code. no matter what I need to do at work, something similar probably exists already in the code somewhere. first step is almost always to find that similar code, copy-paste, and tweak it until it works. if you try to get too creative, you get bitten by some weird edge case that would have been handled if you just followed the pattern.


I suspect that user interfaces that feel like sugary garbage to us are totally fine with them. To them it's all the same.


Newly realized sensor here. I both wonder and desperately don't want to know how you feel. I don't think I could even function if I couldn't have a separate mental manual/muscle memory for each app.


Some people even have muscle memory for different keyboard layouts at the same time!

I know I used to switch pretty seamlessly between US International and ABNT-2[0] but others use Dvorak, Colemak and what have you which is obviously orders of magnitude harder...

[0]https://en.wikipedia.org/wiki/Portuguese_keyboard_layout


I had a totally bizarre experience when I tried learning dvorak and gave up halfway. I found that it temporarily broke my ability to type at all and I became computer illiterate for a day. It was quite surreal. Some months later I learnt Colemak instead and found I was automatically keyboard-bilingual. My brain must have done some rewiring from my first experience. Now I can touch type naturally in Colemak, and still roughly type in qwerty (I learnt colemak because I originally sucked at typing and wanted to start fresh.)


I don't think sensors are "winning".

I probably could fall into that category; I use Gold Wave v4.51 for audio editing (instead of, say, Audacity) because I've been using it since the year 2000, and I know where things are.

The thing with modern software is that it tends to update and move things around. Things don't only have different semantics, but also buttons in different places.

The downward spiral of UI/UX is not due to users, sensors or not. It stems entirely from disregard for the users.

And don't even get me started on latency, which is a whole other can of worms.


Please explain dd then.


Datenduplikator, data duplicate, duplicate data, dump data, data dump?


Due diligence, Danny DeVito, double dare, direct debit?


The Unix command? It was parodying the IBM JCL statement.


I'm young (25) but my first (family) computer ran Windows 98 (that's what we could afford). And I recall it well: the UI had one "meta-language": a menu bar (File, View, ..., Help), a toolbar, and if you hovered your mouse pointer over a widget, a tooltip would show up with an explanation of what the widget does and the keyboard shortcut for that action. Once you learnt how to use one application, say Paint, you'd probably pick up any other quickly. Also, there was the always-helpful Help window (F1) with rich explanations of everything you'd want to know.

I feel that modern UIs are awkward to use. Many applications have their own way of doing the same thing other applications do. Oftentimes their new way of doing something is badly documented (tooltips are too ugly, I guess), so now you have to search the web for help; the help you find is full of useless text, ads and browser-locking JavaScript; soon enough you find yourself longing for Win98-era UI.


YES. We spent years training our muscle-memory to understand this interface, and it's gone. It's like someone flipped gravity because they thought it was cool, and if your stomach isn't okay with that, well that's your stomach's problem.

Fuck those people.

The top 2 evils on my list right now:

Buttons that don't appear until you wave the mouse near them. I spent way too long, and had to Google, how to zoom a PDF in Chrome. Turns out there are buttons for that, they're just hidden unless you wander over into a corner where there are no other controls so you'd never go there.

Borderless windows. It's confusing enough that we seem to eschew background patterns now, but borderless windows (or like, 1px-wide borders on a high-dpi screen) make it so much harder to figure out what's going on at a glance. I've found myself trying to figure out where the title bar is, then grabbing it and wiggling the window, just to make sure I understand which UI elements are part of the thing I'm currently interacting with.

What's worse is, I can't imagine anything that was actually gained from either of these changes, other than "Sam in Widgets thinks it looks cool". They certainly don't help accessibility. They're the worst for usability.

And they're a giant up-yours to anyone loyal to a single platform for a few years because they trusted that platform to reward their learning and muscle memory.


>Borderless windows

An attempt at being sophisticated, that to me translates as "fuck you, you can't touch me, I'm too ethereal for this world"


Being almost 50, I agree completely. I remember enjoying rapidly learning software on various topics, which left me knowing a broad set of tools at an at least mediocre, but mostly advanced, level, from photo editing to system administration. Nowadays it is pain and suffering in most cases, and I was starting to believe I had become ancient. Thanks, Pmop, for proving otherwise. And as the article says, the most frustrating thing is becoming a beginner again in a tool you once knew expertly, just because the UI reorganizes the very same things and you cannot use it to the same level anymore; it is all still there somewhere, you just have to relearn it.

All this for some questionable variation in visuals. Sacrificing usability.

E.g. I hate what MS did to Skype. It was a favorite that helped me in countless life situations, from business to love, but now I avoid it at all costs; it is simply almost useless. The same is true for many others, and I'm also reluctant to learn new software, as it is not as straightforward as it used to be. Another hateful thing is that I rarely find something intuitively; instead I have to Google what the current awkward logic is for its placement and activation method. The "best" is when unwanted new features are forced on by default in new versions, coupled with a forensic level of investigation to find the method of turning them off (assuming they can be turned off at all; a few are just forced on you, no questions asked. I am looking at you, damn notification center! I had to chase you down after each and every OSX update, and it gets harder and harder to achieve the goal of killing you).


It's all still there in the Win32 apps microcosm, available for anyone to use. I'm currently working on a pure Win32 application and Microsoft's tips are delightfully helpful for establishing a consistent design language across the whole platform. They call it "powerful and simple" in https://docs.microsoft.com/en-us/windows/win32/uxguide/how-t... The Win32 guidelines are also very illuminating: https://docs.microsoft.com/en-us/windows/win32/uxguide/guide...

My favourite aspect of Win32 is built-in behaviour. Things like keyboard-based navigation and screen readers work without needing any dedicated effort on my part. Visual elements are drawn by the operating system and remain consistent with system theme colors, font size, contrast and other user preferences. I always think about that when I see "Now supports dark mode" in a changelog... why don't they just use standard controls and leave all the finer details to the OS to deal with. Windows has supported system-wide theming for decades.


> Windows has supported system-wide theming for decades.

...and more recently has been castrating that ability greatly, seemingly in favour of the horrible bland-and-flat trend, which is unfortunate because the ability to customise is highly desirable for usability/accessibility.

(Long-time Win32 programmer here, I agree that the built-in behaviour of the standard controls is highly consistent and also very usable.)


The problem is that even Microsoft's own desktop apps often don't follow these guidelines. And then there's UWP...


Because software needs to stand out more, as there are way more players in the game. It's like how every few days there's a post on Reddit about how someone has started their new habit tracker / to-do app SaaS. Yay! Apparently it solves a problem that all the others don't... We also need a new JS framework every few months and to redefine our library APIs (React Router is up to what version now?) all the time.


Even computer games like Civilization II and SimCity 2000 used traditional OS menus.


Amiga version of Colonization went further and used multiple native windows. Looked way better than the PC version.


Totally agree with this. There is a constant 'discovery' thing in today's UI/UX.


For me the worst isn't even that the UI composition/presentation is bad. It's that the performance of these interfaces is getting monotonically worse over time. No one wants to take the time to learn the hard native UI development processes, so we just wind up throwing some shitty angular SPA application into electronjs and calling it a day.

Those of us with non-sloth-like reflexes now have to experience torture as every keypress or mouse click takes an extra 50-100ms to register. Microsoft even figured out a way to make things that didn't use to run like shit run like total shit. Of all things, mspaint is now somehow a little bit slower and more annoying to work with. I don't know how they managed that one.


The problems with reliance on a giant framework are many. Yes, a noticeable decay of performance is one of those.

The biggest problem I see though is lost imagination. Usability concerns seem all but completely and generally forgotten unless a given framework deliberately provides a dedicated convention for a specific usability concern.

Worse still is that many developers reliant upon a giant framework absolutely cannot imagine developing anything that is not the exact same SPA as their last SPA, regardless of any usability concerns or business requirements. It's the when-you're-a-hammer-everything-is-a-nail mentality meeting the most myopically crippled tunnel vision.

I used to have great disdain for large frameworks because they result in degraded application performance with limited capabilities and substantial bloat. Now I primarily loathe them because they have become the excuse for weak, underdeveloped talent to pass off a lack of career maturity as progression. The weakness and immaturity are self-evident because the response to any negative mention of their favorite framework is contrived hostility, often expressed by calling the critic arrogant, without any consideration of the technical concerns raised.


Bad UI design has nothing to do with your coding framework of choice, it has everything to do with the design. Most programmers are not good designers. I cringed a bit at the background color of the OP's blog.

There's a reason most people are unimaginative: just like coding, design is hard. Designing for multiple platforms (the whole point of using UI frameworks is easier multi-platform development) and multiple screen sizes is hard and takes a long time to both design and implement.


"Most programmers are not good designers." Unfortunately, most designers are not good designers, at least these days. I say that because from what I've heard, the UI abominations that are being rightly bemoaned on this thread are not things that the programmers decided to do--at least not at companies of any size--rather they're things the company's designers told the programmers to do.


Creativity and originality have tremendous amounts to do with solving UI problems. Here is an example of developers literally lost without a framework to tell them what to build:

https://news.ycombinator.com/item?id=22470179

I wish I were making that up.

There are many aspects of design that are at first challenging. I just watched this video about inventing a new basketball backboard and it took a lot of work: https://news.ycombinator.com/item?id=22898653 There will always be some minimal thought work in creating and testing any creative or original concept, but with practice the effort involved reduces considerably. That some minimal effort is required (as with any task) is not an excuse to avoid effort entirely. At some point it is just mental laziness.


Again, your choice of JS front end framework has nothing to do with the design of your UI. You can code the same design with React, Angular, vanilla JS, Qt, etc. From what I've seen, most programmers are awful designers who overestimate their ability to design "usable" software. In particular, we forget that the vast majority of the time, we are not creating software for programmers.


> No one wants to take the time to learn the hard native UI development processes, so we just wind up throwing some shitty angular SPA application into electronjs and calling it a day.

That's partly because the native development processes are so unnecessarily awkward, particularly on Apple platforms. It's also partly because most native development processes only give you software that runs on one platform at the end.

> Those of us with non-sloth-like reflexes now have to experience torture as every keypress or mouse click takes an extra 50-100ms to register.

Well, that's just bad programming. There's nothing about web technologies or Electron-based applications that requires such poor performance. Someone who has managed to design a system where the most simple interactions take 100ms to get a response would probably manage to create an uncomfortably laggy experience if they built native applications too.


> native development processes are so unnecessarily awkward, particularly on Apple platforms.

Can you elaborate? For me it's launch Xcode, new project, hit run and off we go to the races.


For example, getting a simple iOS app into the App Store means not only developing (or at least building) on Apple equipment but also jumping through Apple's hoops and paying Apple in multiple ways and still running the risk of being kicked out.

Writing a web app to do the same thing has exactly none of those downsides, and with relatively little effort it supports Android devices and users with larger screens on laptops or desktops running any major OS as well.


> monotonically worse over time

An orthogonal interrogation of course, but I pontificate: does specificity truly monotonically increase by specification of some gradient being monotonic, making the modifier "monotonically" not merely monotonically increasing with grandiloquence?


Okay, I was being annoying, but I would actually like to know what the difference between "monotonically worse" and "worse" is.


"monotonically worse" means it never gets better. "worse" without "monotonically" means it may get better for a bit, but overall it will be worse.

https://en.wikipedia.org/wiki/Monotonic_function
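
A minimal formal restatement, treating quality as a function of time (my own notation, just to pin down the distinction):

    q(t) is monotonically decreasing ("never gets better"): t1 <= t2 implies q(t1) >= q(t2)
    q(t) is merely "worse" over a period [t0, t1]:           q(t1) < q(t0), with ups and downs allowed in between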


Ah, thanks a lot!


No one wants to learn how to develop a proper UI either. For all the heatmap and eye-tracking and logging tools available, no one seems to be doing anything useful with the data they get.

Do you know it takes 3 or more steps for Google Maps to load and start navigation from within its own app? Try it. Count how many times you have to interact with the screen just to use the actual main feature of the app.

Here's another one: In Outlook, I want you to count the number of interactions required to attach a file to an email. You're free to use the drag-and-drop features, but you have to count every single individual action required to complete the task, including clicking Start, opening Explorer, navigating to a folder, etc.

These are critical tasks and features. And they require more work than using tertiary features. So, not only are these interfaces getting less and less responsive, but accessing critical functions for routine/repetitive tasks is more convoluted than using features no one uses, like the go-offline feature in Outlook.


I'm not disagreeing, but I have noticed that in the current version of Outlook, when I tell it I want to attach a file, it suggests the last few files I've edited in some program (I'm guessing it has to be an Office program), which are often exactly what I want--e.g. I'm sending some report I just finished in Word.

That said, the File Open process in Word has gotten considerably more cumbersome, with more decisions that have to be made before you can see a list of files to open.


I find the entire Windows/Mac/Linux desktop experience has regressed terribly, with inconsistency the primary offender.

I suspect this is because usability testing is only ever (a) app-specific and (b) short term. Nobody is studying the collective desktop experience across multiple applications, so every vendor thinks they have nailed it, but never notices that their version of “nailed it” is different to everybody else’s.

The commercial nature of most desktop software would seem to render this problem insoluble as there is no incentive for vendors to cooperate and every incentive for them to churn their UIs to push new versions out.


Lack of transference is a feature for businesses. The more locked in you are to a workflow or infrastructure, the less likely you are to switch.

It's a new world of lock-in. It's not in a business's interest to encourage you to jump to competitors; it is, however, to their advantage if the transition is difficult in any way. Consumers buy into it, and a lot of new developers wanting to make their mark or do something new/different enable this. It's not sexy to implement UI concepts used for the past 20 years; I'm going to reinvent the wheel.

It's perfectly fine to improve the wheel or reinvent it if you can provide increased productivity. Instead I have so many UIs now, going through what should and have been simple workflows is like stepping through a box of puzzles.


It's even worse than that. Not only is testing app-specific, it's often feature-specific, or even worse, problem-specific. Which is fine by itself - isolating a test case to get a more meaningful measure is a reasonable approach. But the results then have to be reintegrated to get the larger picture - and they are not. Instead, the takeaway from every UX A/B test is treated separately, and the "fix" for the problem it exposes is also implemented separately. The result often looks like it was sewn together from patches of different colors and shapes - even within a single app.


I mean, inconsistency seems to come from the fact that every app is the pet app of a company of anywhere between 20 and 1000 people. It's easy to take the defaults and accept the platform conventions when you want to get on with other things.


We have an army of UX professionals these days. But I think engineers do better UX than most so-called UX professionals, and I say this as a UX professional myself.

We've invented a colossal industry with good intent in the beginning, but over the past 10 years I've seen it degenerate into a desperate scramble for relevancy by constantly introducing new things. (Things aren't much better in SE.)

Can we get back to just doing things? It is extremely frustrating working in software design space, from start to finish. Everything sucks. Process. Tools. Speed. Complexity. Politics. Pretending.

Is it just me?


No, it's the same with urbanism for example.

If I look at what was built in my country before, say, WW2, it all makes sense, even if we just consider private housing development: numerous streets around small blocks, which guarantee many route options and no concentration of traffic on single exits; streets on the outside of the blocks, so that they can be used by outsiders too, and so that future extension is possible without compromising connectivity. This could be done simply, for working-class neighbourhoods or company towns, and yet it was well designed, and has aged well.

But in the last 40-50 years we have had gazillions of educated, graduate, certified, professional urbanists and architects, and they have produced all those closed or almost-closed subdivisions, which are bad for almost everything related to getting around by any means (on foot, by bike, but also by car, because they unbalance the traffic between places), and which forbid any kind of evolution (extension, inclusion, change of use).

The same people have also validated the opposite: totally unorganised subdivisions, which produce the exact same result: for each group of 2 or 3 lots there is a lane to the main road, perpendicular to it, and of course ending in a dead end on the far side. With the additional penalty of blocking the view for anyone travelling along the road, with a never-ending strip of built-over land. Example: https://www.google.fr/maps/@43.4593112,1.3756329,1321m/data=...

And they kept on doing that despite the evidence that it has almost no pros and plenty of terrible cons (in other words, it is utterly stupid shit). And they keep on doing it every day, still.

------------

In my opinion, those situations have a degree of similarity: the advent of a horde of graduate professionals, who do worse than what has already been done, despite having been exposed to more experiences and more results of those decades of experiences, and having studied them.


Everything is turning into pop culture.

Doing things because they're fashionable right now, because they're newer, because it's easier to convince someone that the old way looks old than it is to convince them that it's worse.


Dumb question... when there's a lot of space to spread out, what would be a better way to lay out subdivisions off of a main road?


There shouldn't be a lot of "space to spread out"; that's car culture, not human scale development.


Also avoid “stroads.”


They are designing more for eye candy, less for usability. For example, it is common to have gray text on a white background, which is hard to read. Black themes for programming IDEs often have a dark gray background, reducing contrast. Scrollbars are now thin, hard to see, often disappearing automatically, and sometimes it is hard to distinguish which part is the scrollbar handle and which is the background. Sometimes it is the same with on/off switches: is that switch on or off? Sometimes it's hard to tell, as both states are similarly bright, just a different color.
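
For what it's worth, the "hard to read" complaint can be put in numbers. WCAG 2.x defines the contrast ratio between two colours from their relative luminances, L1 for the lighter and L2 for the darker:

    contrast = (L1 + 0.05) / (L2 + 0.05)

and recommends at least 4.5:1 for normal body text (3:1 for large text). Many of the fashionable light-grey-on-white combinations fall well short of that threshold.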


I don't get these fake persona things I see in portfolios and on Behance and stuff. Are they a legit thing, or just something that is taught in school so students do it and put it in their portfolios, but that isn't used in practice? I get the basic idea behind it, but a lot of the time it comes off as quite pretentious, or filler work.


Not just persona. There's journey mapping, storymapping, design thinking, storybrand exercise, designops, formative/summative/generative research, ethnographic study, service design, heuristics, lean ux, contextual inquiry. I'm not saying these are all useless, but many are just old things re-packaged. And I've noticed that UX people are rarely challenged by other stakeholders because no one can possibly keep up with what any of these mean. It's part FUD and part obfuscation.

If you have a good handle on who your customer base is (current and target), then no, you don't need personas. Just use real data. Where I become dumbfounded is when large companies with mountains of customer data, with complicated segmentation and profiles, continue to rely on the same 5-6 personas.


Worked at a company where they decided to have a consultant design firm do the designs. 20 designers and 3 coders to do the actual things.

The money dried up really fast (they were extremely expensive) and the design just sucked, but boy did they have meetings like nobody's business.


They’re a bit like the UML diagrams of design. They have some utility in certain contexts but they are taught as a ready made solution without that context to students who aren’t able to relate it to real experience and end up overdoing it and misapplying it as they tend to do.

It doesn’t hurt that doing it makes you feel like you’re doing something useful, it gives you a shiny deliverable, ... but in the end there likely are better ways to answer the kind of questions you really want to answer, and many of them involve making the actual thing vs making things that only serve the planning of the making of the thing.


I think they are a symptom of an underlying problem: building a product not for yourself, but for some imagined customer and need. It puts your product in peril because it adds guesswork. But if you have to do it, personas at least challenge you to put yourself in someone else's mindset.


I have seen the persona thing everywhere from startup accelerators to books on effective pitching as a consultant.

It is a useful sales tool, as the people-person management types can easily relate to the persona.


> engineers do better UX than most so called ux professionals

Yes, exactly. Beyond a bare minimum of having a UI (which, to be fair, many engineers forget) good UI design is about not doing things.


I agree with engineers doing a better job. Especially over inexperienced UX professionals who want to try the latest fad. Ultimately users don't care - they want it to work consistently and be intuitive.


" Ultimately users don't care - they want it to work consistently and be intuitive."

Yes.

And if you get these 3 aspects of the experience right, you can basically get away with murder: quick performance, accurate info/data, and being forgiving/recoverable from errors/mistakes.

What it looks like is, for the most part, inconsequential.


The most usable application I’ve seen recently (including the ones I’m working on :( ) has been our 2000ish era work time entry system.

My initial gut reaction when I saw it was ‘oh god, how ugly’, but then I noticed how pleasant it was to use.


A usable, predictable app needn't be ugly and vice-versa.


I had some reflexive reaction of wanting to disagree because there are also a lot of things that got better. But inconsistency? Hell yes. It feels like every company tries to run their own experiment, getting more and more erratic, and apparently all getting great feedback from their users (not sure if all the feedback systems are broken or something else is going on). Of course, Microsoft, which in recent years started following the "roll a die for which UI style we use today" paradigm, is one of the worst offenders.


> But inconsistency? Hell yes. It feels like every company tries to run their own experiment

Get off my lawn!

More seriously, old apps were way worse. Especially on Windows: as soon as APIs for creating non-square windows became available, everyone wanted to use them. Never mind that performance was horrible.

Even widely acclaimed apps had zero consistency with the OS. Remember Winamp? https://repository-images.githubusercontent.com/26149893/956...

Trillian? Microsoft's own MSN? https://static.makeuseof.com/wp-content/uploads/2009/10/Tril...

And frankly, every single printer, scanner, or motherboard utility. Some are bizarre to this day.

We can't even say that Microsoft apps followed the rules. They were one of the first to break with the paradigms, mainly because they could ship their own version of Microsoft's common controls library. This is how detachable button bars came about, or the infamous ribbon.


I absolutely do remember this. Horrible. I think, during that time, Windows was the absolute worst offender.

I always used windows/mac and linux together during that time.

Early versions of OSX on PPC were pretty consistent. I didn't particularly like some of the "candy" design styles, but the UI guidelines seemed like a breath of fresh air. Note that Apple themselves started to destroy the consistency by introducing questionable things like "brushed metal" windows and sheets, and abusing all of them in iTunes first. Consistency went down pretty fast.

Looking back, GTK2 for me represented the pinnacle of consistency under Linux. As a toolkit it enforced resizable UIs (at a time when both OSX and Windows used fixed-width all the time) and decent components, not to mention that it supported system-wide themes to a degree never seen before. You can even set Qt4 to render GTK2 style widgets.

I absolutely have to laugh when I see that apps today "support a dark mode", when in GTK you could (and partly still can) switch THE ENTIRE UI to a dark theme in seconds.
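
For anyone who hasn't seen it, roughly what that looks like (a minimal GTK 3 sketch of my own, assuming GTK 3 and a theme with a dark variant such as Adwaita are installed): stock widgets pick up whatever theme the user configured, and a single GtkSettings property is enough to prefer the dark variant for the whole app. In GTK 2 the equivalent was simply pointing the gtk-theme-name setting at a dark theme.

    /* Stock GTK widgets inherit the system theme; one property flips to dark.
     * Build with: gcc demo.c `pkg-config --cflags --libs gtk+-3.0` */
    #include <gtk/gtk.h>

    int main(int argc, char **argv)
    {
        gtk_init(&argc, &argv);

        /* Prefer the dark variant of the current theme; no per-widget restyling. */
        g_object_set(gtk_settings_get_default(),
                     "gtk-application-prefer-dark-theme", TRUE, NULL);

        GtkWidget *win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
        gtk_window_set_title(GTK_WINDOW(win), "Theme demo");
        g_signal_connect(win, "destroy", G_CALLBACK(gtk_main_quit), NULL);

        GtkWidget *box = gtk_box_new(GTK_ORIENTATION_VERTICAL, 6);
        gtk_container_add(GTK_CONTAINER(win), box);
        gtk_box_pack_start(GTK_BOX(box),
                           gtk_button_new_with_label("A stock button"),
                           FALSE, FALSE, 0);
        gtk_box_pack_start(GTK_BOX(box), gtk_entry_new(), FALSE, FALSE, 0);

        gtk_widget_show_all(win);
        gtk_main();
        return 0;
    }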

But I don't want to defend Linux either. This too has regressed in GTK3 and Qt5. The internal support for skinnability with CSS has caused most UIs to override the system theme irreparably. Many UIs ship with hard-coded themes that you simply cannot change anymore, or that break horribly when switching to a non-default theme. There are a ton of widgets with incredibly poor consistency which often bring UI paradigms from phones that have _no_ reason to exist on the desktop. Qt5 QML widgets are so bad I cannot even describe how frustrated I am every time I see a good UI converted to downright crap for "reasons?".

Ubuntu keeps following the latest fads with absolutely zero consideration for UI customization, consistency _and_ performance. We have LXDE, but it too will have to inherit all the inconsistency of the programs running on top of it, and since it too inherits GTK, there's no escape in the long run.

Still, Android beats the crap out of all three easily.

It seems like nobody is even trying anymore when even developer tools get rewritten in Electron UIs with appalling performance and glaring bugs, yet they receive praise (and excuses).


It seems to me like this article should be titled "Decline of UI Consistency".

I think UIs have gotten better in general... but now instead of learning one difficult interface, users have to figure out many different interfaces. For power users who already had the difficult interface figured out, it definitely seems like a downgrade.


Partially I agree: partially for power users, partially for casual users. Sometimes this even overlaps. But a lot of designs nowadays are radically different, which I'd say is just as much a problem for more casual users.


It is the web browser's fault, of course. It's amazing: it has been only a few years since web designers discovered the concept of "components", but even today it's all a big laugh because nothing is actually properly composable. No guarantees; the web developer's idea of a component is something equivalent to a "draw()" interface. Combined with the mess that CSS is, this encourages people to just throw everything away with every project and redo it.


Nowadays the hottest business model is to throw out a piece of software for "preview", ask your customers to pay to beta-test it, and then grow it to maturity gradually over a 2-3 year span.

I guess it looks very good to the bankers, especially if you manage to pull off a subscription model, which many of them did.

Compare Power BI nowadays with an earlier version from 18 months ago or even 12 months ago, and you'll know what I'm saying. I mean, I'm fine with patches, but a lot of basic functionality was missing from the very early versions. That's beta testing, not patching.


The web puts us in a usability death-spiral. It's easy to use an onClick div to make a beautiful pop-up menu, but harder to support much more than clicking on an item. This in turn trains users to only click, which further erodes the case for any sort of richer interaction.

This is bleeding into basic browser functions. Find and scroll bars are routinely broken by the infinite scroll paradigm. Undo/cut/copy/paste are broken in customized rich text editing. Eventually these features will atrophy and fall off.


If you want a vision of the future, imagine a finger scrolling on a touchscreen -- forever.


How is that the future and not just a description of current social media apps on mobile/tablet devices?


Just in case someone doesn't get the parent post's quote, it's a riff off of a famous George Orwell quote, "If you want a vision of the future, imagine a boot stamping on a human face, forever."


And if you want a vision of the future on the desktop specifically, imagine the same, except without a touchscreen.


why should I have to touch the screen though when I can tilt it back to scroll up and tilt it forward to scroll down?


and bring it closer to your face to zoom in



You jest, but it's hauntingly true


I love this post. I have been screaming at my monitor now for several years. Something shifted about 5 years ago in UX and everything has definitely gotten worse since.

When Google Docs started to bring back a traditional menu display in the top bar, I was so excited. Everything felt normal and it was easy to find what I was looking for.

But the rest of Google Drive is a disaster. Sometimes buttons are in the upper left, sometimes lower left. Recently they moved the "Add new document" button to the lower right and I spent forever trying to find it. It is infuriating that there is no intelligent person at a company of that size who can put a stop to this crap.

I really hate to say this, but I think a lot of UX designers are trying to justify their existence by reinventing things that have already been solved.

The reality is once you decide how a dropdown or a text input works on desktop, there is very little reason to reinvent it. Ever.

Stop reinventing things, UX engineers: your small usability study with 3 of your friends who got confused for 5 seconds is not a sign that you should reinvent how to select things in a list.

/endrant


I agree with 90% of this article. However, I differ on one point: as far as I'm concerned, the "File, Edit, View" categories are anachronisms from another era. They make sense in Microsoft Office†, but fail to cover the breadth of software in use today.

I'm currently using Firefox (on OS X, where it still has a menu bar). The first three options under "File" are "New Tab", "New Window", and "New Private Window". Does it really make sense for any of those to be under "File"? I understand, historically, why they ended up there—each document used to correspond to a new file—but tabs fundamentally are not files.

I'll switch over to OS X's Messages app‡. The first two options under "File" are "New Message" and "Open". The former starts a new conversation, and the latter lets you attach a document to the current conversation. Those actions aren't related at all, except in that they kind of relate to the concept of the word "File", depending on which metaphor you're following.

So, I don't think there's anything wrong with mpv grouping its menus differently from evince. They're doing different things and shouldn't have to follow the same categories.

---

† Which is definitely (sarcasm) why Microsoft Office decided to replace the traditional menu with a ribbon. Again, I agree with most of this article.

‡ I'm running OS X 10.9; Apple may have made changes in newer OS's.


Tabs and conversations aren't files, but it is nice to have a grouping of actions that create/restore/manage whatever the primary context/data-structure the app deals with is, as opposed to making changes to it (edit) or modifying how it's displayed (view). They're very logical categories; it's just that 'File' is no longer a general enough name for the group.


I'm typically a person who would agree with an article like this; I think we _lost_ something with the modern age, even if we gained a lot. (especially in terms of developer "velocity" (I hate that word)).

However, I really feel like context is important. Computers today have a context given to them over time, users don't need so much hand-holding these days because the expected paradigms are ever so subtly changed. New entrants to computers understand these new paradigms innately because they are already surrounded by the new context.

It's only when we look back we think how much usability has suffered.

Language is a good example of what I mean. Travel back 100 years and the linguistic choices that are made would not only be slightly alien to us, ours would be absolutely muddy to them.

I think you can make a case that a lot of the new paradigms like electron do not promote usage of native UI styles and accessibility.

But the title bar being an overloaded UI element in today's context is generally OK, I think.


> New entrants to computers understand these new paradigms innately because they are already surrounded by the new context.

Much of the argument is based on the idea that the paradigm is very weak. You have things like menus, where no one really agrees upon the form (traditional menu bar, hamburger menu, ribbon). When a menu is offered by a program, there is very little agreement upon where it goes (menu bar in the title bar, or below it; hamburger menu in the top right, or top left). When menus are categorized there is even less agreement about what belongs where. That is looking at just one UI element.

These muddled user interfaces wouldn't be so bad if this were a transition period that would carry us forward another 10 or 20 years. It would simply be a new design language that we communicate through. Yet there is no real evidence that is happening. The article's author pointed out that it would be easy to confuse the pictured stack of six title bars as belonging to three applications; I can one-up that: after a quick glance I thought there were only two applications. Part of the problem is that they are mixing too many paradigms when overloading title bars. (The first four title bars looked like the title bar, menu bar, tab bar and web page header of a single window, while the last two looked like the title bar and tab bar of a single window, to my eyes.) While the ability to create this artificial situation is an amusing outcome, it hides a more serious problem: there is no agreement on what these overloaded title bars should look like.

I doubt that there will ever be agreement simply because many of the design decisions these days are about branding.


Linguist here. I think you over-estimate the linguistic change in the last century. Sure, there's new vocabulary for new things: spacecraft, hybrid (car), email, covid-19. But syntax and morphology have hardly changed at all, and you can still read Mark Twain (more than a century old) just fine, even the dialectal parts. And while we can't run the experiment backwards to see if our language would be muddy to them, I doubt it (again, apart from new words for new concepts).

Whereas the changes this thread is talking about have been major UI changes in the space of 10 or 20 years: loss of standard menus (I'm looking at you, Microsoft Office), hidden functionality, etc.


Agreed.

Personally I’ve given up on mouse GUIs

Why?

Photoshop pros use macros

Unix pros use text editing macros

Why teach new users single point-and-click methods of computing when the pros think it's a waste of time?

It’s from an era when computers couldn’t multi task and were largely business focused data entry terminals

Photo manipulation can be automated from a terminal and results displayed in real-time now

Why care about file menus? That’s just a set of keyboard macros unrealized.

The desktop metaphor is finally dying. Let it


The problem with this line of thinking is that not every program is going to be used enough to make learning the shortcuts or command-line switches worthwhile.

That said, each (gui) program should have relatively similar key-bindings.


> Photoshop pros use macros

> Unix pros use text editing macros

pros are like, 0.01 %


And they're the ones who use your tool most of the time.

Do you want to make it "easy" for someone who's never used your thing before, or do you want to make it easy for someone who uses it a lot?

(Hint: A new UI is never easy.)


A new UI is much easier if it follows convention.


Bravo to this article.

The phenomenon is exemplified by Slack and open platforms that follow its design lead, such as the Riot client for Matrix. Legacy chat clients from the 1990s and 2000s could fit a hundred rows of chat text on a normal 30 inch monitor. Slack and Riot can display perhaps as many as 30 lines.

The reason is a bunch of excess padding between lines, the injection of handles/names in the vertical space (because width is precious on small devices?), unreasonably large circular avatar images, and a host of other "mobile first" design quirks. Taken all together, we have a user interface that squanders screen real-estate with abandon.

While a legacy chat user might have chat in a small side window, a Slack or Riot user will more often than not have their chat application fully maximized or using a substantial portion of a side monitor. It's a regrettable pattern but I don't see much momentum on the reverse course.


Probably because a lot of us actually like this. I cannot deal with things like IRC where the whole screen is just a big blob of text. I need something to visually distinguish who the message is from.


Which is fine as a user preference. But modern software no longer allows for controlling information density. Even "compact" layouts, when available in modern software, are typically large and sparse by historic standards.


Today I was using Win 10 in a VM and wanted to put a shortcut in the startup folder. Used to right-click on the start button. Doesn’t work any more. Can’t right click on menu icons either, needed to change one shortcut to minimized, can’t be done. Those were a few of the tiny GUI features that made Windows superior to every floss desktop, gone.

After ten mins of googling I find that you have to open a separate Explorer window and type the secret “shell:startup” into the address line to get there now.

Between that and the control panel mess, what the fuck is going on in Redmond? (and everywhere else)

I gave up on Ubuntu after Lucid I think and eventually settled on Ubuntu Mate because it keeps nonsense to a minimum.

It’s as if a bunch of man-bun wearing devs never heard of Chesterson’s fence. :D

Edit: From the article I realized the fascination with overloading window titlebars was driven by 16:9 screens, making vertical space precious.



New Reddit is so so bad, I feel like I'm in a hallucinogenic nightmare when I accidentally click into it. Kudos that they kept the sane old option for people who want to use the website.


Question is, for how long? I am dreading the day when I can no longer switch to old.reddit.com whenever my laptop is begging for mercy.


And this is not considering that new Reddit is horribly slow. You can be waiting and waiting and waiting and waiting for the actual content to load.


I prefer old reddit because I use it to read things. But if you are into photos, memes, etc, the new reddit is better for just infinite scrolling while scanning images without having to click into each post to see images. I understand the direction they took, it wasn't for people like us who like sites like HN.

https://i.imgur.com/TTUpTL9.png


You can get infinite scrolling and inline images with RES [1] without making it impossible to read discussions.

[1] https://redditenhancementsuite.com/


Old is much better. Even then it's not perfect. The goddamn sidebar keeps taking up 80% of the screen on my vertical monitor or portrait phone.


I'm using this userstyle that hides the sidebar when the window width is less than 1500px.

[0] https://gist.github.com/squaresmile/47ed504ba2838f51be451518...


I got fed up with it; I just use TUIR with a ~/.mailcap file, and ed/vi as the comment editor.

Ultrafast, easy and the CPU usage is pretty low.


Oh yes, gods, Slack in particular got a lot worse with the most recent update, but the site I think takes the cake is one we Canadians use for food delivery - skipthedishes.com. Gods, this site breaks so many things: options to update things are hidden among menus that switch the current page while not being ctrl-clickable into a new tab (which can trash your current in-cart order), and the whole site loves "minimalism", i.e. let's put soft grey text on a white background to drive contrast into the ground.

Given that this service's value proposition is basically "As a company we have a highly usable website and a fleet of delivery drivers" the fact that their website is trash is super annoying.

(The site is blocked behind address entry, if you'd like to try it out may I suggest 1 West Georgia Street Vancouver BC)


Wow, you are not kidding. I went to this site to see for myself, and the very first thing I see is re-Captcha. And it doesn't even work! I have to pick out photos of tractors before I even get to see what is on the site!

Holy cow, why does anyone use this? It's a hot mess.


First thing I saw is that the website didn’t work at all without JavaScript. Then after enabling JavaScript I realized why: https://twitter.com/buzzert/status/1251314195976425473


Didn't even get the chance for a re-Captcha with me. It outright blocks me because I'm using a proxy.


I wonder if that's because they had far too many pranksters ordering food for someone else.


It outright blocks me and I'm not using a proxy.


Well it's either this or you have to cook and do the dishes. Heck, it's even in the name.


As a user and driver, the intended experience is very obviously the app, not the site.


Really, just letting people request delivery by a telephone call instead would be better.


Ironically, this page only uses 70% of my screen's width on mobile, the font size is uncomfortable, and lines are broken at 6~7 words, which is really annoying. Chrome suggests "show simplified view" and that definitely makes it better.


Double tap to zoom to paragraph has been a standard touch gesture for the last ~10 years at least.


But why do I need to do that in the first place? There's nothing going on in that other 30%.


Only on iOS. On Android and all desktop OSes, double-tap selects a word.


Huh? Not on Firefox for Android at least. I don't recall much from my pre-Firefox days on Android, but double-tap to zoom in on a paragraph has been in my repertoire for a long time.


Wow, we're both right.

Double tap on Android 10 in both Chrome and Firefox will select a word in mobile-friendly websites (I tried on https://PhotoStructure.com ) and will do the iOS-zoom-to-fit-bounding-box on websites that aren't (like TFA).

This certainly speaks to an increase in complexity/inconsistency.


I'm on Android and double-tap works for me to zoom-to-paragraph. Double-tap again and it zooms me back out.


The designers took over. They finally gained power and with it, unfortunately, users lost out.

In my unified theory of organizations, the organization is eventually taken over by the employees/"servants" and instead of serving the constituents it becomes about serving themselves.

Think Congress, teacher's unions, USPS, IT depts.

In programming it's resume driven development, playing with new toys constantly instead of delivering solutions.


This kind of finger pointing is really not useful or constructive. Who do you think designed the interfaces that the author of the article prefers? Do you think he'd agree that designers —a whole class of professionals with a large diversity in skill and experience—are the fundamental problem? It's like saying that the developers took over in response to an article complaining about buggy software.


All of that lies at the feet of management (defined broadly).

The people re-elect Congress consistently. Employers reward resume driven development on the open market. UX people win prizes for how their UIs perform on high-resolution large monitors even when they will be used on mobile.

I think it would be more correct to say that management exerted greater control and deferred to the UX designers who were focused on keeping management happy as management doesn't ever go back and see whether the pretty mockup works.


I love how the pretty mockup always assumes that text or images fit perfectly.

It’s probably the single biggest cause of issues we have.

It’s interesting, because the one time a designer asked for my feedback before handing the design to management (the best one I’ve met, but they left the org) they hadn’t even considered it, but were all too happy to take it into account.


Yeap. Look at the users' reaction to changes to Google News (must be ten years ago now), or to Google Maps. The GooN fiasco resulted in tens of thousands of complaints on their feedback page, and only one single "that was a good change" post that I ever saw--and that post was sarcasm (and tagged as such).

Did Google learn the lesson, and go back to the earlier UI? Of course not, because the Designers saw that the change was good. (Just in case anyone misses it, that last clause is sarcasm.)


I'll give the author the complaints about modern scroll bars, which drive me up a wall, but the complaints about Firefox and Chrome really feel like grasping at straws to find something to dislike.

"This new take on the [Firefox] URL bar pops up when you least expect it, is very hard to get rid of and, as a bonus, covers the tried, tested and highly usable bookmarks toolbar below it."

What? It pops up when you click on it, or when you hit Ctrl+ L. That seems perfectly expected, and is how URL bars have worked as long as I can remember. And it only covers the bookmarks toolbar (should you choose to enable it) when you're using the address bar and the dropdown is visible. This is like complaining that the Save File dialog covers the tried-and-true document editor in Microsoft Word, but even more ridiculous, because you can, in fact, access your bookmarks through typing into the megabar. It's an excellent UI that provides keyboard access to a wide variety of browser features, all in one location - search, URLs, bookmarks, and even a fuzzy search of open tabs.

And the Chrome tooltips...behave exactly like normal tooltips, but with slightly different styling to make them more useful for showing the title and domain name. How is this a "crime[] against basic concepts of desktop UI design"?

(disclosure: I work for the company that makes one of those applications)


I don't know about Chrome (rarely use it), but I _hate_ megabar in Firefox. It covers my bookmarks and is completely useless. Why would this input be any different than other input? I wonder whoever thought that covering other UI elements is a good idea.

Hopefully now that I know the keyword ("megabar") I will be able to find a way to disable it. I have searched for an option before but found nothing.

EDIT: just noticed:

> And it only covers the bookmarks toolbar (should you choose to enable it) when you're using the address bar and the dropdown is visible.

No, that's the problem - it (partially) covers bookmarks even when the url bar is empty and there is no dropdown.


Example for Firefox: I click in the address bar to copy a link; it opens up and covers the bookmark I was planning to open.

"What? It pops up when you click on it, or when you hit Ctrl+ L. That seems perfectly expected, and is how URL bars have worked as long as I can remember."

Just tested on Chrome, and neither clicking nor CTRL+L expands it to cover the bookmark bar.


Honestly, when the URL bar in Firefox started to get fatter on focus with the latest update, I found it a bit distracting. It was a sudden change to a component which people use the most, and the behavior had been quite stable for some time. So I guess that could have put off a few people and we started complaining ¯\_(ツ)_/¯


In the old days you had no choice but to use the widgets provided by the OS, unless you wanted to build everything from scratch, which was very hard. So everyone used the widgets provided by the OS and things were good. Those widgets were built by experienced HCI experts, so it was hard to go wrong.

Now you have no choice but build all the widgets from scratch - a web browser doesn't provide any beyond a few form controls. So it is bad - every application must rebuild all the widgets from scratch, usually by designers with limited experience and skill.


Relevant quote from my reading yesterday: "Time itself is on the side of increasing the perceived need for usability since the software market seems to be shifting away from the features war of earlier years. User interface design and customer service will probably generate more added value for computer companies than hardware manufacturing, and user interfaces are a major way to differentiate products in a market dominated by an otherwise homogenizing trend towards open systems." - Jakob Nielsen, Usability Engineering - 1993.

I think he partly got the prediction right, that usability would be a big differentiator. Apple and MS over the following years had big efforts focused on consistency in their interfaces and we had what many consider a golden age of UI usability, at least from a consistency standpoint. I think what happened next is that two things came along and basically reset the state of UI design back to zero: mobile and the web.

Both platforms were so radically different that Apple and MS UI guidelines were useless. We got a horde of web and mobile designers experimenting with all sorts of novel interfaces. Experimentation is a great thing but consistency has definitely suffered. I've long thought there was big money to be made by somebody wrapping up a complete Linux distro with a set of common applications (LibreOffice, et al) but putting in the (very significant) effort to standardize _every_ interface, write good manuals and provide customer support. Sort of like the service that Red Hat provides for servers but with a desktop focus. Maybe they couldn't eat MS's lunch, but if they could demonstrate big productivity gains for businesses, maybe they could.

In the last decade I think we've seen the (much needed) injection of artistic talent into the UI design space. UIs today are much more beautiful than in 1995. That's because businesses realized that users value beauty and hardware improved to the point where more visual effects could be provided without sacrificing performance. In the next decade I think we'll see a resurgence of focus on accessibility and usability centered around design guidelines that coalesce out of consensus in disparate communities rather than corporate policy. I think especially that as Moore's law continues to flatten out, and network connection speeds start to plateau, we're going to see a renewed focus on responsive UI design and application performance. I am excited about these trends and feel optimistic about software design going forward.

Too bad Nielsen was totally wrong about customer service though. :-(


"UIs today are much more beautiful than in 1995." Beauty is in the eye of the beholder. I just looked at some Win95 screen shots, and apart from the large pixels (making the icons back then rather grainy), and the fact that displays were far smaller back then (giving a crammed look), I think I prefer the _look_ (if not the functionality) of Win95. A lot of stuff today looks washed out, as if the ink on the printer was getting low.

And as for valuing the beauty, my computer is for work, it's not an art museum. You're right that if visual effects can be provided without sacrificing performance, they're ok--but, I would add, only if they enhance something else (like usability). Visual effects for the sake of visual effects are like tail fins on cars.


You might be interested in Elementary OS? https://elementary.io/


That has most the issues that this article complains about. It gets rid of both the minimise and maximise button, leaving only close because that's how the iPhone does it.


Their rationale is that apps should just be closed or opened; there shouldn't be a need for intermediate states.


I work on a project meant to sell a service and meant to manage the service being sold to the consumer. However, the button to actually buy the service is hidden by a scroll bar on all but the widest of screens. Unless you scroll the widget or have a 22 inch monitor, you will not see the purchase button.

Why? The UI was designed on a wide screen and we developers are just the implementers of the picture. UI is quite often taken from a drawing and little else. It looks great in a mockup, but it isn't all that practical.


> UI is quite often taken from a drawing and little else

The way I've solved this these days when working with people who aren't familiar with responsive design (older/inexperienced designers or clients directly) is to print them an A3 page with outlines of a vertical phone and tablet and a horizontal desktop screen, vaguely to scale, and all including a "below the fold area", and tell them to draw on that. This almost always gives me enough information to implement a properly responsive design.


This is a good idea actually. Now, I just need to be in the room before they draw stuff...


It drives me crazy that even though huge, wide computer screens are completely ubiquitous, there seems to be a whole generation of UX designers that only ever use (and therefore only ever develop) full-screen applications.

It's like they learned to compute on an iPad and then use the same mental model for every other device they encounter.


UX designers are appealing to their customer, who wants to see something which is very pretty on the big screen they use to present their work. They get paid for wow during the presentation, not for anything else.

A friend of mine hired a UX designer for her startup and the entire main page will soon look like a beautiful poster. But it is utterly horrendous on mobile, which is where 60% of the traffic is.

Not to mention that it is now littered with images that are going to make page load times horrible.

¯\_(ツ)_/¯


At least there was a scrollbar... one of the worst UI disasters I've encountered involves a truly horrible LMS by a trendy startup where most of a class of ~100 students couldn't log in on the first day, because the button was in the top right corner of a gigantic fixed-width overflow:hidden element. There were a bunch of other huge fuckups, not just in the UI...


That wouldn't happen to be Desire2Learn would it? I'm ashamed that they are Canadian.


I don't remember the name, but oddly enough I do remember that it was Canadian, and the company was very short-lived. It was around the time SPAs started becoming trendy (near the start of the last decade), and I remember the multiple-minute load times, dozens of MB transferred, and the insane CPU usage, along with all the other horrible usability issues it had.


I work with fortune 100 companies that do shit like this. It's maddening.


It is really astonishing how many 6/7 figure pieces of software are designed this way.


On desktops, I wish we could just stop wondering “where is that icon?” or “what does this icon do?” entirely.

We should be able to search for any feature within an application in a standardised way instead. Maybe something like what Ctrl-P does in Sublime Text / Sublime Merge.

If this shortcut would work for any application within the same OS, we could get rid of most icons by default, while adding more consistency at the same time!


On macOS you can search menu items via Shift–Command–QuestionMark (or by opening the Help menu). Most toolbar actions are also exposed as menu items, so this lets you essentially search for almost every function of every application that plugs into standard macOS frameworks.

Some applications have features that extend beyond what can be surfaced through the standard menu bar but the infrastructure is there for "normal" apps.

Ubuntu used to have something similar in earlier versions of Unity. It would surface Gtk and Qt menu trees in a searchable interface.


> Shift–Command–QuestionMark

Oh my! I knew you could search through the menus. I didn't know there was a dedicated keyboard shortcut for it! I've been using MacOS for 10 years and never knew this… Thanks.


I also figured it out just now after 8 years of Mac usage. But I already knew that I don't know/use most of the dedicated Mac shortcuts (though I think I should).


Yes! The Ctrl-P convention is the best usability convention to pop up in a decade. It can be so useful. Especially for any complex app like Photoshop, Blender, movie editors.

I was only aware that Atom and intellij had this. Atom has slightly better text matching. I think it keeps synonyms for searchable items. Intellij covers more of the interface. You can even jump deep into parts of the settings dialogue with it.


Other than Sublime Text, I also like how the Microsoft Office apps and Photoshop do it. I press Alt+Q (it was Ctrl+F in PS), type something in and press enter. If they have a search like that, then it doesn't matter where they hide the functionality.


Google will more reliably return what you're looking for than any hand-built documentation system.


That is missing the point. The Ctrl-P shortcut is for all those actions that you are aware of but use seldom enough to not warrant learning the keyboard shortcut. Or features that you are vaguely aware of should exist. The fact that it can provide a bit of documentation is a bonus.

Google is a competitor to this kind of feature as much as your hard drive is a competitor to your memory.


You're right actually. I was missing the point. Yes, this would be very useful.


I finally switched from Win7 to 10 about a month ago. Couldn’t put it off any longer once lockdown started. I’m tech management, so it’s office, mail, browser and graphics mainly. Illustrator 4 runs fine on an X220.

Win10. I just absolutely hate it. Every day I have to relearn something obvious. I can’t find the corners of windows to grab them, and when I do, it’s one damn pixel wide and I get jitters. Why is Candy Crush on the Start list when I never use it, but where the hell is Notepad? Bla bla bla.

Would someone please make a Win7 skin so I can get back to work?


https://github.com/Open-Shell/Open-Shell-Menu

There you go. Brings back the Windows 7 Start menu.


Nay, i'd rather https://cairoshell.com/


> I can’t find the corners of windows to grab them, and when I do, it’s one damn pixel wide and I get jitters.

The window borders technically still have more or less their old width, it's just that they're mostly being drawn as invisible (but you still should be able to grab them just as if they were actually visible).


This was always one of the biggest failings of open source software. Most communities in my experience absolutely explode when anyone suggests a UI change, even if it's to bring the application in line with well-known usability, accessibility or design standards. The only two outliers are the GNU coreutils, which have at least a semblance of consistency in their command structure, and the corresponding BSD tools, which unfortunately have opted for a completely different UI standard.

I'm afraid there's only one way around this: pressure from above. Pressure from the community keeps failing every day. Newbies try something out, rant about the bonkers UI in a forum or bug tracker, and the fans shut them down with what amounts to "it's how we've always done it!" Whoever decided on the UI of many of these has clearly got too big an ego to see that they are hurting users by "differentiating" themselves.


There was a time (roughly between 1982 and 1993) when very few could sit down in front of a GUI. I do feel like I am returning to that time. Here are some interfaces I could do without, except that I can't:

→ The command line. In 2020, I need to do a lot of things at a command line because there is no other way. For example, starting and stopping sshd needs to be a checkbox.

→ Tabs. Tabs. and more layers of Tabs: boot tabs, workspace tabs: work-spaces/virtual-machines/containers/emulators, Apps ⌘+Tab, Windows ⌘+~, the sad return of "Multiple Document Interface" in the form of tabs and hierarchies of tabs within those tabs, tabs within the page and hierarchies of tabs within those tabs, Views within the page with tabs within those views and hierarchies of tabs within those tabs, keep going recurring tabs possibly forever.

→ You deserve better than this: window snapping. And so-called "tiled" window managers which are little more than poor versions of 1980's window splitting.

→ Right clicking and yet another menu/sub-menus pops up of things I don't want.

→ JavaScript. Advertising. "Block pop up windows" has been enabled by default for a long time, but what about blocking pop ups within a page? An ad blocker for now, I guess.

→ The hamburger menu. Or for that matter, any menu with sub-menus and any menu with more than 8 to 9 menu items.

Here are some interfaces that have improved:

→ No modes.

→ The ability to go full screen when needed without compromise. And, being able to, fairly easily, get out of full screen.

→ UTF-8

→ more guides: the translucent lines or boxes that help align UI elements in flexible ways

What is missing:

→ pop ups/menus used extremely sparingly.

→ Tools that float, in the sidelines — not on top of content, only in the context of when you need them. For examples, see game interfaces, or excellent graphics applications.

→ What you deserve is "Zoom to fit" which when done well is great.


> starting and stopping sshd needs to be a checkbox

That's how you get something like this: https://www.jensroesner.com/wgetgui/


It’s funny you mention that, because if you keep scrolling to the section “for the haters,” you’ll find a pretty close approximation of my views, and a worthy response to those who scorn another’s preferences because they are against their own, when they aren’t mutually exclusive. It’s okay to have more than one way. It seems like some kind of appeal to correctness or directness or purity of task completion that I don’t understand, but which is very common in computers and software especially.

A poor craftsman blames their tools as the saying goes. A worse one curses their tools. A good craftsman appreciates the shortcomings and limitations of their tools and adapts their tool usage, tool choice, and their very tools themselves if need be. What kind of craftsman criticizes the tools another chooses on matters upon which reasonable people could disagree? Is such a tool unreasonable, or is it the craftsman who criticizes another exercising a preference and doing their own thing their own way?

Not every tool is for every job, nor is every tool for every tool user. Preferences are normal and vary. So should expectations. There’s always another tool. Try not to be the tool but rather the tool user.


- Under any other Unix you'd write a GUI over /etc/rc.d/rc.ssh and call it a day. I dunno about Tcl/Tk under OSX, but on BSD/Linux it's a click away. Or better: iomenu/dialog. You press a keybind and choose to start/stop SSH from a dialog in a terminal (a minimal sketch of that follows below).

- I hate tiling. CWM has the best of the minimal and floating worlds.

- JS? Use unbound plus a hosts-fetching AWK script. Now you have system-wide ad and pest blocking (a rough sketch of that also follows below).
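
On the sshd point, something in this spirit is all it takes - a minimal sketch only, assuming a systemd-based box and the stock dialog(1) utility (the unit may be called "ssh" rather than "sshd" on some distros):

  #!/bin/sh
  # Minimal sketch: toggle the OpenSSH server from a keybind-launched terminal.
  # Assumes systemd and dialog(1); adjust the unit name if your distro uses "ssh".
  choice=$(dialog --stdout --menu "OpenSSH server" 10 40 2 \
      start "Start sshd" \
      stop  "Stop sshd")
  clear
  [ -n "$choice" ] && sudo systemctl "$choice" sshd

Bind that script to a key in your WM and you have the "checkbox" without leaving the terminal.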
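
And on the ad-blocking point, a rough sketch of the hosts-fetching AWK script alluded to above - the blocklist URL and the output path are placeholders, so treat it as an assumption about one possible unbound setup rather than a recipe:

  #!/bin/sh
  # Rough sketch: convert a public hosts-format blocklist into unbound
  # local-zone rules for system-wide ad/pest blocking.
  # The URL is a placeholder; the include path depends on your unbound config.
  curl -s https://example.com/hosts.txt |
    awk 'BEGIN { print "server:" }
         $1 == "0.0.0.0" || $1 == "127.0.0.1" {
           print "  local-zone: \"" $2 "\" always_nxdomain"
         }' > /etc/unbound/unbound.conf.d/adblock.conf
  # Then reload unbound, e.g. "unbound-control reload" or restart the service.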


Good piece, with accurate criticisms. I've lost count of the number of times designers insisted on re-styling links in some unintuitive way.

I suspect (but cannot prove) that people struggle to channel their natural inclination toward creativity in constructive directions. A big part of the job of a UI is to be familiar and hence easily understood. The best UIs often don't stand out -- they just let you get your work done effortlessly.


The single thing I hate the most?

Showing search results immediately, while the real search is happening in the background. I see what I want, click it, and it immediately changes to be something else.

I get the need to give users instant feedback, but not at the expense of user experience.


Somehow Google, even with all their money and rockstar developers, still does this to me in search results. There's no built-in option I've found to remove the 'People also search for' popup that pushes down all the content I was just about to click on after navigating back to the search results page. I had to make a filter rule instead.


Care to explain how to apply that filter rule?


In uBlock Origin, add a line in "My Filters" for:

www.google.com##[id^=ed_]

I only created this today, so I'm unsure if the IDs change over time or if people get different IDs for sections, but it's been solid so far.


Oh this is cool! Have been using ublock for a while and didn't know you could use it to block particularly annoying things like this. Thanks a bunch!


I was so disappointed when Xfce, long my last bastion of consistent UI design, finally gave up the ghost and announced their move to client-side window decorations. It seems the days of the title bar + menubar + optional toolbar are numbered :(


There is still Trinity [0]. I think it will be very hard for them to switch to client-side decorations, even if they wanted ;)

[0]: https://en.wikipedia.org/wiki/Trinity_Desktop_Environment


KDE still champions server-side decorations.


I would be harsher than the OP: this inconsistency is deliberate and a natural consequence of the attention economy. Every PM's dream is for their app to become the new homepage/home screen.

Want to switch out of Slack to send an email to your boss? It SHOULD feel painful, so that next time you do it on Slack.

Want to set yourself a reminder? Well Slack can do that for you. No need to deal with the "foreign" Google Calendar UI.

VSCode is also somewhat weird, in that it married an IDE with a terminal, a file manager, a VCS, and a tiling window manager.

One day there will be the "great merger", Slack + VSCode + GitHub + Spotify, the Dev Desktop.


> One day there will be the "great merger", Slack + VSCode + GitHub + Spotify, the Dev Desktop.

So... yet another way in which VSCode is catching up to where Emacs was decades ago :)


Definitely agree on the Firefox megabar. As soon as that came out I had to use userChrome.css to make it stop covering the bookmarks toolbar when opening a new tab. When I open a new tab, I'm either going to type something in the address bar, or open a bookmark. Having one cover half of the other makes no sense. It's not like the regular address bar size is insufficient.


Completely orthogonal to the megabar being shit: you can[0] middle-click on a bookmark entry to open it in a new tab immediately.

0: could at some point? I revert and stop updating software that sends downgrades and regressions as updates, so I'm not sure what all firefox-latest is doing.


Yeah, but the order of events in my brain is: 1. I want to go somewhere new. Opens new tab. 2. I have a bookmark for where I'm going!


Fair enough - I think of opening a bookmark (What do you mean "in a new tab"? That's how you open a bookmark.) as a single action, so it irritates me immensely when I have to make a space and then put stuff into the space; having to do that simply because I didn't know about middle-clicking - rather than because the application is deficient - would be even worse.


Aww, he had to go after the Gnome stuff :/

Many a time I have spent over a minute decoding a Gnome GUI for an incredibly simple application. Is that a clickable icon? Why is that icon/menu option greyed out? Toggles, icons, buttons, toolbars thrown together with a rare tooltip. I fully agree they border on parody.

And yet I still use the gnome tools frequently, because they are useful. So in a way I do feel bad for complaining because I am certainly not stepping up to the plate to improve these tools.


Problem is, would that even help? What I want is what Gnome 2 already was. Their changes indicate they themselves don't want that anymore. Any attempts at voicing my concerns are met with disdain. What more can I even do as a developer?


> What more can I even do as a developer?

Switch to MATE: https://mate-desktop.org/


You can fork! /s


Only if you want to maintain a private copy in perpetuity. It's always more constructive to work with an upstream.

Though in the case of GNOME, that's rarely productive. I had one of my contributions sit around (with tested patch ready to apply) for 10 years! Despite comments saying it was good to apply. That entire component ended up dying through lack of maintenance even when people were happy to contribute effort.

GNOME's problems lie squarely with its terrible choices.


It feels like there are many unrelated terrible reasons for "UX" being a clusterfudge. "Webified" UX and "data-driven" UX are of course terrible. Data scientists in these fields don't come from the right background to know or care about causal analysis, so whatever is measured is not UX efficiency.

But there's other issues, unrelated to that.

Computers are complex and there are certain tasks we do not have to do very often. Back in the day, the consistency allowed me to know how to do most things.

I find this most notable in Win10. Back in the day, I could intuit many things, for example how to get something in and around my task bar to work as I want. Another example is to fiddle with audio settings, bluetooth or other settings stuff in general.

Today, I have to "try around" instead. There are two control panels, plus that task bar control panel. Sometimes you need to right click somewhere and choose an inconsistently named item to get to the right setting screen, sometimes it's better to search. Sometimes the setting is available in the "task slider" screen.

I always just thought it's me getting old. Because that's how old people use a computer.

Perhaps it's rather the UX disaster.


Speaking of the Firefox version 75 bar: Why has there been a change in the way the selection works? Now it takes three well-timed clicks to select an entire url. Is this an improvement?


To some, apparently yes. I hate it. The URL is an input field, just like on a web form, and a single click should place the cursor at the clicked position. That is the damn convention, and anything else just undermines consistency and usability.


I'm currently on Firefox 75 on Linux. Clicking in the address bar when not focused consistently selects the entire URL.


Right. I forgot to mention that the problem comes with X11 and the clipboard.

The first selection doesn't and shouldn't copy to the clipboard, because then merely focusing the bar to enter a new address would replace the existing clipboard contents, and it wouldn't be possible to paste any selection into the address bar.

So using a URL from the address bar requires selecting, and then re-selecting, the entire address. That second selection has become far more difficult.


You want right-click > "Paste & go".


Actually the other way round: how to get a URL out of the address bar. It used to be that a double-click was enough, since the X Window System automatically makes selections available for pasting.


The main problem is affordances. In the old days, users were expected to be newbies who needed to get productive fast. So menus explicitly featured keyboard shortcuts, apps had a status bar telling you what was happening, and there were even main menus that told you everything you could do with the app.

Then UX designers thought they should be clever and make this a game for the user, so they eventually figure out how to do X. Some of those initial games produced "aha!" moments, and UX folks took that as a signal to double down in the name of minimalism. Now apps rarely have menus or status bars or even toolbars. Users are expected to struggle a bit to get their "aha!" moment. Sometimes even critical functionality is kept hidden behind weird actions like a triple-finger squeeze. It's horrible for users, but apparently UX guys think they are doing cool shit.


Absolutely true.

It's mainly the mobile "experience" seeping into the desktop. Undiscoverable UI incantations and improv galore. All to get rid of the healthy PC ecosystem and usher in the brave new world of walled-garden platforms with juicy store taxes.


That's because we have a new tool in our UI kit. Google.

If you are trying to underline text in your fancy text editor, well, you don't try to figure out what the icons mean. Instead, you hop on Google and type: how do I underline text in fancy editor. Google is part of our UI now.


Little of what he says is a problem on MacOS fwiw. I can resize my Slack by dragging anywhere on the toolbar, for example.


Using just resizing as an example: there are no visual cues that you can resize any macOS window without hovering over an edge; the edge is visually 0 pixels wide; the resizable area extends outward from the window a few pixels, which means you're clicking on the window underneath to resize the topmost window (unless it is near the edge of the window underneath, then all bets are off); the resize cursor is unreliable, sometimes it does not show but clicking and dragging still resize the window; resizable windows are visually indistinguishable from non-resizable windows; and the same relative pixel that resizes one window either moves the non-resizable window or brings forward the window underneath.


The author does mention that:

What about Apple?

I can't comment on the current state of MacOS since the time I've spent actually using a Mac during the last 8 years or so probably totals to a few hours. Apple used to be good at this, and I hear they still do a decent job at keeping things sane, even post-Jobs.

But yes: that consistency, though still flawed in many ways (looking at you, iTunes^W Music.app), is what has kept me on this platform, unbroken from System 6.0.7 through macOS 10.15.4.

Disclaimer: I've been in enough flame wars over this UI that I fully acknowledge that this is a matter of preference. You're not a bad person for preferring otherwise.


The fact that MacOS shows the menu at the top of the display all the time used to bother me but I've long since come around. As more and more cross platform Electron apps take over the desktop, I'm even more thankful that it's there, keeping a lot of this nonsense at bay.

Microsoft has been going downhill for a loooong time, the stupid Ribbon Bar drove me off of MS Office 15+ years ago. The control panel in Windows XP was a mess and it's only gotten worse as far as I can tell.


The ribbon bar was when I started noticing this mess. I liked the extended tab concept, but the UI was inconsistent because some things were accessible through the tabs while for others you had to go through the button, in a way that has never made sense to me.

I do blame monopolies for this in part. When some single thing dominates (office suite, web browser, whatever it is), the comparator shifts. It's no longer "how do these 6 things compare"; it's "is what's new here entertaining enough, and not so much of an inconvenience that it's worth the pain of abandoning an entrenched tool?" So users convince themselves the improvements are progress, because everything is implicitly compared to the alternative of struggling against the lack of choice among other products.

That's not all of it by any means, but I do think it created a context for what's deemed acceptable ux-wise.

macOS isn't all it's cracked up to be, though, and is maybe another historical source of this mess. I use it daily and it's much, much, much too easy to lose track of open windows. Dialogs open and you don't even know they're there, and you find windows open that you thought were closed weeks ago.

This doesn't happen with a lot of other OSs, or at least used to not happen.

My favorite UX has always been KDE's, although I haven't had the opportunity to use it as much as I used to, for work reasons.


I was just thinking this. It sort of gives you a home base to return to, regardless of how weird/spiffy the rest of application is. Additionally "Help" is always to the right of the other menu items. And there's a search box under "Help" that you can use to search menu items. Which can be extraordinarily useful, especially in applications like Photoshop that have complex menus and panel systems.


Slack's design does look quite goofy in Safari, though, since Slack's toolbar design matches the browser's, so it looks doubled up…


Full screen mode would alleviate that issue though, right? <eye roll>


Huh. Slack's MacOS UI is the first thing I thought of when reading this article. No titlebar with standard icons. Bizarre mechanisms for popping up dialogs. It could be a poster-child for inconsistency with every "normal" mac app.


I'm genuinely curious how it came about that user-experience is now "try clicking, tapping, swiping, hitting a key" until something happens.


Let's avoid putting all the weight on Microsoft's shoulders. I've tried to use W10 a couple of times over the last few years for the job (SE); it doesn't work for me, but for other, major reasons.

I'm using Manjaro i3 on the desktop and an MBP 15 for mobility.

But overall, in UI/UX we are having a hype of newness: standing out among competitors at the expense of functionality, and making it all "accessible" to as wide an audience as possible. FF's address bar zooming "feature" makes me feel like a damn moron :))

1. The job is still done on desktop/laptop computers, and touch screens don't really make much difference, I think. I wish all the major companies incorporated that point of view into the planning process. F*ck shareholders, you gotta think about your users first!

Apple suffers another dead end here - so many colors it squeezes the energy out of my brain! Mavericks was the last sane macOS with sane colors.

2. Regression to the mean: that's exactly what happened to the educational system, and to textbooks as well - make everyone comfortable with themselves instead of pushing kids to actually learn.

Dumb down everything!


> All of these title bars denote active windows. The top one, Outlook, looks exactly the same when inactive ..

Yes, when did active clickable elements go out of fashion, and when did everything become the same faded shade of cold blue brushed aluminum?


If you have a giant full-screen window with tabs, inactive controls are never visible, so there is no reason to make an inactive appearance.


I was going to make a comment about how this might be a mobile thing, where focus is less of an issue because everything is fullscreen, but now iPad does multiple windows and has the same focus problem…


One of the most annoying UI design features to me is multiple ways of doing the same thing on the screen at once.

Windows 10 does this endlessly. You can use the taskbar to get to your files, you can use the Start menu to get to your files, you can use the tiles in the Start menu to get to your files. This actually makes it far more confusing when trying to quickly get to your files. At least on macOS there is one single button to get to your files (the Finder icon).

If you've ever used Maya you'll know it's the same thing. The layout is incredibly overwhelming and when you want to quickly and effortlessly switch to a different tool, you have to think about what button to press. I switched to Cinema4D as their design is simple and very intuitive.

Good UI design to me is about having labeled buttons that have a depth of interaction, rather than putting all the buttons on the same screen at the same time. Obviously you don't need to abstract away menus all the time (like the proliferation of pointless hamburger menus), but at least cleaning up buttons makes UIs more usable.


Without experimentation there can be no progress. It is nice that you are satisfied with the "old" era UIs based on dropdown menus and predictable title bars and it would be nice to have some decoupling of functionality and UI so that you can style your apps to adhere to this paradigm.

What I don't see in your article is any reasoning WHY we should build our UIs in this way, and even if you did I suppose I would disagree. I hate dropdown menus, I hated them since Windows95 and never stopped hating them. There are many other approaches - string-based "tell me what you want to do" approach of Emacs, context-based morphic approach of smalltalk systems, etc. Each of them is interesting, each of them brings something new and works for certain applications.

It seems to me that instead of ranting how UIs are not what you want them to be these days, you could instead rant that you are unable to mold UIs to your liking and it would have greater utility.


> The newly released Firefox version 75 features what has become known as "the megabar"

Oh, thank god someone is saying this. This change baffled me. Who thought this was necessary or a good idea?


A lot of the decline has to do with the desktop importing web-based UIs: documents with a hodge-podge of interactive widgets that glue them into a cohesive app. Every app can be different and can be optimized for its function. New desktop apps are often built with either web-tech or some new UI framework that tries to give developers and designers that same level of differentiation and task-specific optimization you get in a browser.

Honestly, it isn't surprising that web style UIs are so popular: they take a lot less time to build, and you spend more effort building functionality than integrating the standard actions your app is supposed to support (think the stuff in the File, Edit and Help menu). Also, most users do pretty well with webish apps. Better than they should and most of the time better than they do with a GUI app. As they say, Good Enough beats perfect almost perfectly.


I wish they would use more words in menus instead of icons. I built this wonderful capacity to read quickly, and I can scan a list pretty fast; some interpretation is always necessary, but some icons are really obscure/vague.

And if I have to click more than once to find a scroll bar because it was so small I missed it, then the GUI is doing it wrong.


I often wonder what Jakob Nielsen would think of modern UI haha. Every decision goes against what he believes in. I wish someone would show him the inscrutable Snapchat UI.


Some of the worst offenders are 16:9 monitors. Great that Apple did not follow that strange fad and stuck with 16:10.


My main desktop monitor and my laptop are 16:10; both are 10 years old. I can't get over how terrible 16:9 is for computer use. It's a travesty that this became the standard.


Using TV screens for computer output is an idea that should have died with the Commodore 64.


I read this and I can’t tell if the OP is joking or what. Information Architecture and User Experience Design have dramatically improved application development in the last ten years.

This isn’t to say crappy apps aren’t built. They are. But when time and talent are leveraged properly, good things happen.


I'm also tired of articles that are like: "Here's some bad examples, therefore everything is bad".

It's missing the constructive part and probably finds the wrong audience.


I'm sure people would be happy to provide more bad examples--indeed, they have done just that (and I contributed my own gripes about Adobe Acrobat).

I can think of bad UI choices in nearly every application I use. I think one over-arching issue being raised here is the lack of consistency among applications, even on a single platform.

But if you don't think everything is bad, perhaps you could provide examples of software with good UIs?


Ah, the times of the Sun-contributed and Sun-led HIG — Human Interface Guidelines — for Gnome that actually made sense!

But even that (Gnome 2.0) "simplification of UI" got a lot of flak from the community, but it was based on sound principles and actual user validation (it was already brought up in this thread).

What I like to think is that we are in chaos at the moment, where experimentation is running wild, and at some point we'll realise again that we can't have good UX for both large and small screens, and for keyboards/mice and the lack of them, all in one codebase, and we'll stop trying, and all will be good. For the next 5 years at least.


Whatever they say, as a Mac OS 8 and 9.1 user I don't think there will be anything similar for a long time: macOS today is a usability disaster. A user interface that is mined with options for every case. Gosh, it's a pain in the ass to work with macOS these days, with so many options you have. Before, it was simpler, but things were done in an effective way without cutting off the user's chances of working comfortably and customizing their desktop. And the Finder is a disaster; for me the epitome of a file manager will always be Norton Commander.


I'm guilty of many of these. The reason is screen real estate. You want to have the most essential stuff on the screen, and not in hidden menus or popup windows.

And your app needs to adapt to many screen sizes, all from mobile, pad's, notebooks to desktop monitors.

Then it's much more efficient to use the keyboard rather than reaching for the mouse to click on icons you have no idea what they mean, or pulling on scrollbars.

Only problem is that if you design your app like Vim, you will have to put a lot of time into teaching users to use it properly.


Frontend development/UI/UX design has gotten stuck in a weird loop. Many companies/startups get to work redesigning the UI every 6 months just because some competitor has a new design and a few people like it, or even worse, because the designer likes the new "slick" design. It is bad because it is no longer concerns or standards about accessibility or usability driving those constant redesigns, but some kind of self-created pressure to be part of a rat race.


I’ve been helping family get set up with Zoom etc via remote desktop (recommend Chrome Remote Desktop, easy enough to set up). Watching them do things makes me realise how absolutely unintuitive computers have become.

Click to download something. Where has the thing gone? They don't know that the little icon in the top right with the down arrow means "downloads", and they don't see that it's just gone blue. One click or double-click? I don't even know any more.


Recently my MacBookPro stopped working with an external LG monitor. Until then the laptop used the highest resolution spanning the whole wide monitor.

Then it suddenly stopped and only displayed a 4:3 image in the middle. Preferences didn't show anything.

I've learned that you need to option-click the settings to see all possible hidden resolutions. There was the wide, large resolution again. Not sure how one is supposed to use this preferences panel, usability-wise.


> The Decline of Usability

Featuring a 600px wide container for text.


inspect element ; div.content ; .content,.priv ; max-width ; delete

Compared to any 'modern' website this is a fucking pinnacle of usability.


Thank goodness Firefox provides the Reader Mode usability feature.


What's the problem?


I thought that was just for desktop and there it is not bad. But it also doesn't use the entire screen on mobile...


Some very good points here. I haven't been using Slack in a while and I can hardly believe they could come up with something like that! What a nightmare...!

I can't fully agree with the rant about scroll bars: they are so easy to reveal by just scrolling (easier than a mouse click), and hiding them is a lot better in terms of aesthetics. Of course it's a compromise, but an acceptable one. I would actually prefer scrollbars on desktop to be larger: they are not visible most of the time anyway, so why make them such small targets that are difficult to click on?

The author seems to be critical of flat design; what are the arguments around that? The emotional impact of design is not something that can be disregarded that easily, and it's legit to expect a modern UI to be efficient and slick at the same time. Aesthetics are paramount, for the same reasons typography matters. Users nowadays expect to deal with a clean, lightweight UI, rather than with one that looks like it was designed in the Pleistocene.


This is the worst-device angle, a pest we've already seen in games.

PC games designed with consoles in mind make PC game UIs worse. It starts with the menus and ends with the in-game controls.

This is the same thing we see in Windows 10 now. It looks like it must be built in a way that you could use it drunk with your nose on the screen, even though more precision is not only possible, it's normal.


I don't agree with every point made in the article, but it was pleasant to read.

The point about keyboard focus especially got me, as I really like to use my keyboard to do as much as possible. It's far superior to the mouse unless you really need the mouse (for image manipulation, for example). We are rebuilding our UI for Emvi [1] right now in the browser, "keyboard first", and I must say this feels quite right. Of course you still need to support mouse and touch devices, but it's so much more fun to start out with a clear image of what you're trying to achieve. The handling is much faster. You can read about it here [2] if you're interested.

[1] https://emvi.com/

[2] https://emvi.com/blog/a-new-experimental-user-interface-QMZg...


Hah, what a delightful version of Evince the author makes fun of. Newer versions removed the "Open..." option entirely, AFAICT.


I might be a little late to the party here, but I don't entirely agree with him.

What I do agree with: UIs have gotten worse on the usability front; this is undeniable. Each application does its own thing when it comes to layout, and each app is also subject to change, so not only do new users struggle to find where a button is, but experienced users have to relearn the UI. I also agree that a lot of the bad changes to the desktop come from mobile. Something that works on mobile because of the small screen, limited horizontal space, and inaccurate input method will inherently not be the most efficient on a usually large, wide-screen device with very precise input methods. There is also a large element of sacrificing usability for aesthetics, which doesn't help.

What I don't agree with: I don't think we had ever reached a stage where we nailed usability. The "golden age" he talks about, with the File, Edit, View menus, wasn't all that great. I remember using Word 2003 and having to click through each menu, reading each entry one by one to find the option I needed, because it wasn't obvious where it should live. The one advantage this system had was that every app used it and so everyone understood the paradigm, which is arguably a bigger factor in usability than the actual design.

He also makes it seem like (although I don't think he argues this point) every new innovation in UIs past a certain point was bad. But he also gives a counterexample: the bookmark bar. Older web browsers didn't have this feature; it was something that came later. Some whizz kid one day implemented it, and this feature happened to stick. We haven't solved UIs yet, and so we have to try new things in order to figure out what works and what doesn't.

Finally, I don't think the most important thing with many UIs is how easy it is for a new user to understand. Most people would agree that Vim has a great interface, but it just takes a while to learn it. This goes for a lot of specialist and professional applications, I'd prefer it to be designed to be useful for the veteran user, not the newbie, but with good documentation to make the learning as easy as possible.


This is also why Windows Phone and Windows 8 were dead before they arrived: there is no discoverability besides randomly clicking and swiping to see if something happens. And if nothing happens, you still don't know if there simply is nothing more, or if you haven't used the magic swipe/click yet.

It seems that many 'new' interfaces try to do their own thing, and try to go for looks rather than functionality. If you have no consistency and no known functionality, the UI is basically worthless.

There are some ways to get a 'better' UI without breaking so much, like the macOS menu bar that is context-dependent. You save some space in each window, yet because you can only click one menu at a time anyway, you don't lose any functionality or consistency. The downside is the extra step of first getting focus on the application you want to use (but in most window managers you had to focus the window first before using the window-embedded menu bar anyway).

It's not that a UI can't be made better, it's just that it's hard to sell to people with no knowledge or affinity with basic HIG principles. Such disconnect between reality and 'design' could be found a while ago where feature phones and some laptops came with either round trackpads or round screens. That's a neat artistic thing, but completely useless in a reality where everything is rectangular, including the things you interact with in the world as well as digital content.

Same with the touch-centric UI components getting ported to desktop operating systems, which assume lots of swiping and touching where there is none - though it is possible to execute this reasonably well (I think recent Windows versions have a 'touch mode' you can switch into, instead of having a mess of both worlds in one). Same with Apple's approach, where the input device they ship with the computer has its capabilities matched to the OS, and the UI is explained on first use when you power up for the first time. And if you don't use it, you're not losing anything, because the traditional buttons and menus are still there, visible, and not in some hidden swipeable area.


I'm having a real hard time squaring a rant about usability that expressly states that everyone should be held accountable to good design with a website design that went out of its way to be unreadable on mobile.


One thing I've noticed is how many more mouseovers there are.

I've noticed I have an always-on internal process where I'm hunting for mouse cursor dead zones. I don't recall it being that way before.


I am using LibreOffice more than Word and Excel these days: besides being far too slow, the UIs on those apps keep changing for the worse. I give my feedback, but now there is a search field in the title bar. The MS people claim this is for discoverability. Then let’s put the whole damned app in the title bar. This idea that apps need to be made for stupid and lazy people is why there was Office and Works. This is a catastrophe. I need professional tools, not a Farmville edition of a word processor. Done griping, and I wholly agree!


GNOME MPV (now Celluloid) seems like a bad example since it's not part of the GNOME project.

GNOME Videos gets away with the same UI choice as in that app: media is opened from the lists and not the menu.


Honest question: is there a single desktop manager that conducts usability studies and isn't guided by hunches and conventions? Or more broadly: any open source software that does?


What the post forgot to mention is that even though the hardware on a Win95 machine was slow, the interactivity felt fast: no slow loading bars, snappy software, and fewer bugs. Compared to that, in the present day, with fast A-series chips on iPhones and Zen processors, almost every piece of software feels slow, from 'native' desktop apps to web apps running boatloads of JS in the browser. Let's not even mention endless A/B testing.


UI/UX is just one side of the problem; talk about app sizes... specifically those Electron ones... I’m not kidding, the other day I saw a modern-day note-taking app promoting itself as a stellar note-taking solution which comes “feature packed in a single install file, just shy of 100MB“... and I thought to myself, oh gosh! Am I the only one who thinks WinAmp, an approx. 2.5MB install, is still the most kick-ass app to date?!?!


Nowadays you need 100MB to really whip the llama’s ass.


Ornamentation vs. function is an argument as old as humans crafting objects. I would argue it's not a decline but a phenomenon that ebbs and flows over time — check out this NYT article from over a decade ago: https://www.nytimes.com/2009/06/01/arts/01iht-DESIGN1.html


> Putting things in the title bar saves screen real estate! This is true to some extent, but screen real estate in general isn't much of a problem anymore.

Doubt. I may have 3x27" monitors on my desk now, but I work on a 15" laptop way more than I ever have, too. Screen real estate is at a bigger premium for me now than it has been since like 1997.

(You can't just scale up resolution past a certain point. I'm also getting old.)


The way the "screen space" argument is used in practice, it's almost always misleading, because the same apps that insist on shoving their UI elements into the title bar, are usually also guilty of copious whitespace in the context area. The article even points that out.

I think what can work reasonably well is an option to make the title bar auto-hide (but pop up if hovered) while the window is maximized. This shouldn't be the default, because new users should be able to find the window-related commands easily - especially Close.

MATE can be configured in a somewhat similar fashion - it has auto-hide, but it doesn't expand on hover - it's just completely hidden. This works reasonably well in practice anyway, because there is a shortcut to close, and un-maximizing can be done via the task bar in the very rare case where it's needed.


"I'm also getting old" It could be worse; you could stop getting old.

(I turn 70 in a couple months.)


Having recently moved from Fedora to a Mac (my employer's computer), one thing that's still very annoying to me after 6 months is the top menu bar that the Mac keeps fixed at the top of the monitor, even if the program window is not occupying the full screen.

I have a 4K, twenty-something-inch monitor, so I keep around 4 different windows open. To go from a window in the bottom right corner to the File menu in the top left corner is a long trip.


Personally, I really like the top menu. It prevents apps from trying to style their own version of it, and it's always in the same place. Developers know that users always look there, so almost no apps skip making it. Was always the worst to me on Windows/Linux when programs would make their own crappy versions of a top menu that were all inconsistent and bad.

I use a mix of 27-inch 4k monitors and a 21:9 34" monitor, with many things open all over the screen, and have never felt it's too far away. You might try increasing your mouse speed a bit (you'll get used to it pretty quickly). If you can get from the bottom right to top left corner without picking up your mouse, you're probably at a good speed.


For my personal system I use a tiling WM (i3) and actively use hotkeys, so I'm less affected by the GUI degradation described in the article (though the Firefox megabar is really annoying).

But for an average user, IMHO the best GUI design was in Windows 2000 - almost all changes made later have made GUIs look more modern (whatever that means), but less usable. And the Linux GUIs (GTK/Qt) unfortunately follow this pattern.


While some are egregious usability disasters, like hiding the scroll bar, other points are just a sign of the times moving on, like the ≡ for menu.


> other points are just a sign of the times moving on, like the ≡ for menu

The article addresses this at the end. In software, the "times" do not move on like an unsteerable force. We (users and designers) decide how the times should move on. For non-mobile, the ≡ for menu seems like a step backwards and we shouldn't accept it without complaint.

Applying mobile patterns which are suboptimal on the desktop seems like a UX antipattern to me. I think a huge part of the problem is that software -- both free and nonfree -- needs to show "change" in order to signal it's still alive and maintained, but change for change's sake can be a bad thing, especially if you already had a pretty good (or consistent) UI.


> change for change's sake can be a bad thing, especially if you already had a pretty good (or consistent) UI.

I run Trinity Desktop on Linux for exactly this reason: it's basically what KDE 3 looked like 10 or 15 years ago. I run it so I don't have to re-learn all my workflows every time somebody comes up with some new eye candy.


"Hamburger menus" are and will forever be, in my mind, a lazy design for "catch all" stuff. (On Apple's platforms, it's three dots, but it's the same thing.)


Joel Spolsky has an old post somewhere where he says the reason the Palm Pilot failed but the iPhone succeeded at making mobile devices "a thing" is that desktop and mobile are two different things; the Palm tried to do a desktop interface on mobile.

I'm ok with three-dot menus on mobile, especially for features like "settings" that I don't use all the time - Whatsapp's dots menu works for me.

Using a dots/hamburger menu on desktop is lazy, I'd agree with you there; using it on desktop for things that people do use a lot like the editing menu in a painting program is atrocious.


I'm generally not a hamburger menu fan, but I've come to accept it in Firefox/Chrome/Edge/etc because I rarely ever need to bring up the menu anyway so it conserves precious vertical space on a widescreen display.

When I DO need to bring up the menu, I'm usually one to two clicks from where I need to go.

When a hamburger menu encompasses the whole UI (metaphorically speaking) I'd definitely agree it's problematic-- and the same goes for random placement of menu items into a hamburger menu. That's just lazy design.


The typical desktop hamburger menu is still horribly unstructured compared to File/Edit/... that it replaced.

IIRC at some point in Opera, you could move the top-level menu into the hamburger, but it was still the same top-level menu as before. That would be better.


I think hamburgers work in browsers because they are just chrome for an actual application running inside the browser tab. Having a browser menu just above an application menu inside the browser tab is something I can actually see being problematic.


I'm sure, given a bit of time, one could come up with a consistent design language based around "hamburger" menus. At the moment though, as the article states, a lot of previous consistency (e.g. open/save/exit will be in the "File" menu) is being sacrificed as everyone rolls their own.

I've actually seen applications where the first button on the toolbar is a hamburger menu and when you click it you get a dropdown with File/Edit/... etc options, that work as expected. I can live with that for now.


> I've actually seen applications where the first button on the toolbar is a hamburger menu and when you click it you get a dropdown with File/Edit/... etc options, that work as expected. I can live with that for now.

So instead of just having File/Edit/etc. on the toolbar they're nested inside another menu? This seems like introducing an extra click for no reason :(


A good example that works for me is SumatraPDF, as 90% of the time I don't need the menu (I use keyboard shortcuts) so the screen estate saved is a fair trade-off for the extra click the few times I do need it.

[EDIT: probably more than 90% of the time, actually; EDIT2: and I'm pretty sure there's an option to show a proper menu bar all the time for those that want it]


I find the OS interfaces to be OK.

The internet on the other hand is a train wreck. Oh you want to read a website? Not before you click away these million popups and cookie banners and news letter things. Scroll down? haha no. You're in for some sort of sideways scroll phamplet thing. Or my personal favorite alternating down and sideways by page number (yeah I couldn't believe it either)


Well. It's a generation of programmers brought up on Javascript. They are in charge now, and we are paying the price for that.


What about the infamous iOS keyboard states? Is white active or inactive?


I fundamentally disagree with the entire premise of this article.

Yes, usability has become less uniformly consistent within a single platform. But that's because the feature set of desktops, laptops, tablets and phones has increased exponentially, beyond what the classic desktop GUI could handle.

Not only can you access a huge percentage of the world's knowledge within a few seconds, but new UX paradigms such as search boxes and recommendation engines have completely changed the game.

Now when you're trying to build an app that has a modicum of consistency across sizes from phone to desktop, whether you're using a mouse, trackpad or touchscreen, whether you've got a hardware keyboard of software one or are using dictation, and so on...

...then you have to make tradeoffs. Yes, the purely desktop experience has become less consistent. But at the same time, an app can be more consistent across platforms, which is what many users want when they're switching between platforms multiple times a day.

And as an industry, apps really do seem to fairly quickly standardize on UX conventions like tabs, hamburger menus, autocomplete, drag-to-refresh, and so on, which aren't any less intuitive than right-clicks, keyboard shortcuts, minimizing, or drag-to-trash-to-eject (remember that?).

So relative to functionality I don't see any decline at all. Young children can pick up an iPad and learn to use it without instruction. I don't remember young children doing that with a Mac Classic or Windows 3.1.


> Now when you're trying to build an app that has a modicum of consistency across sizes from phone to desktop, whether you're using a mouse, trackpad or touchscreen, whether you've got a hardware keyboard of software one or are using dictation, and so on...

Maybe we shouldn't strive for uniformity across many different interfaces.

My office has switched to MS Teams, which is an abomination on the desktop but would be perfectly fine on a tablet. I can't have multiple chats open simultaneously. If I open a shared document in a chat or team and then go back to the chat, I have to click in several places to get back to the document (rather than having it, you know, open in a separate window). A desire for "streamlining" the experience or some other such bullshit has produced one of the worst productivity/collaboration tools I've seen in 25+ years of using networked computers.

I wouldn't expect a remote server to present the same interface as a desktop, as a tablet, as a phone, as a watch. It's absurd; acknowledge the distinctions and design for the system it's executed on. I'd rather MS Teams on Windows be like, well, a desktop application:

Contact list, chat window(s), documents opened in the application that can edit them, all with multiple windows taking advantage of the actual capabilities of my system. I have two monitors at work, but with MS Teams I may as well just have one.

And that's just one of the more egregious examples, many others are like it and it's the result of laziness or hubris or ignorance on the parts of the designers/developers.


Microsoft even had MSN Messenger. The former commenter has no excuse but laziness and a great lack of understanding of the 90s/00s computing environment. The W9x-W2k/KDE paradigm was the best ever for DE-based multitasking.

This is a PC. Why does the parent commenter want to _downgrade_ its user experience to that of a mobile user?


I wouldn't even go so far as to say that the mobile experience is a downgrade. The issue is one of respect for the medium the program is executing on.

On a tablet, MS Teams would be (mostly) adequate. The tabbed interface is somewhat natural:

  +----------------------------------+
  |      |search/command field|      |
  +---------+--------+---------------+
  |Activity |Person1 |Chat Files Org |
  |<Chat>   |Person2 +---------------+
  |Teams    |        |Some stuff     |
  |Calendar |        |               |
  +---------+------------------------+
Select the main thing you want in the left column, select a sub-activity in the next column, then another, or interact with files/chat/whatever.

It actually works well as a model for how to do chat/collaboration on a tablet. It makes good use of space (the first two columns take up almost as little space as possible). It's touch-friendly. If it launched files into their own editors, it'd be a perfectly reasonable tablet application.

But on the desktop, I'd really like to chat with more than one person at a time. I'd like to view the files and chat with whoever sent them. The desktop computer offers one of the most flexible GUI systems we have; don't force users into a singular mode of use unless you're running on a kiosk.

---

I'm picking on MS Teams, but look at Slack and Discord as the things it's imitating (or all are imitating each other). Restrictive single-window applications neglecting what makes the desktop so flexible/capable and distinct from tablets, which their present UIs are more suited for.


A touch interface for a chat application is the worst thing that has ever happened to usability in IT.

Physical keyboards as a cover should be a must-ship item with every tablet.

A tablet is a consumer/broadcasting device, like the smartphone. You are not producing with it. You are either consuming media or sharing it. It's the perfect ad device. And zoomers are blindly embracing it as if it were better and more "modern". They are not right.


> A touch interface for a chat application is the worst thing that has ever happened to usability in IT.

That's a strong opinion, and not one I'd agree with (except for how touch-device interfaces have spilled into desktop interfaces).


I love the article! Couldn't agree more! One of the trends was set by, who else, Microsoft. They went mobile first, failed miserably, but still kept the concepts.

It's prevalent in their UWP apps - especially Settings in Win10 - one of the worst UIs I've ever seen.

I can't explain how much I hate this.


Not everybody agrees with you. I believe going mobile-first was the best decision they've made in decades.


Why? Windows is a desktop OS primarily used to get work done. “Mobile first” doesn’t make sense here and just gets in the way (it personally made me give up on the platform and switch to Mac instead).


I wholeheartedly agree. Even though I pretty much hate everything Mac, I'm also thinking of switching.

MS has lacked vision for years. Their "innovation" is just some idiot coming up with a completely stupid idea, convincing their even more idiotic bosses it's cool, investing a lot of time and money, at some point semi-realizing it's crap, and then being ambivalent about it for years.

Case in point - WinRT. They pretty much rewrote the whole OS, re-invented everything, and forced us, the developers, to use their s*it. Everyone hates it, it brings no benefits whatsoever, it's waaay worse than non-WinRT code, and even now - after 9 years - knowing everyone hates it, they're still re-wrapping it and saying it's "amazing".

They're launching an Android phone - which is BEYOND STUPID. They're rewriting Edge using Chromium instead of using Mozilla's code (basically putting us at Google's mercy).

Then, they're building this Windows 10X OS just to deal with two foldable screens, something people have shown time and time again they're not that keen on. They're willing to risk losing Windows 10 developers - since they're changing APIs and introducing countless bugs - just to deal with foldable screens (something that may or may not catch on).

The list could go on.


Having a UI that adapts to screen width is important to me because I do a lot of things at once; the window manager on the Mac is the worst for this.


There's a difference between adapting to every screen and being mobile-first.

The point is: no matter what you do, to have an app that works on both mobile and desktop, you'll need two UIs (otherwise, you'll get a crappy UI on both platforms). So, if I use UWP on desktop, it should "bend" to desktop (all the controls should be desktop-friendly). If I use UWP on mobile, it should "bend" to mobile.


I date all this to Jakob Nielsen retiring the ever-so-wonderful https://useit.com for the utterly forgettable https://nngroup.com

(Fortunately the redirect still works.)


The OECD published the results of a massive survey of member countries some years ago, titled "Skills Matter" (https://www.oecd-ilibrary.org/education/skills-matter_978926...). The researchers defined 4 levels of technology proficiency, based on the types of tasks users can complete successfully. There was a very good summary published here (https://www.nngroup.com/articles/computer-skill-levels/) and excerpted below.

For each level, here’s the percentage of the population (averaged across the OECD countries) who performed at that level, as well as the report’s definition of the ability of people within that level:

“Below Level 1” = 14% of Adult Population

Being too polite to use a term like “level zero,” the OECD researchers refer to the lowest skill level as “below level 1.”

This is what people below level 1 can do: “Tasks are based on well-defined problems involving the use of only one function within a generic interface to meet one explicit criterion without any categorical or inferential reasoning, or transforming of information. Few steps are required and no sub-goal has to be generated.”

An example of task at this level is “Delete this email message” in an email app.

Level 1 = 29% of Adult Population

This is what level-1 people can do: “Tasks typically require the use of widely available and familiar technology applications, such as email software or a web browser. There is little or no navigation required to access the information or commands required to solve the problem. The problem may be solved regardless of the respondent’s awareness and use of specific tools and functions (e.g. a sort function). The tasks involve few steps and a minimal number of operators. At the cognitive level, the respondent can readily infer the goal from the task statement; problem resolution requires the respondent to apply explicit criteria; and there are few monitoring demands (e.g. the respondent does not have to check whether he or she has used the appropriate procedure or made progress towards the solution). Identifying content and operators can be done through simple match. Only simple forms of reasoning, such as assigning items to categories, are required; there is no need to contrast or integrate information.”

The reply-to-all task described above requires level-1 skills. Another example of level-1 task is “Find all emails from John Smith.”

Level 2 = 26% of Adult Population

This is what level-2 people can do: “At this level, tasks typically require the use of both generic and more specific technology applications. For instance, the respondent may have to make use of a novel online form. Some navigation across pages and applications is required to solve the problem. The use of tools (e.g. a sort function) can facilitate the resolution of the problem. The task may involve multiple steps and operators. The goal of the problem may have to be defined by the respondent, though the criteria to be met are explicit. There are higher monitoring demands. Some unexpected outcomes or impasses may appear. The task may require evaluating the relevance of a set of items to discard distractors. Some integration and inferential reasoning may be needed.”

An example of level-2 task is “You want to find a sustainability-related document that was sent to you by John Smith in October last year.”

Level 3 = 5% of Adult Population

This is what this most-skilled group of people can do: “At this level, tasks typically require the use of both generic and more specific technology applications. Some navigation across pages and applications is required to solve the problem. The use of tools (e.g. a sort function) is required to make progress towards the solution. The task may involve multiple steps and operators. The goal of the problem may have to be defined by the respondent, and the criteria to be met may or may not be explicit. There are typically high monitoring demands. Unexpected outcomes and impasses are likely to occur. The task may require evaluating the relevance and reliability of information in order to discard distractors. Integration and inferential reasoning may be needed to a large extent.”

The meeting room task described above requires level-3 skills. Another example of level-3 task is “You want to know what percentage of the emails sent by John Smith last month were about sustainability.”

Can’t Use Computers = 26% of Adult Population

The numbers for the 4 skill levels don’t sum to 100% because a large proportion of the respondents never attempted the tasks, being unable to use computers. In total, across the OECD countries, 26% of adults were unable to use a computer.

That one quarter of the population can’t use a computer at all is the most serious element of the digital divide. To a great extent, this problem is caused by computers still being much too complicated for many people.

Let that phrase sink in: across the OECD countries, 26% of adults were unable to use a computer. In some countries like Japan, the number is even higher (about 1/3 of Japan's population can't use computers, which may reflect the aging population, poor interface design, or some other factor.)

These data were based on surveys from 2011 through 2015, and if TFA is correct about the usability trends, surely it's gotten worse.


That OECD study, and its implications, was a major inspiration for my essay "The Tyranny of the Minimum Viable User"

https://old.reddit.com/r/dredmorbius/comments/69wk8y/the_tyr...

The problem is that we're stuck between a rock and a hard place. People -- the general population -- need to have useful devices and interfaces. The market will fill that need. But even very modestly advanced users, to say nothing of elite ones, those who make technology happen, are left out.

From the essay:

Let's assume, with good reason, that the Minimum Viable User wants and needs a simple, largely pushbutton, heavily GUI, systems interface.

What does this cost us?

The answer is in the list of Unix Philosophy Violating Tasks:

- Dealing with scale

- Dealing with complexity

- Iteratively building out tools and systems

- Rapid response

- Adapting to changing circumstance

- Considered thought

- Scalability of understanding, comprehension, or control

- Integrating numerous other systems

- Especially nonuniform ones


It's all about fashion now. The less technical the target user, the less capable the system is. Windows 10 drops distinct window borders? Fashion. Smartphones drop 3.5mm jacks? Fashion. Microsoft puts a fucking phone lock screen on a desktop OS? Fashion.


This brings back the good ol' days of early Linux. Anyone remember Enlightenment and its crazy theming?

I think what's changed is tech use and literacy. Most people spend hours on digital devices today, and are a lot like the tech nerds of the nineties.


Another rather infuriating example:

On iOS Mail you now have to hit the "reply" icon to move a received email to another folder, mark it or print it.

Very poor discoverability. They should at least have changed the icon when they changed it into a multi-purpose thing.


A lot of things in iOS are counterintuitive. Everything happens via a "Share" option. You want to do something, you click "Share" option and hope there's that option that you need.

By the way, on my 7 at least, there's a "Folder" icon right before the "Reply" icon that can also be used to move the message, and you can add a move option to one of the swipe gestures as well.


They made this change and then rolled it back a few updates later. I don't understand why someone considered the bottom toolbar to be a problem, or why their changes were approved and included in the update.


1) The examples cited are valid UX/UI design criticisms.

2) The author makes quite a few important points about UI problems (I especially appreciate the point about the importance of maintaining high standards for free software).

3) Concluding that "usability" is "in decline" from a handful of anecdata is an irritating, insincere, clickbaity absurdity that serves only to make the author and those who agree feel more important, that they're Older and Wiser™ for having grown up with CLIs, while the Children Today™ are ignorant fools who ought to Get Off My Lawn™. I'm so, so tired of this attitude getting in the way of sincere design critique. If the author had instead titled this "Some Problems With Various Software UI Design" I wouldn't have a problem. But then no one would click on it, I guess. (The author anticipates some of these and the following objections but doesn't actually make any satisfying argument against them.)

4) Design (among many, many other things, like art and language) is output by cultures. Cultures evolve unstoppably. Any argument that suggests that cultures should just "stop changing" is arguing for the impossible.

5) Cultures CAN be steered deliberately, but generally only with massive efforts, such as civil rights in the 20th century (and even then.... :/). But saying "It's people like you and me who decide to change UI design" is completely insufficient. I understand and very much appreciate the idea, of course—be the change you want to see in the world—but insinuating that new ideas are dumb and useless is itself useless.

6) Cultural change is absolutely critical to continued survival of the culture. Many new ideas will fail. Many people will fail to learn from history. But some people will, and some new ideas will succeed wildly. Stagnating in a perpetual, rose-tinted dream of everything running on a command line doesn't help anything.


I don't think he is arguing against new ideas. He is arguing against a trend in UI design. In some cases, he's arguing against novelty for novelty's sake.

As for cultural change being a positive force: I agree. However, for a lot of people computers are mainly tools to achieve a goal, not a goal in themselves. Just like you would be annoyed if your screwdriver was deprecated, and instead all that was supported was a power screwdriver -- yes, it's useful sometimes, but don't test your newfangled ideas on me when all I needed was an old fashioned screwdriver.

My metaphor is flawed because physical screwdrivers don't deprecate themselves out of existence, but you get the idea: for most people, computers are just tools. Change to see "what sticks" is annoying and they don't want to become guinea pigs.

Particularly irritating is when the screwdriver manufacturer tells you that (a) manual screwdrivers are no longer supported, and (b) you were unscrewing screws the "wrong" way -- like desktop environment developers sometimes tell their users: "it's wrong to want icons on your desktop" ("but that's what I like and always did!" "Well, you're wrong, feature removed!")


I missed the part in the article where he was praising the usability of the CLI.


I may not be using the software as intended, but I like to move windows around by the title bar. Chrome was the first place I noticed hostility toward this, with its tiny title bar, but all browsers have since followed.


I feel that "everything is a fullscreen application" is not such a bad idea, as I can't concentrate on more than one thing at a time regardless. The rest is just tabbing.


I could get very little work done if I couldn't reference something while working on something else. Often I need to reference more than one thing. I may also have a result I need to see separately from the source of that result, which is what I want to edit.


Maybe the ability to display references should be a feature in your tool?

As for my experience with Blender, you either use its UI to display references where you want them or put the references directly into the scene. It's also common for 2D artists to just use layers on the canvas to keep references.


Simply put: if it ain't broke, don't fix it.

I feel like modern development has devolved into "features for the sake of features" rather than actual improvement.


But developers, designers, devops, marketers and everyone in between need to justify their jobs so they need things to do.

Maybe this is a result of companies hiring far more full-time people than they need in the long term, instead of using short-term contracts to develop the initial product; now they need to keep all those people busy.


The missing menu bars are really the most egregious, resulting in total mystery meat navigation (props if you know what that's a reference to!)


Suggestion to the author: when you write an article on the decline of usability, you might want to consider centering the text.


During the Amiga times, users would scold you for writing a dialog prompt with non-standard placement of buttons.



I truly feel like Apple and macOS have next to none of the problems mentioned here :)


Ironic that a web page with such poor usability has this kind of title.


Ha! And I simply hated Adobe apps.


Personally loved win 7 the most!!


iOS/Mac also experienced a significant decline in usability after Steve Jobs died. Here's a comparison before vs after: https://uxcritique.tumblr.com/


iOS 7 was a rushed redesign. It took 5 years to fix many of those issues.


It is still unusable compared to iOS 6.


A friend of mine nailed the problem. His theory: for most apps, the user is the commodity, not the software!

This is the number one reason why the usability of software is not improving, and is often even declining. The other is feature creep. I'll get to that later. Caveat: I will be generalizing a lot. :)

For the first point —

In the case of commercial software this is obvious. You get paid an hourly rate and the company buys software that you then use. In other words: you are not the one deciding which software to buy. How much you enjoy the experience of using it, and how productive you are while doing so, is thus not really important to the vendor.

In the case of open source software you commonly do not pay for the software either. The software is developed by people for various reasons. They may use it themselves, or they just like working on it. Again – if you have an issue, they do not really have much reason to care.

Feature creep —

One of the things that makes a developer very happy is adding a feature to a piece of software and exposing it to the user. As a developer myself, I know the feeling. It's wholesome, warm, fuzzy.

But when you expose a feature you need to add a user interface for it, and this is the most difficult part. The set of parameters driving a feature is sometimes called its 'parameter vector'. The more publicly exposed dimensions such a vector has, the more difficult the feature is to use.

A feature that has ten parameters may be useful to 99% of users if only three of these ten are exposed. The rest can have magic numbers in the code. Adding another seven dimensions to the publicly exposed parameter vector of a feature to cover 100% of use cases is a bad idea.
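
A minimal sketch of that idea in Python (hypothetical feature and parameter names, not from any real program): a feature internally driven by ten parameters, of which only three are exposed to the user, while the other seven stay as fixed magic numbers in the code.

    # Hypothetical "sharpen" feature: ten parameters in total,
    # but only three of them form the publicly exposed parameter vector.

    # The seven unexposed dimensions live here as magic numbers.
    _KERNEL_SIZE = 5
    _EDGE_FALLOFF = 0.8
    _NOISE_FLOOR = 0.02
    _CLAMP_MIN = 0.0
    _CLAMP_MAX = 1.0
    _ITERATIONS = 2
    _GAMMA = 2.2

    def sharpen(image, amount=0.5, radius=1.5, threshold=0.1):
        """Publicly exposed parameter vector: (amount, radius, threshold).

        The full vector has ten dimensions, but exposing all of them in
        the UI would make the feature harder to use for little gain.
        """
        # Real processing would combine the three public parameters with
        # the seven internal constants; here it is left as a stub.
        return image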

Deciding these things requires an intricate understanding of the problem space from the user's side, and most developers are not good at this. Adding more parameters to a UI somehow feels better to most people, even though they understand that it can be counterproductive.

So my friend had this analogy: imagine if you got paid to use a mobile phone. Imagine your government bought phones from Apple, Samsung, whoever, and then you got paid for using them. Do you think we would have something like the iPhone or modern smartphones? Unlikely. We would have crazy awful phones from the pre-smartphone era, probably with much worse UX than some Nokia or Ericsson phones had just before the iPhone appeared.

This is the situation we have with most common closed-source software and a lot of open source software. Again: I am generalizing here.

On the bright side: software that needs to fight for its user base – whether open or closed source – often has better usability.

And certainly: for everything I said above there are countless counterexamples. But the overall trend seems obvious to me. I agree with the author of the article 100%.

Well, maybe I'm just grumpy and old too. :]


The whole "desktop metaphor", as usually implemented, is trash. I'm a happy user of the i3 window manager (a tiling window manager). It's not the first and probably not the last, but it's the first time I can quickly and efficiently arrange application windows on my screen. The way it uses the screen is beyond anything else. I think tiling WMs are the way forward and will become the default eventually; they'll just be made more intuitive and comfortable for first-time users. i3 requires you to memorize hotkeys, preferably ones you define yourself.
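
For a sense of what "define your own hotkeys" means in practice, here is a minimal, hypothetical excerpt from an i3 config file (the bindings are arbitrary examples, not anyone's actual setup):

    # Hypothetical excerpt from ~/.config/i3/config

    # Use the Super/Windows key as the modifier
    set $mod Mod4

    # Open a terminal
    bindsym $mod+Return exec i3-sensible-terminal

    # Split direction for the next window: horizontal / vertical
    bindsym $mod+h split h
    bindsym $mod+v split v

    # Toggle fullscreen for the focused window
    bindsym $mod+f fullscreen toggle

    # Switch to workspace 1, or send the focused window there
    bindsym $mod+1 workspace number 1
    bindsym $mod+Shift+1 move container to workspace number 1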

Meanwhile, applications like Skype, other instant messengers, Slack and music players have grown and are now fullscreen by default. Non-blog websites are usually large and can't be displayed in a small window. People complain about the 80-character rule for code and go to 120 characters and beyond - which again means you can fit fewer windows on a screen. I think web browsers and websites are largely to blame: that's how most users interact with computers today, that's what they expect, and they don't know it can be any other way.

Every single application wants to be THE fullscreen application. I think it's an admission of defeat! Over the decades, they've tried - and failed - to make smaller application windows that people consider useful. And it's not the fault of application makers - it's the broken "desktop metaphor" where you're supposed to move windows like physical objects. It works on a desk because you have two hands and ten fingers. Imagine working at a desk (no computer) using only one finger! That's how it feels using a mouse. The default window managers are crap at actually managing windows and arranging them usefully. Dragging corners and window borders and moving windows around feels miserable in the long run, and when you close one of your windows you have to redo the arrangement when you want another app window to fit into your layout. So many people just don't bother: they get a bunch of fullscreen windows and alt-tab through them.

And applications with tabs are a symptom of the disease, too. Web browsers, the blue Microsoft Word, IDEs, and so on. It's alt-tab fullscreen windows in sheep's clothing. Nothing particularly wrong with the alt-tab method, but it doesn't scale to the large number of windows we have nowadays.


Tiling window managers are a pain without a keyboard.


What is this wacky computer that you and apparently all of the Gnome developers are using that doesn't have a keyboard?

PCs have keyboards! It's the best part of the computer.


Mainly PoS machines.


You mean like Windows 2?


Android and iOS are tiling window managers, so they have already become the default.


What does this mean? How are they tiling at all? All apps are full screen.


UX careerists are clowns who chase trends and aesthetics over quality and function.


Add marketing-oriented metrics to the mix.



