I think this is a real problem for the industry. It used to be that Intel had a strong brand because every three years you'd buy a new computer that would knock your socks off. It isn't that way anymore, and from the top to the bottom of the stack we should be addressing perceived performance.
My #1 frustration as a Windows user is that every so often an application or the whole system freezes up for a few seconds. I sometimes use MacOS and Linux as well, and I don't think either one is much better.
For instance, when I launch a command prompt it is often up in <1s, but sometimes it takes 10s. That's a very simple program, but there are often security checks involved; even ssh logins to a Linux server on a LAN have a long-tail distribution in time-to-prompt, perhaps because some daemon was swapped out.
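That long tail is easy to see if you just time it. A rough Python sketch of the measurement, where the hostname and run count are placeholders I made up:

```python
# Time N no-op ssh logins and look at the percentiles.
import statistics
import subprocess
import time

HOST = "myserver"  # placeholder: any LAN host you can ssh into non-interactively
RUNS = 50          # arbitrary sample size

samples = []
for _ in range(RUNS):
    start = time.perf_counter()
    # 'true' exits immediately, so this mostly measures time-to-prompt
    subprocess.run(["ssh", HOST, "true"],
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    samples.append(time.perf_counter() - start)

q = statistics.quantiles(samples, n=100)  # 99 percentile cut points
print(f"p50={q[49]:.2f}s  p90={q[89]:.2f}s  p99={q[98]:.2f}s")
```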
I would be satisfied if I had some visual indication that progress was happening (always, and with <100ms latency) and some insight into the process (e.g. real progress bars... if Hadoop can do it, why can't pandas?)
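For what it's worth, pandas can get real progress bars today via tqdm's pandas integration. A toy sketch, where the DataFrame and the per-row work are made up:

```python
# tqdm's pandas integration adds progress_apply / progress_map, which behave
# like apply/map but render a live progress bar.
import pandas as pd
from tqdm import tqdm

tqdm.pandas()  # registers the progress_* methods on pandas objects

df = pd.DataFrame({"x": range(1_000_000)})

def slow_transform(x):
    return x * x  # stand-in for whatever per-row work is actually slow

result = df["x"].progress_apply(slow_transform)
```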
We don't want to trouble people with details, but what's the harm in telling the user we are waiting on a 30 second timeout?
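Even a one-line status message before the blocking call would be a big improvement. A minimal sketch of the idea, with a made-up host and port:

```python
# Instead of hanging silently, say what you're waiting on and for how long.
import socket
import sys

HOST, PORT, TIMEOUT = "license.example.com", 443, 30  # all placeholders

print(f"Contacting {HOST}:{PORT} (giving up after {TIMEOUT}s)...",
      file=sys.stderr, flush=True)
try:
    with socket.create_connection((HOST, PORT), timeout=TIMEOUT):
        print("Connected.", file=sys.stderr)
except socket.timeout:
    print(f"Timed out after {TIMEOUT}s waiting on {HOST}:{PORT}.",
          file=sys.stderr)
```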
> For instance, when I launch a command prompt it is often up in <1s, but sometimes it takes 10s.
On Windows, a reason for this might be virus scanners. Get rid of all third-party virus scanners and tune the settings on the built-in one.
I personally am of the opinion that virus scanners are a waste of computing resources: if you reach the point where a binary you run has been infected, you are probably already compromised in ways a virus scanner cannot fix anyway.
What's more, virus scanners often run with SYSTEM privileges and consist of hundreds of thousands of often shitty LOC, presenting a huge attack surface. These days you might get infected because you're running "anti-virus" software.
A modern-day, lightweight Linux system is just as responsive as those, in my experience. Often more so, because the systems you mention were often limited by slow hard-disk or even floppy-disk access, whereas modern Linux tries its best to cache absolutely everything in RAM.
If you have other devices, though, you may perceive it as getting slower. An older laptop with an HDD absolutely chugs compared to one with an SSD. That's been my experience with older devices; my newer ones spoil me on any OS.
Running old software on new hardware is by far the best user experience that I've had. Everything is blazing fast to the point of feeling instantaneous.
Unfortunately it's getting harder and harder to keep things running, with various licensing schemes and whatnot. Hell, Microsoft even keeps breaking my old Office install with Windows 10 updates, forcing me to constantly fix it because they want me to move to their Office 365 offering. All that said, both the hardware and software stacks are getting a little silly even at the low level. Nowadays you can send a keypress around the world as a data packet faster than you can send it to your monitor.
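Back of the envelope on that last point. Every number below is a rough guess rather than a measurement, and the comparison only really holds against the slower end of real keyboard-to-screen pipelines:

```python
# Light in fiber travels at roughly 2/3 the speed of light in vacuum.
C_FIBER_KM_S = 200_000
HALF_EARTH_KM = 20_000  # roughly the distance to the far side of the planet

one_way_ms = HALF_EARTH_KM / C_FIBER_KM_S * 1000
print(f"fiber propagation to the antipode: ~{one_way_ms:.0f} ms")  # ~100 ms

# Guessed keypress-to-photon pipeline on a sluggish setup:
# keyboard firmware + USB polling, OS input + app + compositor,
# waiting for the next 60 Hz frame, and the display's own processing.
pipeline_ms = 15 + 30 + 17 + 40
print(f"keypress-to-photon on a sluggish setup: ~{pipeline_ms} ms")  # ~102 ms
```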
Running up-to-date FLOSS software is an even better experience, in my view. It doesn't even get all that much slower or more RAM-intensive over time - certainly nowhere near as much as Windows 10 or Office does! Of course you need to choose your software stack wisely if you're looking for that "lightning-fast" experience.
Even "heavyweight" DEs can pull this off. I've been running Manjaro with KDE on my Pinebook Pro and it's been overall a snappier experience than I had even hoped to get out of what is essentially a four year old phone (with only 4GB of RAM) in a laptop form factor. Faster than Android or Windows 10, for sure