I ask myself this from time to time. It's hard to quantify quality, but I'm certain that the exponential increase in power of today's PCs vs. those of the '90s is not reflected in an exponential improvement in the vast majority of software currently using this power. It seems that the more power there is available, the sloppier the use of it. I wonder if someone has already formulated some kind of law that describes this phenomenon :-)
This. I think about this all the time and discuss it at length with people. Lotus 1-2-3 could do the things people use Google Sheets for today with a few orders of magnitude less processing power, memory, and network bandwidth. In terms of UX, a lot of it was near-instant back then vs. noticeably delayed today, because everything runs in browsers.
I argue that the UX has actually gotten worse, since software is becoming bloated faster than hardware improvements can keep up. There are more layers of abstraction and VMs running VMs running VMs simulating DOMs listening for changes to objects to update models to trigger actions which bubble up to listeners which fire events which change data structures that eventually update some text on the screen. Consider what happens in physical memory, at the hardware level, to have a modern React app print a line of text when a text box is modified, and contrast it with what a terminal does and what happens in memory there. Printing text to the screen in a "modern app" is now so complicated that it's effectively impossible for a human to determine the CPU instructions and memory manipulations required to do the things we're doing.
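To make the contrast concrete, here's roughly what the two paths look like. The React part is a minimal sketch of my own (component and names made up, not taken from any real app); the terminal version is the commented one-liner at the end.

    // "Modern" path: a keystroke fires a synthetic event, the handler updates
    // state, React schedules a re-render, diffs the virtual DOM, and finally
    // patches the real DOM node that displays the text.
    import { useState } from "react";

    export function Echo() {
      const [text, setText] = useState("");
      return (
        <div>
          <input value={text} onChange={(e) => setText(e.target.value)} />
          <p>{text}</p>
        </div>
      );
    }

    // Terminal-era equivalent: one write into the tty's buffer.
    // process.stdout.write("hello\n");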
And what do we do? We make more transpilers, more frameworks, move more to the browser, and split our systems across networks and around the world. Contrast saving a file to a floppy disk with uploading that same file to a "modern app" which stores it on S3 using Paperclip with Rails, as many modern apps do. Think about all the systems that file goes through, the network requests, the protocols involved, the standards required to make that happen, and the infrastructure which powers it all. Think about the data centers where the VMs running that app live, the people who oversee them, and the tooling they need to run those systems. Think about all the project managers who had a hand in setting any of that up, the financial planning of each aspect, and the development time spent.
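Just the code-level difference, sketched here with the AWS SDK for JavaScript rather than the Rails/Paperclip stack mentioned above (bucket name and region are placeholders, not a real setup):

    import { writeFileSync } from "node:fs";
    import { readFile } from "node:fs/promises";
    import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

    // Floppy-era version: one call, the OS writes blocks to the disk in the drive.
    writeFileSync("report.txt", "quarterly numbers");

    // "Modern app" version: behind this one call sit credentials, DNS, TLS, HTTP,
    // request signing, load balancers, and S3's own replication machinery.
    const s3 = new S3Client({ region: "us-east-1" });
    await s3.send(
      new PutObjectCommand({
        Bucket: "example-bucket",
        Key: "report.txt",
        Body: await readFile("report.txt"),
      })
    );

And that's before counting everything that has to exist for that one request to succeed at all.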
Recall the time when one could reason about what goes on inside a computer at every level, when key presses registered immediately, when the memory allocations of applications and the OS they ran in made sense to humans. When it felt like magic, instead of like digging through a trash heap of ads to find the tiny sliver of what you needed.
what the hell are we actually even fixing anymore?
I've been hearing discouraging things from both architecture and silicon people at Intel. It seems like they lost direction, probably because of the recent explosion of mobile/ARM.
I know there are some changes in UIs and windowing systems that many people apparently consider indispensable improvements, but which I couldn't care less about. One example is the rise of compositing window managers. IIUC, a compositing window manager has to store a bitmap image of every window in RAM. But it would be fine with me if, in this regard, our systems still worked like Windows XP, Mac OS 9, and X without something like Compiz, where things are drawn directly to the screen when necessary. That alone wouldn't get us back to a comfortable multitasking system in 64 MB of RAM, but it would be a start.
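Rough arithmetic on that cost, assuming full-screen 1024x768 windows with 32-bit RGBA buffers (illustrative numbers, not measurements):

    // Memory a compositor needs just to keep one offscreen buffer per window.
    const width = 1024, height = 768, bytesPerPixel = 4; // 32-bit RGBA
    const perWindow = width * height * bytesPerPixel;    // 3,145,728 bytes = 3 MB
    const tenWindows = 10 * perWindow;                    // 30 MB
    console.log(`${(perWindow / 2 ** 20).toFixed(1)} MB per window`);
    console.log(`${(tenWindows / 2 ** 20).toFixed(1)} MB for ten windows`);

Ten full-screen windows would already claim roughly half of a 64 MB machine before any application data is loaded, which is part of why drawing directly to the screen mattered at those sizes.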
We need generalizations to make sense of the world around us, but they always come at the cost of accuracy. Sometimes the loss of accuracy is so great that the generalization does more harm than good. I think trying to equate the causes and possible solutions of this situation with those of the Israel-Palestine situation is one such example.
Yep. Human brains are made to see faces in clouds, and patterns everywhere, because it's the only way for that little brain to function. In reality nothing is the same unless it's identical (not a copy, but the exact same thing, maybe seen from different angles or at different times). We like our clever analogies, and they serve a purpose, but even when making them it's best to be aware that they are a product of our brain, and to always be ready to question whether they actually serve the intended purpose. Being able to use a specific analogy in one context doesn't mean it's useful in another. I think it's okay to make such analogies, as long as everybody, including the person making them, is aware of the shortcomings, and of the fact that being able to make one is a very, very low threshold, given that it comes from brains that see animals and human faces in floating water vapor.