Maybe people do care about performance and reliability (buttondown.email/hillelwayne)
342 points by soopurman on Feb 14, 2023 | 273 comments



I worked at a hole-in-the-wall software company you've never heard of that consistently landed big contracts with large multinationals. From the customers' perspective, it came down to one reason: it wasn't SAP.

The licensing model of large enterprise software does have ramifications for users on the floor. In this case it's because the software had a per-seat license and required expensive consultants to modify your forms and workflows. It cost millions of dollars and took years to roll out SAP at a factory.

The software we were making was faster, could be rolled out in weeks, and was user-extensible.

The IT folks at said multinationals hated our software. It was the end-users that demanded it and, ultimately, won us our contracts again and again.

Performance and reliability matter! They let folks on site get more work done, faster, and with better reporting and insights than they had ever gotten with their bloated, slow, expensive, and difficult-to-use software from more "traditional" vendors.

It was a neat experience.


I've had a similar experience working on a B2B product that we sell to financial institutions.

As the end users got a taste for our UX, they began to threaten resignation if they had to go back to the old system.

Word of mouth and human competition are a hell of a thing. Once a branch manager sees how efficiently someone across the street is running their business (which is effectively identical due to mountainous regulations), they will start doing our marketing for us.

I feel like tuning software for the human factor is something that almost every MBA still misses to this day. Making "this software feels good to use" an explicit objective is clearly impossible. You have to want the experience to be good at a fundamental level; it's an ambient part of every change that merges. You have to care about the product and customer very deeply to achieve this.


> You have to care about the product and customer very deeply to achieve this.

So many miss this, and the big boys don't only miss the forest for the trees, they can't see the leaves either.

Total, narrow, limited scope.

Then big boys win by Hulk Smash. Startups win by leaves and forest.


I think this is ultimately what won over our users and why they championed our product so hard. A manager at a factory would use their budget to buy our software off their approved vendors list, get it going in a couple of weeks, and within months they were among the top performing factories. As word spread and other sites adopted our product the same trend happened.

And we cared deeply about the end users. One of the best things I did was get our development teams on-site with users. We had lunch with them, saw how they used our software, and took back tons of ideas for improvements and features that we released frequently and often. I got to know a number of our users on a first name basis and would use some of them to try out new features before we released them to the rest of our customers. It was awesome.

SAP is still the 800lb gorilla and probably always will be. It was neat to work in an environment where we cared this deeply about our users that they fought to be able to use our software.


Software developers miss it too. Most libraries are pitched on how efficient they are, how few allocations they make, which algorithms they use, etc., regardless of the circumstances in which they're used, and easy-to-use libraries are assumed to be making a significant tradeoff.

Ease of use isn't something that can be measured though, and most people don't have the framework for arguing UX decisions so debates come down to "well I think this is easier to use than that". Maybe not SAP, but no matter how terrible a system is there's someone who's used to it and will argue that it's easier to use than the alternative, and that proponents of the alternative just need to get used to it.

I kind of feel like this is something a leader with a feel for it needs to dictate. Torvalds made a similar comment about good sense with Junio Hamano, which I don't feel effectively supports this, but I do agree with it at some level.


My first software job was building custom Lotus Notes database applications. One of the ones I was working on was going to be overlapped by a future SAP implementation.

My boss said to do it, because it was so much cheaper and no one could say when, or if, SAP would be installed. They had to map the business to SAP, instead of having the software be flexible.

But it turns out that when you build something that fulfills a business need and makes employees' lives easier, they like it... a lot. Even if it's just a Lotus Notes database application. I even won a very minor award for that one.


> They had to map the business to SAP, instead of having the software be flexible.

this is both an advantage of SAP (and realistically, the ERP industry), and a curse.


As a fellow dev who got their start doing Lotus Notes work, I feel this comment on a deep level.


> The IT folks at said multinationals hated our software

why?


I'm sure they had lots of reasons.

I got to ghost on a sales call once with one of our customers (I was a lead developer at the time).

From what I could tell their IT people didn't like our software because:

- They were already in the middle of a multi-year roll-out of SAP, integrating our software was a wrinkle in a plan they'd been working on for a long time

- Nobody got fired for choosing SAP; some saw choosing software from a small shop they'd never heard of as risky

- Annoyance: they were in charge of acquiring and rolling out the software the factories used! Why were the floor managers demanding that they integrate our software? What do they know about software, enterprise contract negotiation, etc!? How dare they!

- Support: they knew how to handle issues with SAP, user accounts, authentication, authorization, etc; integrating another piece in the mix across the entire company would be a whole project

It was a fun call to sit in on.


Another reason that people often overlook is security. Even if each individual software product has a good security model, differences in the way they interact can lead to security vulnerabilities and opportunities for attackers to move laterally through a network. In the case above, it may be that IT knows how to integrate SAP with their Active Directory system in a manner that ensures that users have access to the data that they need, without giving everyone write access to everything. A brand new product from some small startup, even if it's competently engineered, won't have the same pre-set integration playbooks as SAP. The in-house IT will have to figure it out themselves, which adds time and overhead for every operation. New hire? Before, you'd give them a SAP account, and you'd have a playbook that would automatically or semi-automatically provision everything for them. Now, you have to provision accounts in two systems. Compliance? IT already knows how to validate an SAP install for compliance and generate a report for the appropriate regulators. Now they're going to have to learn to do that with your new system. And so on.

I listen to several infosec podcasts, and one recurring theme in those podcasts is managing users who do "shadow IT", i.e. buying and integrating new products or software that make their day-to-day jobs easier, but doing so in a manner that opens up huge gaping security holes that IT doesn't even know about… until the breach happens and the company is all over the news for leaking customer data.


Security is important. What makes SAP secure? I personally found errors in customer systems. And at the time even Microsoft couldn’t implement OAuth2 correctly with regards to the specifications… which is no surprise given how often it had to be amended.

A good deal of "security," even in the enterprise, is a lot of theatre and showboating. Write a formal specification and throw it at a model checker and you'll probably start finding holes in most software stacks. But hardly any software developers do that, let alone IT managers.

The later generations of the stuff I was working on at that company also had to be certified under certain important regulations. The system had to be auditable, in the sense that application logs couldn't be repudiated in court and users couldn't tamper with data. That was some pretty serious work.

But yeah.. there’s a lot of “lol security” out there and as an IT person trying to manage ISV solutions it can be a huge pain trying to sort the wheat from the chaff.

But just buying MS or SAP and thinking you’re done with security is also as bad. Don’t overlook security.


> What makes SAP secure?
Nothing, inherently. What makes SAP secure is that a lot of IT organizations have experience with setting it up, and know enough about its pitfalls to avoid them. With a new product, IT is going to have to figure out the security pitfalls (hopefully by reading documentation, but more likely through testing, hopefully not through breaches). If, as the grandparent post indicates, the product was set up by someone outside of IT, it's quite likely that that person doesn't actually know about security, and may well have inadvertently opened a security hole by, for example, creating an insecure proxy account.

To re-emphasize my point from my original post: far more security problems result from the interaction of systems than result from systems themselves. SAP may be set up in a perfectly secure manner. The new product may be set up in a secure manner. However, their interaction may still result in data leakage or denial of service.

Even if your product is perfectly secure (which it isn't), the mere fact that it's one more component, interacting with all the other components of the company's IT infrastructure, is reason enough for broader corporate IT to be cautious.

Furthermore, it's often the case that when there is a problem, security or otherwise, it's not going to be your customer that's on the hook. It's going to be the company's IT department. Would you like to suddenly support a piece of software that, a week prior, you didn't even know existed, much less knew was deployed at your company?


To your point: I remember going to our IT security people asking for them to allow us to add Python to our computers. We said, "there's nothing in Python that Excel can't do" (not 100% true). Their response was, "If we could prohibit everyone from using Excel, that would be our preference too."


Given how many times they must've been burned by end users opening infected Microsoft Office documents, can you really blame them?


This was a lesson I learned very recently.

As a dev, I think about security in terms of exploits and making sure software doesn't have them (as well as having features required to implement user/data policies).

An IT person thinks about policies too but for them security is primarily about tools. If a piece of software will work behind their firewall and IDS, integrate with their monitoring software and Active Directory, export reports in the format this or that other tool needs, then it's secure from their perspective.

It makes sense when you think about it, and actually allows for a fair bit of freedom once you understand the boundaries. What is unfortunate is all the time I wasted gathering pen-test reports and all kinds of other junk when that wasn't the real problem at all.


Exactly. A lot of developers focus on lack-of-vulnerabilities as the essential aspect of security, when, in reality, for corporate IT, the essential aspect is visibility. It doesn't matter how secure a black-box app purports to be; the mere fact that it's a black box that IT has no visibility into will (justifiably, IMO) lead IT to treat it as insecure.


And that's all fine, but when the IT department turns itself into a black box and doesn't tell anyone what bar software needs to clear to be used, that's when shadow IT happens.


> Annoyance: they were in charge of acquiring and rolling out the software the factories used! Why were the floor managers demanding that they integrate our software? What do they know about software, enterprise contract negotiation, etc!? How dare they!

Ok, so this one I actually kind of agree with. Too many times I was in a position where a bunch of decisions were made about software or hardware, and then IT was brought in at the last minute to try and get it all to work in an unreasonable time frame. And then IT was asked to integrate it in impossible ways.

If another department brings IT in early, different story. I got in to IT to use technology to make people's lives easier, not to play with whatever the latest fad bullshit is or develop an inflated sense of self-worth. If some random nobody's software is going to do that at the cost of a minor annoyance to me, that's fine.


Having worked both sides of this, it's generally IT being absolutely terrible at their job that ends up with them getting called in late.


Company: "IT is terrible at their jobs, what are we going to do about this!"

IT director: "Give me a budget to hire competent IT people"

Company: "On the other hand, we're ok with this IT incompetence"


Interesting, in my experience generally it's business types that don't know shit from their elbow and don't include IT early enough due to negligence.


My experience is admittedly limited, but in every case I've seen of IT being called in late, it was because IT would have had reasons to say no. The VP didn't even learn their lesson when circumventing IT ended up with all corporate e-mail down for nearly a week. (As predicted, although it ended up taking longer to fix because I was on the other side of the world, with internet access but unreachable by phone.)


That's fair, 90% of people are shit at their jobs. I'm just saying, if it is me, I'm definitely upset you did that and I think I'm justified in that.


A low-trust environment is exhausting for everyone, and a difficult pit to climb out of.


I would guess that completely incompetent IT departments that only degrade everybody's work are very highly correlated with workplaces that use those huge, all-in-one, hired-consultant-based software packages.


Sounds like a good reason to bring them in even earlier


At one point I wore a hat in an IT leadership role for factories. We hated third-party software because it created silos where our data became unusable. There is no point in an MRP/ERP factory system if all the bits aren't talking and working together. People fight and fight you as you try to get there, but once you get there and it all starts to work together, and you no longer have job travelers looking like daisies with escalations all over, they get it and their life gets significantly better.


Was that last sentence autocorrected maybe? I'm struggling to parse the last bit about travellers and daisies.


Not the OP, but just trying to be helpful.

"Travelers" are paper packets that move through a manufacturing process with whatever material is being improved. Lots of machine shops have these printed out packets of specs, communications and notes that accompany a given order as it makes it's way through the process.

I'm pretty sure the "daisies" comment was related to how many shops escalate a given order by adding bright stickers or other obvious visual indicators to the traveler, and shops can sometimes become so overwhelmed that everything is escalated and all the bright travelers start looking like a field of flowers.


Ah cool, that makes a lot of sense. Interesting peek into the process.


This is consistent with my experience at another small company doing integrations with enterprise software like Sage. The various enterprise systems of course had their own reporting solutions, but customers preferred how ours let them build their own, or request designs that they could modify using skills they already had: Excel and Windows.


Was it the German company SAP?

How did the end-users even find your software especially when the IT folks hated it?


Our sales people had connections in the industry that led us into various "approved vendor" lists. This eventually allowed us to worm our way into getting demos with floor managers who could use some of their budget to buy our software. Word of mouth spread from there and within a couple of years we were driving their IT department up the wall. It was grass-roots and the end-users, the people on the floor doing the work, loved it so much that it spread.


And probably broke their MRP/ERP systems. But the IT departments probably came up with workarounds to get data out of the system and feed the ERP/MRP system just enough to keep things going, but probably way less efficiently than they would have ultimately been if the pain points had been addressed in the existing system instead of the virus of your software being injected inappropriately. And then you probably badmouthed the MRP/ERP system for its failings (that were partly related to the damage done from using your system). Weak sauce, man. In a different life I wrote best-of-breed department-level software too, but at least I very much understood why enterprise systems were preferred outside the department so that I COULD ADDRESS THAT CORE ISSUE THAT NEEDED TO BE ADDRESSED AS PART OF MY SOFTWARE. But yea, IT department bad, me good.


Haha, yeah I get that. That's why I sat in on calls with their IT folks. We made it as easy as we could to get data back out of our system. We exported in a bunch of different formats and could integrate with several ERP/MRP/BI systems directly.


No need for such a ridiculously aggressive response.


IT folks hated it? Or maybe just the bigwigs? You don’t say why but guessing because you innovated away their headcount.


I assume that they were having to do end-user support. I was working in IT at that time and recognise that reaction from IT.


Oh, you just need to support this XYZ new database; our app doesn't have a way to deploy with your deployment tools, doesn't have granular permission levels (in a factory environment), etc. Why do you not want our tool? Also, we will silo away a bunch of data needed for an MRP/ERP system to be effective into our system, outside the MRP/ERP system. Silly IT people, not understanding our work environment (even though you support every person on the factory floor and have to understand their jobs and workflows to do that, to the point your techs are better manufacturing engineers than our actual MEs).


You sound like you're exhausted from dealing with people who are constantly doing end runs around your software. I can understand that.

On the other hand, if everyone in your org outside of IT execs despise SAP -- and make no mistake, they do, every last one of them -- then maybe it would be productive to wonder why that is rather than blaming your idiot users.


Off topic: I've been a DH customer since 2004 and love it because, to me, it's the perfect PaaS. DH is way underappreciated by the HN crowd.

If you have any influence, though, can you ask for the dedicated & VPS offerings to get more love? The hardware is terribly out of date now (lack of NVMe & modern processors).

I realize DH needs to hit a price point & margin, so I’m not asking for latest gen hardware. Just hardware that isn’t a decade old :)


People don't directly care about performance and reliability, but it does affect their behavior.

Back in the day at reddit, for example, we could see an uplift in usage when we made the pages faster, there was nearly a direct correlation.

At Netflix we spent a lot of effort on reliability because every time we had a major outage, there was a dropoff in subscriptions with the cohort that had been affected.

And I've heard similar from other people in the reliability space -- that there is no direct impact on retention from availability issues, but you can see an effect in the long term.


What's the difference between caring and affecting behavior? If people cancel their subscriptions after a major outage, wouldn't it be reasonable to interpret that some of them cared about performance and reliability?

I know I care. If I pay for something and it sucks, I stop paying for it because it's not up to my expectations.


I wrote it poorly and can't edit now, but what I meant was they cancelled 3-6 months down the line at a higher rate than those that didn't experience the outage.

So they didn't cancel right after, but it does appear to affect their overall view of the service.


I think people do care about these things, but they don't always know what it is, precisely, they're caring about. They just know that the thing they have is disappointing.


The problem for the company is that people just silently leave. They don't cry, they don't complain, they just go away.

That’s what kills companies.


100%. I work at another big-ish tech company on the networking stack, and every time we decrease latency we usually see an increase in traffic as more page loads succeed. It's still quite hard to make the case to PMs and business leaders that there's more latent demand left for our product at lower latency levels, and it's also hard to quantify that extra traffic into revenue versus the costs of decreasing latency.


These features always matter, but they matter the most when you have competition. When the other guys’ retention problems become your selling point, you are making money off of them.


At my previous job I was given permission to fully rewrite the call centre script/tooling used by sales agents. I also came from that team, and took a quick detour through the data team before finally landing in a development role, giving me a full understanding of not only the agents' needs and wants, but also the limits on the data being fed into this system.

The resulting page loaded 10 times faster than what it replaced, which was a 'no code' option that the sales director had set up. My solution was built with HTML/CSS/JS/PHP and included authentication, accessibility, and a toolbox of modules for specific tasks that loaded only when needed (and still lightning fast), scoring a perfect 100% in every test I threw at it.

The positive result of this change was immediately apparent. Not only to the agents and managers, but to the data team. The speed resulted in a full 130% increase in sales, across all agents.

Why? Well, the old script took time to download Google Fonts and a boatload of other garbage, which put the page load at 2+ seconds. This was dead air piled on top of the delay caused by the auto-dialer, meaning everyone we called got 3+ seconds of nobody saying anything. When this was cut to 1 second, there wasn't a telltale delay to work around in the sales process.

So at least in this case, it took the sales director to care enough, not the dev.

I built the entire thing in 3 days with zero assistance.
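For anyone curious, the "modules loaded only when needed" approach is roughly this pattern. A minimal TypeScript sketch; the module names and element IDs here are hypothetical, not taken from the actual app:

    // Illustrative only: heavy tools are fetched on first use via dynamic
    // import, so the critical call-script UI paints immediately.
    const loadedModules = new Map<string, unknown>();

    async function openTool(name: string): Promise<unknown> {
      if (!loadedModules.has(name)) {
        // The browser fetches and evaluates the chunk only at this point.
        loadedModules.set(name, await import(`./tools/${name}.js`));
      }
      return loadedModules.get(name);
    }

    // Usage: a (hypothetical) address-lookup module costs nothing until
    // an agent actually clicks the button for it.
    document.querySelector("#address-lookup")?.addEventListener("click", async () => {
      const tool = (await openTool("address-lookup")) as { render(): void };
      tool.render();
    });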


It feels like we need a YouTube channel dedicated to 30-minute case studies of this.

MS spent x months to build y, here it is in 3 days. etc. etc.

People do not realise how fast development and software can be if you design with those priorities. It's orders of magnitude.


Also, many companies and people seem to have forgotten (or never learned) that small teams outperform large ones. There is a point past which every additional person will make the project take longer, and result in lower quality.


Like the Handmade Software ideals! https://youtu.be/hxM8QmyZXtg


Great story, but I think the last sentence can lead people to wrong conclusions. It sounds to me like you spent more than 3 days on it. A crucial part of what you did, IMO, was to figure out what's needed, how you could do it, and then convincing people it's worth doing.


The author says that he's not a product manager and doesn't have a particular insight. As a PM, I can share how I think:

The main question is "does it matter?" The answer depends case by case.

For example, if I am the PM for TurboTax Premier desktop software, I know that it has latency - but it probably doesn't matter in a way that deserves prioritization. Sure, my clients would like the forms to load faster, but they care infinitely more about the scope of tax situations we cover, and the correctness of calculation. A person uses my product a few hours per year, so even if there's an accumulated minute of form refresh latency, it's just not that impactful. What that means is - if I have an "extra developer", I am going to direct them towards product scope and correctness, not latency.

On the other hand, for something like VS Code (an example the article mentions), speed is part of the value proposition. As a developer, I need to quickly change text, quickly look up a reference, quickly switch files, etc. If things are sluggish, it isn't just frustrating but limits my ability to do development work. So a user of a slow IDE would be very motivated to seek alternatives, since they basically live in the IDE. As a PM, I would absolutely invest in performance if it was making my product actually less useful.

It's like anything else: investment has a tradeoff. Chances are, whatever car you drive could have a faster 0-60 time, and in isolation that would be great. But are you willing to pay 10x for the car? Are you willing to give up the seats and the trunk to make it happen? So your car may not have the best 0-60, but if it's an affordable family minivan, that's probably the right call.


Regarding TurboTax, I'm pretty sure the clients want the software to work slower; there are times when the software just pauses and says something like "Checking over to make sure we found all deductions", even though obviously all of that work has already been done.


I was actually thinking about that case - does the slowness "suggest" that the software is really thinking deeply and working hard for me? I do think you have a good point.


It does indeed. Here's an article that discusses it more [1]. I know of a few things that introduce an artificial wait with a progress bar to make the user feel like the software is doing something complex.

There are also situations in the real world: coin-counting machines apparently delay showing the results because people won't believe that dumping a jar full of coins into a machine and getting an instant answer can be accurate.

People do it too: locksmiths may um and ah when called out to pick a lock, because if they do it too quickly customers can feel aggrieved at a large invoice, or even feel less safe in their homes after seeing their front door picked in 10 seconds.

1 - https://www.theatlantic.com/technology/archive/2017/02/why-s...
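Mechanically, that artificial wait is usually just a minimum-delay wrapper around work that finishes too fast to be believed. A rough TypeScript sketch; all names are hypothetical, not from any real product:

    // Keep the progress screen up for at least minMs, even if the real
    // work (e.g. a deduction check) finishes almost instantly.
    async function withMinimumDelay<T>(work: Promise<T>, minMs: number): Promise<T> {
      const cosmeticDelay = new Promise<void>((resolve) => setTimeout(resolve, minMs));
      const [result] = await Promise.all([work, cosmeticDelay]);
      return result;
    }

    // Hypothetical usage: the computation may return in 50ms, but the
    // "Checking for deductions..." screen shows for at least 2 seconds.
    async function checkDeductions(): Promise<string[]> {
      return []; // stand-in for the real (fast) work
    }

    withMinimumDelay(checkDeductions(), 2000).then((found) => {
      console.log(`Found ${found.length} deductions`);
    });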


The third example (locksmiths) I get; they're making a little show to avoid a difficult conversation with some customers[0]. A bit scummy, but then again, plenty of customers are no saints themselves. However, the first two examples feel like self-fulfilling prophecies to me.

Normal people have no reference point to judge how fast or slow software and hardware should be, other than through direct experience. By adding fake delays to avoid being honest and perhaps reassuring occasional surprised users, vendors just screw with the mental frameworks people build, at scale. It's a wasted opportunity, too, because if you're brave enough to be honest about execution time and weather the initial wave of distrust, then you may become a new reference point for your users, who will now view your competitors' software as bloated.

--

[0] - Though recently I brought this story up with a locksmith, and he said he thinks it's just stupid and there are hardly any situations in which it would even make sense.


I think the locksmith thing comes from a couple of things:

1. Some disreputable locksmiths (word gets around) - probably very few

2. A locksmith will try the fast methods first, if they work he wins, if they don’t he didn’t waste too much time.

3. After doing a lock once he could immediately redo the same lock much faster, and this may make it look like he could have done it very fast to begin with


The tax software situation you're describing sounds totally reasonable, but it does seem indicative of a shortcoming in the market. Like, is there really a fundamental reason why the same feature set couldn't possibly be developed while also being faster for customers and not requiring more developer resources per feature?

In some cases I could see that being the case: if the core task fundamentally requires significant computing resources and margins are too thin to throw more resources at it while remaining competitive. That's the "every grocery store has one cashier with a long line" story (except for stores that change the story, e.g. with self checkout or a pricing strategy that frees up some margin to spend on more staff). I wonder how much of the slow performance of tax software is attributable to this, and how much is attributable simply to accumulated bloat or other things that might fall under the vague label "technical debt."

But it's also clear that there are technical decisions that can make the software slower for customers without making feature development easier or cheaper or faster.


// and how much is attributable simply to accumulated bloat or other things that might fall under the vague label "technical debt."

(I am the person you're responding to) - I do think that very often it indeed falls into the vague label of "tech debt" but not all debt needs to be repaid.

For example - let's say I am writing a new TurboTax feature and I can do it quickly by leveraging 4 existing APIs. My feature is "slow" because each API does its own complex IO operations, many of which are redundant across the 4 calls.

Is that technical debt? I have the opportunity to create a new API that optimizes the IO for my use case, and it would be 4x faster. But it may never be "worth it" to do that, because my feature is seldom used and is still fast enough for the user to not lose productivity. So I would say that's not really a tech debt, just a technical tradeoff that was made.

In general, I find that a lot of latency comes from (a) reuse of suboptimal pieces/frameworks that accelerate development and (b) not taking the time to optimize. There are times when these are tech debt, and there are times when these are fine.
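To make that tradeoff concrete, here's a hedged TypeScript sketch of the situation described above. Every function and field name is invented for illustration, not taken from any real TurboTax API:

    // Stand-in for the expensive IO each existing API performs internally.
    async function loadReturnData(id: string): Promise<{ id: string }> {
      return { id }; // imagine a slow DB read or network call here
    }

    // The quick-to-build version: four existing APIs, four redundant reads.
    async function summarizeNaive(id: string) {
      const income = await loadReturnData(id);     // IO #1
      const deductions = await loadReturnData(id); // IO #2 (redundant)
      const credits = await loadReturnData(id);    // IO #3 (redundant)
      const payments = await loadReturnData(id);   // IO #4 (redundant)
      return { income, deductions, credits, payments };
    }

    // The "4x faster" version: one purpose-built read shared by all four
    // views - faster, but only worth building if the feature is hot.
    async function summarizeOptimized(id: string) {
      const data = await loadReturnData(id); // single IO
      return { income: data, deductions: data, credits: data, payments: data };
    }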


> Is that technical debt? ... So I would say that's not really a tech debt, just a technical tradeoff that was made.

Well, I'd still call that technical debt. All debt is a tradeoff, and sometimes debt is worth it (or, at least, is judged to be worth it at the time the debt is taken on). In this case, it might be deemed worth it, and reasonably so, especially if there simply aren't enough competitors that this would make a difference at the margin, or if it's simply not feasible for any competitor to implement that same feature in the same time frame with the same amount of developer resources and make it perform faster.


Anecdotally, I stopped using TurboTax (after using it for 8 years) for 2 reasons. First, I was never confident that my tax situation was being handled correctly: the UI is noisy, and the explanations don't explain. So I would buy audit insurance just in case. Second, it was slow, and navigating between pages and sections was annoying. So now I pay a real person to do my taxes. It takes a quarter of the time, costs about the same as TurboTax + audit insurance, and I no longer stress about taxes.


Audit insurance is a scam. If you get audited, you just update your return (unless IRS already did that for you) and pay (or get credit for) the correction. IRS isn't trying to put you in jail, they just want you to file correctly.

What do you think the "real person" does? They put your data into TurboTax or equivalent.


This. The IRS will only come after you if they think you were deliberately lying. Mistakes are simply to be fixed even when they are doozies. (One year I ended up amending my previous year's return because I had managed to fat-finger an extra digit into a number. It didn't change my tax bill *that year* by a single penny so nothing looked off. It was only when I saw the huge amount of losses being carried forward that I realized something was off. I didn't get any sort of complaint from the IRS about that despite it being a 5-figure error.)


> IRS isn't trying to put you in jail, they just want you to file correctly.

A lot of people would live much easier lives if they believed this.

You can make a really half-assed filing and all they will ever do is send you an invoice for the difference.

In fact, you could simply not file your taxes ever and the IRS will eventually inform you of your actual tax burden, at which point you should probably pay accordingly. I've never gone beyond this point.


The IRS is one of the most reasonable divisions of the government to get in a fight with, and there are insane amounts of protections built in for people who honestly believe they're filing correctly. My dad even had them inform him of a filing error and include the additional, much larger refund he was due.

What you hear about and what they WILL get you for is deliberate fraud.


If you're not self-employed there's like a 0% chance you're getting your taxes wrong by holding TurboTax wrong.

If you're at a FAANG there is some chance you're overpaying by not doing the RSU cost basis properly.


I imported Coinbase Pro transactions and they all got counted as short-term gains because they'd been moved from Coinbase to Coinbase Pro within the past year, despite having been purchased several years prior.

Now, you can definitely make the case that this is mostly Coinbase's fault for not sending over the actual purchase date, but TurboTax charged me a bunch of money to do that import, so I feel like they should have done some basic quality control.

I don't trust TurboTax to get my taxes right, because last year they got it very, very wrong.


But that's an example of you overpaying - no need for audit insurance if you're making mistakes in their favor.

I don't think it does wash sales across different brokers properly though, but neither does anyone else.


Oh, I definitely don't worry about audit insurance, and operate on the assumption that the IRS will be non-antagonistic towards me if I make a mistake. I want to pay the correct amount in taxes, and the IRS wants me to pay the correct amount in taxes. We're on the same side, and my understanding is that they'll act as if we're on the same side unless there's clear reason to believe otherwise.

What I'm objecting to is the idea that I'm not going to get my taxes wrong using TurboTax. They fucked up badly.

I'm certainly not arguing that the solution is to buy audit insurance from the same company that told me to overpay by several thousand dollars. Every penny I pay to Intuit is a moral failing on my part, and on the part of the legislature that succumbed to their lobbying.


That's a fair point... I tend to be paranoid about these things, and I think your point reinforces mine to an extent. Yeah, I have simple taxes, and yet I couldn't feel confident about them through TurboTax. But at this point we're way off topic!


> A person uses my product a few hours per year

I gather from this comment that the software is only used by individuals filing their own returns, not by professional tax preparers or accountants. The latter two certainly spend more than "a few hours per year" in whatever product they use for preparation. When it's tax season and they have to get the work done by April 15, professionals aren't going to tolerate a few more minutes per customer.


The idea that software wasn't bloated and slow 20-30+ years ago is just a myth. Everything was bloated and slow. MS Word in the 90s and 00s would regularly crash and take your file with it, and it often took minutes to start up. Yes, there were some brilliant counterexamples, just as there are today. But most software today is far more enjoyable and rapid to use than that from previous eras. While the software isn't as snappy as it theoretically could be, that is mostly because the market finds the tradeoff between features and speed. I'd rather have a search take a second or two longer if it is able to handle synonyms and different word forms. VS Code is so much better than Visual Studio from around 2000.

Usually, the truly bloated software (like a lot of edutech) exists because of regulatory capture. Most edutech isn't judged by the market, but by committees of corrupt idiots, who always exist and for whom the open market is the solution.


> MS Word in the 90s and 00s would regularly crash and take your file with it,

It's an old joke at this point that you can tell when a person first became a serious computer user based on how frequently they save their work. My muscle memory for ctrl+s is so ingrained that I type it every minute or two in things like Google Docs that literally ignore it.


To this day I always copy comments to the clipboard before submitting them or after I feel like I've put a lot of work into them -- owing to dubious browser UI[2] c. 2000, where e.g. you might accidentally lose focus of the text field, and then the backspace button would navigate backwards[1] and lose everything you had typed up.

So my muscle memory is a lot of ctrl-a, ctrl-c.

[1] A mistake that Chrome adopted afterward and didn't reverse until 2016: https://tech.slashdot.org/story/16/05/19/2041232/google-chro...

[2] Wasn't the only cause, of course. The request could fail as well.


This is my Jira trauma resurfacing, thanks. I'm quite heavy on technical explanations; I like to put a lot of context, links, references, alternatives, and notes in my tickets. Now I just write in Outlook (fewer formatting options than Word, but it actually saves my 35-pages-long shit while typing, people can read it without logging into some slow monster, and I can insert images inline really fast) and save it to add as an attachment to Jira.


It's still possible to lose data in the modern dubious UI on the web, so you could use an extension like GhostText and type comments in a regular text editor, where you get recovery for free even if both apps crash.


> GhostText

How did I not know about this?? And it supports Firefox + (neo)vim!


I still have this total distrust about closing Google documents without any sort of save operation.


I remember playing “Maniac Mansion” for 12h without saving. Right before the end my computer crashed. All was lost. But at least I did the second run in 8h.


Pretty much every time I finish typing something I end with ctrl+S. It's muscle memory from the old days, but now it triggers VS Code to format whatever I just wrote, which is a convenient side effect.


I may have lost days in aggregate just from waiting many seconds for the browser to respond after an accidental Ctrl+S triggers "Save Page" - why is that operation so slow anyway?


It seems to list a bunch of directories. It is slower on Windows.

It may also make sense to cache some search data and ping some network resources, because those dialog boxes don't just show directories nowadays.

Surely I would prefer them to do those asynchronously, but the UI designers seem to all disagree.


My IDE auto-saves as you make changes, yet I still mash ctrl+s frequently. I think it's mostly games that gave me the habit though. PC gaming has always been a gauntlet.


> My IDE auto-saves as you make changes

This gave me shudders. Maybe it is some sort of manual-transmission-type feeling of control, but that is terrifying to me. Something about owning the moment when the source file is changed.


The most annoying thing for me is that the idea of taking something similar, making my changes to it, and saving it under a different name only mangles the original file in today's world.


“Save” vs “Save as” is such a terrible convention…


It is, because arguably the more common case is wanting to save a snapshot under a different name and continue editing the original. "Save as", unfortunately, does the opposite.


I, personally, want save to act like committing a transaction. I don't like making changes to something and having every key typed or item clicked on be an immediate change to the file. The common case for me is to start making changes, then save them when I'm ready, sometimes as another file name. At least 30 years of this behavior has been ingrained into my workflows. We have undo, but I'd also like to have a revert changes option that acts like rolling back a transaction. The simple rollback is to just close the file without saving, but that doesn't work if every change big or small is a commit.


I felt the same way after first going from Eclipse to IntelliJ. This fear, this lack of control. But after a while I didn't notice anymore, and I have never run into a situation where something important was lost. Quite the opposite. Saving and making copies, and even duplicating code in a file (like an older version of an SQL statement), is some kind of fear/hoarding impulse that doesn't seem to pay off at all.


Oh, really? I've got vim set up to auto-save the buffer every time I leave insert mode. It's phenomenal. I literally never need to think about saving.

I guess more analogous to your "save" mechanism would be my git commit.


I felt the same way, but there are three layers of defense for me, in order: robust undo, git commits, and the IDE maintains a history of its own with a certain number of revisions back in time.


I wrote my first program in 1994 in Turbo Pascal.

Yes, you're right, Ctrl-S is so deeply embedded in my subconsciousness that I will be probably pressing those keys on my deathbed.


OpenOffice had autosave functionality about 15 years before MS Office managed to figure it out, even though loss of work due to unexpected power outages or Office freezing up was a regular occurrence for everyone I knew back then.


I still can’t get used to Confluence exiting edit mode upon Ctrl+S.


This explains everything.


Really disagree on this one.

It's true lots of software was unreliable back in the day. But:

- Crashes sometimes != slow, bloated, and unreliable. I'd rather have something simple and responsive that crashes every now and then than something that is aggravating to use all the time.

- With the massive increase in hardware performance, there's just no excuse at all for slow, unresponsive software now, especially from the big vendors. The reasons are not even usually technical; the bloat is from ads and tracking and more.

> I'd rather have a search take a second or two longer if it is able to handle synonyms and different word forms.

I'm not sure people are complaining about search being slow. Most complaints about search are that it's lower quality than it used to be.

> VS Code is so much better than Visual Studio from around 2000.

VSCode and Visual Studio are two completely different products, one is a code editor, and the other is a full blown IDE. There were very good editors back in the day. And VSCode has started showing signs of a decrease in quality.


It depends on the software in question, but yeah, a lot of it actually was faster even on era-appropriate hardware.

An example that stands out in my mind is Photoshop 7, CS1, and maybe (memory is a little blurry) CS2. It felt considerably more responsive running on a thermally-non-ideal iMac with a single core PowerPC G5 and spinning rust hard disk than PS CC does today on a well cooled M-series MBP or custom built Ryzen 5950X tower with a 7000MB/s PCI-E 4.0 SSD.

That's just ridiculous. Yes, Photoshop has taken on some functionality since then, but there is no excuse for anything to feel laggy on such powerful hardware when it ran great on comparatively pedestrian machines 15 years ago.


What are examples of VSCode dropping in quality? Aside from some extension oddness, it has been a solid performer in my toolkit for many years.


> VSCode and Visual Studio are two completely different products, one is a code editor, and the other is a full blown IDE

What does Visual Studio from around 2000 have that VSCode doesn't? I get that VSCode offloads a lot of work to the language server and extensions, but it offers IDE-like features including auto-completion, code navigation, and the UI is more than just code editing - it integrates version control, a test runner, debugging, etc. What is it missing that makes it not a full blown IDE?


> What does Visual Studio from around 2000 have that VSCode doesn't?

A visual GUI designer, for example. Also a visual database model designer. Code generation too, and more.

These are resource-demanding features.


> The idea that software wasn't bloated and slow 20-30+ years ago is just a myth

All you need to do to prove to yourself how bad things have gotten is to find an old PC (or VM) and load up a copy of Windows XP.

The amount of bullshit between mouse click & updated photons arriving back at your eyeballs is insane in 2023. Software used to be much faster than me. Now, it is significantly slower. The only things that are reliably better are our networks and hardware.

I saw a recent talk on Blazor wherein Steve Sanderson ran through an example in an old version of Visual Studio. Watching how fast the project loaded actually made me depressed in a real and deep way: https://www.youtube.com/watch?v=2nRDdeIMGVo (@ 45:15)


Having spent years and years with Visual Studio, I can guarantee it didn't function like that on the computers of the time with the typical software/driver load. He's running that on a VM that is vastly faster than what you could get at the time.


Running Visual C++ 6 long past its prime was quite a nice experience before I had to switch to VS2005. It sure was nice to have for-loop variables not spill out of the loop scope, but it was depressing as hell to see a very nice and responsive machine keel over.

Spending hours upon hours every day getting an app snappier and snappier and watching the VS hog making everything so slow and sad.


A few days ago, just for fun, I ran W2k (something I missed between Windows 98 and Windows XP) under a VM, with no Direct3D hardware acceleration, just KVM-accelerated virt-manager. The virtual machine ran faster than the GL-accelerated MATE DE on the host.


He's running 20-year-old software on today's hardware; of course it's going to be fast.

The idea that software is slower now is nonsense, if anything it's faster in general. It's using orders of magnitude more hardware, of course, but how is that a problem?


Because it isn't providing orders of magnitude more utility in exchange.

Compare what the latest, greatest word processor does vs. the equivalent from the 90s to what a computer game today does vs. the equivalent from the 90s. The latter is what orders of magnitude of improvement looks like.


See, this is a particular problem... You don't use 100% of the features of your software; hell, you might not even use 50% of them. The problem here is one of measuring YOUR net utility against the gross utility realized by all users.

Now, in an ideal world you could get an 'idiot' version that would meet all your needs and run nothing extra and be nice and fast. Of course now you have countless different versions of software to release and test and hope nothing fun happens. Or, you do like everyone does and releases a big old binary of fun that does everything and give QA a few less things to do.

I can't speak to word processors, but spreadsheets? Well, not many years ago many of them would stop working at 65k rows and say tough luck, and these days we have absolutely huge datasets running in them.


> Now, in an ideal world you could get an 'idiot' version that would meet all your needs and run nothing extra and be nice and fast

If I'm not using it, why is it slowing the application down? I don't buy this reasoning. Binaries are not big in modern terms and they don't go slower for including code that never runs.


If by "never runs" you mean things like automatic grammar and spell checking, which way back when were things you'd click a menu to run, and which now run constantly, reassessing your life's worth every time you press a key - there tend to be any number of 'small' things like this that add up over time, and suddenly we're running slow. It also doesn't help that many applications followed the Apple school of thought and removed any configuration options, so you're unable to disable much of this.

This is ignoring the plethora of past bad decisions we're paying for now. Your word processor is likely running in a virtualized environment in your operating system because 30 years ago it seemed like a good idea to run full-blown programming capabilities, with no security, inside your documents. Then you'll have another layer or two of anti-virus on top of it.
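For what it's worth, the classic way those always-on per-keystroke checks get tamed is debouncing: run the expensive pass only after the user pauses. A rough TypeScript sketch with invented names, not any particular product's code:

    // Wrap an expensive function so it fires only after `waitMs` of quiet,
    // instead of on every single keystroke.
    function debounce<A extends unknown[]>(fn: (...args: A) => void, waitMs: number) {
      let timer: ReturnType<typeof setTimeout> | undefined;
      return (...args: A) => {
        if (timer !== undefined) clearTimeout(timer);
        timer = setTimeout(() => fn(...args), waitMs);
      };
    }

    // Hypothetical spell check: invoked on every keystroke, but the real
    // work runs at most once per 300ms pause.
    const checkSpelling = debounce((text: string) => {
      console.log(`spell-checking ${text.length} chars`); // stand-in for real work
    }, 300);

    checkSpelling("Hello wrold");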


Dude, I can run the entirety of Windows 95 in a VM (itself running in a browser) and Word from that era on top of it and it is still orders of magnitude faster than Word today. So

> Your word processor is likely running in a virtualized environment in your operating system

Is also not a good reason.

This is what we're saying! None of the "good" reasons given for why software is slow today stand up under scrutiny!


I would tend to disagree. In the 1990s System 7 would start in a few seconds from hard disk on a Mac SE/30.

Word would start in less than a second (not sure which version it was), and most features we use today were available. Adobe Photoshop 3.0 would start in less than 3 seconds, and same with Premiere 1.0.

Of course you lacked some memory protection features, which could make the whole computer crash because of a single program failure, but the daily experience was much better on old systems. They felt way faster because they had less input lag and because the UI was less bloated.

I'm not saying 1990s computer and software were better, but they were way faster than what we use today.


I'm pretty sure you're exaggerating quite a bit.

Anyone can watch this video [1] to see how long it actually took to boot up or launch Word. Bootup seems to take 20 seconds, and launching Word took 7 seconds. Which roughly matches my memory.

So about an order of magnitude slower than you're describing. (The SE/30 had a clock speed twice that of the SE, but launching things is mostly bound by the hard drive speed.)

Other things like saving a 3-line Word document took 6 seconds. Quitting Word took 6 seconds. Basically, everything was pretty slow back then.

In contrast, I just tried launching Word on my M1 MacBook, and it took about three quarters of a second. While saving a file is simply instantaneous, as is quitting. And launching Photoshop takes 7 seconds.

Edit: here's another video [2] opening Photoshop 2 which takes a full 28 seconds.

[1] https://www.youtube.com/watch?v=g0O7heFHA-k

[2] https://www.youtube.com/watch?v=LP_JNatS2Sg


> but launching things is mostly bound by the hard drive speed.

which got slower and slower, taking swap speed with it, which slowed down everything you were actually using. You'd defrag the drive regularly and that wouldn't do it, so you'd have to wipe and reinstall from scratch. That was some hours of work, but it was like having a brand new machine again when you were done. You couldn't believe how badly it had degraded. The good old days, or selective memories?


A Windows reinstall every few years still gives a performance bump IME, NVMe SSD be damned. It's hard to keep a Windows installation clean with all the startup junk that worms its way in over time.


Oh man, I forgot about defragging. There was something quite satisfying about knowing your computer was “tidying things up” behind the scenes.


Oh, when I was younger I was mesmerized by watching all of the little squares in the defrag tool change color and move around. It was like someone tidying your room for you!


As recently as 2015 I was advising people that if you finish your work too early to go home, you should spend your time on updating our docs, defragging your hard drive, and reading the docs for your tools (eg, learning keyboard shortcuts or optional flags).

Defragging is one of those clear cases for time shifting things from your high value time to your low value time.


In the video you sent, Photoshop takes 6 seconds to start, and it seems like it is stored on an external disk, which could explain why it's slower.

I'll try to make a video on real hardware once I have time to show that I'm not exaggerating. And remember that the SE/30 is a computer from 89, so not even early 90s.


I checked again. He double-clicks the icon at 16:28 and the watch cursor ends at 16:55. That's 27 seconds. I might be off by a second or two, it's hard to measure exactly.

But this matches my memory as well -- Photoshop was by far the slowest program I had at the time to start up. I remember being bored waiting because it just took forever to load. While applications like Word definitely took several seconds to load, but that wasn't such a big deal.

Of course, all of this was a vast improvement over the several minutes it would take loading programs on my Commodore 64 a few years prior... from cassette tape! ;)


Bootup taking 20 seconds is consistent with my memory as well. It's a bit of a stretch to say that's "a few seconds," but it's not wildly off. 20 seconds is pretty fast, especially considering the anemic HDD speeds of the time.

I used an SE/30 well into the late 90s as a kind of boutique writing appliance. I never used Word, I used BBEdit, which opened rather quickly. I always felt getting the machine up and going was faster than a standard Windows desktop.

The SE/30 was a hell of a machine. I set up an SE for a friend in the mid-90s for her to write papers on, and she really liked using it and never complained about the speed (and she did use Word). Which is saying something, because the SE was kind of a dog.


> In the 1990s System 7 would start in a few seconds from hard disk on a Mac SE/30.

A fresh System 7 install on an SE/30 would boot from a hard drive in ~20 seconds. Adding system extensions could double this pretty easily.


On a not so fresh install it boots in 13 seconds here : https://www.youtube.com/watch?v=LgltFQss3yU Pretty sure mine would boot even faster


How many wireless networks was System 7 trying to connect to?

I mean, we all know the answer was 0. Hell, the systems you're talking about were likely not networked at all. I have a feeling that if we took all the 'slow' software we're talking about and cut out all the pieces reaching for the network in one place or another that we'd gain about an order of magnitude of speed back. Of course we'd lose about that much in functionality.


I mean, it's also possible that that's because the SE/30 was running software designed to also support slower machines like the Mac Plus, Mac SE, or Mac II? The SE/30 was kind of a beast when it came out.

In fact, I'm going to directly relate this to Wirth's Law. I think there was a brief span of years where software didn't get slower as quickly as hardware got faster. My experience of the early 90s was that my computer was slow as hell, and those of my more well-heeled friends were incredibly fast.


Only as a corollary, to put things in perspective: in the early '90s that was Windows 3.x on a 33-66-100 MHz CPU with 4, maybe 8 MB (yes, megabytes) of RAM, and programs had a splash screen to mask the fact that they were very slow to load.

With at least 20x the processor speed, double the bits, and 1000x the memory (let alone SSDs), I would have expected something better than what we have.


Exactly correct. The question is not "was software bloated in the 90s?" It's "given that hardware capability increased by several orders of magnitude, did software quality/speed see a similar increase?" The answer is a resounding no. It would be like moving from a tricycle to a supersonic car and somehow taking longer to get from point A to point B.


I'll concede that we're not going supersonic, but our speed is maybe more at 100mph than slower. Computers (including phones, accomplishing the same tasks that desktops were) are absolutely faster than they were.

We also have better UX, more functionality, and better quality.

If people want to run the software of the mid-90s on modern hardware, I'm sure they can figure out a way. The upside is that when it crashes or you have to switch back and forth with modern software with greater functionality, the underlying system will let you do that very quickly.


I dunno, we have ridiculously amazing displays, live video, amazing audio, intuitive, reactive interfaces, and profound knowledge at our fingertips. Seems like we're doing pretty good!


> intuitive, reactive interfaces

Really? Do you think modern user interfaces are great? Everybody implements their widgets from scratch using HTML, CSS and JavaScript. The boring old UI toolkits standardized a lot of features that are today nonexistent or different everywhere. For example, a plain old list widget where you could select multiple items had standard ways to select ranges, to add ranges to a selection, to toggle the selection of single items, to select all of them. Most modern software doesn't have those actions. And even if it has some of them, you have to find out how. Such things used to be commonplace. Regularity of boring features is actually intuitive. Nice-looking is not intuitive.
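For what it's worth, those classic semantics fit in a handful of lines; here's a rough sketch in plain JS (the names are illustrative, not from any real toolkit):

  let anchor = 0;            // the last plainly-clicked item
  let selected = new Set();

  function onItemClick(index, e) {
    if (e.shiftKey) {
      // Shift+click selects the range from the anchor; Ctrl+Shift adds the range.
      if (!e.ctrlKey && !e.metaKey) selected.clear();
      const lo = Math.min(anchor, index), hi = Math.max(anchor, index);
      for (let i = lo; i <= hi; i++) selected.add(i);
    } else if (e.ctrlKey || e.metaKey) {
      // Ctrl/Cmd+click toggles one item without touching the rest.
      selected.has(index) ? selected.delete(index) : selected.add(index);
      anchor = index;
    } else {
      // Plain click replaces the selection and moves the anchor.
      selected = new Set([index]);
      anchor = index;
    }
  }

That every old list widget behaved exactly like this, without each app reimplementing it, is the regularity being mourned here.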


How bloated software can become is limited by available hardware resources. 20-30 years ago, typical RAM sizes and CPU core counts were much smaller than nowadays. If you run desktop software written 20 years ago on modern hardware, it works much faster than modern desktop software with comparable functions (not always, but quite often). But I'd agree that there was plenty of bloated software which ran slow on the hardware most people could afford at that time. Maybe this happens because software is typically developed on top-of-the-line hardware, but users on average spend less on hardware than developers.


> with comparable functions (not always, but quite often).

Is that really the case? Are 2010 skype and 2022 discord comparable in terms of functionality? Are 2000 winamp and 2022 spotify app comparable?

Todo app 15 years ago was a simple CRUD app. Today todo app has to do CRUD, sync, offline mode, public API, integrations with popular services, collaborative projects and support 6 platforms.

People whine about bloated web tech in app, and how good it was with native while forgetting that availability and feature parity on all platforms is a feature too.

I still remember how bad it was before Electron as a Windows user. Half the apps that seemed cool (OmniFocus, Bear) had a Mac-only desktop version; others (1Password, Evernote) had a native Windows version that felt ugly and unpolished.


> Todo app 15 years ago was a simple CRUD app. Today todo app has to do CRUD, sync, offline mode, public API, integrations with popular services, collaborative projects and support 6 platforms.

Sync was done in many ways, thanks to the app using actual files to store information. It wasn't a concern of the app itself - nor should it be. Offline mode was the default. A public API wasn't needed. Collaborative projects are something nobody asks for in a todo app. And of course, portability gets much easier when you have much less code to port.

Still, I could imagine apps back then having all those online and multiplayer features[0]. But even then, this doesn't add up to modern bloat. APIs, collaborative editing, sync, integrations - these aren't compute-heavy or real-time features; they shouldn't cause a big performance impact. That is, unless you're doing something stupid, like blocking on network requests, keeping state on a server, or just constantly parsing and serializing JSON (or XML).

> Are 2000 winamp and 2022 spotify app comparable?

Yes. WinAMP reigns supreme. The Spotify app is hot, bloated garbage and has only a small fraction of the features WinAMP offered. The entire value of Spotify is in the service part - but music streaming existed in 2000. You probably could make WinAMP stream from Spotify if you tried hard enough. I hope someone does, and uses it to demonstrate what should be obvious: there's no technical justification for Spotify being so heavy, so feature-less and so bad UI/UX-wise.

--

[0] - They didn't have them, because most of those features only became useful once smartphones and mobile connectivity took off in earnest.


>WinAMP reigns supreme.

I mean, kinda but not really.

Back in the day a large number of us likely had huge (exceptionally legally questionable) MP3 libraries that we managed. And while, yeah, having 100GB of music with just about everything was nice, it was also a major pain in the ass. So much so that Winamp pretty much died after streaming (along with legal issues around MP3s) took over the market.

Now, if the music market wasn't legally locked down, would there be better streaming apps? I believe so. So it appears we may be asking the wrong questions. Not why apps are getting slower, but why it seems the market has fewer competing apps at all levels.


> So it appears we may be asking the wrong questions. Not why apps are getting slower, but why it seems the market has fewer competing apps at all levels.

This is exactly the phenomenon I recently started describing on HN with the phrase "software is resisting commoditization". It's rare these days to see an app you could use for a while and then replace with an equivalent alternative.

I think SaaS is a big driver of this - by keeping important functionality (and user data) server-side, the user ends up being locked into your software. No need to rely on IP protections - there's just no way for them to pirate the bits running on your infrastructure. And even if someone reverse-engineered your APIs and built a better frontend, the users of that alternative would still be tied to your backend, and thus your service.

This means there's no business in making alternative frontends. Instead, it's better to start your own SaaS and go after a different market slice. Even seemingly equivalent products quickly drift apart, each optimizing strongly for a slightly different audience. It's easier than fighting another company directly over their users.

A tailored set of features is a good "unique value proposition" for a while, but it may be too easy for someone to eventually replicate. Taking user data hostage is better, but users don't like it very much. The best UVPs seem to have nothing to do with software.

Spotify is a stellar example here: the real value they own isn't software or infrastructure, it's all the relationships and contracts they've established in the music industry. This moat is impervious to nearly all competition - unless you're an insider on the music label side, or plugged into SoftBank's infinite money hose, you're not going to replicate it. Spotify, in turn, doesn't have to give a shit about its music player anymore.

How to fix this? I'm not sure if it can be. We'd need to destroy the ability for businesses to prop their software with some unique propositions that can't be easily copied by competitors. I can't see it happening without a total overhaul of intellectual property and computer crime laws. Things like Data Portability section of GDPR help a little, but ultimately there's just too many ways to create those tiny moats that make applications non-substitutable.


"Feature" phones from 20 years ago (most Nokia and Ericsson even into the android era) could sync personal data such as phone book and calendar over the internet [0]. The libraries doing this were originally written in C and their compiled versions took up maybe tens of kb running on constrained hardware. The functionality is not remarkable.

The UX of those phones was pretty poor though.

[0] - SyncML.


That's true. But I think the critical mass wasn't there yet, those features weren't used much outside of business circles.

> The UX of those phones was pretty poor though.

That... really depends. Having physical buttons was nice. I could write on those numeric keypads about as fast as I do on a full touchscreen keyboard today, except I'd make fewer errors and could do it without looking at my fingers.

Which brings me to one piece of feature phone UX I strongly miss to this day: fixed latency. The firmware/OS was pretty much (or maybe even de facto) a real-time OS. With few rare exceptions, every interaction had consistent, fixed latency. Because of that (and physical buttons), I quickly learned to operate my phone without looking at it, or even pulling it out of my pocket. Unlock, menu, down, down, OK, [wait 1 second], down, OK, start typing... - these kinds of sequences quickly became muscle memory.

All that was lost with switch to smartphones, as both Android and iOS have randomly changing and unpredictable UI latency, and the UI itself isn't fixed in space either.


Yes, they are all equivalent. There are variations in specific features but it should be obvious how irrelevant that is.

The present-day "apps" you describe are bloated because they bundle an entire web browser and more, maybe the equivalent of a container, to run the little sliver of JS/HTML that presents the UI to the user.

The reason they are bundled like this is to enable web developers to work on them.


Newer software in most cases has more features, but do we all really need all these features? All the features I use in the newest MS Office were already present in Office 2000. Sure, there are people who use features added recently, but if only a small fraction of users uses a feature, it can be implemented as a plugin (given an architecture which allows independent extensions). This way all these new features would not increase startup time and would not send the OS into swapping if you don't have enough RAM.


> I still remember how bad it was before Electron as a Windows user. Half the apps that seemed cool (OmniFocus, Bear) had a Mac-only desktop version; others (1Password, Evernote) had a native Windows version that felt ugly and unpolished.

My experience was very different, maybe because I don't care much about how an app looks, but I do care whether it lets me do what I need to do quickly. Before Electron, most apps followed Microsoft's UI guidelines, had a consistent look and feel, hot keys for most functions with the basic ones (like save/open/help etc.) consistent across different apps, and low UI latency (unless the system is swapping - but Electron made this problem worse by using more RAM).


> Are 2010 skype and 2022 discord comparable in terms of functionality? Are 2000 winamp and 2022 spotify app comparable?

The increase in resources available since the 2000s is measured in orders of magnitude. Are there similar increases in software features that warrant the increased bloat?

> I still remember how bad it was before Electron as a Windows user. Half the apps that seemed cool (OmniFocus, Bear) had a Mac-only desktop version; others (1Password, Evernote) had a native Windows version that felt ugly and unpolished.

Now all apps are ugly and unpolished


Frankly, I don't see much difference in functionality between 1996 ICQ and 2023 WhatsApp or Telegram. Why do people keep reinventing the wheel?


> Are 2000 winamp and 2022 spotify app comparable?

In my opinion - yes. Most of what Spotify provides is implemented in the cloud (on the server side). The client is a UI to select and stream music. Winamp supported music streaming too, but didn't have an advanced UI to select what to stream. I see no fundamental reason why a desktop app for Spotify should use much more resources. Given an open API, it should be possible to make a Spotify plugin for Winamp.

I haven't used the Spotify desktop app, but I can guess it is written using Electron or something like it, and that this is the main reason it uses much more RAM/CPU than Winamp - not because it does more work.


So now it is bad everywhere. Good user-facing software integrates itself into the platform so that the user can combine multiple tools. That got completely lost through the "app"-ification of all desktop software. The only integration done nowadays is through cloud APIs. Half of the time these exist to sell the user's data, not to fulfill a need of the user.

Why else do I have to upload my fitness/health data to see it on my smartphone in addition to my Garmin watch?


Discord? Kopete p0wns Discord using 1/16 of the resources.


That reminds me of the old @bruised_blood comic: https://twitter.com/iamdevloper/status/926458505355235328


It's really a perfect continuation of the Mark Twain line: “I didn't have time to write a short letter, so I wrote a long one instead.”

Really succinct, fast code probably takes more time to write than verbose, slow code.


Brevity is difficult.

It takes me a lot longer to write small code than big code (CTRL-C -> CTRL-V).

A big part of my refactoring, is looking for copy/pasta, and trying to do things like refine base classes or protocols, and whatnot.


That is only partially true. Office 2000 and 2003 were great releases, the main applications (Word, Excel, Access, PowerPoint and Outlook) were fast and stable as long as you weren't running some weird corporate domain deployment.

Earlier and later versions were worse, and Outlook was horrible if you were stuck on a domain.

Excel always had issues if you were doing something stupid like using it as a database, and all of the apps had issues if you were being stupid with OLE via copy/paste or were using Internet Explorer.


> Office 2000 and 2003 were great releases, the main applications (Word, Excel, Access, PowerPoint and Outlook) were fast and stable as long as you weren't running some weird corporate domain deployment.

Or tried to use Microsoft Equation more than a couple of times inside a single document. The thing was ridiculously prone to crashing on top of producing butt-ugly typesetting.

Of course, that falls under “using OLE”, but IIRC so does using WordArt, which was rock solid (albeit much simpler and in particular incapable of in-place editing).


> Excel always had issues if you were doing something stupid like using it as a database

I remember a long, long time ago firing up Access. I quickly closed it and continued to use Excel as a database. As it was only for personal use this wasn't an issue.


When I first started using spreadsheets professionally (Excel 2009), I was confused—how do I name the columns? Surely you don't just enter the column names in the first row! I also tried putting my data into Access.

I hated Access and eventually abandoned it. Meanwhile, I pretty quickly decided that Excel is the fastest and overall best Microsoft product by a mile. I now just use Google Sheets, but if you have Windows and need to do lots of spreadsheeting, Excel is just by far the way to go. Far, far better than LibreOffice, Numbers, or Google Sheets.

To be fair, I was rarely dealing with more than a couple thousand rows. If it were millions, and complicated joins, maybe Access (or probably SQLite) would be better, but oh my god, how is Excel so much faster and more polished than everything else? (Rhetorical question: Joel on Software has some stories that help answer.)


Shame as Access is really powerful and far easier to use than Excel if you're familiar with SQL.

Pre-web I used it for internal process automation - it required less effort than anything else (including modern capable frameworks like Rails) to provide a lot of functionality to applications with a low number of concurrent users (1-20 users).

I do really miss that - this kind of app is now often implemented with a low-code platform (with expensive consultants) or a web framework (5-10x the effort required for the same results).


Well, MS Word 5.1a on a Mac ran really fine, as did all the Apple/ClarisWorks software. Word Perfect on a PC ran fine, too. Then came the inevitable bloat. Later versions of MS Word didn't run that smoothly anymore, ClarisWorks became insufferable. WP killed itself.

There's still lean software. Mellel (for macOS, https://www.mellel.com) is a very responsive editor, with a style edit model that's a bit different from Word. But it takes a lot of work to make something increasingly capable and keep performance.


Interesting... I remember MS Word 5.1 on Mac being slow and crashing so often I was saving a new document every minute or so. Word Perfect (on the same Mac) was stable and fast. But feature-wise both were complete for me - then and today, I guess.


> The idea that software wasn't bloated and slow 20,30+ years ago is just myth. Everything was bloated and slow. MS Word in the 90s and 00s would regularly crash and take your file with it, and it often took minutes to start up. Yes, there were some brilliant counterexamples, just as there are today. But most software today is far more enjoyable and rapid to use than that from previous eras.

Install Windows 2000 and Office 97 in a VM. Word will be blazingly fast compared to the latest version. What features do you ever use in the latest version of Word that weren't present back then?


On today's hardware, or on hardware of the time?

Things like Word used to be much better in terms of interface speed - it used to take fewer clicks to do stuff


On today's hardware. The point is just to show that modern software is slower and less efficient than older software.


Honestly this is the worst take yet.

Go open that Excel 97 with a spreadsheet containing 65,537 rows.... Oh, you can't.

People aren't building modern software that does the same set of things, so attempting to measure it is much more difficult than opening up a zillion-year-old VM.


How many spreadsheets are that big? And how much bloat do you think supporting that accounts for?


This is an irrelevant question if you're in the business of selling most software.

Most people don't care if their spreadsheet takes 1 second or 10 seconds to open. What they do care about is "Bob sent me a spreadsheet and I can't open it" or "The app crashed and lost all my data" (and technically I've seen spreadsheets crash plenty, but they tend to have a recent copy of the data saved, which eats up resources to monitor and maintain). And things like "I want to hit undo 50 bajillion times".

And that is just feature bloat. There is a ton more 'just import a library' bloat on top of that, because adding a library is generally much easier, when it comes to fixes, than finding the place in your source code and fixing it yourself.


There's another difference, which is that there was a stretch of time in which personal computers got faster quickly - often faster than the bloat could catch up (for a bit). That produced periods in which some previously slow-seeming software got quite snappy. This naturally stopped happening with the same regularity, but it still does, occasionally - e.g. the excitement over M-series Macs.


> While the software isn't as snappy as it theoretically could be, that is mostly because the market finds the tradeoff between features and speed.

I don't know how much the market really has a say in this. I don't use Slack because I want to. I use it because I must. When we had open protocols, you really could choose the best client. Nowadays everything is a walled garden.

It's amazing just how poorly we've managed to make a chat application run. Across the board companies are using JS-based apps not because it necessarily leads to a better user experience but because it reduces their costs. Running a couple might be fine. Running more than that really slows things down. And developers tend to forget that most of the world isn't running a machine with the specs that they run with and many don't bother testing their stuff on machines with more limited resources.

I routinely have to kill applications when trying to build software or pair program because our video conferencing tools aren't light on resources either. I can't just add more RAM to my laptop because everything is soldered on now. That's another big difference from the past couple of decades. People could cheaply upgrade their hardware every couple of years. Now, you have to buy a brand new device, so people naturally hold on to their hardware longer. Mobile phone users are hanging on to their devices longer. Just yesterday we saw news that GitHub employees can only refresh laptops every four years. We should be working to make more efficient use of resources instead of targeting everything at the latest hardware specs.

Obviously there was old, bloated software. I think most of what people remember as being slow had a lot to do with spinning disks. I upgraded a family member's computer to something more modern not too long ago. He's pretty set in his ways and still uses old versions of MS Money and the like. It was amazing how much faster they ran. I can't know for certain, but I don't see that sort of future for JS apps. They're slow on considerably more advanced hardware. While more memory would help, more CPU cores likely won't.

I wish performance would be taken more seriously in the UX world. But, it costs less to cut corners and when you have a captive market, who cares? Once one company starts doing it, others do too, making it a race to the bottom. I'm thankful there are still indie developers building high quality, platform-native applications.


Kopete did the same thing Slack does today (with inline YouTube videos and LaTeX equations) with KDE3 running in the background, on a machine with 256MB of RAM.


> I don't know how much the market really has a say in this. (...) companies are using JS-based apps not because it necessarily leads to a better user experience but because it reduces their costs.

That's quite literally the market having a say.


I'm sorry, I did phrase "companies are using JS-based apps" poorly. I meant it to mean that software vendors were delivering JS-based apps. That "(...)" you used is pulling a lot of weight. You deleted all the other context I had and stitched together two paragraphs.

The explosion of JS apps is a cost-saving measure for vendors that, given real choice, I don't think many consumers would opt for. The point is once you have lock-in and network effects, your customers don't really have much of a choice in the matter. At the very least, you can't say that's the market speaking in favor of the substandard apps any more than you can say it's the market speaking for any other choice the vendor makes.

As a related example, I'm sure people that were using 3rd party Twitter clients aren't feeling like the market has spoken and the best app has won just because Twitter killed off their API access. Their choice to stay on the platform has nothing to do with their new-found love of the official apps.


JS apps have bounded the crappiness at the bottom end of SW competence. For example, MS Teams can now be run in a browser tab, where the bad stuff it can do (on purpose or through security holes) is bounded by the sandbox, and it works on Linux, unlike its predecessors.


The security sandbox is a fair point. Although, embedding a web browser brings in a whole lot of extra surface area so you need to update regularly to stay secure. Electron's defaults have historically favored developer-friendliness over security and require a level of diligence from the app developer, as well. It's gotten better, for sure.

The Linux point is interesting. I have a Linux workstation I use regularly (in addition to a macOS laptop) and sometimes the desktop integration is nice to have. I just also struggle to believe a company worth 10s of billions building tools for software developers couldn't solve the problem in any other way. It feels a bit like we let companies off the hook. Whatever misgivings people have about the UI consistency, we have plenty of Linux desktop software that runs well and developed on far smaller budgets. I could live with resource-hogging vendor software if there were open protocols or APIs in place to supplant with something of my own.


> The idea that software wasn't bloated and slow 20,30+ years ago is just myth.

It's not entirely a myth. MS Word is not a representative example, though. It's been pretty awful from the start.

Today's software, on average, is very much larger and less performant than software from the olden days, and generally is not more featureful.

What it is, is cheaper to produce, and modern software tends to have a much prettier (and graphical!) user interface.


Microsoft Word 97 might have felt laggy when I ran it on a first-generation Pentium CPU, but the race to the GHz bailed it out. Now that Moore's Law is running out of room, what's going to bail out this generation of software bloat?


The widespread availability of high-bandwidth and low-latency Internet access is what's been bailing out the latest generation of software bloat, not individual CPU performance.


When people say 20-30 years ago they actually mean 40-50 years ago, because it's the same people that have been saying the same thing for 20 years, picturing in their head the handcrafted assembly QuickDraw routines.


> MS Word in the 90s and 00s [...] often took minutes to start up.

And how quickly would the latest version of Microsoft Word start up on a 90s-era PC?

Okay, so it wouldn't start up at all because the computer would not have enough memory, among other things. Let's say we built the absolute closest computer that could still technically run modern MS Office. We'd choose the absolute slowest CPU that can still boot Windows 10, the smallest amount of memory, and so on. It would still be many times faster than a 90s-era PC.

How quickly would modern Office open on that hardware? What about Office 95?


I remember using Corel 8 on a 486 with 8MB of memory. I would write the text in Notepad and paste it into Corel; otherwise it would take 2s for each letter to appear.

I still find it amazing that it worked. A 486 is something like half the speed of a single-core ESP32 today...


CorelDRAW 8 came out in 1998. That it even runs on a 486 is a miracle.


Lots of people used 486s back then. Even in 2003, lynx and gv/xv/mpg123 were perfectly usable on a FreeBSD desktop with FVWM.

If it had 16MB of RAM, or 32, it could run better. A 486DX133 would be fine.


Disagree. VS in 2000 was the most productive environment I've ever worked in.

I was writing VB applications and could get the UI done in minutes, script it up and attach it to DCOM objects running on servers in hours. Whole applications done and dusted in a few days.

Now I'm writing Go in VSCode, and the language server + plugins + tools take forever (well, multiple tens of seconds). I have to run the UI in a browser, the server in a container, and the database in another container (with all the pain that entails). It's a mess and it's just not nearly as productive.


No. MS Word 5.1 on my 1990 vintage Mac Plus felt subjectively faster than the current version, and it had all the features I needed.


I think part of the reason for this myth is that if you run old software on a modern computer, it's blazingly fast. That doesn't mean it was fast back in the day too.


A big part of the problem, in my opinion, is that people are generally not interested in investing (money, time, effort) in technology. They focus almost exclusively on products. The short term is always given a much higher priority than the long term.

If someone invents a much better way (faster, easier, cheaper) to do some key part of a widely used product (operating system, database, file system, network protocol), they won't get any traction until they have built a competing product with all the bells and whistles around that technology.

That can be a formidable task to a startup founder with limited resources who has to try and compete with products that have huge budgets and decades of development behind them. Investors won't touch it until you have a completed and tested product with many customers already signed up. Customers won't touch it until it is a 'drop-in replacement' for their existing solution which requires resources to finish. Classic catch-22 or chicken-and-egg problem.

You can't just find a way to do something 10x faster and expect others to flock to it.


The positioning heuristic is to never sell a "drop-in replacement", there are myriad reasons that goes poorly. In addition to the problem of feature parity, you also have the extremely painful and expensive implied migration which companies are loathe to do if at all avoidable, you will show up as a threat to the companies you are replacing, possibly with much larger marketing budgets for pushing you out of their market, and you have people whose job is tied to the thing you are replacing quietly sabotaging the deal inside the customer. 10x performance usually won't cover the cost of these issues, in my experience it requires more like 100x performance before the customer calculus starts to swing in your favor.

Often you want to position your product to augment their existing systems by addressing a clear gap and nothing more, solving a single serious pain point without actually replacing anything of note. This is simpler and meets less resistance in enterprises, and once inside it becomes much easier to attrit tasks handled by other systems. Land and expand.

The challenge for startup founders that want to avoid being a "drop-in replacement" is finding true gaps in the market that are also scalable. Obvious gaps almost always have a reason they are left unfilled.


> You can't just find a way to do something 10x faster and expect others to flock to it.

I have learned this the hard way. People are generally loss averse and resistant to change so making things better is often an uphill struggle.

As an example, I have worked on improving a build tool that is notoriously loathed in part for its poor performance. Part of the problem was that it was a JVM-based tool, and the JVM has terrible startup performance when loading lots of library code (the tool also made it easy to bring in dependencies, so even small projects would often pull in massive amounts of vendor code). The only way to get a reasonably tight feedback loop under those constraints is to run tests in a persistent JVM process that can reuse the loaded classes from prior runs. The difference between using a cached JVM and a fresh JVM could easily be the difference between your tests running in tens of milliseconds or multiple seconds. I produced benchmarks that proved this.

But one problem is that any resource leaks in your code or the vendor code will eventually cause the JVM to run out of memory (often a slow process of gradually degrading performance). For this and other reasons, people would often opt out of the in-process test running and fork a fresh JVM with each test run, even though it could easily cost them hours of time over the course of a week. The problem was that using the in-process runner required stronger programming discipline. My claim was that code that can reliably run inside of the build tool without resource leaks is more desirable than code that cannot. It is also not particularly more difficult to write, but it does require skill and discipline. There was no tool to statically detect resource leaks, so the burden falls on the individual programmer. It was an exercise in frustration to try and explain the value proposition and argue with people with very different priorities from mine, so I walked away from the project, but to this day it saddens me how much time is being sacrificed on the altar of poor programming discipline.
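JVM specifics aside, the failure mode is easy to sketch in any long-lived runtime. A contrived Node.js version (illustrative names, not from any real test framework):

  // Harmless if the process exits after every test run;
  // fatal in a persistent runner that calls this hundreds of times.
  const leaked = [];

  function runTestOnce() {
    leaked.push(Buffer.alloc(10 * 1024 * 1024)); // 10 MB never released
    setInterval(() => {}, 1000);                 // timer never cleared
  }

With fork-per-run, the OS reclaims everything on exit and nobody notices. In-process, the memory and timers pile up until the runner degrades and eventually dies out-of-memory.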


How about starting JVMs in advance?


> The more interesting question is “why don’t people avoid slow software.”

Outside of pathological cases, even slow software is usually still a lot better than no software and doing things manually, especially when the number of users is >1. I believe that is a big reason why people do not avoid slow software.


Yeah, sometimes there are literally no alternatives or no alternatives "that have it all".

Jira comes to mind as one example: it seems almost universally hated by developers but loved by others, and everyone (including the others) complains it's slow. But damn if you can't model nearly every single process in Jira if you'd like to.

Wordpress is another, where performance is atrocious and security is usually lacking, but if you look for a plugin that does X, someone somewhere has probably already coded it for Wordpress.


I guess I'm one of the few developers that like Jira. If you can keep management from going wild with permissions and workflows, it isn't unpleasant to use. Granted, since it needs to serve everyone, it's not going to be amazing for anybody.

Amazing solutions tend to have narrower focus. They tend to exclude complex workflows, forcing simplification. And there's an added bonus. The more admin tools you lack, the less likely management or other departments are to use your system. If they don't use it, they are less likely to be meddling with its configuration.

In short, Jira isn't a terrible product. It's a product that doesn't protect you from terrible people. ;)


I've always felt that Jira's (and similarly Wordpress's) biggest pro is its biggest con.

It will let you do whatever you want.


I can't avoid slow software because older, less bloated versions of mandatory tools in my industry (translation) are sunset with no recourse, and incompatible file changes mean I have to upgrade.


Seconds matter an awful lot to me. If things don't happen fast enough, I end up alt-tabbing to the browser, losing my train of thought, and perhaps wasting a few minutes browsing the web.

I also work semi-offline a lot. My current internet connection is roughly 100 KBps. The connection in the underground isn't much better. I also work in airports and hotels. These pages that take 10+ seconds to load end up being abandoned.

Amazon famously measured how slow pages correlate with lost revenue.

You don't really notice this until you use something significantly faster, and you can stay focused on the task much longer. Things just work better. It's like using a sharp knife for the first time.


Would you wait for 15 to 20 seconds for an SPA to load, if it's guaranteed to work faster during usage, or have it load in 1 second but be slower while using it?


This implies that the typical SPA, once loaded, is just "there" and can be used rapidly as if it's some offline app. In reality, many of them make a huge amount of fresh network calls on each navigation.

Still, it's an interesting question from the user's perspective. It reminds me of the downside of lazily-loaded images on the web. Whilst designed to help people with data constraints, it can have surprising side effects. The anecdote that comes to mind was a rural town in India where inhabitants would visit a particular place for internet access. They'd open a few URLs and just let everything download, which could take some 20 minutes. Then they'd return home to actually look at the pages they had downloaded.

This usage pattern doesn't work with lazily-loaded images; it forces them to scroll and wait, beginning to end, on every single URL.
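(For what it's worth, the lazy-loading attribute is scriptable, so a "download everything now" console snippet or userscript can undo it; these are standard DOM APIs, nothing exotic.)

  // Force lazily-loaded images to fetch up front,
  // for the "download now, read later" pattern described above.
  for (const img of document.querySelectorAll('img[loading="lazy"]')) {
    img.loading = 'eager';
  }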


It depends on where those 15 seconds fit. If I have to hold onto some thought for 15 seconds, forget it. If I'm loading an IDE or some other tool I'll use all day, it's fine.

My bookkeeping software has lots of 1 second delays and it's really annoying.


> Does performance matter more when the decision maker is also using the software? If any econ grad students are reading this, that’d be a good thesis topic.

As a former Econ grad student, this is a case of what's called the principal-agent problem, which is a widely studied issue. From Google:

"The principal-agent problem is a conflict in priorities between the owner of an asset and the person to whom control of the asset has been delegated. The problem can occur in many situations, from the relationship between a client and a lawyer to the relationship between stockholders and a CEO."


"apex predator of grug is complexity

complexity bad

say again:

complexity very bad

you say now:

complexity very, very bad"

https://grugbrain.dev/

No really -- I would bet that the single greatest contribution comes from the massive growth in complexity, in all relevant areas, over the last 20+ years.


Some of the complexity is, unfortunately, introduced by grugs, because complexity smarter than grug and sometimes pretend is simple.

Couple months back, I read an article that made a bold claim: that the old wisdom saying most software is IO-bound stopped being true some time ago; instead, most software nowadays is CPU-bound, typically on parsing JSON.

JSON is something that may seem simple to grug, particularly a webdev grug. So simple that they'll use it for structuring and exchanging data everywhere. But while on the API side it seems simple, actual JSON parsers and serializers are quite complex, and the format itself expensive to parse and wasteful[0]. All that complexity gets silently embedded into everything, and the overhead at runtime is paid at nearly every step and nearly every level of software stacks, even though most of it isn't even needed.
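The overhead is easy to see for yourself. A contrived Node.js micro-benchmark (a sketch; exact numbers vary by machine):

  // Round-tripping 100k small records through JSON vs. just
  // passing the reference along. The parse/serialize is pure overhead.
  const rows = Array.from({ length: 100_000 }, (_, i) => ({ x: i, y: 2 * i }));

  console.time('JSON round-trip');
  const copy = JSON.parse(JSON.stringify(rows));
  console.timeEnd('JSON round-trip');    // typically tens to hundreds of ms

  console.time('pass the reference');
  const same = rows;
  console.timeEnd('pass the reference'); // effectively free

Do that at every layer of a deep stack and you get exactly the CPU-bound profile described above.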

That's the failure mode of grug understanding of simplicity. Sometimes a little more effort up front yields much lower total complexity.

(Also, eliminating complexity is good. Shifting complexity from developers to users is criminal.)

--

[0] - It's better than XML in most scenarios, especially scenarios in which neither of them should be used in the first place.


what? no. most software is 100% architecture bound, because with the available tools and time budget building a synchronous client-server system is the max-bang min-buck solution.

the mainframes are back with a vengeance, as basically everything nowadays is waiting for the network.

or! populating caches (phone book that has a few hundred names in it takes ages to load initially, start menu also, etc.) to avoid said waiting.


What are you suggesting instead? A protocol change like protobuf for example? Or an architecture change (send less data in the first place)?

Just wondering what a more complex but better solution would look like in this example.


Either, both, or more. It's situational.

Switch to protobuf. Or switch to CSV. Or switch to SQLite database files. Or stay with JSON, but reduce the complexity of the format, even if it means you need a post-processing step on your side[0]. Change overall design to do less back-and-forth. Batch requests and replies. Send just the required amount of information, instead of having the receiver discard 95% of the reply every time. Don't convert to JSON (or other ad-hoc stringly-typed serialization format) and back from it inside your own application process, just because it feels "simpler" than using a data structure[1]. Etc.

I get why people like using JSON protocols, even defaulting to it for ad-hoc ones. I do that too! It's the local optimum for protocol development[2]. JSON protocols are easy to extend and easy to debug. It's a good starting point when your protocol is in total flux. At some point, however, the protocol mostly solidifies. It should then be revisited and tightened up a bit.

The main benefit is of course performance. Even if the refactor can't simplify the architecture of your project (e.g. no possibility to batch things or reduce amount of places that do communication), what grugs often miss is that performance improvements alone can reduce complexity.

The canonical case here is scaling: switching from single-process to a scalable distributed solution is a massive jump in complexity. Keeping an eye on performance and removing (or avoiding) waste introduced for the sake of "simplicity" or development velocity will delay the point at which you need to switch to a distributed solution. A little up-front cleverness and complexity now, plus a little spend to beef up your servers, may delay the switch forever. Computers are fast, we're just not using that speed, and grugs seeking simplicity by gradient descent and sleep-walking into high-complexity regions of solution space are partly to blame here.

--

[0] - I still shudder when I think back to a certain charting tool in the browser that, on the text-input side, required datapoints to be supplied in the format:

  [{x: 42, y: 100}, {x: 43, y: 64}, ...]
Maybe this is because it matched the internal representation (if so, I do have some thoughts about it too). But doing it like:

  {x: [42, 43, ...], y: [100, 64, ...]}
or:

  [[42, 100], [43, 64], ...]
would easily cut the input size by 50% or more, meaning that much less work for the parser in the library and the serializer generating the input text. This shorter format is arguably more human-readable too!
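Converting between the two shapes is a couple of lines each way; a sketch (the x/y names follow the example above):

  // Rows-of-objects -> columnar: drops the repeated "x"/"y" keys,
  // roughly halving the serialized size for large series.
  const points = [{ x: 42, y: 100 }, { x: 43, y: 64 }];
  const columnar = {
    x: points.map(p => p.x),
    y: points.map(p => p.y),
  };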

[1] - Sounds stupid, but I've seen this happen in otherwise sophisticated and somewhat performance-sensitive codebases. Beyond the waste coming from most of your data being passed around in pseudo-JSON strings, converted to various scalar and array types at the point of use and then serialized back, it leads to a surprising amount of subtle bugs - especially when people manipulate the serialized format manually, because it's again "simpler" this way.

[2] - Every programming language these days has good JSON support built-in or easily available, it's hierarchical without footguns (c.f. YAML), it's plaintext and has good editor support, but still relatively compact (c.f. XML), it's malleable, it's browser-native, etc.


Every time I see this posted somewhere it's been long enough for me to forget about it, so I go through and read it again and glean something new from it.


He also has a great Twitter account here: https://mobile.twitter.com/GrugBrainedDev/with_replies


I'm going to pre-emptively link Casey Muratori's Refterm Lecture Part 1[0], because invariably when this topic comes up people are all "but optimization takes time and we gotta push the new features right now or the market will eat us."

Computers are so ludicrously fast these days that in the majority of cases you do not need optimization at all. You do need de-pessimization though.

https://www.youtube.com/watch?v=pgoetgxecw8


The hilarious thing about the whole RefTerm thing is the armies of people defending Microsoft by saying that Casey is some sort of Dark Wizard and that his skills are magic and unique, which is why nobody else can reproduce his approach.

Before I saw that video, if you had asked me to write a terminal I would have made one with essentially the same design as Casey. I didn’t even think that there is another way! Like… just draw the grid of chars. What else is there to do?! Why add extra steps?

Microsoft added extra steps.


Timing being nice, Casey just started a series on performance / optimization [1].

The pitch is that it should be useful even for people not using "down to the metal" languages - although at this point in the series they have been essentially:

* yelling at the sky that software is slow (amen to that)

* making fun of Python for not being C (which is, I mean, true)

* explaining why memory caching is important (which is both true and not obvious to most people)

* diving into the details of SIMD intrinsics on x86 using some form of vectorized quantum hardware-specific instructions, which I'm sure is going to help me avoid a few calls to render in my React app _any time now_.

(Just kidding, I'm pretty sure I'm going to learn stuff from the series, which is the point.)

[1] https://www.computerenhance.com/p/welcome-to-the-performance...


I think performance simply just isn't on the radar of a big chunk of devs these days, especially self-taught ones. Front-end wise, what with the trend of using frameworks (react/vue/etc) & a billion lines of javascript downloaded from npm, it'd probably be too much of a task to even begin profiling.


IMO it's unfair to blame the developers. Almost every developer I know loves working on performance stories. It's one of the few times we get to use our deeper technical and algorithmic skills and get a clear metric to optimize for. However, businesses are reluctant to prioritize that work and get impatient when they don't see new "features". I worked on a project where PMs would constantly complain to me about a particular admin page being slow. I wrote a ticket for them that would implement some optimizations (simple pagination and SQL optimizations) and asked them to prioritize it. They had time to complain every chance they got, but the ticket never got prioritized. I eventually got so sick of the complaining that I just took a Saturday morning and did it.


I dunno, I run into a lot of devs on reddit/here/etc that argue "who cares about performance just spin up more containers! I just wanna write {insert slow interpreted language here}."


I think you're reading that the wrong way, though it's hard to say. This is my take on it:

Dev: I get paid for writing features. I can write new features faster in {insert slow interpreted language here}.

Also if the company is of any size, whatever they write has to fit in the rest of their CI/CD stack and pass whatever kind of code review the company does or doesn't implement. So yea it's easy to say devs suck (and they do), but without looking at the entire process they are involved in it can be very hard to qualify why they suck and where the suckage comes from.


The slowness of interpreted languages is frequently not the main culprit for slow performance though. Most load times in web apps I've worked on usually were spent doing inefficient or unnecessary DB queries or API requests.


Almost every single time I've had to deal with something being slow in the Python web app I work on, it's been a bad query, or bad ORM usage spawning thousands of additional queries to pull in relations.

I've seen a couple out-of-memory deaths (mostly non-paginated db queries, but compounded with the inefficiency of py2), and occasionally an accidentally quadratic loop (which would be faster in C, but usually can just be made fast in Python with more thoughtful code), but it's almost always the DB - oops, I need to add an index; oops, I did a cross join; oops, I'm spawning 10 additional queries for each row returned. The time spent in Python on a slow endpoint is usually negligible.
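The shape of that bug is the same in any language. A sketch with a hypothetical async client (db.query here is assumed, not a real API):

  // N+1: one query for the orders, then one more per order.
  async function loadOrdersSlow(db, userId) {
    const orders = await db.query('SELECT * FROM orders WHERE user_id = ?', [userId]);
    for (const order of orders) {
      order.items = await db.query(   // N extra round-trips
        'SELECT * FROM items WHERE order_id = ?', [order.id]);
    }
    return orders;
  }

  // Same data in one round-trip via a join (or a single IN (...) query).
  async function loadOrdersFast(db, userId) {
    return db.query(
      'SELECT o.id AS order_id, i.* FROM orders o ' +
      'JOIN items i ON i.order_id = o.id WHERE o.user_id = ?', [userId]);
  }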


This is the claim, but it isn't necessarily true as often as it is claimed. If it were, how would Stack Overflow, back in the .NET Framework days, have managed on (I think) 2 web servers plus a failover, given the massive amount of traffic they do, while also being responsive? There's 0 chance a Python/Ruby/etc web app could have done the same thing. And to think they could theoretically use even less CPU and RAM on the same workload now running on .NET 6, which is dramatically more efficient than 4.x Framework.


> I think performance simply just isn't on the radar of a big chunk of devs these days, especially self-taught ones

My experience is the opposite of this. It's usually the self-taught devs that care a lot about performance, and the formally educated ones who are willing to trade it away.


I'd almost split it out slightly differently, but I work indirectly with a lot of people's code problems and these are just my own observations.

People that are formally educated tend to end up in large organizations because HR likes the idea of college degrees. Large organizations are process oriented. Somewhere above the dev, the specification of the application is created. The dev typically does not get to choose their tools; instead, particular versions of the tools are provided, along with the versions of the libraries they need to target and so forth. Also, after the code is written it needs to pass any number of tools and gates and QA systems and UAT systems before it's ever used in production. In general the last thing the dev is thinking is 'is this fast'; instead they are thinking "I have 400 things on my to-do list and I hope this doesn't get bounced back to me"


Some projects have performance as their core proposition. Notably Solidjs, Astrojs, Qwikjs, among many others.

React has dominated the front end for years now & brought some great innovations, but its dominance is waning.

> performance simply just isn't on the radar of a big chunk of devs these days

Many devs seek to optimize for career prospects. Since the majority of FE jobs center around React, these devs focus on React. As companies prioritize FE performance, the jobs related to performance follow & the mass of FE developers follow after that.


As a "one of these day devs" I only care about performance when it starts becoming a problem and I see nothing wrong with the way I'm going about this.


> I see nothing wrong with the way I'm going about this.

The wrong part is that you don't measure performance. Which was OP's point. Just measuring performance is a very hard, labor-intensive, resource-intensive task. "One of these day devs" mostly don't even know how to approach this task, but even if they knew, the mountain of infrastructure they sit upon, which is in many cases completely opaque to them, will make it impossible for them to be productive (or do anything at all) when it comes to estimating the performance of their programs.

Add to this the fact that most things, when it comes to performance, are basically out of your control. If the problem is in the framework -- maaaaybe you can replace / patch the framework. If the problem is in the browser -- with a 0.1% probability you might convince users to use another browser. If it's to do with the OS in which the browser is running -- well, you yourself probably won't install a different OS only to make your own program happier...

But the complaint isn't really about the "one of these day devs"; it's about the infrastructure in which they live, which has made it basically impossible to care about performance.


The problem is that performance and reliability are issues that creep up and by the time they're a problem, they are much more expensive to fix than if they had been first class priorities from the outset. Everyone who didn't care before finds new reasons to put it off because now it's so expensive to address.


Performance issues are also often trivial to head off from the beginning but require a rewrite to remediate later.

To everyone that argues that premature optimisation is bad, that’s like saying we should go to the Moon by building a bus and then fixing any performance issues that prevent orbital insertion after it is moving down the highway successfully.


Are you certain that you can adequately judge when the lack of performance is starting to become a problem for people who aren't you?


> I don’t know if there’s anything we can do.

I know something we could do, at least for websites. Google could finally make good on their announcement and make CWV (Core Web Vitals) actually have a measurable impact on rankings. I work for affiliates/SEO people. When CWV was announced, suddenly they cared about performance. Then the deadline came and went, and Google decided not to make it really count after all, and they stopped caring. You don't have to be better than your competition if you're not extremely slow, because either you're on rank 1 and you get the user, or you're not, and the user never sees your site (traffic on rank 2 and 3 drops dramatically, and beyond that it's barely noticeable).

If Google decided to punish slow load times, large layout shifts, huge payloads, and tons of JS by downranking the sites, you'd quickly have very fast websites all around.
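(For reference, a page can watch those metrics in itself; the PerformanceObserver calls below are standard browser APIs, and the "good" thresholds are Google's published cutoffs.)

  // Largest Contentful Paint: "good" is under 2.5s.
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log('LCP candidate at', entry.startTime, 'ms');
    }
  }).observe({ type: 'largest-contentful-paint', buffered: true });

  // Cumulative Layout Shift: "good" is under 0.1.
  let cls = 0;
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (!entry.hadRecentInput) cls += entry.value;
    }
  }).observe({ type: 'layout-shift', buffered: true });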


You would have very fast landing pages, which you still do today. But the issue is SaaS apps are soul-crushingly slow, and Google does not care about that because Google cannot log in.


But SaaS apps don't get their traffic via Google. Their sales site does, but not the app itself. To get SaaS apps faster, you need upper management to care about the performance of the app, which they also don't. They care about a convoluted web of short-term ladder-climbing moves dictated by multiple layers of the principal-agent problem.


Google gets data from the Chrome UX Report (CrUX; everyone can access aggregated data on BigQuery), so they'll get information on performance even from pages they themselves never see, and "the origin" (aka the domain) is the most relevant grouping for most websites, because they won't have enough traffic on each individual page.

If Google only relied on measuring itself, it would just be a cat and mouse game to identify Google's testers and serve them a stripped down version. But they already have the actual real user data.


> at least for websites

We could start by stopping using these stupid pointless frameworks that provide little functionality but reams of overhead.


How much of it is frameworks like React vs boatloads of trackers and other ad-related stuff?


From my experience looking at sites for improvement opportunities: if the site is ad-monetized, it's the ads; otherwise the frameworks weigh multiple times the tracking stuff (analytics etc). Oh, and of course Google Tag Manager is used to load analytics and nothing else, because you need those extra 240kb of JS to feel complete :(


In my experience? For SPAs, it's mostly 1) bad use of the framework, causing way too many pointless DOM updates, and 2) doing lots and lots of network requests, and particularly blocking requests.

2) is a bit due to tracking, but usually mostly a consequence of the API-first design. Despite using a large and powerful framework, the app itself does very little locally, as anything non-trivial ends up being a blocking network request, or a burst of several such requests (usually not blocking the app itself, but almost always blocking the user from continuing until the requests are done).


Angular, React and many other frameworks can be fast. If you add ads, many DOM updates and too-large JSONs, you can grind every site to a halt.


I agree with the author that we can make local changes, and I think we should.

However, I disagree with the author on global changes. I think we can do them. In fact, I think local changes can grow to global ones, or close to it.

Here's my personal plan: like many of you, I want to make money off of my FOSS projects. However, instead of going the donation route, I'm taking an entirely different tack. I am setting myself up as a professional. This includes accepting some liability for my software.

You can bet that if I'm accepting liability, I will be testing my software until it screams.

But the flip side of being professional is crafting software that is fast, and I'm going to do that too. Yes, I'll be expensive, but that software should be a massive lever as a result.

The bigger the lever, the more likely that that software will help my clients out-compete their competitors. Helping them do so is my goal because if they do, their competitors will have to start caring about performance.

And once people care about performance in one thing, it starts to disgust them to have to use slow software in other things. So it may be slow (over decades), but I'm hoping that my work will help performance go viral.


what on earth do you mean by "professional"? and how can your software be "expensive" if it is also FOSS?

or is this satire?


You open source it, but offer a commercial license with warranty and support contract. Many organisations require support contracts, so won't use the free version even if they are internally capable.


yes, i know that, but barring the unlikely event that this guy's FOSS software meets their requirements, they will not do that.


Normal_gaussian got it right. The support and liability contract would be the expensive part, not the FOSS.

Notice that I said that I will be expensive, not the FOSS. I meant that my services will be expensive.


unless you are very, very lucky - dream on


The world is changed by unreasonable people with dreams.

So yes, I will. Thank you.


i would say it is changed by reasonable people with a plan


FOSS in no way means "free of cost". We use a piece of FOSS software where I work, and it costs 4 figures per seat.


what is it, how many seats, why not maintain it yourself if it is so useful? i can't believe that there is no closed-source product doing the same thing

FOSS means free, depending on what you want from the software - if you want more than the FOSS offering does, you may have to pay for improvements, or you may not.


All good questions, asked by all the devs here. I'm not about to defend the use of this particular product. I was just pointing to an example of FOSS that not only isn't free, it's very expensive.


sorry, but i really can't believe this. to start with, how can foss manage a per-seat license? i get needing support, but if you have the code (which you must, else it is not foss) why can't you strip out the per-seat code (which is a no-no in my experience on closed code) and just pay a one-off/per-problem support contract?


We could, of course. But it would be a license violation and legal wouldn't stand for it. We can each use it without charge for our own, personal purposes, of course -- just not for the company's purposes.

I can't help you with your disbelief, but it is true that nothing in the FOSS philosophy prevents companies from charging for their work.


of course, you know your business far better than i do. and as you say, nothing stops people making money off foss - i've done it myself. but the normal route for this is via people finding the software useful, consultancy, support, and adding paid-for features. i have never, ever seen per-seat licenses for foss software. if this software actually exists, you could at least point to its web page.


My guess would be something like Docker Desktop for Mac. It's free for personal use, but if you are using it in a company setting, it is not free.


My boss cares a lot about performance and reliability.

I work for a financial services firm. We are consistently highly rated for customer service. One of the reasons that is the case is that our website is fast.

It takes a lot of work by our devs for that to happen!


From a business point of view, performance and reliability (and code quality) are irrelevant. The most important thing is that the product has a market to explore. We can see this in many products.

I've been working with software since 1999, and I've never seen any developer or manager worry about performance. I've seen religious battles over formatting, over using a Java library, or over "it's not the Ruby way".

I never participated in any performance or reliability meetings, or projects, or religious battles.

In general, performance is only fixed in very extreme cases. For example, I worked on fixing code that generated a few megabytes of SQL. Or code that uses all of the server's memory.

And it's hard to get permission to fix some extreme bugs. The code that generates a million stored procedures after a few months still exists at one of the places where I worked.

At another company, I fixed a problem (infinite printing) that had been around for 20 years (I even made a birthday cake for the bug), under pressure from the biggest customer who couldn't use a report and no workarounds worked anymore. 20 years of workarounds.

You are sometimes seen as a radical if you try to fix the most absurd bugs. Today I document the bug and wait for the prioritization. Sometimes you fix it after 20 years...


I agree. Typically, when it comes to UI issues, features and being first to market tend to win over speed. Simply put, the slowness of the average human interacting with the software is a much larger factor than the software itself.

Of course, this is only true up to a point. One second versus 10 seconds to do a major software operation isn't going to affect most users. Now stretch that out to 100 seconds and priorities start changing. You get into the 'really annoying to people' zone.

I think one of the differences when looking at performance is how many times something is executed. I might pull up a customer record/ticket in SAP 50 times a day. Whether it takes 1 second or 10 seconds doesn't matter that much, because I can generally interweave it with other operations I'm performing. Now, if for some reason I were executing that a million times a day, like a single database query, suddenly we're talking about an issue that's worth fixing.
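
A quick back-of-the-envelope in TypeScript, using the numbers from this comment, shows why frequency dominates:

  // The same 9-second saving matters very differently depending on
  // how often the operation runs.
  const secondsSaved = 10 - 1;

  const interactiveLookups = 50;      // record lookups per person per day
  const automatedQueries = 1_000_000; // query executions per day

  console.log(`Interactive: ${(interactiveLookups * secondsSaved) / 60} min/day`);
  // -> 7.5 min/day, easily hidden by interweaving other work
  console.log(`Automated: ${(automatedQueries * secondsSaved) / 3600} hours/day`);
  // -> 2500 hours/day of machine time: clearly worth fixing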


It's a mix.

I'm sitting here trying to replicate a service across multiple zones this mornin— oh, it's not morning any more sigh — and, well,

  Code: ReconcileVMSSAgentPoolFailed
  Message: We are unable to serve this request due to an internal error
At some point/level, Azure can't¹, so I can't. There's GCP, but the switching costs are high. (¹This is a fractal error, too: part incompetence, part societal and economic factors certainly way out of my control.)

But absolutely there is incompetence. I fielded this, this week: "the tests broke", "oh, sorry about that. I've introduced a bug. *a while later* It's fixed now." "the test is still broke for me?" "… you need to pull the fix…?"

I've got another dev staunchly refusing to enable the logging necessary to print tracebacks, … while simultaneously being perplexed by a bug we're facing. IDK what's causing the bug either … but, IDK, let's get some logs in the meantime?

Higher-ups wondering "why can't anyone answer technical questions? How are our devs so incompetent that they don't know the answers?" after the entire team that managed the component the question is directed at has been laid off.

I've answered questions in the form of,

  A: We do X. Here's a screenshot. <screenshot>
  Me: Where in your screenshot is X?
  A: Oh, you're right.
Often enough that sometimes I wonder if ChatGPT hasn't already replaced some of the people around me.


I bet you these developers can rotate the shit out of a red-black tree though! Thanks, LeetCode-centered hiring processes!


Zoom (videoconferencing) was started by a former Cisco WebEx employee who was fed up with how sucky WebEx was, and decided to do something about it. So even in the enterprise space where the payer is not the user progress can happen.


I think there's actually something in this. That's why I've created my own CRM/ERP system. Any page load slower than 100 ms is considered a bug, and I'm usually under 50 ms.
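
A minimal sketch of enforcing such a latency budget mechanically, assuming for illustration a Node/Express stack (the commenter's actual stack isn't stated):

  import express from "express";

  // Treat latency as a bug: flag any response over the 100 ms budget.
  const app = express();
  const BUDGET_MS = 100;

  app.use((req, res, next) => {
    const start = process.hrtime.bigint();
    res.on("finish", () => {
      const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
      if (elapsedMs > BUDGET_MS) {
        console.warn(`Budget exceeded: ${req.path} took ${elapsedMs.toFixed(1)} ms`);
      }
    });
    next();
  });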

And of course I have a TUI version of it as well, with all data local (and synced to the cloud).

I really hope that once I'm ready to put it out there, I will find the right audience.


In my experience it's less about what the end user does or doesn't want, and more about what the company can or cannot sell. That is why new features are the priority: once a feature is complete, there is a clock on how long you can sell it. The problem, of course, is that the market is usually 95% saturated by the existing customer base. So the gain is incremental at best as a few extra customers filter in now that the new feature is available, and then the race is on to find a new thing to tempt in new customers.

Reliability and performance don't matter to people who are chasing sales (which is most of a company, not just the sales team, because that's how company success is measured). Retention is someone else's job, and so the focus will always be on churning out more and more features that can boost sales.


Evil enters the world "because the people who do give a crap don’t have the power to do something about it."


I like how this article highlights the complexity of many competing factors. We like to understand things by boiling them down to simple rules and catchphrases that make us feel like we understand the world, but that is almost never how things work.

As devs we need to take pride in our work, even when it feels like there's a lot of pressure to cut corners and ship trash. Take a few extra cycles to make your system just a little bit more resilient and give positive feedback to your peers who do the same.


In ~2006 I was working in tech support and we moved from a snappy homegrown ticketing system to one where tabbing out of each field took ~5 seconds. We all hated it and complained immediately, but the contract was signed and we weren't able to influence management. Evidence for "Customers aren’t the clients" and "People don’t decide on satisfaction".


Suppose your software is a check-in system for an airline, online or mobile. Check-in is probably one of the least-loved aspects of flying. How many flyers will decide to pick a different airline based on a bad check-in experience?


I've worked at a number of big companies that have run experiments on TTI & response times, and they've all shown statistically significant results that responsiveness impacts revenue, engagement & retention.
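
For reference, a browser-side sketch of gathering the kind of responsiveness data such experiments rely on, using the standard PerformanceObserver API; true TTI is usually approximated from long-task entries, which this sketch omits:

  // Log paint timings (e.g. first-contentful-paint) as they become available.
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(entry.name, `${entry.startTime.toFixed(0)} ms`);
    }
  });
  observer.observe({ type: "paint", buffered: true });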


The author is my consciousness personified, trying to slap me into learning TLA+.


Yes, people do care, but if they are not the ones making the purchasing decision (and instead it gets made by someone else who never has to suffer the software themselves) the problem will proliferate.


"It’s well-established consensus that software is slower and more bloated than it was 20, 40 years ago." says it all. I see now why the suckless community makes the position they do.


>This doesn’t quite agree with my personal experience. I pay a lot of attention to people using computers in my daily life: clerks, receptionists, administrators, etc. And they’re pretty universally frustrated with how slow and unreliable their computers are. They’ll complain for hours if you give them the chance.

These are not the people buying these systems. It's someone much further up the chain. They often don't care if the software is a dog to use. Unless that means they need to employ more people.


> Programmers want to write fast apps. But the market doesn’t care.

That's partly because Moore's Law, Dennard Scaling, ... meant that our apps were going to get faster next year regardless. Intel had better marketing than Joe Programmer. So Intel got paid.


It's hard to make performance and reliability sexy to people who make decisions.


I am sometimes surprised at how work on optimization (unless explicitly demanded) is regarded negatively. I just do it anyway. No, it does not add to my salary, but it does wonders for my self-worth and work satisfaction.


People care about performance and reliability when there is more than one way to do the thing they want to do.

The slowest, most inefficient solution is (depending on the value of solving the problem) better than no solution at all.


Alternate (and I think more accurate) phrasing: people tolerate bad performance and low reliability when there is no viable alternative tool that does the things they need.

As is usually the case - software, particularly SaaS, resists commoditization. For almost any piece of software you may be using (and I am counting things like social media platforms or e-commerce sites in this), there is no equivalent substitute you could switch to on short notice. Often there's literally none at all. When there is, the effort to migrate your data tends to be prohibitive.

Also, in my phrasing, I wrote "need" instead of your "want". From my observations, almost all interactions regular people have with software are out of necessity, in the sense that they're forced to use it to get what they want, and they have no power to change it.

Examples:

- Software your employer tells you to use at work, which sometimes is quite horrible, thanks to magic of enterprise sales.

- Software your government tells you to use to submit various forms related to taxes, healthcare, childcare, business, social services, etc. This sometimes manages to not be all that bad.

- Software your daycare suddenly forces you to use to communicate with their staff and access CCTV feeds. Software you can't ignore because absence reporting and invoicing goes through it too. Software that's hot garbage, surveillance capitalism optimized mobile app with no web version, created by some bullshit startup with lots of VC funding, which explains why, out of a sudden, every single daycare started to switch to it, despite the app being extremely limited, having abysmal ergonomics, discriminating against people who aren't glued to their smartphones, and likely violating GDPR in more than one way.

I guess you can tell I'm a bit bitter about that last one.


I would switch music providers if they had a client that wasn't buggy. So there is a market for quality software. It's just challenging.


I work in a company which creates and maintains a dumpster fire of a product, closing in on 20 years now. The product is neither fast nor correct. It has more tech support than developers (and our tech support writes code and solves problems usually associated with development, except, essentially, they are hot-patching the garbage our developers produced, live, at customers' sites).

The company was acquired a year ago by an international giant, but it didn't really change on the inside.

So, what's the secret sauce? -- Exclusivity. Competition died off 10-15 years ago. The company used to compete with some big names in the field, but all of those withdrew a long time ago. So, we won.

Unfortunately, this is also the "success story" of many other companies, some of which I even had the (dis-)pleasure to work for. It is also a solid market strategy: don't compete; find a market niche where you offer a unique service. Once you are there, do the absolute minimum to keep users happy: over time you will have accumulated enough features and unique workflows that they are virtually impossible for competitors to replicate, at least not with the kind of funding a typical start-up can raise. Also, new features are easier to add, and they bring about as much customer satisfaction as improving the old and broken stuff. The novelty factor makes it look like customers are getting more treats, but the treats are really low-effort.

In the case of a monopoly like the one I'm in, users feel trapped; often they don't even know how much better their software could work, and they accept ridiculous gaps in quality because there is no alternative. The requirements for performance and reliability are thus at the absolute minimum, and often lower.

---

Just to give you a practical example of what I'm talking about: recently, I discovered that some genius from the department responsible for the core component -- a service that, among other things, processes and saves configuration submitted by users -- has implemented "buffering"... never mind that it runs on Linux, whose filesystem API already does buffering and caching... But, unlike the Linux filesystem API, this NIH caching will lose the user's data if the service is stopped after acknowledging the save to the user but before actually writing the data to persistent storage. Never mind that the amount of data it has to write is negligible (a few megabytes at most). Never mind that it's typically an interactive operation where users are in no rush to write such data...
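
For contrast, a minimal Node sketch of the ordering this comment argues for: write and fsync first, acknowledge afterwards, so an untimely shutdown cannot lose an acknowledged save (the file path and config shape are hypothetical):

  import { openSync, writeSync, fsyncSync, closeSync } from "node:fs";

  // Acknowledge a save only after the data is on persistent storage.
  // For a few megabytes of interactively submitted config, the fsync
  // cost is negligible.
  function saveConfigDurably(path: string, config: object): void {
    const fd = openSync(path, "w");
    try {
      writeSync(fd, JSON.stringify(config));
      fsyncSync(fd); // flush OS write caches before reporting success
    } finally {
      closeSync(fd);
    }
  }
  // Only after saveConfigDurably() returns should the service tell the
  // user their configuration was saved.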

I couldn't even convince the team responsible for the component that there's a problem... let alone do anything about it.

And, yes, our product is used by some of the largest and wealthiest companies in the world.


because normal people are dumb and will always think their poor hardware isn't enough to run bloated software, so they buy more. the whole android phone market is like this.


The first sentence:

>It’s well-established consensus that software is slower and more bloated than it was 20, 40 years ago.

That's an absurdly ridiculous statement, presented as fact with no citations.


People care about features. They don't care if a page takes 2 seconds to load or has a thousand npm packages holding it up. They care that they can click a button and it does the thing that they need to do to accomplish their task. This stuff is all magic and pixie dust to 99.9% of people. And if adding the new feature creates bloat and slowness, so be it. If this tiny little seemingly insignificant thing that user wants is not there, it doesn't matter to them if you have to completely break your "architectural purity" and hack something gross that messes with your Lighthouse score to put it there. It just needs to be there.


Microsoft had to force people off XP and then again off Windows 7.

Firefox ate Netscape's (edit: Mozilla Suite's) lunch.

Oracle eventually had to buy MySQL to stop the bleeding (and try to upsell).

So no. While people do prefer a 100% solution over a 90% solution, they much more strongly prefer a 90% solution now over a 100% solution that keeps them at work an extra hour or two nearly every day.


Netscape -> IE 6 dominates Netscape -> Mozilla Suite -> Phoenix -> Firebird -> Firefox.



