
This article and every other comment seem to miss the real issue: the lack of testing.

Software differs from all other means of production in that we can in fact test any change we make before realizing it in the world.

With good tests, I don't care what the intent was, or whether this feature has grown new uses or new users. I "fix" it and run the tests, and they indicate whether the fix is good.

With good tests, there's no need for software archeology, the grizzled old veteran who knows every crack, the new wunderkind who can model complex systems in her brain, the comprehensive requirements documentation, or the tentative deploy systems that force user sub-populations to act as lab rats.

Indeed, with good tests, I could randomly change the system and stop when I get improvements (exactly how Google reports AI "developed" improvements to sorting).
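
A minimal sketch of that loop, with toy stand-ins (the "system" is just a list of ints; run_tests() and score() are invented placeholders for a real suite and benchmark):

    import random

    def run_tests(system):              # toy oracle: the invariant the tests guard
        return all(-100 <= x <= 100 for x in system)

    def score(system):                  # toy benchmark: higher is better
        return -sum(x * x for x in system)

    def mutate(system):                 # a random change to the system
        s = list(system)
        s[random.randrange(len(s))] += random.choice((-1, 1))
        return s

    system = [random.randint(-50, 50) for _ in range(8)]
    for _ in range(20000):
        candidate = mutate(system)
        if run_tests(candidate) and score(candidate) > score(system):
            system = candidate          # keep only passing improvements
    print(system)                       # drifts toward all zeros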

And yet, test developers are paid half or less, test departments are relatively small, QA is put on a fixed and limited schedule, and no tech hero ever rose up through QA. Because it's derivative and reactive?




> With good tests, I don't care what the intent was, or whether this feature has grown new uses or new users. I "fix" it and run the tests, and they indicate whether the fix is good.

Except when the tests verify what the code was designed to do, but other systems have grown dependencies on what the code actually does.

Or when you're removing unused code and its associated tests, but it turns out the code is still used.

Or when your change fails tests, but only because the tests are brittle so you fix the test to match the new situation. Except it turns out something had grown a dependency on the old behavior.

Tests are great, and for sufficiently self contained systems they can be all you need. In larger systems, though, sometimes you also need telemetry and/or staged rollouts.
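
A toy version of the first failure mode (all names invented): the test pins down the documented contract, while another caller quietly depends on an implementation detail.

    # Documented contract: return a user's tags, order unspecified.
    def get_tags(db, user):
        return sorted(db[user])          # sorting is an implementation detail

    def test_get_tags():                 # verifies only the contract
        assert set(get_tags({"ann": {"beta", "admin"}}, "ann")) == {"admin", "beta"}

    # Elsewhere, a dependency grows on what the code actually does:
    def first_tag(db, user):
        return get_tags(db, user)[0]     # silently relies on alphabetical order

    # Swapping sorted(...) for list(...) keeps test_get_tags() green
    # while changing first_tag()'s behavior in production.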


> Except when the tests verify what the code was designed to do, but other systems have grown dependencies on what the code actually does.

Assuming you mean systems in terms of actually separate systems communicating via some type of messaging, isn't that where strong enforcement of a contract comes into play so that downstream doesn't have to care about what the code actually does as long as it produces a correct message?
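
As a sketch, that enforcement can be as simple as the consumer validating every message against the agreed shape rather than against whatever the producer happens to emit today (field names invented):

    REQUIRED = {"order_id": str, "total_cents": int, "currency": str}

    def validate_order_message(msg: dict) -> None:
        for field, ftype in REQUIRED.items():
            if not isinstance(msg.get(field), ftype):
                raise ValueError(f"contract violation: {field!r} should be {ftype.__name__}")

    validate_order_message({"order_id": "A-17", "total_cents": 1299, "currency": "EUR"})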

> Or when your change fails tests, but only because the tests are brittle so you fix the test to match the new situation. Except it turns out something had grown a dependency on the old behavior.

I think this supports the GP's point about tests being second-class and not receiving the same level of upkeep or care that the application code receives. One can argue you should try to avoid getting into a position where tests become so brittle that you end up with dependencies on behavior that was never made explicit.


To extend jefftk's point: the tests have a specific scope (what the code is responsible for), which probably doesn't cover what it is now expected to do after months or years of use.

Your code was supposed to calculate the VAT for a list of purchases, but it progressively also becomes the way to force-update the VAT cache for each product category, and gets called in contexts that were never anticipated.
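
A toy version of that drift (names invented): the tests assert the return value, while nothing asserts the cache side effect that callers have come to rely on.

    VAT_RATES = {"books": 0.05, "electronics": 0.20}
    vat_cache = {}

    def total_vat(purchases):            # original scope: sum VAT for purchases
        total = 0.0
        for category, price in purchases:
            rate = VAT_RATES[category]
            vat_cache[category] = rate   # side effect other code now depends on
            total += price * rate
        return total

    # Some caller now runs total_vat(all_products) purely to warm vat_cache.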

BTW this is the same for comments: they'll cover the original intent and side effects, but not what the method/class is used for or actually does way down the line. In a perfect world these comments are updated as the world around them changes, but in practice that's seldom done unless the internal code changes as well.


> the lack of testing

Having worked on very long-running projects, testing or well-funded QA doesn’t tend to save you from organizational issues.

What typically seems to happen is that tests rot; tests often have a shelf life. Eventually some subset of the tests starts to die, usually from some combination of dependency issues, changed API expectations, security updates, account and credential expirations, machine endpoint and state updates, and so on. Because their results no longer indicate the correctness of the program, and the marginal business value of fixing one individual broken test is very low, they either get shut off entirely or are "forced to pass" even when they ought to be errors.

Repeat for a decade or two and there quickly start being “the tests we actually trust”, and “the tests we’re too busy to actually fix or clean up.”

Which tests are the good ones and which are the bad ones quickly becomes tribal knowledge that gets lost with job and role changes. At some point, the mass of "tests that are lying that they're working" and "tests we no longer care to figure out if they're telling the truth that they're failing" itself becomes accidentally load-bearing.


> And yet, test developers are paid half or less, test departments are relatively small...

Huh? Your first part seemed to be repeating TDD optimism, but then you switch to test departments. To make your claims consistent, I'd suggest you instead talk about tests being written by the programmers, kept with the code, and run automatically as part of the build.

However, I don't think even TDD done right can replace good design and good practices. Notably, even very simple specifications can't be replaced by tests: if f(s) is specified to return its string argument concatenated with itself, there's no obvious test treating f as a black box that verifies f is correct. Formal specifications matter, etc. You can spot-check, but if one magic wrong value screws you in some way, your tests won't show it.
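
A toy version of the magic-value problem: f is specified as f(s) == s + s, and the buggy implementation below passes any spot check that doesn't happen to include the one bad input.

    def f(s):
        if s == "xyzzy":                 # the one magic wrong value
            return s
        return s + s

    def test_f_spot_checks():
        for s in ["", "a", "hello", "abab"]:
            assert f(s) == s + s         # all pass, yet f violates its spec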

> there's no need for software archeology, the grizzled old veteran who knows every crack, the new wunderkind who can model complex systems in her brain, the comprehensive requirements documentation, or the tentative deploy systems that force user sub-populations to act as lab rats.

Wow, sure, all that stuff can be parodied but it's all a response to software being hard. And software is hard, sorry.


test developers? test departments? QA teams? It's worth mentioning that the vast majority of software orgs don't have the luxury of such distinctions, i.e. software teams are generally directly responsible for the quality of their own work and can't punt problems across the org chart.


Probably the most shocking revelation I've had in my time as a developer has been that testers are never paid the correct salary (some too high, most too low), and that if you have someone who is good at testing and becoming proficient at coding, their smartest move is not to become a full-time developer, but to skip right over developing and go into security consulting. Instead of making 75% more money by switching careers, you can make 150% more.

Software needs people with the suspicious minds of good testers, but security people make more money for the same skillset.


Chesterton's Test



