Hacker News
Software Tests Are for Millionaires (2020) (mrsteinberg.com)
72 points by jimhi on Feb 15, 2023 | 84 comments



I'm a huge proponent of testing. I also think the 80/20 rule applies here: you gain enormous benefits from just a few tests, and that benefit diminishes as you head towards the unrealistic goal of 100% coverage.

Either you're writing tests, or you're manually running the software (manual testing), or you're letting your users test your software in production. Any of those might be the correct option at some point. However, in my experience, shipping code that works, and finding and fixing bugs, takes considerably less time when at least some automated tests are available.


> Either you're writing tests or you're... manual testing

100% strong agree. One of the main reasons I write tests now is just to put the code to use without having to also write all the UI/front-end/usage code.

> unrealistic 100% coverage

I'm not entirely sure how I've managed it (I've got a few ideas), but for the last year or so I've been able to almost trivially reach 100% test coverage, and without a crazy ratio - in one recent project where I actually counted, it was 3 LOC of test code for every line of "product" code. Time spent was more like 1:1, although I wasn't counting and it's very blurry; testing manually wouldn't have taken all that much less time anyway.

Note that that's simple code execution - it doesn't include anything like "100% coverage of possible inputs" - and the LOC count includes pretty-formatting lines, etc. I'm at about 4x Rubocop's default maximums on function length, and I think about 1.5x its maximums on "complexity" metrics, for both the test and the product code.

Write testable code, and make sure your test code itself is good code, and everything gets better.

(At least, in Ruby)

Edit:

Part of how I got "here", with what (AFAIK) is trivial 100% coverage, was that at a prior garage startup we read Martin's "Clean Code" as a team, book-club style. Our CTO led it, and we skipped some chapters, since we were a Ruby shop and not everything was applicable. There are some really good principles in that book, no matter your language.


100% coverage means nothing. Testing every bit is easy if you ask me but coming up with edge cases takes so much time.


Most people agree that coming up with proper edge cases for unit tests is way harder than writing the original code. You're not just writing code that works; you need to anticipate in what ways it's going to break.

That means that engineers writing unit tests should be better than engineers writing code.


If the data structures that represent your edge cases are simple, maybe QuickCheck can generate your edge cases.

There are equivalent libraries in a lot of languages.
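For example, a minimal property-based sketch in Python with Hypothesis (one of those equivalent libraries); `chunk` here is a made-up function under test:

    from hypothesis import given, strategies as st

    def chunk(items, size):
        # hypothetical function under test
        return [items[i:i + size] for i in range(0, len(items), size)]

    @given(st.lists(st.integers()), st.integers(min_value=1, max_value=100))
    def test_chunk_round_trips(items, size):
        chunks = chunk(items, size)
        # flattening the chunks must give back the original list
        assert [x for c in chunks for x in c] == items
        # every chunk except possibly the last has the requested size
        assert all(len(c) == size for c in chunks[:-1])

The library generates the inputs (including nasty ones like the empty list) and shrinks any failing case down to a small counterexample.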


Hypothesis is a useful one in Python, particularly its stateful testing, which lets you describe actions, preconditions, and invariants. It then exercises the program by calling a random sequence of actions and seeing if everything holds as expected. Oh, through some sequence the "save file" option ends up not saving the file? Good to know: record that seed and track down the problem using the supplied steps, which you can now repeat automatically or manually to see where the problem occurred. Maybe, after doing some research, add another invariant and find it is breaking earlier in the sequence.

It's like having your QA/V&V/Test team doing exploratory testing but automated.
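A rough sketch of what that looks like (the file-editor model here is made up for illustration; Hypothesis drives the rules and checks the invariant after every step):

    from hypothesis import strategies as st
    from hypothesis.stateful import RuleBasedStateMachine, rule, invariant

    class FileEditorMachine(RuleBasedStateMachine):
        def __init__(self):
            super().__init__()
            self.buffer = ""
            self.saved = ""

        @rule(text=st.text())
        def type_text(self, text):
            self.buffer += text

        @rule()
        def save_file(self):
            self.saved = self.buffer

        @invariant()
        def saved_never_has_unseen_text(self):
            # the saved copy should never contain text the buffer doesn't
            assert self.buffer.startswith(self.saved)

    TestFileEditor = FileEditorMachine.TestCase

If an invariant ever fails, Hypothesis prints the minimal sequence of rule calls that reproduces it, which is the "record that seed and replay the steps" workflow described above.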


I was part of a team some time ago where we had 100% FE and BE test coverage: a good suite of E2E tests, functional tests and contract tests. In most cases it took longer to get the test coverage done than to write the code itself.

Overall it was tremendously valuable, as it becomes pretty trivial to refactor the code or even move to a new major version of a library that is core to the architecture. I cannot imagine working nowadays without a good test suite, as code needs to evolve and you want to sleep at night as well.

Still, you'll never cover all the complex business logic and interactions between the many, many different services in a large complex system; some of it can only be tested manually, by a person who knows and looks at the domain at a higher level.

Bugs still appeared in the RC and live environments once all the systems started acting together.

In my experience the 80/20 rule is a pretty good one to have. Getting to 100% meant running the code coverage analyser and looking at all the code paths not yet handled; in many cases these were paths that never caused us problems in live, and they weren't even crucial when doing major refactoring, as that was already covered by the 80%.

Looking back, at least for that specific project, I'd say having a little less coverage would have been much better and would have enabled us to move a tad bit faster and test new features in the field to validate whether they benefit the business or not.

Nowadays I am leaning towards having a pragmatic view on writing tests.


We've definitely got some test suites that slow us down, but...

...those suites would not pass review elsewhere in the codebase. As near as I can tell, most of the time that "our tests slow us down" is true, it's because the test code isn't held to the same standard (i.e., the standards are massively relaxed).

That all said, what I actually want to ask -

In my head, having a good test suite - particularly a BDD-style one, like Cucumber tests - means that it's easy to add tests to cover things uncovered by manual QA.

Have you found that to be the case? Or, have you found that it could be the case, if the test suites were different in some way?

> 80/20 [and not 100%]

Totes. I'm actually surprised that I've been able to hit 100%; lately it's been like 95% after I bang out the obvious tests, and then there's like one branch that's missing and it's easy to add. If/when it's hard to get that last bit, totes agree - don't.


I have not done Ruby, but I found a lot of my <100% stuff to be weird and unlikely errors - the logic-error, "I violated an invariant somewhere else" type of thing, or syscalls / standard library functions that could error out though one would never expect it. So say I `open()` a file and then immediately `fstat()` it - I don't ever expect the `fstat()` to fail, and I'll write the check, log, and failure/crash just in case it does come up, but rigging up a successful `open()` / failed `fstat()` for the test seems a bit much.
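(For what it's worth, if you do decide a branch like that is worth covering, the rigging usually amounts to patching the one call. A Python sketch, since that's easy to show - `read_size` is a made-up stand-in, not anyone's real code:

    import os
    import pytest
    from unittest import mock

    def read_size(path):
        # made-up example: open a file and immediately fstat it
        fd = os.open(path, os.O_RDONLY)
        try:
            return os.fstat(fd).st_size
        finally:
            os.close(fd)

    def test_fstat_failure_is_propagated(tmp_path):
        path = tmp_path / "f.txt"
        path.write_text("hello")
        # rig a successful open() followed by a failing fstat()
        with mock.patch("os.fstat", side_effect=OSError("stale fd")):
            with pytest.raises(OSError):
                read_size(str(path))

Whether that's worth the lines for a branch that should never fire is exactly the judgment call being described.)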


> Either you're writing tests or you're manually running the software (manual testing)

There's nothing wrong with manual testing. In fact, unless you have a good set of E2E tests (most companies don't), you still need manual testing.

At some point manual testing may become tedious, and it can be worthwhile to automate it to a certain degree, typically with a few integration tests.

But well-architected systems need very few tests to be reliable. Having a lot of tests is not something you should celebrate, it's a signal your architecture sucks.

And if you find yourself writing a lot of unit tests, please read this: https://rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste...


I used to have the same attitude toward unit tests for my solo projects… But a few years ago I was working on a library that had to do a lot of vector math, and I was doing SO much manual checking after every change. I wrote a bunch of tests so that I could see what was breaking when.

What I didn't expect is that the tests gave me so much confidence to improve, extend and refactor the code. Before it felt so fragile, like the whole thing could crumble if I don't quadruple check my work. With the tests in place, I changed the entire underlying data type from a struct to a native simd vector in an hour. Made the changes, fixed a couple of bugs, merged the branch, boom.

I don't strive for anything close to 100% coverage, but the handful of tests that cover my critical code have saved me so much time.
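The sort of thing I mean is tiny behavioural checks like these (a Python sketch; Vec2 is an illustrative stand-in, not my actual library). They pin down the observable math, so the representation underneath - struct, tuple, SIMD type - can change freely:

    import math

    class Vec2:
        def __init__(self, x, y):
            self.x, self.y = x, y

        def dot(self, other):
            return self.x * other.x + self.y * other.y

        def length(self):
            return math.sqrt(self.dot(self))

    def test_dot_product():
        assert Vec2(1, 2).dot(Vec2(3, 4)) == 11

    def test_length_of_3_4_triangle():
        assert math.isclose(Vec2(3, 4).length(), 5.0)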


"Confidence" is a good word for it.

When teams don't have good test suites (meaning mostly automated, if not entirely, and reasonably comprehensive), then you can watch as over time you get either increasing error rates or a massive slowdown in team performance. Even for small changes, having no tests, or primarily manual end-to-end/integration-style tests, means that you have little chance, unless it's a particularly good code base, of determining the actual effects statically (by analysis) or dynamically (because your tests will take too long and are likely mostly ad hoc, so you will miss edge cases).

Teams that don't care (or are pressured into not caring) will deliver crappier and crappier software over time. Teams that do care will eventually stop delivering software at all, or at least no more meaningful updates.

All for lack of confidence.


Unit tests can never give you "confidence".

> you can watch as over time you get either increasing error rates or a massive slow down in team performance

I hear this story repeated over and over and yet I consistently witness the opposite of what you're describing: teams start investing in unit testing and they noticeably slow down and ship more bugs at the same time.

What can give you confidence is simplicity. Being able to fit the entire system in your head. Being able to say what are the side effects of your changes. If you feel like you're losing those, it's time to take a step back and rethink your architecture. But of course, nobody has the balls to take a step back and think, to admit that you failed at something. It's so much easier to just slap a bunch of unit tests on your big ball of mud and hope for the best.


If you can fit the entire system in your head, it's probably a relatively simple codebase.

I've written a Photoshop file format parser twice, and trust me—between byte alignment changing constantly throughout the spec, at least three different string encodings, sub-formats and errors in the spec—it does not "fit in your head." Certainly not enough to confidently add features without worrying that you've broken the blend mode parser.

And of course, if you can and do fit a complex system in your head when no one else on the team can, congratulations: you've made yourself into a bottleneck and flight risk.


> If you can fit the entire system in your head, it's probably a relatively simple codebase.

Or it has good boundaries and I never have the need to load the whole system in my head.

You don't load the whole OS kernel into your head every time you call fopen(), do you?

> Photoshop file format parser

Would you say that PSD format is an example of good design? Is complexity inherent to the problem domain, or is it accidental?

You are forced to replicate complexity created by other engineers. Woe is you.

My bet is that Adobe also has a lot of unit tests, and contrary to what unit-testing advocates claim, it resulted in a terrible design you are dealing with.


My friend, half of software engineering is handling complexity created by other developers. If you're implementing a Mastodon client, you don't get to just refuse to deal with OAuth because its design offends you on a philosophical level.


But I get to refuse to turn OAuth into a cross-cutting concern in my application. You won't see refresh tokens referenced in the UI code. And I get to use a 3rd-party library that solves 90% of the pain for me.

I strongly disagree on the ratio. If you're building Mastodon client, most of your effort should be spent on features and UX, not on OAuth and the like. If half of your time is spent on those, either you're at the beginning of the project, or you're at the beginning of your career. Or maybe you just care about engineering itself more than the end product.


If you peek at that third party OAuth library, you will discover… it's complex. This is what I mean about pushing the complexity around on your plate. You can make it someone else's problem (and become dependent on them), but the complexity is still there. There is a minimum level of complexity that any OAuth system needs to have in order to function. QED


OAuth library is a perfect example of lousy engineers inventing complexity.

Here's a talk where Jack Diederich mentions it: https://youtu.be/o9pEzgHorH0?t=870

It is indeed complex: 200+ classes. Plenty of unit tests and mocks.

And then there's his implementation:

- 0% unit test coverage.

- Lightweight at less than 200 lines, including blanks and docstrings.

QED.


I didn't mention unit tests, so maybe you meant to respond to someone else? Or you're just shouting into the wind about something that wasn't said for fun?

I was writing about tests generally, in case you were wondering.


The comment you responded to says unit tests give confidence, you seem to agree.

"Tests generally" and unit tests are pretty much the opposite camps in testing world.


I agreed about the word "confidence" and the importance of testing for confidence. You read something into my comment that I did not write nor did I intend. Good for you, I guess.


This is a great example of unit tests, but I don't think it's comparable with OP's scenario. Math laws are 'fixed' and well known; you can just write fixed tests to find the discrepancy between the laws and your implementation. OP's point is about 'startup business logic', where the 'laws' are constantly changing, which means you will constantly throw out most of your tests along the way.


I agree with the sibling comment that says confidence is a good word, because it is.

I've done three or four major refactors on a FOSS project of mine. The refactors went without any hitch because of an almost excessive test suite.


Not having tests for complex things makes such constructs feel fragile.

If you're building simple things, tests can be overkill.

If you're building complex things, tests pay for themselves rather quickly.


If you're building complex things... most likely you should be building simple things.

Most software in the world implements business rules defined by people who are way less smart than your average engineer. And yet our software is more complicated than ever.


There's a difference between accidental and necessary complexity.

Not everything can be simple.

Mathematical algorithms are an example where there's necessary complexity and unit tests backstop defects when you refactor code.


And that's why I said "most likely".

That said, mathematical algorithms:

1. Rarely need refactoring. If you often refactor your math, you probably mixed it with something else (unless your business is math).

2. Rarely have state. You're always dealing with values, not entities.

3. Rarely have dependencies. There's nothing to mock, so the boundary between unit tests and integration tests is very blurry.

4. Usually have a formal oracle of correctness.

So yeah, math is a place where unit tests might be ok (if you truly have a lot of complex math).


^ THIS, 100%. Some SW engineers truly believe that complexity is the enemy, and that it can always be eliminated. Sometimes it can, but other times, you can watch developers push complexity around on their plate to avoid eating it.

They move logic to a different library, break it into 10 libraries, introduce layers of abstraction, try increasingly complicated View/Model architectures… And ultimately the code is no easier to understand or manage than when they started.


But complexity is the enemy. Where it can't be eliminated, it can be contained. That's what software engineering is about.

It is nearly impossible to find a complex domain that doesn't naturally break into smaller subdomains.

That's because "domain" is a human concept. The ones that are too complex and don't have natural boundaries within them just don't stick.

There are two main reasons software becomes complicated:

1. It is over-engineered at the beginning of its lifecycle.

2. It is under-engineered in the middle of its lifecycle.

Interestingly, unit tests contribute to both of these.

For an engineer it's always hard to admit that your system is complex because you failed at engineering. It's much easier to claim that you're working in a complex domain: you get to shift blame and boost your ego at the same time.


I get it, and this is why a lot of SW engineers think they could solve any systemic issue in the world by breaking it down into sub-problems. Unfortunately it isn't true.

I'll give you a concrete example: AML (anti money laundering). If you build a fintech with zero dependencies, you will need to deal with this. It's not manufactured complexity—it's an authentically complex problem which stems from 1) the actual law and 2) the risk profile the business is willing to assume. Now, you can come up with clever ways to make the rules of your AML policy composable and easy to change, but the reality is that you will immediately hit an ocean of edge cases that require ever more nuanced mitigations. You can keep subdividing and decomposing the problem, but you will end up with an architecture diagram which is (wait for it) complex. So yeah, you shouldn't over-engineer before you launch, but eventually your AML system will take on a certain amount of unavoidable complexity.

Now try to do all of the above WITHOUT unit tests, and see how far you get.


I fail to see how unit tests are going to help. If rules are independent, overall complexity is low, even if there are thousands of rules. If rules are dependent, unit tests are useless.

> introduce layers of abstraction, try increasingly complicated View/Model architectures

> you can come up with clever ways to make the rules of your AML policy composable and easy to change

These are the exact design decisions that are typically accompanied by unit tests.


Those are the design decisions accompanied by developers who think complexity is the enemy and seek to "contain" it at all costs—even when it adds hierarchical/organizational complexity.

And yes, if we're being pedantic, an AML system would need unit and integration tests—but the question is whether or not tests as a whole are useful. So unless you're arguing that an AML system can be made so simple that no tests are required and it "fits in your head," let's agree that testing has its place.


> Those are the design decisions accompanied by developers who think complexity is the enemy and seek to "contain" it at all costs—even when it adds hierarchical/organizational complexity.

Not in my experience. Overengineering and unit-testing always come hand in hand, likely because both are the "enterprise" way of doing things.

> the question is whether or not tests as a whole are useful

Huh? Nobody ever asked this question.

Questions I'm always debating are:

1. Whether unit tests are useful (no in 99% of cases)

2. Whether more tests = better (no)

3. Whether you should invest in automated testing before manual becomes painful (no)

> unless you're arguing that an AML system can be made so simple that no tests are required and it "fits in your head,"

Some tests will be required. But yes, it should fit in your head, otherwise you cannot reason about it. Building an AML system that you can't reason about is a road to failure.

It doesn't mean that you need to load the implementation of every single rule into your head before you make any change. But you do need to fit, for example, the idea of a rule engine, and to enforce constraints on how rules work together. If you build rules that can do anything with no constraints (depend on each other, override each other), then you're screwed. You're going to be playing whack-a-mole with your tests, and while it will feel productive, it's a death sentence.


I'm surprised people still think tests _slow them down_. You do tests _all the time_ when you write software. It's a matter of making said tests repeatable.


You probably repeat a large subset of them manually anyway. Automate them and save time.

For instance, how many times did someone from Rovio manually shoot an Angry Bird, instead of code doing it?


Is it really that hard? Write tests where you think they add value.

Anyone who wants 0% or 100% is nuts and you can ignore them. Either they're noobs, or they work at big tech and want to get promoted.

Money is made somewhere in the middle.


I appreciate the polemical framing -- not many people are comfortable taking the "wrong" side of this issue publicly and it's easy to preach purity when you have a nine-to-five.

I think, though, that even if you grant that speed is more important than all else, and even if you are willing to accept a 200% interest rate on tech debt to get your releases out, you still want unit tests from day one.

Done right, writing unit tests in parallel with your code speeds things up rather than slowing them down. Personally, I find this is the case even for tiny projects like a Project Euler problem, unless they can be done in under about an hour.


Personally, I have found that unit tests speed you up a lot, while integration tests often slow you down. In other words, if you're writing a "mock_[X]" class/function, it's probably slower than just testing with X.


Typically people refer to those by the opposite naming convention - unit tests just test one thing and mock any dependencies, while integration tests integrate multiple things and generally involve less mocking
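Roughly, by that convention (a Python sketch; PriceService and RatesClient are made-up names), the unit test mocks the collaborator and the integration test wires in the real one:

    import math
    from unittest import mock

    class RatesClient:
        # stand-in for the real dependency (imagine an HTTP call here)
        RATES = {"EUR": 1.1, "GBP": 1.3}

        def usd_rate(self, currency):
            return self.RATES[currency]

    class PriceService:
        def __init__(self, rates):
            self.rates = rates

        def in_usd(self, amount, currency):
            return amount * self.rates.usd_rate(currency)

    def test_in_usd_unit():
        # "unit" style: the dependency is mocked away
        rates = mock.Mock()
        rates.usd_rate.return_value = 1.1
        assert math.isclose(PriceService(rates).in_usd(10, "EUR"), 11.0)
        rates.usd_rate.assert_called_once_with("EUR")

    def test_in_usd_integration():
        # "integration" style: the real collaborator is wired in
        assert math.isclose(PriceService(RatesClient()).in_usd(10, "EUR"), 11.0)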


Disappointed to see that most comments here missed the point in the article.

All these comments claim that automated tests make their code stable, fast, easy to refactor, etc.

But the article is not talking about those characteristics of the code; it's about pivoting fast and making money. Does your stable, easy-to-refactor code directly make you good money? If yes, then please talk more about it - that is the correct response to the article. Please don't just claim your code is stable and easy to refactor while it has nothing to do with making money.


I don't want to maximize money up the food chain. I want my job to not be a pain in the ass so I don't have to work so hard. That effectively makes me money.


That's alright and a very valid point; I too would appreciate it if I were working for an employer.

But the OP's context is creating your own startup, where things work very differently. That's the reason I feel disappointed: people are not thinking and replying in the OP's context but in their own, which makes their points pointless, including yours, I'm afraid.


I've founded my own startup and even then I wish I had spent more time on good code. I wasn't alone in it, and definitely got pushed away from that by cofounders. But the startup lasted a long time and it would've been worth it.


I don't think I've ever seen a harder missing of a point than this one - the point of writing automated tests is to actually test scenarios that are hard to manually test. Don't be hesitant to delete tests that are no longer useful, but if you're not writing automated tests, you're not testing ENORMOUS swaths of your functionality.


It says "hard to manually test" which obviously includes easy test cases that you don't want to spend time on automating.


Yikes hard no on this. I see the point it's trying to make, but, anecdotally, for me, it's very much the case that:

1. Writing tests is a forcing function on writing better code.

2. Better code is significantly more maintainable and adaptable.

3. Tests are a cheap way to invoke your code to just simply see it run. Definitely cheaper than you clicking/tapping around your product.

4. Test suites are an extremely effective way to coordinate work with (at least) yourself. Pending and failing tests make for a great TODO list.

It is absolutely possible - and easy - to write tests that get in the way. It's also easy (once you find The Way) to write tests that are a force multiplier. It is a combination of writing both good code and good test code.


Agree.

Testable code is inherently more decoupled and easier to swap out and toss away.

Exactly what you want as a startup.


> Testable code is inherently more decoupled

I.e. more abstract with more layers.

Exactly the opposite what you want as a startup.

As a startup, your pivot is not going to be switching from MySQL to Postgres. Your pivot is going to be switching from consumer app to business APIs.


I used to think like the author, until I realized the road to $1M was paved in forward progress, and not constantly fixing regressions.


Automated tests are not a solution to constant regressions. They are a solution to lots of manual tests.

If your team ships lots of regressions, it means your team prefers regressions over manual testing. It's a culture problem. You can't fix it with tests, only hide it temporarily.


We did.


>Software Tests Are for Millionaires

Sounds good to me. If you're already a millionaire you can afford it; and if you want to become a millionaire through software, how else is it going to be made reliable?


I largely agree with the sentiment of the article.

For startups, feature tests - and mostly happy paths.

Integration tests for anything super complex that you cannot reason in your head easily.

The feature test catches everything breaking the happy path.

If it breaks, write a spec. If you find yourself manually retesting things, SURE add a test!

But 100% agreed, dogmatic TDD will kill your startup.


Somehow I agree with the author.

In the last decade I’ve known and worked with dozens of one-man tech startup founders, they basically fall into two categories, the ‘product’ group and the ‘engineer’ group.

The ‘product’ guys tend to use boring technology like PHP or Python to duct tape their product and ship it ASAP and pivot very quickly if things didn’t work out, they NEVER write a single test.

The ‘engineer’ guys tend to use bleeding-edge technology like meteor/nextjs/svelte/cypress/terraform/k8s/nomad/netlify/cloudfunction/(insert a hundred tech here)/… and polish their product a lot before launching, including writing a lot of tests.

To be honest, the ‘product’ group are way more successful financially than the ‘engineer’ group in my small sample dataset.

I’m in the ‘engineer’ group myself, and I tend to polish a lot. Lately I’ve been thinking about the two different mindsets, and I’ve come to the same conclusion as the OP’s article: code quality and maintainability are great ideas, but unfortunately *for a startup* they are not directly related to making money, at least not linearly.

A startup is about exploring undiscovered market opportunities, full of uncertainties and assumptions. Your job is to find out whether the assumptions are true or false, not to write the most stable and easy-to-refactor code.

Guess what? Tests are about certainty, about known rules. In week 1 you write lots of tests to cover 80% of your business logic, and in week 2 you find out your original idea does not work and you need to change course, so all the written tests are garbage now. Then in week 3 the same thing happens again… and all your precious time is wasted chasing the wrong target.

One thing I disagree with, though: you don’t need to be literally rich to begin writing tests. As soon as your business logic and model are proven - positive cash flow, or promising user growth - that’s the right time to write some tests, to improve code quality and your confidence in future refactoring.


Absolutely not.

I'm no millionaire. I wrote tests. A lot of them.

I have a project that ships with FreeBSD, Mac OSX, and several Linux distros.

I get probably one bug report every three months on average. Why? Because I test the snot out of that project.

Writing tests means that this FOSS project of mine does not consume my life like so many can.

I'll spend the time upfront. Saves gobs of time later.

Oh, and time is money, so maybe this philosophy will actually help me become a millionaire.


> Claiming everything must be polished and tested

Polished, perfect software with 100% coverage is something that you may not want at the beginning of a business, I can agree, because of opportunity costs.

But that’s quite different from zero tests. 0% coverage will impact an MVP or even a simple POC sometimes, because software is “never done”, and requires constant change.


I first read "Software Tests Are for Millennials" and had to giggle. And then I thought about it, and wonder if there is a point to be made.

Anyways… Yes, tests are expensive. However, once the first million pours in, you are busy ensuring that more millions come in by building feature after feature. And a solid foundation will help you here.


> Yes, tests are expensive.

Software is expensive. But do tests increase the cost?

In my experience, adding tests to a legacy project not designed for testing is never worth the cost, but a new project designed with testing in mind can be developed significantly quicker, with tests providing valuable data about the work in progress.


> Yes, tests are expensive

Yeah, but so is manual testing. You can save a fortune by eliminating the entire QA team!


I know this is partially a joke, but for those who may be taking it seriously:

The QA team's main value shouldn't be just repeating the testing that the dev team should do in unit testing. QA's value should be to do exploratory testing. This involves trying to break things in as many ways as possible and finding things that the dev team didn't think of.

Is there a text field? Try putting Chinese characters and emoji in it to see what happens. Try to add null characters. Try adding the entire text of "War and Peace".

Can I upload files? See what happens when I try to upload a 1TB file, or millions of tiny files.

Should the dev team be writing unit tests to handle these cases? Sure, but it helps to have someone who is explicitly trying to find all the cases the dev team missed.
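And once QA does find one of those, it can be folded back into the automated suite so it stays fixed. A quick sketch (Python; normalize_username is a made-up function under test):

    import pytest

    def normalize_username(raw):
        # hypothetical product code: trim whitespace, reject empty/control chars
        cleaned = raw.strip()
        if not cleaned or any(ord(c) < 32 for c in cleaned):
            raise ValueError("invalid username")
        return cleaned

    @pytest.mark.parametrize("nasty", [
        "汉字名字",          # non-Latin scripts
        "😀😀😀",            # emoji
        "name\x00name",      # embedded null character
        " " * 10_000,        # pathological amounts of whitespace
    ])
    def test_hostile_inputs_dont_crash(nasty):
        try:
            normalize_username(nasty)
        except ValueError:
            pass  # rejecting is fine; crashing or corrupting state is not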


> once the first million pours in

That was the point of the article: it’s saying it’s harder to get to that "once" if you write "sufficient" tests.


People claiming they can't write tests is often an excuse for configuration issues, I think: it's harder to configure testing correctly than to do the testing itself. Writing tests should be the simplest thing to do.

If you're onboarding onto a project with a well-set-up test harness, it's hard not to follow TDD.


In another thread just today was the question "what's the best advice you ever received about anything?"

"If you don't think you have time to do it right, what makes you think you have time to do it over?"


It's apparently this guy:

https://www.forbes.com/profile/james-steinberg/?sh=211dfcf5b...

And he can build businesses without worrying about testing. Except slow, manual testing end-to-end. Cool for him. Obviously, based on comments here, not everyone agrees.

I'm a test dev. I live my work-life in this stuff: figuring out how to efficiently, effectively give non-test devs the best feedback at the right times and in the right ways to speed them up. They are individual flames and I am gasoline.

Many companies start like Steinberg. And many of those automate those exact manual steps through the UI, testing everything end-to-end. That's where it stops being cool and starts being a really stupid waste of time and money. E2E tests are the least effective, slowest, flakiest way to test; they're often done really inefficiently, and I see all kinds of companies invest ridiculous sums in them. Most of those efforts slow down development. Boo!

More power to Steinberg for cutting waste and doing only the testing he needs. When your next company has millions, look me up. I'm a tester. I solve business problems.


This is a stupid article. Writing tests makes me code faster AND prevents bugs. Maybe this is true if you hire junior coders?


What good is velocity when the final output is a mass of bad code and broken coders? Well, it's good for _somebody_, certainly not the poor soul who was forced to mechanical turk forevermore.


With the tools of today, writing automated tests is a very fast process especially with the help of Github Copilot and similar. Running the tests is also fast and you can alter the code while debugging the tests in some langs making it a fast process. I use it for POCs all the time.

At the end of the day, automated testing is a tool among others. Use it when it makes sense. Just don't fall into the trap of maintaining large sets of more or less useless tests. Delete them.


The startup story is a cliché now for a reason. The guy who throws up a 3-day MVP every week for a year gets rich. Rovio made 51 failed games before they got billions from Angry Birds.

This is a sample size of 1. Buggy projects can lower trust, cost users. Getting users costs money so it's cheaper to make a good first impression. I always test thoroughly.


For small projects and startups that are still figuring things out, tests are useful to prevent regressions, and I would spend my energy doing exactly that.

Code coverage should only be an afterthought. Whenever a bug shows up, writing a test to cover that edge case is going to help you ship with more confidence as more bugs are found and fixed.


* Regulated industry walks into the bar *


Yeah, tests are less expensive than a regulatory body shutting you down or a wrongful death lawsuit.


This article is insight free. It’s essentially saying that being responsible and caring is for suckers. Of course that is true. So what? I am a sucker because I want to do good work instead of sticking it to my clients and users. Yep.

Testing is about learning what your wishful and arrogant ambitions have produced. Of course you can shrug and say you’d rather gamble and get rich before your users and investors find out that your product has terrible performance or security or functionality problems. This is not news.

I can make more money by mistreating my wife and family, and cheating on my taxes, too. I don’t do this because I have self-respect, and the respect of others matters to me.

Companies never test more than they think they need to. There is no pandemic of over-testing. This article is a waste of good disk space.


Half the time I can't even get the bastard thing written without tests


I’d love to see this attitude in a fintech application.

You will lose your users very quickly, not to mention potential fines from the SEC or <insert_other_regulatory_body>.


I mean I get that it’s supposed to be a provocative statement, but it really is just a very shallow, false dichotomy. If writing tests slows you down -this much- then yes, you’re spending too much time writing tests. But they often save you time, both as a tool for modeling your thinking about the software (being explicit about how it should work) and also in, yes, testing it.

Here’s my provocation: If you don’t know how to tell the difference and make the frankly boring judgement call of whether or not you’re wasting time writing some tests, then maybe don’t work on software. “Pivot”, as they say.


Usually if I don’t write a test for the core functionality it just means I’m testing in prod and end up wasting 3x more time dealing with the fire


You may skip writing tests when building a prototype or experimenting with new ideas. Otherwise always write tests. I think it really is that easy.


Maybe he's hiring and just looking to see who disagrees and articulates why. Ok no but it was fun to imagine.


“Software tests are for Millionaires” Yep! So if you want to be a millionaire, write tests. :)


To make a lot of money as a startup, you have to leverage debt.

Weak code or weak testing are just a loan that you have to repay later (or never if the product shuts down or gets replaced by another one).


Just remember that there is nothing more permanent than the prototype.


I mean I guess it's one way to close your tickets haha.


For this argument to follow, you'd have to know that Angry Birds did not use tests.

I know I would have gotten tired of manually launching Angry Birds before getting the physics right.



