Developers don't write tests if writing tests is hard. Simple as that. If writing tests is hard because you never invested in setting up good test infrastructure with helpful utilities, you fucked up. If writing tests is hard because your architecture is a cluster fuck of mixed responsibilities, you fucked up.
This is where good tech leadership matters. Leadership must push back on product to make room to build out test infrastructure. Otherwise you'll see individual engineers who do the right thing get punished for it, because they aren't delivering tickets.
Word. I work for a reasonably large online retailer that fancies itself a "tech company" and you'd be amazed at the pushback I kept getting for insisting on a proper test infrastructure for their mobile apps - dependency injection for mock API endpoints with captured data, fuzzing to capture UI weaknesses and the like. But no, they need to go fast because... well, because their product/market fit isn't that stable and they're using "data driven" decision making to justify chasing one rabbit after another to try and juice the stock price.
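For what it's worth, the setup I was pushing for isn't exotic. Here's a minimal Python sketch of the idea, where `ApiClient`, `CapturedTransport`, and the fixture layout are all names I'm making up for illustration:

```python
import json
from typing import Protocol


class Transport(Protocol):
    def get(self, path: str) -> dict: ...


class HttpTransport:
    """Real transport: talks to the live API (details omitted here)."""
    def get(self, path: str) -> dict:
        raise NotImplementedError("real HTTP call goes here")


class CapturedTransport:
    """Replays responses previously captured from the real API."""
    def __init__(self, fixture_path: str):
        with open(fixture_path) as f:
            self.responses = json.load(f)  # {path: response_body}

    def get(self, path: str) -> dict:
        return self.responses[path]


class ApiClient:
    """The transport is injected, so tests can swap in captured data."""
    def __init__(self, transport: Transport):
        self.transport = transport

    def fetch_cart(self, user_id: str) -> dict:
        return self.transport.get(f"/carts/{user_id}")
```

The only discipline it demands is that nothing constructs its own HTTP client, which is exactly the kind of architectural investment leadership has to make room for.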
It's also really hard if you don't do it right from day 1. You'll be spending substantial developer time to put things in place and achieve minimal coverage, which means the positive results might not materialize until much later. Worse, the positive result might just be slowing down the rot, and you'll never be credited by much of the org for things not having gotten worse. Even worse, if you get replaced by a yes-man who stops enforcing testing and cleanup, a temporary "boost" will happen. It's all totally broken, and it's honestly burning me out on this industry.
Some things cannot be tested. I work in distributed systems a lot, and while you can unit/integration test simple functionality, there is nothing that can tell you how your system will behave in prod outside of just trying it out.
I find having really good metrics and a tight development cycle allows for quickly iterating on distributed systems problems. Obviously the best situation is to have all of the above: unit tests, integration tests, and a tight development cycle in prod.
If I had to pick one because I am time constrained, I would choose testing in prod with good metrics, which is maybe what the article is getting at.
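To make "good metrics" concrete, this is the shape of it. The `metrics` counter below stands in for whatever statsd/Prometheus client you actually run, and `replicate` is a made-up example:

```python
import time
from collections import Counter

metrics = Counter()  # stand-in for a real statsd/Prometheus client


def replicate(entry, replicas):
    """Write an entry to every replica, recording what actually happens
    instead of assuming a test environment could have predicted it."""
    start = time.monotonic()
    for replica in replicas:
        try:
            replica.write(entry)
            metrics["replica.write.ok"] += 1
        except TimeoutError:
            metrics["replica.write.timeout"] += 1
    # A real client would use a histogram; a counter keeps the sketch short.
    metrics["replicate.total_ms"] += int((time.monotonic() - start) * 1000)
```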
You can get quite close to reality with really good integration tests. There are all sorts of real-life scenarios I've hit, like flaky networks, email appearance in various clients, etc., that can be integration tested with a bit of creativity, but most people wouldn't even think to do it.
The investment in this stuff can be quite high... unless you've got premade tooling for all of this that you can just drop in.
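For the flaky-network case above, the cost can be one small wrapper once you own the transport boundary. A rough Python sketch; `RealTransport`, `Client`, and `max_retries` are invented names:

```python
import random


class FlakyTransport:
    """Wraps the real transport and injects failures, so integration
    tests exercise the retry/timeout paths deterministically."""
    def __init__(self, inner, failure_rate=0.3, seed=42):
        self.inner = inner
        self.failure_rate = failure_rate
        self.rng = random.Random(seed)  # seeded, so failures reproduce

    def send(self, request):
        if self.rng.random() < self.failure_rate:
            raise ConnectionError("injected network failure")
        return self.inner.send(request)


def test_client_survives_flaky_network():
    transport = FlakyTransport(RealTransport(), failure_rate=0.3)
    client = Client(transport, max_retries=5)  # hypothetical client under test
    assert client.send("ping") == "pong"
```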
Yes, unless you are doing a clean-room implementation of Paxos or Raft, it probably isn't worth the effort to create harnesses to simulate packet loss, thundering herds, split brain, out-of-order responses, etc. Even then, if you are writing distributed synchronization primitives, you might be better off with formal proofs than some sort of test harness.
To give an example: you can unit test your OBD response parser all you like; at some point you have to actually get in the car and see what gibberish the OBD adapter spews at your app, what timeouts it needs, how reliable it is, etc.
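The unit-testable half really is easy, and that's the point. Something like this (a hypothetical parser for the mode 01 PID 0C engine-RPM response) passes all day and still tells you nothing about what the adapter will actually do:

```python
def parse_rpm(response: bytes) -> int:
    """Parse an OBD-II mode 01 PID 0C (engine RPM) response.
    Real adapters pad, echo, and interleave this with garbage."""
    parts = response.decode("ascii").split()
    assert parts[:2] == ["41", "0C"], "unexpected PID in response"
    a, b = int(parts[2], 16), int(parts[3], 16)
    return (256 * a + b) // 4  # standard OBD-II RPM formula


def test_parse_rpm_happy_path():
    assert parse_rpm(b"41 0C 1A F8") == 1726
```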
Writing tests isn't really hard, even if you have to mock everything each time because you can't duplicate production. It's long and boring.
If I have to ballpark an estimate on a tool/project, I say upfront something like: it'll take me 6 days plus a day for very limited tests, so 7. Twice as much with almost complete coverage. But each time we have to improve/refactor, a bit of that time will be recovered.
Basically I ask how important the new project will be, without asking that directly (because 99% of the time the response is 'very').
I'm realizing I just said I was managing my managers. Should I ask for a senior devops position now? :)
I was unclear, I apologize. The 'quick testing' isn't really testing IMHO (basically no unit tests, just small integration tests with data I could pull from the prod DB). It takes around 20% of the time (or 1/6 in my example, taken from my last project). The 80% coverage that includes mocking and unit tests doubles that time.
I am always extremely clear that my estimates are just that, estimates, and that complete testing doubles dev time. Most of the time I'm told 'we don't care right now', but on some projects management accepts longer dev time for more stability and fewer bugs (we build internal tools).
This is absolutely it. I've worked on multiple projects simultaneously where I wrote great tests on one project and few to no tests on the other. It wasn't the developer that changed, it was that one codebase was designed with testing in mind, while the other was not and was therefore extremely cumbersome to write tests for. That also meant that in the second codebase whole UI flows had no tests, so if I made one tweak to the UI flow I wasn't going to spend a week figuring out how to test it and everything around it.
The worst thing is how often developers almost fatalistically accept that testing sucks. I don't blame them, though, because improving the test infrastructure has little short-term business value, or so they say.
You could spend your entire life improving test infrastructure. There's clearly a cut off point where the investment stops making sense but it's hard to know when.
The investment calculation is quite complex and many of the variables require guesses. A lot of returns on automation work are not positive.
The irony is that it absolutely does have business value. Unfortunately, it's easier to quantify the problems you've had on prod than the problems you've prevented on prod, so people tend to measure against the former.
Maybe not perfectly framed, but the point that engineers don't have time to write tests is salient. Minor disagreement with this point though:
> Tests can only tell developers they made a mistake. There is no gain at that moment.
The value is locating the problem code faster, and that generally can't be done efficiently in production. Outside a testing environment, you rely on logs alone to debug, or if you're more proactive you might grab session data but that means potentially sensitive user data is being cached and passed around to the support/triage team. At a security minded organization this should be impossible or at least difficult.
I agree it's not worth chasing Test Driven Development until you're actually testing during development.
This is a real problem, and it falls squarely on management. Testing needs time and resources, it needs priority and planning.
That's the reason I started speaking at conferences, started an online testing training course specifically for software managers, and am currently writing a book on the subject.
> potentially sensitive user data is being cached and passed around to the support/triage team
An excellent point for most cases, though one that ultimately cannot be satisfied for others. For instance, could the recent HTTP/2 Rapid Reset vulnerability have been solved without shared PCAPs?
>> Tests can only tell developers they made a mistake. There is no gain at that moment.
This is the dumbest thing I have seen on HN in a long, long time. Full test coverage supported by good mocks is one of the quickest and most efficient ways to find regressions that I know. Don't discount the value in helping developers see when they've made mistakes, especially when you're rotating junior and mid-level developers on and off your team.
When we have business logic changes or a bug fix, I require our developers to write test cases for the new behavior and commit that. I want to see those tests fail in the CI system. THEN they can implement the changes until the tests pass.
When reviewing, the first thing I do is look at the tests. That is how I make sure developers understood the new requirements. Only after verifying that do I do a code review to critique the implementation.
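In pytest terms, the flow looks something like this (the discount rule and both function names are fabricated for illustration):

```python
# Step 1: commit this test for the new rule and watch CI go red.
def test_discount_never_exceeds_50_percent():
    # Hypothetical business rule: discounts are capped at 50%.
    assert discounted_price(price=100.0, percent=80) == 50.0


# Step 2: only then touch the implementation until CI goes green.
def discounted_price(price: float, percent: float) -> float:
    return price - price * min(percent, 50) / 100
```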
I found that a suite of useful tests speeds up my development by a lot.
Tests that test the services doing the interesting things in my code, that is. I find that I can get 90% of my work done without ever firing up a web browser. It's cool to know that by the time you hit the UI the code already works perfectly, and I spent less time coding than I would have otherwise.
So idk what this article is pointing at. Tests for tests' sake probably are bad for developers, but a big chunk of unit testing definitely isn't.
I don't think you're getting the point (you might be in a small, or in a great team).
Tests are great for the codebase, for stability, for onboarding. But for devs under management/exec pressure, and especially if the team is 'agile', tests will slow your sprint, surface edge cases or bugs you never put in your cards, and ultimately 'slow down' initial development; if your company is especially bad, it'll show up in your performance reviews. If you let the edge case/bug happen in prod, it'll take more time to correct than if you'd caught it earlier, and you might have to modify a loop or a data structure and introduce more bugs. But at least the software was delivered on time, and if you're a contractor you'll get paid for the time spent correcting those bugs.
The central hypothesis is that developers don't write tests because they prefer bugs popping up in production instead of during development, and that seems quite absurd. I've been on many dev teams in many companies, but I haven't seen incentives that messed up.
One reason this doesn't make much sense is that it requires hoping the bugs get past QA. If QA finds those bugs and sends them back to dev, the devs still have to fix them, plus pay the very significant time overhead of the round trip, which doesn't make it "worth it".
I mean I've seen it happen with unrealistic stakeholders. Business forces the team to make a false commitment to a specific ship date with a specific set of features. It's obvious that the team can't hit that mark but telling the stakeholders that will just result in them refusing to budge and telling you that you need to come up with a plan to make it happen.
Eventually it's easier to just agree with them and do your best than waste even more time arguing with them. In the end you fail to ship, or you ship a buggy mess. The software engineers know the code quality is shit, but they don't have time to think or set anything up.
But it's always "we don't write tests because there's no time" and not "we don't write tests since they might reveal bugs, and we don't want that" as the article claims.
So they lie in their estimates because they're afraid to miss their chance to deliver a buggy mess. But doesn't that do more harm to their reputation in the long term?
The QA model you're describing isn't universal. In most of my roles there hasn't been any sort of separate QA team that stands between the dev team's code and production. So if you have a small enough set of customers that you can be confident they won't hit certain categories of bugs, or won't care too much if they do... it's tempting, and I've seen it done on occasion.
Yes, that's somewhat common, especially in smaller companies. But the article's thesis still doesn't hold up, since in that case it's the dev team that gets the blame for prod bugs. The dev team never prefers bugs to pop up in prod rather than during dev/QA. (Accepting low-impact bugs is something different, which IMHO the article did not write about.)
Isn't accepting low impact bugs exactly what you accomplish by skimping on testing? "I know there may be some issues lurking here, but the blame for customers finding them will be less than the blame for missing this deadline, so I'm not going to go looking for them." I've seen multiple teams where the incentives are structured that way.
> When found during development, developers need to write a fix; the time spent counts against “development time.” They are blamed for missed deadlines (Newspeak “Sprint and Goals”). They are asked why everything takes so long. Ironically, they are blamed for creating high-quality code. Sad, I know.
Hahaha, I have been penalized for going steady enough times that I reached a point where I have no incentive to make services any better. I found it better to sweep issues under the rug and switch teams before it blows up.
It pains me but I am not going to care about it if I am going to be penalized for it.
You have to start with tests first -- just make them "the task".
Don't give estimates that separate tests from features.
New feature at hand? The first thing you code is the scaffolding to call or trigger the new feature (this may be trivial on some projects, or impossible on others).
So long as you start out that way, when asked "how long?", the tail end of the estimate always contains the feature itself, and they can't say "cut it".
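In practice the scaffolding can be as thin as this (a made-up feature; the entry point doesn't exist yet, which is the point):

```python
# Day one: the trigger for the new feature exists before the feature does.
def test_export_report_as_csv():
    result = export_report(report_id=7, fmt="csv")  # hypothetical entry point
    assert result.startswith("id,amount")


def export_report(report_id: int, fmt: str) -> str:
    raise NotImplementedError  # the feature itself is the tail end
```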
I worked at a place once where I was setting up a minimal CI/CD pipeline for the project -- I took the first few weeks to do this. In a meeting I was called out by another engineer who had longer tenure at the company: "Are you working on 'devops' stuff because you don't know how to do the primary objective?" I responded with a very long tirade on why all this other stuff IS the primary objective, and was never questioned again (by that same engineer, anyway). All this to say, part of being an engineer is also educating fellow engineers on ways of doing things better than the status quo. It can be hard, and you can put yourself at risk. But it's a choice you have to make for yourself: be the engineer who cuts corners to make mostly arbitrary deadlines, or the one who builds robust software. You don't have to go in 100%, but at least lean to one side or the other and make the better choice when it really matters.
Yeah +1 on not even talking to anyone in management about tests.
I've never talked about solving problems like that, always just talked about the time to fix the bug / implement the feature, and test were implicitly part of that. I've also never had a manager start micromanaging me over how many tests I was writing. That's always been my own judgment call to make.
That assumes you have the leeway to take longer. A lot of issues arise when there is an unforeseen deadline and you get just enough time to get the code out with some bugs, and can only add tests and refactor afterwards. Usually this happens when a critical component suddenly gets sunset and you need to put in a temporary workaround until you can do it right.
I am trying to convey that it's not about leeway; it's a yes or no. If you are asked "can you get this done in 2 days?" and you think "yes, but only if I skip tests", then your answer is "no, you can't get it done in 2 days". You are simply shipping unfinished work.
> So if tests are bad for developers, they won’t write tests, duh. Paradox solved. There need to be tests written, no exceptions, for some time to gain the benefits and make tests good for developers. Make tests work for them and they will write more.
This doesn't actually fix the incentive problem, though. If the reason developers don't like writing tests is that finding bugs puts them under additional pressure and isn't rewarded, requiring tests doesn't actually help them. Unless, of course, management comes to understand and accept that more time will be spent fixing bugs and improving code quality as a result.
"bug fix sprint" sounds like a horrible inefficiency. Is that the goal? Is the goal to constantly weasel more time out of your manager?
That's not my goal. I like delivering high quality stuff to users. I like 'moving the needle'. I like coming up with a plan to produce business value, and doing it. I like getting stuff done.
This article perpetuates the 'developer vs manager' concept which is a ridiculous mindset rooted in primitive human flaws. Fix your relationships with your colleagues and work together to make quality stuff. No deception. Only cooperation.
Sounds like the problem is engineers aren't accountable for quality. Rather than prescribing a solution, these leaders should make sure incentives are correct in their organization.
As an engineer I made myself responsible for quality for my whole career back when I had the freedom to guide my own work. Under agile/scrum I'm too unengaged to care what works in production and don't accept the idea of responsibility without authority and autonomy.
Agreed. Scrum, in most cases, robs engineers of the agency they need to deliver maximum value. My teams did better work without it under the condition that the talent was motivated, technically proficient, and had the right incentives.
> Sounds like the problem is engineers aren't accountable for quality. Rather than prescribing a solution, these leaders should make sure incentives are correct in their organization.
A low effort management practice is to make engineers move fast at the expense of quality.
An even lower effort practice is to then turn around and hold engineers accountable for the choices made by management in the first place.
The obvious needle to thread here is to not "make" engineers do anything, but to hold them accountable for the results the business needs to see. The best teams are composed of empowered, accountable engineers who have the flexibility to do what they're paid to do.
We have an alternative path that works for us: get a terrible, ugly, but functional version of the feature up as quickly as possible. Preferably, the uglier the better. Skip the tests for now. Put it behind a feature flag or some other way to "get it into production" without any real users able to see it.
This lets you try the feature out end-to-end. Click buttons that call APIs and see if the right actions occur. Fix all the stuff that breaks when you first try this out.
Then show it to a friendly. Product manager maybe, engineering manager, another developer... whatever. Someone who understands you're just looking for someone to try out this feature with you.
It works now? Great. Put unit and integration tests around it. Make sure that this happy thing you have running won't accidentally break. Now make it pretty. Give it the design product actually asked for. They'll have feedback, which you can now incorporate safely because you have tests.
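The flag gate itself is trivial. A bare-bones sketch, where the flag store, `new_checkout`, and both checkout functions are stand-ins for whatever you actually use:

```python
FLAGS = {"new_checkout": {"enabled_for": {"dev-team", "qa"}}}  # stand-in store


def is_enabled(flag: str, group: str) -> bool:
    return group in FLAGS.get(flag, {}).get("enabled_for", set())


def checkout(cart, user):
    if is_enabled("new_checkout", user.group):
        return ugly_new_checkout(cart, user)  # live in prod, invisible to users
    return legacy_checkout(cart, user)        # everyone else sees this
```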
> Tests are bad for developers. That is why. They can only detect bugs. Tests can only tell developers they made a mistake. There is no gain at that moment.
When test-driven development is applicable, it's so damn nice. If for no other reason than that it requires so many fewer keystrokes and clicks: just have a process watching for changes and rerunning the test suite. Less chance of developing RSI; it's good for developers.
I added the caveat “when applicable” because for straight-up UI behaviors it’s not, but the longer I write code, the more I see ways to separate out pure logic from UI and other external effects and I’ve never regretted doing it for reasons including: general reasoning, readability, TDD, future changes, and bug fixes.
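By "separate out pure logic" I mean something this mundane (an invented example); the pure half is what the watcher re-runs on every save:

```python
# Pure logic: no UI, no I/O. Trivially testable under a file watcher.
def next_page(current: int, total: int, direction: str) -> int:
    if direction == "next":
        return min(current + 1, total)
    if direction == "prev":
        return max(current - 1, 1)
    return current


def test_next_page_clamps_at_both_ends():
    assert next_page(1, 10, "prev") == 1
    assert next_page(10, 10, "next") == 10


# The UI layer shrinks to wiring, roughly:
#   on_click = lambda d: render(next_page(state.page, state.total, d))
```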
I worked on a project where we handled the results of an API call. For testing we mocked the various API call results. When we went live, we discovered they had changed the results of the API call. All the testing was worthless.
Still, I hate having bugs in my code and strive to make sure they don't happen. Even so, I have had managers release my code when I've told them there are active bugs being worked on.
I disagree. Unit tests, when done right, are heaps of ROI in my experience.
Most of the integration tests are for happy paths, i.e. ones where you can catch the obvious regressions. Testing corner cases is much more troublesome, because you have to set up a whole lot of specifics in multiple services. These are much easier to simulate with mocks.
Mind you, I hate unit tests for specific functions: I much prefer them as component tests, where you start with the inputs to the module and check the outputs and the calls outside of the module. These ease refactoring: you can actually change the underlying code and, without changing the unit tests, know that you've still handled all those pesky corner cases.
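Roughly: feed inputs at the module edge, assert on outputs and outbound calls, never on internals. A sketch with unittest.mock, where `handle_order` is an invented component:

```python
from unittest.mock import Mock


def handle_order(order, inventory, mailer):
    """The component under test; its internals are free to change."""
    if inventory.reserve(order["sku"], order["qty"]):
        mailer.send(order["email"], "confirmed")
        return "confirmed"
    mailer.send(order["email"], "backordered")
    return "backordered"


def test_backorder_corner_case():
    inventory = Mock()
    inventory.reserve.return_value = False  # the pesky corner case
    mailer = Mock()
    result = handle_order({"sku": "A1", "qty": 2, "email": "x@y.z"},
                          inventory, mailer)
    assert result == "backordered"
    mailer.send.assert_called_once_with("x@y.z", "backordered")
```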
Unit tests are valuable for certain types of code, mainly library code in my experience. For example if you are developing a 3d math library, unit tests will be invaluable.
Unit tests are less valuable for testing applications because the unit tests end up just being mirrors of your application classes/functions and you have to make changes in 2 places now any time you need to tweak application logic.
For applications I think black box tests have the highest ROI. Your application should have a headless mode through which you can send all of your test cases (real life inputs) and verify the output of your app has expected shape/value. As you encounter issues from user feedback, simply add to your list of black box tests. The beauty is that down the road you can "rewrite your app in Rust" or whatever and you'll have a huge set of black box regression tests you can use to validate the rewrite.
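Concretely, something like this, assuming your app grew a headless CLI mode (the binary name, flags, and case file are all made up):

```python
import json
import subprocess


def test_invoice_case_0042():
    # Drive the app through its headless mode with no knowledge of
    # internals, so the test survives a full rewrite unchanged.
    out = subprocess.run(
        ["myapp", "--headless", "--input", "cases/invoice_0042.json"],
        capture_output=True, text=True, check=True,
    )
    result = json.loads(out.stdout)
    assert result["total"] == 1299.00
    assert result["currency"] == "EUR"
```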
Horses for courses. In an environment where a compiler has nodded approvingly at tight type constraints that limit mistakes to a rather high level, I'm very much in the same boat. Bugs that get through those checks are often wrong assumptions about the environment outside of the unit, and chances are those wrong assumptions would also go into the unit test. If that happens, the test gives nothing but false confidence, which makes its value a net negative even before you consider the effort that went into writing it.
But at the other end of the spectrum (say, PHP), things are very different. You want that code exercised, just to be confident that it will actually do something, anything, even if the question of whether the things it does are right or wrong is left to higher-level tests.
If your class needs lots of mocks to be tested it may be a sign that there is too much abstraction/complexity.
You should prefer to instantiate your class with real objects, if you can’t do that then use fakes, as a very last resort you can use a mock but at that point your test is probably useless.
(The exception: if you have a class that is a wrapper around some HTTP requests, then it is fine to mock those HTTP calls. If you have a test that depends on this class, then you can either mock the calls again or upgrade to a fake if the mocking is too complex.)
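For contrast, this is what I mean by a fake rather than a mock (the user store here is a hypothetical example):

```python
class FakeUserStore:
    """A fake: real behavior over an in-memory dict. Unlike a mock, it
    enforces its own invariants instead of mirroring the test's wishes."""
    def __init__(self):
        self._users = {}

    def add(self, user_id, name):
        if user_id in self._users:
            raise KeyError("duplicate user")
        self._users[user_id] = name

    def get(self, user_id):
        return self._users.get(user_id)


def test_store_rejects_duplicates():
    store = FakeUserStore()
    store.add(1, "Ada")
    try:
        store.add(1, "Ada again")
        assert False, "expected duplicate rejection"
    except KeyError:
        pass
```

Because the fake keeps its own state and invariants, the test exercises real behavior instead of restating the implementation.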
I think people disagree and argue about this so much because each preference is true in that developer's context and environment. Everyone who voices a general opinion should indicate what kind of development they do.
How could it possibly be that unit tests are "good" or "bad" in general, for all software development?
Integration tests are better at accurately reflecting the real software. There is less of a leap of faith between "this test passes" and "the software actually works".
Unit tests are better at telling you exactly what failed. There is less of a research project between "this test fails" and "this code right here needs to be fixed".
I don't really love anxiety-inducing leaps of faith or time-consuming research projects, but I don't know of one form of testing that avoids both.
Unit tests (which have to mock dependencies) have a very bad signal-to-noise ratio. Most of their breakages are caused by forgetting to reflect the latest implementation details in the test code (like mocking new dependencies).
I write unit tests only if the logic is completely encapsulated by the "unit". The less you need to mock, the higher value the tests bring. That's one of the reasons I prefer to work on monoliths, it's much easier to test almost-end-to-end.
I disagree. Testable code tends to be more modular and easier to refactor. It is easier for someone who is not writing tests to make things way more complex than they need to be.