Hacker News

Yes, but if you follow that too strongly you end up having to rework tests _every_ time you change something, which is absurd.

You should be testing inputs, outputs, and results rather than how the internal gubbins work. That's the line we have to tread carefully when writing a test. The test shouldn't force the code to behave in exactly the way it expects internally; it should just verify that the I/O is correct.
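A minimal sketch of the distinction (the `slugify` function here is hypothetical): assert on what comes out for a given input, not on which internal calls produced it.

```python
def slugify(title: str) -> str:
    """Turn a post title into a URL slug."""
    return "-".join(title.lower().split())

# Robust: asserts only on input -> output, so an internal rewrite
# (say, switching to a regex) doesn't break the test.
def test_slugify_io():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  extra   spaces ") == "extra-spaces"

test_slugify_io()
```

A brittle version of the same test would instead patch `str.split` and assert it was called, which would fail the moment the implementation changed.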




This is essentially the argument for functional tests over unit tests, and while I generally agree, I think a mixture of the two is important.

Unit tests should be used for extremely small, isolated, mission-critical objects, while functional tests should generally cover the entirety of the I/O chain. That's how I do it at least, and it works extremely well for a fraction of the cost!


I've never seen any value in unit tests. They never fail. What's the point?


They're useful if the code changes, or if the code doesn't change but the runtime/compiler under it does and breaks it. This can definitely happen in large projects.

If your project breaks because of local changes, though, I think regression tests with real data plus bisecting are better and less work.
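The "regression tests with real data" idea can be sketched like this (a golden-file test; `render_report` and the file names are hypothetical, and real captured data is inlined here for brevity):

```python
def render_report(data):
    """Stand-in for the real code under test."""
    return {"total": sum(data["values"]), "n": len(data["values"])}

def test_regression():
    # In practice these would be loaded from captured files, e.g.
    # json.load(open("captured_input.json")) and json.load(open("golden_output.json")).
    data = {"values": [1, 2, 3]}
    golden = {"total": 6, "n": 3}
    assert render_report(data) == golden

test_regression()
```

If this fails after a change, bisecting over the commit history with this test as the oracle points at the offending commit.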


There are a few scenarios:

1. Your unit tests fail basically every time anything changes. This is the scenario where your unit test is something like "the command line arguments are -abcd" and every time you add one you need to change the test. This makes the unit test worse than useless: it's a source of extra work every time you change something.

2. Your unit test never fails. It just doesn't fail ever, at all, under any circumstance. It's so obvious that it should work, but someone wrote that test anyway. It's a waste to run it every time.

3. Your unit test fails when you refactor because it tested some internal functionality. You need to throw away your unit test every time you refactor. It's a waste to write one every time.

The only tests that ever show that a refactor broke something are integration tests. The 200+ unit tests in my project NEVER fail, except for that one you have to keep changing every time.
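Scenario 1 above can be sketched concretely (the parser is hypothetical): the test hard-codes the exact flag set, so adding any new flag breaks it even though nothing is wrong.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    p = argparse.ArgumentParser()
    for flag in ("-a", "-b", "-c", "-d"):
        p.add_argument(flag, action="store_true")
    return p

# Brittle: the assertion enumerates every flag, so adding "-e"
# forces a test edit on every change, which is exactly scenario 1.
def test_exact_flags():
    ns = build_parser().parse_args([])
    assert sorted(vars(ns)) == ["a", "b", "c", "d"]

test_exact_flags()
```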


How do you test error conditions? In my experience, you typically mock hardly anything (ideally nothing) in functional tests, and some error cases require mocking. I find unit tests helpful in this arena.


I mock things all the time in functional tests. It makes it easier to reliably test your code's response to unusual conditions (e.g. error conditions), and it eliminates a source of brittleness in the test: you can write functional tests that hit the real PayPal API and run them every day, but every second Friday they'll fail because PayPal is shit at keeping their servers up.
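A hedged sketch of mocking an outage in a test (the payment names here are hypothetical, not a real PayPal client): `unittest.mock`'s `side_effect` makes the fake API raise on demand, so the error path runs deterministically.

```python
from unittest import mock

class PaymentError(Exception):
    pass

def charge(api, amount):
    """Code under test: translate transport failures into a domain error."""
    try:
        return api.create_charge(amount)
    except ConnectionError:
        raise PaymentError("payment service unavailable")

def test_charge_handles_outage():
    api = mock.Mock()
    # Simulate the external service being down, no real network needed.
    api.create_charge.side_effect = ConnectionError("server down")
    try:
        charge(api, 100)
        assert False, "expected PaymentError"
    except PaymentError:
        pass

test_charge_handles_outage()
```

The same trick works whether the test is scoped as a unit test or drives a whole functional flow.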

This is in no way a benefit unique to unit tests.


> yes but if you follow that too strongly you end up having to rework tests _every_ time you change something which is absurd.

It depends on how brittle your tests are. You can write tests that make sure internal stuff works at a unit level without them being so brittle.

To your point, testing I/O or behavior is the way to accomplish this, but it can still be done at the level of internal functions/methods.


This is why I generally prefer demos and saved REPL sessions to tests. I really care about avoiding regressions, not about whether a certain function returns or errors on certain values. It's all about not getting lost in the weeds.
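For what it's worth, Python's `doctest` module turns exactly this workflow into a regression check: paste a REPL session into a docstring and it gets replayed on every run (the function here is a made-up example).

```python
def word_count(text: str) -> int:
    """Count whitespace-separated words.

    >>> word_count("one two three")
    3
    >>> word_count("")
    0
    """
    return len(text.split())

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # replays the saved REPL session above
```

If a later change alters the recorded behavior, the replayed session fails, which is the regression signal without any hand-written assertions.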



