
Nothing stopping you. I don’t recommend it. Tests need to be fast or they don’t get written.

Adding property tests to integration tests will make things slower and flakier.




It depends. For a lot of applications, the integration is 99% of the app. Trying to unit test that just ends up testing whether the other end of the integration behaves the way you believe it behaves. It's like having a test that verifies the hash of the executable binary: it tells you that something has changed, but not whether that change is desirable.


Test speed fetishism is a bad idea. It leads people to write unrealistic tests which end up passing when there is a bug and failing when there isn't.

Treating flakiness in integration tests as a fait accompli is also a bad idea - a bit like treating flakiness in the app itself as a fait accompli.


Tests that are slow to run don't get run often. People stop writing tests first, they write fewer tests, and they tend to test only the happy paths.

I'm not saying we should avoid integration testing. It's necessary.

It doesn't mix well with PBT for a main test suite, in my experience.

You're better off making sure your domain types and business logic are encapsulated in plain types and functions that don't require a database, a network, etc. PBT is great at specifying invariants and relations between functions and sets.
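
Roughly what that looks like - a minimal sketch using Python's Hypothesis, with a made-up LineItem / order_total domain standing in for real business logic (nothing here touches a database or the network):

    # Pure domain code and two properties over it, written with Hypothesis.
    from dataclasses import dataclass
    from hypothesis import given, strategies as st

    @dataclass(frozen=True)
    class LineItem:
        unit_price_cents: int
        quantity: int

    def order_total(items):
        # Plain function: no database, no network.
        return sum(i.unit_price_cents * i.quantity for i in items)

    line_items = st.lists(st.builds(
        LineItem,
        unit_price_cents=st.integers(min_value=0, max_value=10_000_000),
        quantity=st.integers(min_value=0, max_value=1_000),
    ))

    @given(line_items)
    def test_total_is_never_negative(order):
        # Invariant: non-negative prices and quantities can't produce a negative total.
        assert order_total(order) >= 0

    @given(line_items, line_items)
    def test_total_is_additive(a, b):
        # Relation between calls: total(a + b) == total(a) + total(b).
        assert order_total(a + b) == order_total(a) + order_total(b)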

Use your integration testing budget for testing the integration boundaries with the database and network.

Update: there's no cure for writing bad tests. You just have to write better ones.


Tests get run as often as they get run. There's no rule that says you have to run them less frequently if they are slower. It's not 1994. CPU time is cheap.

Realism is much more important than speed. A test that catches 20% more bugs and takes 5 seconds instead of 0.05s is just more valuable. It's simply not worth economizing on CPU cycles in 2024.

Yes, DI can make sense sometimes, but definitely not all of the time.

The people who have ultra-fast, unrealistic tests inevitably end up leaning on manual testing as they run into bugs their unit tests could never have caught.

> there's no cure for writing bad tests. You just have to write better ones.

Better means more realistic, more hermetic, and less flaky. It doesn't necessarily mean ultra-fast.


> will make things slower and flakier.

If running tests with different data makes them flaky, your system is bad.

Regardless of property-based testing, I've seen too many systems that had hundreds of unit tests and yet would fail for the most trivial reasons, because the actual integration of those units was never tested properly.


I love PBT and use it frequently.

So integration tests are generally IO-bound. They'll be slow.

With PBT you will generate 100 or 1,000 of those slow cases.
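
To put rough numbers on that - a sketch in Python's Hypothesis, where create_order / fetch_order are hypothetical repository calls against a real database; at ~50ms per round trip, 200 generated examples for one property is already around 10 seconds:

    # Sketch only: create_order / fetch_order stand in for repository calls
    # that hit a real database, so every generated example pays a full IO
    # round trip.
    # from myapp.orders import create_order, fetch_order  # hypothetical
    from hypothesis import given, settings, strategies as st

    @settings(max_examples=200, deadline=None)  # 200 examples * ~50ms of IO is ~10s for one property
    @given(amount_cents=st.integers(min_value=1, max_value=1_000_000))
    def test_order_round_trips_through_the_db(amount_cents):
        order_id = create_order(amount_cents)  # real INSERT per example
        stored = fetch_order(order_id)         # real SELECT per example
        assert stored.amount_cents == amount_cents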

Integration tests are flaky in the sense that your components might not generate just the right inputs to trigger an error. They may not be configured under test the same way as they are in production. And so on.

PBT can be flaky in the sense that, even with carefully selected distributions, a test that ran fine for months can fail on one run. Add that on top of integration-test flakiness.

There's no silver bullet for writing bad specifications.



