
I'm going to push back and say that test is not a valuable automatic test. The phrase "relies on an unreliable system" captures that lack of value.



When the code you're testing is a client for some remote API, and the sandbox/development/testing version of that API doesn't have the same resources and uptime guarantees as production, then what are your options? As far as I can tell, they are:

Don't test it.

Only do unit tests with the connection mocked out.

Test against production.

Try it a few times with a delay, and if it works then you know your code is good and you can move on with your deployment, which is what flaky and pytest-retry do.

Maybe I'm missing something, but out of those four options, retrying the test seems like the best one, with the big caveat that it is only viable if the test does indeed pass after a few tries. I really don't see any downside.
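
For illustration, here's roughly what that retry-the-test option looks like with the flaky decorator. This is just a sketch: the sandbox URL and the test body are made up, and I'm going from memory on the decorator arguments.

  import requests
  from flaky import flaky

  # Hypothetical integration test against a sandbox API that is sometimes down.
  SANDBOX_URL = "https://sandbox.example.com/api/v1/widgets"  # made-up endpoint

  @flaky(max_runs=3, min_passes=1)  # rerun up to 3 times; one pass is enough
  def test_list_widgets_against_sandbox():
      resp = requests.get(SANDBOX_URL, timeout=10)
      assert resp.status_code == 200
      assert isinstance(resp.json(), list)

pytest-retry exposes basically the same idea through a marker instead of a decorator, so you pick whichever plugin fits your setup.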

edit:

Maybe another option is to put the retry functionality directly in the client code, which would make your code more robust overall (see the sketch below). But that is definitely more complex than using one of these libraries just for testing.
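
A sketch of what pushing the retry into the client itself might look like. The helper is hypothetical; a real client would want to distinguish retryable errors from permanent ones more carefully:

  import time
  import requests

  def get_with_retry(url, attempts=3, backoff=1.0):
      """Fetch a URL, retrying transient failures with exponential backoff.

      Hypothetical helper: retries on connection errors and 5xx responses,
      re-raises the last error once attempts are exhausted.
      """
      last_error = None
      for attempt in range(1, attempts + 1):
          try:
              resp = requests.get(url, timeout=10)
              if resp.status_code < 500:
                  return resp
              last_error = RuntimeError(f"server error {resp.status_code}")
          except requests.ConnectionError as exc:
              last_error = exc
          if attempt < attempts:
              time.sleep(backoff * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...
      raise last_error

The upside is that production code benefits from the same robustness; the downside, as noted above, is that you've added complexity to the client rather than a one-line decorator in the tests.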


You're on the right track. It's a perennial favorite of devs to abhor flakiness, whereas after spending enough time as a tester you come to terms with the fact that you have to treat your tests as a statistical probe, because most places' test systems are simply not that reliable; sometimes, this is even a design feature.

This experience as a tester is in fact a normalization of deviance from the ideal computation model of a developer. From their point of view, everything should work the first time, every time. The tester sees reality as it is. The Emperor won't fund my test systems sufficiently to service all my customers, so we make do as best we can. Bonus points in that we get to exercise the edge cases.



