
> After reading a few smug comments...I've concluded that some of the folks in this thread have never worked on an application where the production scale is many, many orders of magnitude greater than preproduction environments.

I think your comment is smug as well. The author of the article has considerably overloaded the term "testing", which understandably gives a lot of people a knee-jerk reaction of "Don't do that". I get the impression that this is by design, and that the article is intended to provoke discussion via flamewar.

I think the best way to combat this is simply to avoid extended discussion of such articles until they are rewritten to be clearer and less liable to cause flamewars. I have some views on the thesis of the article, and practical experience backing those views – but I simply won't express them, because I don't want to get embroiled in the numerous minor arguments caused by the confusing terminology, which yield little to no actionable information. An example of a pattern that you'll see repeating across the comments in this thread:

Person A: "We take our time and test the application thoroughly under all sorts of load in a preproduction environment. We run end-to-end tests on every build. We value work/life balance, and try to minimize testing-in-production."

Person B: "When did the article say not to do that? It never says you shouldn't test outside prod, it says you should also test in prod and that's a superpower! You're totally misunderstanding the article!"




> The author of the article has considerably overloaded the term "testing",

I would disagree. There is zero overloading of the term "testing": it is already an extremely broad term of art that clearly applies to every example of production testing provided in the article.

> with little to no actionable information.

The article absolutely provides some actionable points, and breaks them up under "Technical", "Cultural", and "Managerial".

> An example of a pattern that you'll see repeating across all the comments in this thread:

The article repeatedly covers the exact points mentioned in your example exchange, so the people you see having that exchange are those who at best only skimmed the article. I find that light readers can sometimes dominate comment threads early, but their comments eventually do become outnumbered by the more interesting discussion. People who take the time to read carefully and think respond more slowly, and thus tend to be back-loaded.

> I have some views on the thesis of the article, and practical experience backing those views – but I simply won't express them because I don't want to get embroiled in the numerous minor arguments

That is unfortunate. The only way the discussion improves is when people do take the time to state their views, even when they don't have the time to follow up on replies. The only way to combat vapid discussion is to plant the seeds of better conversation.


The article also mentions a few things that are usually not tested pre-production (e.g. timeouts, race conditions) but certainly could be with better integration test tooling.
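For instance (a minimal sketch of my own, not from the article): a client's timeout handling can be exercised in an ordinary integration test by pointing it at a local server that deliberately stalls, so the slow-dependency case never has to wait for production to show up.

```python
# Sketch: verify that a client actually times out against a stalled server.
import socket
import threading
import time

def stalling_server(srv):
    """Accept one connection, then hold it open without ever replying."""
    conn, _ = srv.accept()
    time.sleep(1)  # simulate a hung upstream dependency
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
threading.Thread(target=stalling_server, args=(srv,), daemon=True).start()

client = socket.create_connection(srv.getsockname(), timeout=0.2)
try:
    client.recv(1024)           # the server never replies...
    timed_out = False
except socket.timeout:          # ...so the 200 ms timeout should trip
    timed_out = True
finally:
    client.close()
    srv.close()

print(timed_out)
```

The same stall-and-assert shape works for race conditions, where the test tooling forces the unlucky interleaving instead of the unlucky delay.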

The thing that bugged me was that these were not treated as a cost/benefit trade off but simply as a fait accompli.

Over time I've come to believe that an appreciation of nuance and cost/benefit tradeoffs is at the heart of effective testing, but culturally the practice is steeped in dogmatism and absolutism. This article exhibits all of that – e.g. "control freak managers", "only one represents reality", and "saying not today to the gods of downtime".


I think your reply to my smug comment is also smug! It’s smugness all the way down!


It's possible, I did feel a bit self-satisfied after making it!

Maybe it applies upwards as well? The article is kind of all smug about "I test in prod" too :)


Full-stack smugness




