Hacker News

One of the ways to apply this inverted thinking is to conduct a "pre-mortem" at the start of a project. By deliberately imagining that something has failed, and speculating about the reasons, you can sometimes uncover useful steps that prevent those imagined failures from actually happening.

I've found this can be quite useful, both for minimizing risk, and also (interestingly) as a source for new product/feature ideas.




It's also a useful way of making people not feel like party-poopers.

Typically, everyone's excited at the start of a project and people are reluctant to share their fears (especially if bosses are around).

A pre-mortem gives them the mental freedom to share their fears, since they are asked to imagine a future in which the project has turned out to be a disaster.


Conducting a pre-mortem, as you describe it, is almost precisely what STPA (Nancy Leveson) is about. You think of the system's behavior and present design and the things that can go wrong. Then you try to determine what would lead to bad or erroneous outcomes, and build in controls based on that analysis. Sometimes it's things that should be blindingly obvious, but we've demonstrated over the past 60+ years of higher technology use and development that we aren't good at spotting those things. Even simple things like "the lawn mower should have a dead-man switch" are often forgotten.


I think it depends on the scenario selected. I’ve found pre-mortems annoying: given any number of risks that could materialise, how do you choose the right one for the pre-mortem to get maximum value from the discussion?

Plus I generally dislike the idea and feel like it's a trend that should go away.


> how do you choose the right one for the pre-mortem for maximum value discussion?

Isn't that where domain expertise comes in? It sounds pretty sensible and important to me to try to imagine various realistic failure modes and preemptively try to prevent them. To keep a website from going down, pre-empting hard drive failure or DDoS makes far more sense than worrying about network cables spontaneously disintegrating, or the outbreak of nuclear war.


I always try to aim for the "Most Likely Worst Case Scenario"... not the worst thing that can happen, but the most likely bad thing that can happen.


Then in reality the website goes down due to the simplest things, so common we don't reconsider them, such as a user entering special characters that make the server error out.

Edit: I'm not downplaying the importance of prevention
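That special-character failure mode can be sketched concretely. This is a hypothetical example (made-up table and input, with sqlite3 standing in for a real database): naive string interpolation breaks on an apostrophe, while a parameterized query handles it safely.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('O''Brien')")

name = "O'Brien"  # user input containing a special character

# Naive string interpolation: the apostrophe breaks the SQL syntax,
# producing exactly the kind of "server error" described above.
try:
    conn.execute(f"SELECT * FROM users WHERE name = '{name}'")
except sqlite3.OperationalError as e:
    print("server error:", e)

# Parameterized query: the driver escapes the input safely.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # [("O'Brien",)]
```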


> server error

Seems to me that bugs are then a high risk in that project. And to prevent or reduce the number of failures, the project needs an automated test suite.
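A minimal sketch of what such a test might look like, feeding special characters to a handler so the most common failure modes are caught before release (handle_search here is a hypothetical stand-in for the real request handler):

```python
import unittest

def handle_search(query: str) -> str:
    # Hypothetical stand-in for the real handler.
    if not query:
        raise ValueError("query required")
    return f"results for {query!r}"

class SpecialCharacterTests(unittest.TestCase):
    def test_handles_special_characters(self):
        # Each input must return normally, not raise a server error.
        for q in ["O'Brien", '"; DROP TABLE users;--', "100%", "a\nb"]:
            self.assertTrue(handle_search(q).startswith("results"))

if __name__ == "__main__":
    unittest.main()
```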



