
I think there's an asymmetry that you're not quite capturing here: when you add an item to the checklist, you have a specific, concrete failure at hand (the process just failed, you had everyone on hand to do your root cause analysis, and you know exactly why). Later, when you want to remove it, you (a) lose context, because time has passed and memories are fallible and/or a different person is dealing with it, and (b) need some reasonable assurance that you've covered all the other ways the same failure can occur. No amount of process can completely eliminate this asymmetry, so at some point you're forced to make a tradeoff based on how risk-averse you want to be.



The problem is not lost context. The problem is that adding an item to a checklist protects you from a whole category of bad outcomes. If you have a checklist item to make sure a release is signed, you will never push an unsigned release.

Removing an item from a checklist is done in response to a change in inputs. Sure, you may have automated release signing - but unless you are 100% confident that you are aware of, and have mitigated, every way in which that automation can fail, you cannot, and should not, remove the 'check that the release is signed' step.

Lost context has nothing to do with this. Unless you are an omniscient god, you probably cannot reason, with 100% certainty, that you have mitigated every possible input that produced a bad output.

So, check your outputs.
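For the signing example, 'check your outputs' can be done directly: a guard that runs right before publishing and verifies the artifact's signature, no matter how the signing was supposed to have happened. Here's a minimal sketch in Python, assuming a detached GPG signature and gpg on the PATH (the file names are hypothetical):

    import subprocess
    import sys

    def is_signed(artifact: str, signature: str) -> bool:
        # Inspect the output itself: gpg exits non-zero if the
        # signature is missing, invalid, or for the wrong file.
        result = subprocess.run(
            ["gpg", "--verify", signature, artifact],
            capture_output=True,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        # Hypothetical artifact names, for illustration only.
        if not is_signed("release.tar.gz", "release.tar.gz.asc"):
            sys.exit("refusing to publish: release is not signed")

The check inspects the result, not the process that was supposed to produce it, so it keeps protecting you even if the signing automation fails in some way nobody anticipated.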


So it could also be an automated test that checks whether the release is signed, rather than a checklist item.
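As a sketch of what that might look like (assuming a Python-based pipeline and GPG-signed artifacts; the file names are placeholders), the checklist item becomes a test that fails the release build instead of relying on a human to remember:

    # test_release.py - run automatically on every release, e.g. via pytest
    import subprocess

    def test_release_is_signed():
        # The same check a human would make from the checklist,
        # now enforced by the pipeline on every run.
        result = subprocess.run(
            ["gpg", "--verify", "release.tar.gz.asc", "release.tar.gz"],
            capture_output=True,
        )
        assert result.returncode == 0, "release artifact is not validly signed"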

Having long checklists for comparatively simple tasks really hurts productivity. Plus, they're often used as an excuse not to put automation in place, because 'the process is already defined' and 'people are already used to it'.

When designing a process, it is of utmost importance to keep checklists as short as possible.


I can definitely concede there can be some asymmetry, but I think it's system dependent.

As the Boeing 737 Max 8 shows, adding new items (in this case a new design element) is also fraught with risk. You have to get the root cause analysis correct when adding or removing elements. Adding carries the risk of unknowns (interactions not accounted for), just as removing carries the risk of unknowns (the original accounting forgotten).

In the end, I guess I still believe the real strength lies in good analysis at the consideration stage.


Adding a design element is completely different from adding an item to a checklist.

The asymmetry with checklists is that it's completely risk-free to add a new check, but it's risky to remove one. For example, someone might say "we're not totally sure that our system won't fail when we do X, so let's check that in QA, or at runtime, or at takeoff, or whatever." Now that the check is there, it protects you from failures when you do X. And now you're in a situation where you can't safely remove that from the checklist unless you can prove that your system won't fail when you do X. Adding requires only a suspicion, removing requires rigorous proof.

The case you describe with the 737 Max isn't the same at all. There's an actual risk when adding a new component to the system, but no risk when adding additional verification. That's not to say that there aren't other costs, but it can't directly make your system less reliable.


Just one example: suppose I add a step like, 'If the patient presents with heart attack symptom X, was the patient injected with 100cc of <drug>?'

That's not a risk-free check at all. It will likely increase the rate at which the drug is administered, with all the pluses and minuses that come with adding a component to a system.


I think what happened with the 737 Max is that they gamed the checklist approach: the party that should have been checked was the same party doing the checking. That's why the argument to remove an item is so dangerous. (But then again, it may also have been bad blood. Life is more complex than a checklist.)



