
People get tired, sick, frustrated, and panicked. Part of being a responsible engineer is accepting that you're as fallible as the next person and building in protection against your own errors.
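To make "building in protection against your own errors" concrete, here's a minimal sketch (the host names and the script are hypothetical): a destructive script that refuses to touch a protected host unless the operator retypes its name, so a tired person has to consciously confirm instead of relying on muscle memory.

    import sys

    PROTECTED = {"db-prod-1", "db-prod-2"}  # hypothetical production hosts

    def confirm_destructive(target: str) -> None:
        """Abort unless the operator retypes the protected host's name."""
        if target not in PROTECTED:
            return  # non-production targets need no ceremony
        typed = input(f"Destructive operation on {target}. Retype host to confirm: ")
        if typed.strip() != target:
            sys.exit("Confirmation failed; aborting.")

    confirm_destructive("db-prod-1")
    print("confirmed; proceeding with the destructive step")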

However, if "the engineer" who caused this happens to read this: the above is not a sign that you should quit the profession and become a hermit. A chain of events caused this; you just happened to be the one left without a chair when the music stopped.




One of the coolest things I've read about is how airlines do root cause analysis. If a situation ever reaches the point where a single human can mess it up like this, that's considered a systemic issue. I'm on mobile now, but can try to find the source later.

EDIT: https://dvikan.no/ntnu-studentserver/reports/A%20Human%20Err...

> That is, much like falling dominoes, Bird and others (Adams, 1976; Weaver, 1971) have described the cascading nature of human error beginning with the failure of management to control losses (not necessarily of the monetary sort) within the organization.


In general, in aviation, the existence of any single point of failure (SPOF) is considered a systemic issue, whether it's a single human who can fail (anyone can faint or have a heart attack), a single component, or a single process. That's why there are not only redundant systems but also redundant humans, and even redundant processes for the same task (you can power the control-surface hydraulics through the engines, then through the aux power unit, then through the windmill, i.e. the ram air turbine...).

If a design contains an SPOF, then it's a bad design and should not be approved until the SPOF is removed by adequate redundancy or other means.
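To make "find the SPOF" concrete, here's a small sketch (the component graph and names are made up, and real certification analysis is far more involved): model the system as an undirected graph and flag any node whose removal disconnects it, i.e. an articulation point, using the classic Tarjan DFS.

    from collections import defaultdict

    def articulation_points(graph):
        """Return nodes whose removal disconnects the graph (Tarjan DFS)."""
        disc, low, points = {}, {}, set()
        timer = [0]

        def dfs(u, parent):
            disc[u] = low[u] = timer[0]
            timer[0] += 1
            children = 0
            for v in graph[u]:
                if v == parent:
                    continue
                if v in disc:                      # back edge to an ancestor
                    low[u] = min(low[u], disc[v])
                else:                              # tree edge
                    children += 1
                    dfs(v, u)
                    low[u] = min(low[u], low[v])
                    if parent is not None and low[v] >= disc[u]:
                        points.add(u)              # u separates v's subtree
            if parent is None and children > 1:
                points.add(u)                      # root with 2+ DFS subtrees

        for node in list(graph):
            if node not in disc:
                dfs(node, None)
        return points

    # Hypothetical power sources feeding the control-surface hydraulics:
    system = defaultdict(list)
    for a, b in [("engine", "hydraulics"), ("apu", "hydraulics"),
                 ("rat", "hydraulics"), ("hydraulics", "control surfaces")]:
        system[a].append(b)
        system[b].append(a)

    print(articulation_points(system))  # {'hydraulics'} -- a SPOF

In this toy model the three power sources are redundant, but they all converge on one hydraulics node, so that node is still a single point of failure; redundancy only helps if it's applied at every layer.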



