Hacker News

> The article suggests using a probabilistic failure model instead of a large safety factor

That wasn't my takeaway from the article, and the article discusses the same problems you've brought up. To quote:

> On the flip side, it requires more information as well. There must be good data or the result will be as arbitrary as the factor of safety, without the benefit of decades of experience...

> A bigger issue, and the one I think has prevented more widespread adoption, is that probabilistic design doesn’t account for fluke events -- the unknowables. If you don’t know what could happen, you obviously can’t assign that event a probability...

> The ideal approach might be a hybrid. Probabilistic design could be responsible for covering simplifications and a reduced safety factor could cover the unknowables. Of course, there’s no simple way to determine how much of the current factor covers simplifications, so reducing the factor would still be a risky endeavor...

> For my projects, I intend to embrace the empirical nature of safety factors and not think too hard about it. If a factor already exists for the area I’m exploring, I’ll use that
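To make the contrast concrete, here is a minimal sketch (not from the article; all distributions and numbers are illustrative assumptions) of the two views of the same design problem: a deterministic safety factor computed from mean values, versus a Monte Carlo estimate of failure probability that accounts for scatter in both load and strength.

```python
import random

random.seed(0)

# Hypothetical member: normally distributed strength and load.
# All values are made up for illustration, in MPa.
MEAN_STRENGTH, STD_STRENGTH = 400.0, 25.0
MEAN_LOAD, STD_LOAD = 200.0, 40.0

# Deterministic view: one safety factor on the mean values.
safety_factor = MEAN_STRENGTH / MEAN_LOAD  # 2.0 here

# Probabilistic view: Monte Carlo estimate of P(load > strength).
N = 100_000
failures = sum(
    random.gauss(MEAN_LOAD, STD_LOAD) > random.gauss(MEAN_STRENGTH, STD_STRENGTH)
    for _ in range(N)
)
p_failure = failures / N

print(f"safety factor:       {safety_factor:.1f}")
print(f"failure probability: {p_failure:.5f}")
```

The probabilistic number is only as good as the assumed distributions, which is exactly the article's point: with poor data the estimate is as arbitrary as the factor, and a fluke event outside the modeled distributions gets probability zero.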



