The most expensive number in engineering (surjan.substack.com)
501 points by as89 on May 31, 2021 | 167 comments



The article suggests using a probabilistic failure model instead of a large safety factor, and explains how a safety factor established in the 1930s affected the cost of the Space Shuttle. But spacecraft might be a special case, where any additional weight is so expensive, and where you also expect the models to be especially accurate and manufacturing to be extremely precise.

For more everyday civil engineering, I think the safety factor "covers up" a lot of systemic inaccuracies everywhere in the system, from modeling to design to manufacturing to unintended uses. Some of those you might account for in a probabilistic model. But it's very difficult to probabilistically model errors in the model itself, as the financial industry found out the hard way.

When driving over a bridge built the way suggested here, how comfortable can we be that certain stresses aren't correlated in ways that the engineers didn't anticipate? Or that a certain distribution really is as well approximated by a Gaussian as was assumed? Intuitively, it's a lot harder to be wildly wrong with the factor of safety approach.

To put this another way, a more complex way to reason about safety necessarily has more moving parts, and is thus more likely to be wrong. So in effect, adopting more complicated safety models introduces a safety risk all on its own. I think that needs to be considered as well.


Most modern bridges you drive over are designed the probabilistic way that is suggested in the article. Bridge design followed vertical construction as material science and manufacturing got better for steel and concrete. The probabilistic approaches haven’t been adopted much in engineering fields that deal with too many unknowns. I’m actually incredibly surprised that there was no mention of the technical terms for these approaches: Allowable Stress Design, Load and Resistance Factor Design, and Yeah That Looks Right Design. LRFD is heavily based on probability and materials testing. ASD is a hybrid approach of old factors of safety and some probabilistic theory. YTLRD is based on the long and storied history of guys who have been doing it this way since before you were born, no matter when that was.


And they're all used, to some extent. If your Wizzy design based on the latest everything doesn't pass the grumpy old partner's YTLRD review, you're going to redo it till it does.

(In my case, that was Bill. He was one of those guys who knew where to put the $50k mark)


I'll go out on a pretty small limb and say that the vast majority of Civil Engineering failings are not a matter of an incorrect safety factor, but are things that are explicitly not part of it.

1) Blunders. (Many places. You do the math wrong, or approve the wrong shop drawing, and no factor of safety is going to save you. See the Hyatt Regency walkway failure.)

2) Inadequate geotech info. (Basically every dam failure ever.)

3) Genuinely new behavior. (Tacoma Narrows.)

4) Contractors. (I-90 bridge sinking.)

5) Deferred maintenance. (Fatigue on bridges; Minneapolis.)


6. Corner-cutting. I consider that distinct from blunders or bad contractors. See: the Pal-Kal construction method.

I consider that not part of the safety factor because, IIRC, it was basically fudged or outright ignored, oftentimes with building inspectors paid off.

https://en.m.wikipedia.org/wiki/Versailles_wedding_hall_disa...


Agreed. Although I'd also argue that a safety factor probably papers over those kinds of problems in many, many cases.


But the question is, if you used a probabilistic approach, and tried to model, even very roughly, those things (probability contractor bodges the job in a way inspection doesn't notice: 10%), then would you end up with a safer bridge for the same money spent?


That can be an unbounded black swan event. There are distributions where there is no mean value.

The difference between a square section and welded channels. The difference between 53F and 27F. The difference between putting the waterproofing on before or after the post tensioning anchors. Leaving watertight doors open during a storm.


It feels like you're picking numbers out of the air (or basing them on historical experience) either way. Unless you actually have historical data on certain types of problems--but then it seems like you're pretty much back to a safety/fudge factor.


But a large number of guesstimated fudge factors all added up will approach the true value, as long as there is no bias.

The same does not apply to factors of safety - a 1.5 FoS is always between 0 and 50% too much.
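
A toy sketch of that cancellation claim (my own, in Python, with made-up numbers): twenty load components, each estimated with unbiased +/-20% noise, sum to something far closer to the truth than any single guess.

  import random

  # Toy sketch (made-up numbers): the "true" load is the sum of 20 components,
  # each estimated with unbiased +/-20% noise. The errors largely cancel.
  random.seed(0)
  components = [10.0] * 20
  true_total = sum(components)
  trials = 10_000
  errs = [sum(c * random.uniform(0.8, 1.2) for c in components) / true_total - 1.0
          for _ in range(trials)]
  print(f"mean relative error: {sum(errs) / trials:+.3%}")  # ~0%: unbiased guesses cancel
  rms = (sum(e * e for e in errs) / trials) ** 0.5
  print(f"rms relative error: {rms:.3%}")  # ~2.6%, far tighter than the 20% per guess

Any systematic bias shifts the whole sum, though, which is the catch.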


I suppose a subset of #1 is: failure to consider stresses during construction, and only doing the calculations on the final product (the FIU pedestrian bridge failure).


> The article suggests using a probabilistic failure model instead of a large a safety factor

That wasn't my takeaway from the article, and the article discusses the same problems you've brought up. To quote:

> On the flip side, it requires more information as well. There must be good data or the result will be as arbitrary as the factor of safety, without the benefit of decades of experience... A bigger issue, and the one I think has prevented more widespread adoption, is that probabilistic design doesn’t account for fluke events -- the unknowables. If you don’t know what could happen, you obviously can’t assign that event a probability... The ideal approach might be a hybrid. Probabilistic design could be responsible for covering simplifications and a reduced safety factor could cover the unknowables. Of course, there’s no simple way to determine how much of the current factor covers simplifications, so reducing the factor would still be a risky endeavor... For my projects, I intend to embrace the empirical nature of safety factors and not think too hard about it. If a factor already exists for the area I’m exploring, I’ll use that


Or more simply: planes that get hit with massive unexpected turbulence shouldn't just drop out of the sky and kill everyone on board.

And to paraphrase the great philosopher Donald Rumsfeld, it is all about the unknown-unknowns.


Massive turbulence is taken into account in the loads calculation (JAR 25.341). It's not supposed to be in the safety factor.


Any plane that can land missing half a wing on one engine is over-engineered, from a certain point of view.


Unless it's meant to take that sort of damage and survive intentionally due to its usage, such as the A-10.


So... you'd rather it crash and kill everyone onboard?


Aircraft have a smallish safety factor, because of the weight. They make up for it with much more careful design, manufacture, and maintenance along with frequent inspections.


Yes. I linked below a slide set by Lee Petersen, a JPL engineer who was instrumental in setting up the uncertainty accounting for the Mars lander EDL system - both Curiosity and Perseverance. (Among other uses of UQ for engineered systems at JPL.)

In the slides he’s deliberately contrasting a “MUF” (model uncertainty factor) approach that uses safety factors, possibly stacked as indicated in the OP, versus a “BE+U” (best estimate plus uncertainty) approach.

The latter uses verified physics-based models, validation of model predictions versus experimental tests, and uncertainty quantification of sources of error, to get a best estimate of whatever quantity is critical to safe operation, and an uncertainty. A final margin can then be added onto that, see slide 13.

The mess at the system level that can result from a stack of ad hoc MUFs applied at the subsystem level is shown on slide 15.

Incidentally, many of the comments nearby seem to think the application of domain specific safety factors is still state of the art. This just isn’t true any more for high risk systems.

https://cstools.asme.org/csconnect/Filedownload.cfm?thisfile...


Did you get to the end of the article? He addresses this:

>A bigger issue, and the one I think has prevented more widespread adoption, is that probabilistic design doesn’t account for fluke events -- the unknowables. If you don’t know what could happen, you obviously can’t assign that event a probability.

>The ideal approach might be a hybrid. Probabilistic design could be responsible for covering simplifications and a reduced safety factor could cover the unknowables. Of course, there’s no simple way to determine how much of the current factor covers simplifications, so reducing the factor would still be a risky endeavor.


Grossly increasing the factor of safety is a subtle way that science fiction stories connote a feeling of very advanced technology.

For example in the JJ Abrams movie Star Trek Into Darkness we see the Enterprise operating at depth in an ocean, then dramatically zooming away into space. Then later another ship falls from orbital height and plows through San Francisco without losing its hull shape.

In Star Wars the Millennium Falcon is constantly doing things that would seem to be outside a normal design for a spacecraft, and it survives (aside from the radar dish).

Even as far back as the movie 2001, the monolith is made out of a material that humans can’t dent or cut. Why so strong? It’s basically just an automated radio.

The idea is: this advanced civilization has such command over physical technology, that they can effortlessly engineer unnecessary strength without losing any of their designed performance.


Things JJ Abrams touches tend to border on the absurd, but the principle is true in pre-Abrams Star Trek too.

O'Brien sheds a little light on one of the aspects of this: https://www.youtube.com/watch?v=UaPkSU8DNfY.

  GILORA: Starfleet code requires a second backup?
  O'BRIEN: In case the first backup fails.
  GILORA: What are the chances that both a primary system and its backup would fail at the same time?
  O'BRIEN: It's very unlikely, but in a crunch I wouldn't like to be caught without a second backup.
From a different franchise, I keep thinking about the Ancients of the Stargate universe, known for their technology, which could remain fully operational for millions of years. That's true over-engineering.

(But then I'm thinking, a species that mastered "stasis technology", commonly present in many sci-fi franchises, should eventually be able to make artifacts that can survive indefinitely, at least when not in operation.)


I don’t think this is science fiction doing this to connote the future in many of these cases.

The cars in the "Fast" franchise also survive ludicrous damage. The bodies of action heroes survive falls from ridiculous heights, and blows and stab wounds that would easily kill anyone else.

This is more a property of action movies (and adventure movies, to a lesser degree).


The loose industry term for this is “plot armor” [1]. There is no explicitly stated reason for the hero(es) being nigh invincible (be it person, spaceship, car, etc.). The only reason why the character survives is that it has a reason to continue existing for sake of the plot.

So yes, I suppose you can rationalize in your head that most ships are made out of super strong materials in science fiction, but unless that’s clearly laid out, you may just be rationalizing writer’s convenience.

[1] https://tvtropes.org/pmwiki/pmwiki.php/Main/PlotArmor


Seems disingenuous to lump a science fiction movie whose focus is future technologies in with an action movie with exaggerated physics and a lack of real damage. In action movies, bad guys die from one bullet while the hero finds his way to safety (and survives!) despite having 10 or more lethal bullet wounds.

For the record, Star Wars is not science fiction nor has it ever been portrayed as such. It's very much an action adventure set in space.


> For the record, Star Wars is not science fiction nor has it ever been portrayed as such.

FWIW this is literally the first time I’ve encountered someone claiming that it isn’t science fiction.

(Soft SciFi, to be sure, but still: https://tvtropes.org/pmwiki/pmwiki.php/Main/MohsScaleOfScien...)


And Vulcans are space elves, Klingons are space orcs, and the Borg are space undead.

All of this stuff has inspiration from classical fantasy. That doesn't change whether or not something is sci-fi.

There is an intersection of SciFi and Fantasy for sure, and the line gets blurred. Artificial human stories can be cyborgs, androids, clones, golems, or chimera, or even explicit fantasy races like minotaurs.

If it's an android, then it's SciFi. But if it was a clay golem made with magic (but otherwise the same story) it's fantasy.

See golem stories, for instance. They explore a lot of sci-fi themes but are basically a fantasy trope. https://en.m.wikipedia.org/wiki/Golem


Certainly many tropes are shared, but that is true beyond the scifi-fantasy intersection. (Although: I would say Klingons are the avatar of Russian-ness in the eyes of America at the time any given episode gets written. Beardy humans in TOS, impoverished but hard as nails in TNG, anarchic and self-destructive in ENT, sneaky and dangerous warmongers in DIS).

I regard the superhero genre as a modern version of the old divine pantheons — heroes and villains, supposedly far beyond human, yet oddly well-balanced against each other (Hawkeye should not be in the same battleground as Thor in the same way and for the same reason that Choi Mi-sun should not be in a battle featuring an attack helicopter).

Sufficiently advanced technology being indistinguishable from magic and all that.


I see the similarities for Klingons and Borg, but I don't see much similarity between Vulcans and elves beyond the pointy ears. Vulcans are more like androids: logical and outwardly emotionless, but not necessarily wise. The human captain (e.g. Kirk) tends to figure out a winning solution despite the resident Vulcan often complaining it isn't logical.

That doesn't detract from your point.


IIRC, Spock's appearance was intended to look vaguely demonic or Satanic, rather than like an elf. The original concept had him with red skin and deeply arched eyebrows as well, but they had to tone the makeup down a lot to avoid offending religious viewers.


In the original series, the Vulcans were the Japanese, the Klingons the Soviets, and the Romulans the Chinese. The series was bathed in the politics and social struggles of the times (the late 60's).


Lucas himself has stated that although science fiction inspired him to create these stories, Star Wars is indeed not science fiction but rather science fantasy:

https://scifi.stackexchange.com/questions/46481/did-george-l...

Because all the stories are " a long, long time ago in a galaxy far, far away," it's not even clear if Luke and the gang are humans or some other species. Of course it's easy to think of them as humans, and most people do, but it's not necessary. This ambiguity is also by design.


When Star Wars was released, it was understood to be science fiction. See this interview with Alec Guinness in 1977:

https://youtu.be/0qxcEBI1iKI

They both call it science fiction.

Lucas was wary of that label because science fiction had a (well-earned) reputation at that time as a ghetto of heavy-handed allegory with hammy acting and terrible special effects. Lucas was striving for more than that.

He was trying to create a modern myth, or modern fairy tale. But genres are defined by their conventions, and Lucas chose space opera conventions (space ships, lasers, aliens). Space opera is a subgenre of science fiction so Star Wars was considered science fiction.

J.R.R. Tolkien was like Lucas in that he also tried to create a modern fairy tale. But he chose wizards and dragons, so his stories became fantasy.


Any fantasy movie in a futuristic setting (even one that's technically in the past, like Star Wars) would be widely considered science fiction. Granted, that is a common definition of science fiction.

But science fiction can also be something like The Twilight Zone: a genre that explores "what-if" scenarios. A genre that takes some fictional premise and attempts to explore the consequences logically. An example premise could be, how might modern society have evolved if the Greek gods were real? A sci-fi story would try to approach that question in a scientific and logical way. Star Trek often followed this pattern. Star Wars doesn't.

A weakness of defining sci-fi by a futuristic setting is that you can change the genre with only superficial changes to the story. Other major genres - comedy, horror, mystery, romance, action - are defined by the feeling or thought process they try to evoke. But take Star Wars and replace the planets with kingdoms, the Jedi with wizards, and the Force with... well, that's fine as it is... and you can create a story with the exact same plot points but without a futuristic setting. There's no actual science in Star Wars; only a futuristic setting.

Setting doesn't feel like it should be the feature that defines a genre. You should be able to have a comedy, horror, or mystery take place in a futuristic setting, and a sci-fi that takes place in the present (and we often do, e.g., The Handmaid's Tale).

I'll note that there aren't many movies that stick to just sci-fi; most incorporate a lot of action (like The Matrix). This is probably because filmmakers feel 2+ hours of science fiction would be boring. The movie The Martian heavily cut from the science of the book and focused on the action (while the book had little action and a lot of science). You see a strong sci-fi focus more often in literature and TV.


I don't know if Arrival has a book, and if it does I didn't read it. But the movie seems pretty solidly in what you're describing as "sci-fi" here. Takes a single premise and then builds the world around that "what-if". Same with Contact and Jurassic Park, with touches of drama and action / horror, respectively.


Yeah, Arrival fits well and I enjoyed it. There are a number of lower budget films you could add to that list like Primer, Moon (2009), or Gattaca. Ex Machina is sci-fi + drama. I'm sure there's plenty more. There's just a tendency for sci-fi films to lean heavily into action (or occasionally horror) like The Martian film did when compared with the book.

Jurassic Park feels more action/horror focused to me, but it has a sci-fi premise.


> I don't know if Arrival has a book

Yep. Arrival is an adaptation of a Ted Chiang short story, Story of Your Life (https://en.wikipedia.org/wiki/Story_of_Your_Life)


True but Lucas doesn't have final say on classifying his movies. People who make things have an incentive to claim that their work is somehow special.

I'd just say Star Wars is science fiction with lots of fantasy elements, as well as lots of influence from different genres (Earth-based war films, adventure serials, etc.)

I see the fact that humans in Star Wars seem to be the "same species" as Homo sapiens to be similar to the fact that movies that take place in ancient Greece have the characters speak modern English. We are expected to understand that humans in Star Wars evolved there, not in Africa on Earth. It would be more realistic to have the main characters look very different from us, but it probably wouldn't make for as enjoyable a movie.


Why isn't Lord of the Rings "science fiction" but Star Wars is? They both have technology unknown to us.


LotR magic isn’t supposed to be technology: the literal god (Eru Ilúvatar) created everything by singing, the literal devil (Morgoth) has a different and incompatible song, some legendary things happened (only in-universe they actually happened and aren’t mere tales like our myths, and the elves, being immortal, can remember them), and then the literal spirits of literal minor angels (Maiar) took human form and did magic as the wizards.


Things like this can lie on a spectrum.

That said, Lord of the Rings has a lot more traditional things like elves and fairies and wizards, and a lot more "pure" magic. It is also completely lacking much of the technology of today, or even of the time it was created: no telephones or automobiles, for instance. (Star Wars has things that are more advanced versions of these.)


> FWIW this is literally the first time I’ve encountered someone claiming that it isn’t science fiction.

Well, there's Science Fantasy [1]: Jedi and the Force are very much Wizards and Magic.

I've heard it called Space Fantasy, too.

[1]: https://tvtropes.org/pmwiki/pmwiki.php/Main/ScienceFantasy


A bunch of classic “golden era” science fiction novels feature characters with unexplained mental powers, like Asimov’s Foundation series, Dune, Niven’s Known Space series, etc.

Those seem like obvious fantasy now, but from about the 1950s through the 1970s, a lot of serious people believed that there were undiscovered powers of the human mind that science was on the verge of discovering or confirming. Mental powers are therefore a common anachronism of sci-fi from that era.

Most science fiction stories are going to feature some elements that are essentially unexplained and therefore act like magic in the story. I think most folks would consider 2001 to be science fiction but the powers of the monolith are at least as crazy and unexplained as what the Jedi can do.


>Jedi and the Force are very much Wizards and Magic

Star Trek canonically has telepaths, telekinesis, and godlike beings that can alter reality with mere thought.

Why don't people call Star Trek space fantasy as well, when its universe is even less grounded in realism than Star Wars?


Three movies were referenced in the comment I responded to: Return of the Jedi, Star Trek: Into Darkness, and 2001: A Space Odyssey. Two of those movies are definitely not focused on future technologies.

Many people call Star Wars "science fantasy", but I'm extremely confident that this is used less frequently than "science fiction" to describe it and I am absolutely confident that I'd be able to find marketing copy by LucasFilm or distributors describing it as science fiction, even if that aggravates people who are really into more cerebral sci-fi.


It's space fantasy or space western, not science fantasy.


> For example in the JJ Abrams movie Star Trek Into Darkness we see the Enterprise operating at depth in an ocean, then dramatically zooming away into space.

Subverted in Futurama: https://www.youtube.com/watch?v=7GDthiBGMz8


What about a craft like the Space Shuttle? It's powered through the atmosphere by a set of powerful rocket engines. The forces on its hull while it is accelerated through the atmosphere by rocket engines or gravity would seem to be analyzable as atmospheres of pressure. So the Enterprise and Bessie both need to be designed for more than 1.0 atmosphere, since both ships encounter gaseous environments while under acceleration.


What you're talking about is called the max q condition[1]. It's definitely a significant design consideration, but I believe the loading would be very different when the rocket is plowing through the air in a particular direction, compared to an "equivalent" hydrostatic stress applied uniformly over the surface; so even though the structure might be fine with the first, it wouldn't survive the second. For instance, think about corrugated or honeycomb materials - they often have a "strong" orientation and a "weak" orientation.

[1] https://en.wikipedia.org/wiki/Max_q


> the monolith is made out of a material that humans can’t dent or cut. Why so strong? It’s basically just an automated radio.

The monoliths were supposed to be able to assess and monitor the activity and level of development of local species, possibly telepathically, catalyze evolution to develop intelligence, and encourage technology use. A monolith also transformed David Bowman into the Star Child. And of course, they had to maintain themselves over evolutionary timescales. So they were much more than just radios.

In the sequels (which perhaps don't count), they became ridiculously overpowered (spoiler alert), replicating exponentially to perform Kardashev Type II feats of solar system engineering, and hosting multiple uploaded intelligences like Bowman and HAL.


> The idea is: this advanced civilization has such command over physical technology, that they can effortlessly engineer unnecessary strength without losing any of their designed performance.

Are you sure the idea isn't just "we never thought about it"?

Old cars are sturdy and robust while new cars are extremely fragile and will total themselves at the drop of a hat. That's not because the 50s possessed ancient technology lost to the modern day. We put a lot of effort into designing our cars to destroy themselves under stress. We do that so that more of the energy of a crash, should it happen, will go into deforming the car (since that's easy now!) and less of it will go into deforming the passengers.


> Are you sure the idea isn't just "we never thought about it"?

There are two sides to it. On the one hand, the new cars are "fragile" on purpose, because they're literally a big, single-use inertial dampener, with wheels and an engine.

On the other hand, there are a great many appliance categories where you can clearly tell the old models, made in the 70s and earlier, are indeed sturdy and robust, and the new models are just flimsy. These are appliances that don't experience the kinds of forces cars have to account for. It's hard to give any explanation here other than the obvious one: the old models were made before the market optimized the quality away.


Old cars were not sturdier or more robust than new cars. That’s a myth. You can Google “crash test old car vs new” or something like that to see examples.


'90s Volvos were absolute tanks. My friend had one and it got rear-ended while parked, and both the car that rear-ended it and the car in front of the Volvo got totaled. The Volvo was pretty much fine and just needed a new bumper (which was easily done, as the bumpers were distinct from the bodywork).


I believe it's at least partially true, because new cars are designed to be damaged in crashes, to better protect the people inside them.


True. Old cars had real bumpers though.


About the Abrams movie: Too bad the same factor of safety wasn't applied to the buildings, where that should have been even cheaper.

Your point is good though. Just like we're now starting to put wi-fi chips into absolutely anything just in case, why wouldn't an advanced civilization simply use their super strength nanoparticles for everything? Why go out of your way to use worse materials?


We use super-strength thousand year plastic for one time uses. It's a big problem.


It is only a problem for single planet species. Advanced multi planet or multi system civilizations may worry less about that than we do.

Also, if biodiversity drops, and people get used to living in closed ecosystems to limit the damage from such harmful chemicals, then over a long enough period everyone will forget what was there before, so making more bad things will not impact their quality of life anyway.


It's also very unlikely that nature won't adapt to plastics. They have great amounts of chemical energy, and they're everywhere.

Already you can see great decreases in plastic lifetimes, and they're decreasing faster and faster. The lifetime advantage plastics provide won't last.

e.g. https://en.wikipedia.org/wiki/Ideonella_sakaiensis

Without microbes breaking it down (which can be achieved in a number of ways, and generally the process is at least slowed), wood can last a millennium.


In fairness, a building-sized battleship was crashed into the buildings in Into Darkness. What would happen if someone catapulted the (CVN-80) USS Enterprise through a bunch of skyscrapers?


> why wouldn't an advanced civilization simply use their super strength nanoparticles for everything? Why go out of your way to use worse materials?

For the same reason we've spent decades making our cars structurally weaker?


> Even as far back as the movie 2001, the monolith is made out of a material that humans can’t dent or cut. Why so strong? It’s basically just an automated radio.

It's an automated radio that is supposed to operate unattended for millions of years. You want a strong case for that.


Back in the old Star Wars RPG from West End Games, hull integrity was amplified by ballistic shields that were always on, the reasoning being that the hull had to withstand debris hits at the ridiculous speeds of space travel. Given the speed and energy involved, I would assume kinetic impact matters a great deal. Pressure is different though, as even the hardiest spaceship only has to hold in about 2 bar of pressure, give or take.


I can sort of buy this.

In Gravity things are probably a lot more robust than they actually should be, but collisions with debris or other objects tend to be immediately fatal for the vehicles involved. Debris at high velocity goes straight through stuff.

Similarly, in For All Mankind, most stuff is pretty fragile and volatile, as you’d expect for Apollo-era tech. It’s only the later stuff that the show makes up which starts to show much resilience.

In the Expanse, technology is considerably more advanced, but still vastly inferior to any Star Trek or Star Wars vehicle. And in the Expanse we see the hulls of ships routinely get punctured, and a single torpedo or rail gun hit is often fairly decisive. Even the Donnager is forced to shut down its reactor due to a torpedo hit.

And even in non-Abrams Trek, in an alternate timeline Voyager still survives a hard crash on a planet reasonably intact, even if it was fatal to everybody on board. Possibly only because the inertial dampeners were offline.


> in an alternate timeline Voyager still survives a hard crash on a planet reasonably intact, even if it was fatal to everybody on board

Similarly, in the prime timeline, the Enterprise-D ends its life with its engine section exploding and the saucer section crash-landing into a forest. Externally, it looked pretty salvageable, but the insides were completely smashed. The crew survived, possibly because the inertial dampeners still worked somewhat (I don't remember now), or maybe just because it flew in on an almost flat trajectory and used the aforementioned forest to slow itself down.


I think we need to distinguish between must-have and nice-to-have safety. To give an example, a car must not spontaneously disintegrate during normal highway driving. That's what the safety factor covers. If you floor it during heavy rain, start skidding, and crash into a tree, then that's kind of on you. Nevertheless, the car will try to save you with ABS brakes, crumple zones, airbags, and what not.

So the future you speak of is already kind of a reality with cars. But maybe expressing it as a pure factor is the wrong way to think about it.


And we're also guilty of this when we use multi-gigahertz, multi-gigabyte computers with advanced GPUs firing up Excel to make simple computations, or a multi-ton car to drive one person half a mile.


I wonder if the strength of the monolith isn't targeting durability on the order of eons?


"""A non-empirical alternative to the factor of safety has been around since the 1940s, but still doesn’t have widespread adoption. I think the image below describes the concept, called probabilistic design, best.

"""

This is _exactly_ LRFD (Load and Resistance Factor Design), which has been in the Civil Engineering building codes since the mid-80s, and became common in use in the 90s when I was an Engineer (in training).

(It's the difference between the older green book and the newer (at the time) silver steel design handbook)

It was absolutely drilled into us in school, though, that Safety Factors and LRFD factors covered material and other uncertainty; they did not cover blunders.


Did you do civil? I did mechanical and have barely heard of LRFD and probabilistic design. It's not very common in mechanical; maybe in aerospace.


A large concern in civil design loads is rain, wind and earthquakes.

These are all probabilistic in nature, you do not generally seek to build something to be flood proof. You instead design it to survive perhaps a 1 in 100 year flood, or perhaps 1 in 1000 if it's important.

There is a trade-off being constantly made, between the price of a project and the (estimated - of course) chance of it still being standing in a year.
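
To make that concrete (a back-of-envelope sketch; the 50-year service life is an assumption of mine): the chance of seeing at least one event bigger than the design event over the structure's life is 1 - (1 - p)^years.

  # Back-of-envelope sketch (assumed 50-year service life): probability of at
  # least one flood exceeding the design event during the structure's life.
  def lifetime_exceedance(annual_prob, years):
      return 1.0 - (1.0 - annual_prob) ** years

  print(f"{lifetime_exceedance(1 / 100, 50):.1%}")   # ~39.5% for a 1-in-100-year design
  print(f"{lifetime_exceedance(1 / 1000, 50):.1%}")  # ~4.9% for a 1-in-1000-year design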


Yep. I did Civil/Structural.


Factor of Safety, or F.S. for short, was something we civil engineers were taught never to forget. You got grades deducted if you solved the problem correctly but forgot to include it in the very last line.

It makes sure we calculate the loads correctly and use appropriate materials. You can't fix a bad design.

The Arkansas bridge that has been in the news lately probably would have collapsed if it wasn't for the F.S. https://www.ardot.gov/divisions/public-information/40-ms-riv...


>It makes sure we calculate the loads correctly and use appropriate materials.

No, it makes sure that nobody dies when you calculate the loads incorrectly and use inappropriate materials.


I think the idea is that, if you calculate the load incorrectly enough, or use inappropriate enough materials, the safety factor will not save you. But, if you have done those things correctly, then the safety factor should be enough to save you from normal unknowns, unexpecteds, etc.


Yeah, five things going on:

  Design errors
  Probabilistic nature of the loads applied.
  Material defects
  Fatigue
  Deterioration
All structures have a service life and it's the service lifetime an experienced engineer is trying to hit.

For the impeller in a rocket turbopump, the service life is like 5 minutes. For the impeller in a hydroelectric dam, it's 50 years.

The other thing that one of my professors pointed out was that 80% of engineers end up designing one-off designs, where the NRE cost is a lot more than the material costs. Shaving the safety factor is a false economy.


Fabrication errors. Off by one errors.


In what discipline? In Electrical Engineering, a fuse, circuit breaker, or receptacle design may be used thousands of times and in dozens of redesigns.


Much of civil engineering is one-off designs.


Also helps when management decides to defer maintenance for a decade or two and someone drives a truck over it that's just a bit over the weight limit. What could it hurt?


Perhaps implicit institutional knowledge of large safety factors is why management feels safe deferring maintenance.


Can’t tell if this is sarcasm.

Because the goal of civil engineering is building man-made objects with public safety in mind.


Another way to look at the factor of safety is as margin for error. Implementation variance, material variance, etc. can all go wrong if something is designed to be exactly safe.

You need to know that something is redundantly safe, and which parts.


Material variances are included where the capacity is calculated. Materials with higher variance, like concrete (implementation variance) and wood (material variance), get their capacity lowered more than more consistent materials like steel.

The factor of safety is above and beyond material variance. You calculate the worst case load combinations for that component, then you check your factor of safety. Civil engineering is relatively conservative in its estimations for everyone's safety.

In my experience, serviceability requirements (like reducing uncomfortable deflections that don't threaten the safety of the structure) often govern, rather than the ultimate capacity.


Aren't all the major bridges in New York built with ridiculous safety factors? Isn't that why these century-old bridges, built for carriages and small trucks in a city of a million, can deal with 2021?

It's fascinating to me, and I feel like over-spec'ing certain chokepoints in infrastructure makes sense like this.


Then there's the factor that, in the times such a bridge was built, having it collapse would've been a larger catastrophe than today. Today, we can quickly fix things and build another. In the old times, that bridge might've been the only bridge making trade at all possible, and it might've taken years to stack stones.

An unnecessarily strong castle takes you more time and resources to build.

A slightly too weak castle means you die, your family dies, and you lose all wealth and power.


> An unnecessarily strong castle takes you more time and resources to build.

> A slightly too weak castle means you die, your family dies, and you lose all wealth and power.

You're really glossing over a lot of important concerns here. There are many, many ways to lose all your wealth and power, and lots of them might have been avoided if you'd had a few more spare resources.


Actually one of the reasons big infrastructure projects were undertaken in the middle ages was just the opposite: to provide a reason for an economy to exist. Something for the population to do.

Most spectacularly this is seen in cathedrals, but castles, certainly some castles, definitely show this. I would argue pretty much everything the Romans built is more than a little overspecced, certainly at construction time. Do you really need 3-story POOLS, meaning buildings with a pool on -1, a pool on the ground floor, a pool on the first floor, and a pool on the second floor? Yes, the higher ones were apparently rented out to very rich Romans, or more often provided as favors to them, so there was some function, but... come on. They were constructed mostly to look very convincingly like 3-story pools without actually being that. Despite that, none of those buildings survived for very long. But "over the top" can be said about many Roman structures, from the Pantheon to the Aya Sofia.

So having an unnecessarily big infrastructure project has its own advantages that actually increase your odds of survival, and that didn't start with the space race.


> one of the reasons big infrastructure projects were undertaken in the middle ages was just the opposite: to provide a reason for an economy to exist. Something for the population to do.

This isn't really compatible with anything I know about the middle ages. Can you show someone of the time writing or otherwise demonstrating that he doesn't want the cathedral, but he's afraid of what the population will get up to if they're left idle?


The funny thing is, a safety factor is a factor after all. It takes just one factor being too small for the whole construction (pun intended) to collapse in the worst imaginable manner.

This happened recently in Italy: https://www.bbc.com/news/world-europe-57219737. "Engineers" didn't consider the safety brake essential (I mean, why do you even need it?), and Murphy took his chance.


That was the original article, which didn't have the cause; I hadn't seen that they'd decided it was the disabling of the brakes [1]. A couple of days ago it was "not sure which was first: support cable snap or emergency brake".

It seems like they’ve decided that the support cable was functioning after the main cable broke.

[1] https://www.nbcnews.com/news/world/blame-italy-cable-car-dea...


I'm liking the Robert Norton chart about 2/3 of the way down, showing how safety factors need to be adjusted quite radically once we think about how reliable or rickety our estimates might actually be.

Particular kudos to thinking harder about whether we've truly tested the actual environment where our product might be used.

I wish social scientists would do the same in controlled studies of human behavior -- which are then extrapolated to the ways that people make real-world decisions. A particularly vexing example involves the way that psychology students make decisions in short experiments involving small amounts of money or other rewards. (Endless variations on the "marshmallow test," etc.)

Knowing what a college student will/won't do for a whimsical $5 reward says almost nothing about how an adult on the brink of poverty will balance bigger, more difficult decisions. Yet we apply a 95% confidence level to the college-student experiment and think we've learned something about the power of all financial incentives.


I agree completely regarding the social sciences. I think the devil is in the details, though, and to return to the original article, why 1.5 and not some other number? The author provides an answer, but the answer is only partially resolved.

I feel like some empirical study is needed, of how deviations from models occur in different fields, in a way that's applicable across fields. Maybe that's the same as the probabilistic analysis being discussed in the article, but what I have in mind is higher-level than what I understood that to be. I'm thinking of some meta-analytic survey across disciplines of what the safety factor would have needed to be to avoid various catastrophes of different sorts, or how much models are off in different areas. Maybe there is a field of study like this?


I'm sure engineers across geography and time all use a Factor of Safety.

I'm almost as sure that everyone keeps using the number people used before they joined the profession. Because if you decide to lower it, and a disaster happens, you are in very deep shit.

So once set, the number will tend to stick until forced to change by something extraordinary.

Which makes me very curious about how the number varies between independent domains. Do Japanese, Norwegian and US bridge builders all use the same number? Do builders of bridges, skyscrapers, and dams use similar numbers?

The answer would tell us something about how arbitrary these numbers are.


In the 90s, I was talking about this problem to a structural engineering professor. He observed that they now had computers fast enough to do Monte Carlo simulations of buildings where the strengths of the beams and fasteners (and the number of bolts correctly inserted) can be varied. Then you see if it falls down under the design load.

I asked whether it gave different answers than the standard 1.5 safety factor. As I expected, the answer was yes. It turns out that in a conventional skyscraper, there is a tiny proportion of the structure that needs to be done right. This is good news as you can x-ray those beams, and check and double check that all the bolts are installed correctly. The cost to do this is tiny. The rest of the building can be built with an effectively smaller safety factor, and it will be fine. This leads to overall cost reductions.
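
A minimal sketch of the idea (my own toy model, not the professor's actual simulation): randomize member strengths around their nominal values and count how often the weakest link gives way.

  import random

  # Toy Monte Carlo (my own model): ten members share a design load equally;
  # each member's strength is nominal x 1.5 FoS with 10% manufacturing
  # scatter, and the weakest member governs in this simplified structure.
  random.seed(1)
  N, DESIGN_LOAD, FOS, SCATTER = 10, 100.0, 1.5, 0.10
  per_member = DESIGN_LOAD / N
  trials, failures = 100_000, 0
  for _ in range(trials):
      strengths = [per_member * FOS * random.gauss(1.0, SCATTER) for _ in range(N)]
      if min(strengths) < per_member:
          failures += 1
  print(f"estimated failure probability: {failures / trials:.1e}")

Sweeping the scatter per member is what surfaces the handful of members whose quality control actually matters.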


"Safety factors started being formalized in the mid-1800s for bridge building, where factors as high as 6 were used to cover for the massive inconsistencies in the quality of early cast iron."

I'm not a real Civil Engineer, but I was a graduate one from 1991; I'm now an IT bod. Anyway, Civ Eng uses established factors of safety, or safety multipliers, or safety factors, or whatever. Structural steel uses 1.2, I recall: you work out your worst-case (in 100 years; look up tables) bending moment and multiply by 1.2. Yet despite 2000 years of really solid knowledge, our bridges still fail: Tacoma Narrows (who knew the bloody things fly and shake) or the London Millennium Bridge - lol - shake, shake, shake the room - BOOM.

The thing about safety factors is that they need to be derived conclusively. In Civ Eng, wood is a bit wayward, so the safety factor for it is quite large compared to steel.

I have no idea what you do for space thingies (yes I do) but I would expect my first 50 experiments to blow up - I need to explore the extremities.

If I ran a Space Agency I would say something like: "Soz, we are going to make some cracking firework displays first and then we will know what to avoid."


> If I ran a Space Agency I would say something like: "Soz, we are going to make some cracking firework displays first and then we will know what to avoid."

This is the modus operandi of SpaceX: they just keep tweaking their rockets and launching experimental tweaks as much as they possibly can without risking bankruptcy, since failures teach them more than successes.


When it comes to the Artemis program, it's kinda funny listening to all of SpaceX's competitors.

"All of SpaceX's prototypes have blown up"

Meanwhile they don't even have a design that's gone past paper.


I used to work for a place that built fast cars. We had a mate who used 5.0 or more for the factor of safety everywhere. Everything he designed was about 30% heavier than it needed to be, but we could easily adapt his parts for prototyping, because it never mattered if you drilled a hole through the middle or cut them in half. They were plenty strong and reliable.

We called this the “Factor of Lloyd” and we had a few sayings about it.


I'm not a civil or aerospace engineer so this could be built into the safety factor models already. Reading the post had me wondering:

If a safety factor adds mass, and additional mass requires additional force to accelerate, is a lower safety factor sometimes safer, since you'll lower the amount of force required and thus increase the structural safety?

Calculating safety factor for a given scenario feels like a complex multivariable equation. Is that the case?


Also not an engineer, but watching a real-world example of that thought process was fascinating during NASA and SpaceX's design process for the Dragon capsule, which contained a requirement that the capsule have a statistical probability of loss-of-crew below 1 in 270 flights - the alternative design measure in TFA.

One challenge was that NASA's modelling of in-orbit micrometeorite strikes was complex, and there were concerns that extra complexity to provide redundancy and armor would make an overall less-safe vehicle.

“Blindly striving to achieve a statistical loss of crew number may drive you to design a system that is less safe" -Bill Gerstenmaier, NASA associate administrator for human exploration and operation [0]

[0] https://spacenews.com/commercial-crew-vehicles-may-fall-shor...


Yes, all (essentially all) engineering design ends up being multivariable. Even for something as simple as a cantilevered beam supporting a load, you can change the shape, material, material treatment, length, and width/height, all of which affect cost. Usually, due to limits of manufacturing and the availability of standard parts, the exploration space can be greatly reduced.


Probabilistic failure analysis is certainly something engineers do for determining system risk; else how do you determine how many redundant components to include? It seems like having a higher safety factor just means having a lower probability of failure, and these two concepts are very compatible.
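
For instance (a sketch assuming independent failures, which real systems rarely guarantee; correlated failures make this optimistic):

  # Sketch assuming independent failures: with n redundant components of
  # failure probability p, the system fails only if all n fail together.
  def redundant_failure(p, n):
      return p ** n

  for n in (1, 2, 3):
      print(n, f"{redundant_failure(0.01, n):.0e}")  # 1e-02, 1e-04, 1e-06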


The probabilistic failure analysis (as practiced in LRFD) is essentially a pencil sharpening exercise where the margins can be reduced a bit. For example, some loads are better known than others (e.g., dead load vs live load), some materials have better QC or a more uniform quality than others (think concrete vs steel).

The end result is generally in the ballpark of the old factor of safety, but might be up to 10% less in some cases.
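
A purely illustrative comparison of the two bookkeeping styles, using the common 1.2D + 1.6L combination and typical steel factors (the loads are made up):

  # Illustrative only: typical steel factors, made-up load effects.
  D, L = 50.0, 30.0  # dead and live load effects

  # ASD: one blanket factor on the resistance side.
  FS = 1.67
  asd_required = (D + L) * FS  # ~133.6

  # LRFD: a bigger factor on the less-certain live load, plus a
  # resistance factor phi on the capacity side.
  phi = 0.90
  lrfd_required = (1.2 * D + 1.6 * L) / phi  # ~120.0

  print(asd_required, lrfd_required)  # LRFD comes out ~10% lower here

The well-known dead load carries a smaller factor, which is where the pencil sharpening shows up.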


(To the OP title: More expensive than disaster?)

Many comments here relate more to one-off design. There is also the medium-high-volume manufacturing end. There, a prototype run might be in the dozens or hundreds of units, more than the entire manufacturing run in other heavy industries.

As the OP hints, "safety factor" is not the only term to use. A design margin (including reduction in margin) can be planned with one or more motivations: safety, reliability, weight, volume, reduced BOM costs, unit costs of repair, fleet costs of repair, logistic and warehouse costs of parts for repair, planned obsolescence, and so on.

Probabilistic design, also realized through "Monte Carlo" analysis, can take into account multiple simultaneous non-linearity in various models, where symbolic or formula-based analysis might fail.

For example (and roughly speaking), if one has millions of miles of over-the-road data, say of wheel-to-road forces or geometric road or track profiles, then one might manage to calibrate the following together: 1) a specific vehicle physical model, including parts tolerances and probabilistic discrete flaws; 2) material cycle-fatigue damage properties; and 3) some set of Weibull-distribution-like parameters as an intermediate in predicting failure rates and "lifetimes." AFAIK the kind of business analysis one might do could include predicting how many parts one should overproduce and warehouse (in a one-time batch) to service in-warranty and post-warranty repairs out to N years.

At that scale it can also become sociological. "Safety margin" is a loaded term when it comes to liability and imprecision in intent. You reduced the safety margin, as it says right here?!!

Not a bad article, but there could be a whole article on ramifications of different margin-related wording, high-N statistics, and explicit accounts of simultaneous goals.


> For example, a NASA document is explicit in saying that a factor of safety only covers #1 and manufacturing tolerances and does not cover #2 - #5

Why do different fields have different definitions? Because they're different fields! Aerospace doesn't really worry about material imperfection because they do very intensive quality inspections that aren't feasible for bridges (e.g. X-rays that can't effectively be done outside). And early nuclear design focused on "imperfect theory" because (of course) the theory at the time was somewhat uncertain.

I think this article is overly dismissive of a proven way for a whole industry to learn over time about risk management. e.g. Boeing doesn't want to share the distribution of their material strengths. But they're happy to share some safety factors that don't reveal a lot about their business but help out the whole industry.


It’s kind of mentioned in the article, but to be more explicit: reducing safety factors has asymmetric risk vs reward. Reducing the factors “just” lowers cost or improves performance. But if your field is padding by 50%, then you need to tradeoff an “up to 50%” cost reduction (or similar) versus “had a catastrophic failure”.

So, reducing the padding from 5x to 1.5x was already most of the benefit. If you were at 1.2, there are probably better ways to shave costs than reducing your unexpected force multipliers. It’s definitely attractive to lower cost / increase speed / whatever if you truly think it’s “free”, but the benefits are diminishing.


I feel like that's what was so unconvincing to me in this article. The only argument they gave against over-engineering was cost. At that point you have to decide how much risk is worth how much in savings. Is a 1% increase in the likelihood of the bridge failing in extreme conditions and killing 10 people worth a savings of $100k? What's a human life worth? What's a low increase in risk to a human's life worth? How much of an increase in the tax rate is reasonable to reduce the likelihood of someone dying due to a structure failing?

On my end, it's pretty easy. Am I personally willing to pay an extra $100 in taxes a year to measurably reduce the likelihood of another resident dying due to structure failure? If there's a quantifiable advantage to the increased cost, then absolutely. Will the increases in my taxes reduce a 4% chance of failure to 3% over 25 years? Heck yeah. Even better, how about we find some other facet of the budget that does not benefit the populace? What business is being subsidized by my taxes that does not benefit anyone who needs the help? There's lots of that here.

Trying to decide whether to cut the weight on an airplane? How much money will it save? What does that do to the price of a ticket? Is a savings of $10 per ticket worth a 1% increase in the likelihood of the plane crashing into a cornfield in the next 10 years? It sure doesn't seem worth it to me.

Honestly, those cost savings don't usually go into decreasing the price of tickets anyway. In my pessimistic view of reality, what actually ends up happening is that I pay that $10 per ticket either way and the reduced cost leading to that reduction in safety for the passengers ends up going into some executive's pocket. So even the argument that reduced costs are a good thing for the average person isn't really an argument at all. The person who might die from the decision is never going to see the benefit anyway and at that point, this starts looking like a pretty terrible deal for the average person.


Spacecraft seem like one of the few cases where a small decrease in safety factor can result in a big decrease in cost. The thing about rockets is that they have to lift their own fuel, so if you decrease the mass by 10% you can leave off not just the fuel needed to lift that part of the rocket, but also the rocket fuel needed to lift that amount of rocket fuel, and so on. Conversely, every additional pound for payload you get would otherwise have to be achieved by a much more significant increase in the size of the rocket.
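
The compounding falls out of the Tsiolkovsky rocket equation; a sketch with assumed vehicle numbers:

  import math

  # Sketch with assumed numbers: liftoff mass dragged along per kilogram of
  # dry mass, via the rocket equation m0/mf = exp(dv / ve).
  delta_v = 9400.0  # m/s, roughly the cost of reaching low Earth orbit
  v_e = 3300.0      # m/s, effective exhaust velocity of a kerolox engine

  mass_ratio = math.exp(delta_v / v_e)
  print(f"mass ratio m0/mf: {mass_ratio:.1f}")                      # ~17.3
  print(f"propellant per kg of dry mass: {mass_ratio - 1:.1f} kg")  # ~16.3 kg
  # For a single stage, shaving 1 kg of structure saves ~16 kg at liftoff.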


Not just 'stress safety' but all the other things.

On a NASA project I was involved with, we logged every single bolt that went on the device: where it came from, the batch number. And we had to keep all the old software around in the event we had to reconstruct something.

The amount of overhead was pretty amazing.

Most of that is for safety.


Overhead that saves lives isn’t really overhead


The factor of safety probably doesn't have a specific definition because its application is part-specific. It's an axiom, like the 5-sigma rule, not a property of the system.


Exactly, it buffers against modeling inaccuracies


This is so, so, so similar to the equity risk premium / default spreads, and even interest rates themselves. There's no particular reason why the equity risk premium should be at 4% or 6% or 8%. But we do know that if it dips too low, bad shit happens. Taleb wrote quite a bit on this topic, most clearly about the specific case of the realized-vs-implied volatility gap, aka the VIX is too expensive, and at the same time you can't short it to make money, you'll lose money, and yes, did I say it's too expensive at the same time.

And yeah, the mentioned finance "risk factors" also generally keep going down over the decades/centuries, in a similar fashion: the markets dare to use a slightly lower number, over time nothing too bad happens, and more people jump on the bandwagon.

The "cost" of using higher than needed "financial risk factor" is easily in the trillions per year.


For context, the FS of elevator cables is ~10 (depending on the country).

EDIT: What's usually limiting in elevators (and that's why they say "max 4 people") are the brakes.


> EDIT: What's usually limiting in elevators (and that's why they say "max 4 people") are the brakes.

Even here there's a safety factor (self-limiting packing density of people in Western countries... "Uh... I'm gonna take the next one").


Or viewed another way, the least expensive number in engineering.

Because in many applications, the full cost of failure can be unimaginable.

Also, fudge factors have a tangible benefit: time. They permit declaring a design "good enough" sooner.


I've actually used this concept at an old job. When I was given a new project, the business people always wanted it done by a particular date, but it was always an unrealistic timeframe. I'd then spend some time thinking about how long I thought it would take me, but I would always add 2 weeks or 25% to the estimated time, whichever was larger, just to deal with the human element.

This could include changing requirements, poor communication, illness, being blocked by other changes, etc.

I learned that you can get away with giving people extended deadlines as long as you hit them.
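
The rule of thumb in code form (my paraphrase of the above):

  from datetime import timedelta

  def padded_estimate(raw: timedelta) -> timedelta:
      # Pad by 2 weeks or 25%, whichever is larger, for the human element.
      return raw + max(timedelta(weeks=2), raw * 0.25)

  print(padded_estimate(timedelta(weeks=4)).days)   # 42: the 2-week floor wins
  print(padded_estimate(timedelta(weeks=12)).days)  # 105: the 25% term wins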


You could just square your quantity before applying the 1.5 factor. Instead of "Our shuttle is safe up to 150% of the required speed!" design for "safe up to 150% of the required kinetic energy (1/2 m v^2)". Then you only need to design up to sqrt(1.5) ≈ 122% of the required speed.

(My point is that the scaling of the importance of quantities is arbitrary so a single safety factor doesn't make sense to be applied to every quantity.)


The definition is

> breaking force divided by the expected force

Note force being used here, not energy or speed.


It depends on the application, but usually yield force (the point where deformation starts to become permanent), not breaking force.


If you had the luxury of throwing away bridges or planes or space shuttles to test every possible circumstance, then I guess eventually the safety factor could conceivably come down to 1.0, right? You would've satisfied yourself that nothing in the real world was not in your simulations?


I don't think so.

- You'd also need to let them stand for 100 years or so to get a better view of all possible weather events. Oh wait, weather events are becoming more extreme.

- Materials of production are imperfect. We're well past poorly made cast iron, but maybe something wasn't quite perfect when that bolt was cast.

- Improper usage or external emergencies may still come into play.


A safety factor of 1.5x does not guarantee that nature will not throw 1.6x the expected force at you. That's why the author of the article calls it a "libation," because it isn't related to anyone's knowledge about the uncertainties in the situation at hand.


The author's point of view here is somewhat undermined by his own article, in which he points out that the figure has been adjusted downwards over time in response to experience - in other words, it has been empirically determined. It might not be the most efficient or flexible way to handle risk, but it is not just a faith-based number, either.


> If you had the luxury of throwing away bridges or planes or space shuttles to test every possible circumstance, then I guess eventually the safety factor could conceivably come down to 1.0, right? You would have satisfied yourself that nothing in the real world was missing from your simulations?

Isn't that the main advantage Space X has over NASA?


It's not really Space X vs. NASA. NASA doesn't build rockets; ULA is probably the relevant comparison. NASA's also not really into exploding rockets, so saying Space X is about shaving safety factors is pretty simplistic.


The real number engineers should be considering is not the factor of safety, but the probability of failure.

The probability of failure should be calculated considering material defects, forces larger than predicted, simulation errors, and all the other causes the factor of safety is designed to protect against.

Then the engineering process can allocate those probabilities in the most efficient way.

For example, in a rocket it might make sense to make the engine bells stronger (decreasing their probability of failure) while making the fuel tanks weaker (increasing their probability of failure). The overall probability of failure remains the same, but perhaps the craft ends up lighter/cheaper/better than it would be if all components just built in a fixed factor of safety.
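A toy sketch of that reallocation, assuming (unrealistically) independent component failures; all the numbers are invented:

    # Overall failure probability of a system of independent parts:
    # P(any part fails) = 1 - prod(1 - p_i)
    from math import prod

    def p_system_failure(part_ps):
        return 1 - prod(1 - p for p in part_ps)

    uniform     = [0.001, 0.001]     # same margin on engine bell and tank
    reallocated = [0.0005, 0.0015]   # stronger bell, lighter tank
    print(p_system_failure(uniform))      # ~0.0020
    print(p_system_failure(reallocated))  # ~0.0020, same overall risk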


Unfortunately, NASA was really bad at estimating the probability of failure. Feynman famously dissed their lack of mathematical rigour in this regard.

I'm guessing most engineers' grasp of proper statistics math is worse than their understanding of factors of safety.


Indeed, though my recollection of Feynman's most pertinent criticism is that he suspected the failure probabilities of the thousands of individual components - all very small numbers - were picked with one eye on how they moved the overall risk. He felt that the analysis, which should have been bottom-up (and nominally was), was actually conducted in a top-down manner.

The space shuttle program demonstrated that if your analysis overlooks just one scenario (such as a rigid O-ring or foam shedding from the bipod ramp), the risks can be much greater than you calculate.


Are we sure this is the most expensive? I would guess the number that represents the bit pattern of a Windows 10 ISO cost Microsoft more than $1.5 billion to find. I'm sure you can find other examples of numbers that were expensive to find.


Is your username a play on charcuterie?


No, it isn't


How is it possible that this number that costs billions of dollars doesn’t have a clear and universally accepted definition?

Because design constraints are not universal across all projects. Human error for something like the Space Shuttle is fairly minimal, considering the mind-boggling amount of training that's done. If you have an unreliable or unproven materials supplier, then that presents a design constraint that must be accounted for. Etc.


The director of United Launch Alliance called the FS a "factor of ignorance" in the tour with Smarter Every Day. He claimed there is no such thing for new vehicle designs: zero.


Any idea where I can learn some simple structural engineering?

For example, I want to build a small building. I want to be able to calculate things like the snow load of the roof, make a suitable truss, take into consideration things like the safety factor, and build with the materials to meet those requirements.

I can buy plans for similar constructions, but I don't see how they decide what size wood to use where, etc., and I want to gain an understanding of that.


You may want to try reading your local building code. This may not get into engineering principles at the level you’re seeking, but it may effectively provide a functional understanding of how (if not why) a certain material/size has been specified in its application, especially if you’re working from (or modifying) similar plans.


As the other commenter mentioned, the local building code is great, usually at the county or state level. I would take existing designs and just play with some alternatives: what if you use 2x4 vs 2x6 vs cinder block (or ICF) construction? Different roof pitches, larger rooms, etc. Pick up a statics book; that will give you some load calculations as well. Disclaimer: I'm a software engineer, not a civil one.


In finance and gambling, the Kelly criterion is used to evaluate maximum bet sizing while keeping the risk of ruin near zero. Using it correctly requires understanding your own expectation and variance to a high degree of confidence. Everyone in these industries uses Kelly to figure out the maximum size they can bet based on these careful expectation and variance calculations, then just divides by 2.
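For the curious, the textbook binary-outcome Kelly formula and that final "divide by 2" step look roughly like this (a sketch with invented numbers, not trading advice):

    # Kelly fraction for a bet paying b-to-1 with win probability p:
    # f* = (b*p - q) / b, where q = 1 - p.
    def kelly_fraction(p: float, b: float) -> float:
        q = 1.0 - p
        return (b * p - q) / b

    full = kelly_fraction(p=0.55, b=1.0)   # 55% win rate at even odds -> 0.10
    half = full / 2                        # the "then just divide by 2" step
    print(f"full Kelly: {full:.1%}, half Kelly: {half:.1%}")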


Clickbait title, not what I expect from an engineer.


This article is definitely written from the perspective of a novice without real-world experience. Empiricism is not a dirty word!


My intuition is that a safety factor is neither as safe nor as expensive as it seems; since people know they are working with safety factors, they start cutting corners.

It might be a good idea to lie to the contractors on a project about the margin of safety so they take better care to get it right. But that may not be possible.


Professional ethics is taken very seriously by practicing engineers. Not to mention that there are very serious liability consequences for falsifying this sort of thing; taking shortcuts is a good way to end up in prison.

In my country (Australia), as an engineer I face potentially very serious legal consequences if I certify something or sign off on work some other engineer has done (like the contractor in your example) without doing appropriate due diligence.

In my workplace, an industrial plant, before we adjust any safety factors there is at minimum a documented risk assessment process carried out. My work justifying the change will need to be reviewed and signed off by two other engineers.


I like the “most expensive number” hook, and enjoyed the read.

However, I’d hazard that the most unnecessarily costly variable in engineering (over time, in aggregate, as well as on most any given substantial project) is the number of days later a project starts than it could have if it had just gone ahead and started.


If your program doesn't need to care about public opinion, you can test to failure, and thus you can estimate the limits much better.

The public sees destructive flight testing as failure, so if your program relies on public money, that's a problem.


Is there a software equivalent of safety factor? How do you/would you calculate it?


My work uses SIL: https://en.wikipedia.org/wiki/Safety_integrity_level

But I think this is more of an electrical engineering thing (for control systems and interlocks and such); I'm not sure how applicable it is to general software.


In aeronautics there's the concept of DAL for software: https://en.wikipedia.org/wiki/DO-178C#Software_level


Reading this, it seems like the same idea could be used for software estimates as well?

Bake a factor of safety into your estimates depending on the type of work, the track record of the team that’s doing the work, etc.


I have seen attempts at it. One is to multiply the sum of your estimates by the number of different pieces you're estimating. So, if you have estimated three different pieces, multiply the sum of those estimates by 3 when deciding how long the whole thing will take. If you have estimated five different pieces, multiply the sum by 5, etc. The idea is that the more estimates you have made, the more likely it is that at least one of them will "blow up" and take far longer than expected.
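In code, the rule as I understood it (the numbers are invented for illustration):

    # Multiply the summed estimates by the number of pieces estimated.
    def padded_total(piece_estimates):
        return sum(piece_estimates) * len(piece_estimates)

    print(padded_total([2, 3, 1]))   # 3 pieces, 6 days raw -> 18 days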

Generally speaking, though, software is far less advanced than civil or aeronautical engineering in this kind of thing.


That strategy seems hopelessly sensitive to the exact granularity you're calculating things at. If you have four tasks, each with four subtasks, and you think each subtask will take 90 minutes, should you really be budgeting an entire quarter for the project?


It's interesting that it seems to be sort of unitless.

So, if the unit of your design is linear but the unknowns operate on the square or the cube of that, then the effective safety factor is much smaller.

For example, I could design a 12-inch bucket to hold water and then say I'd better make the handle twice as strong in case somebody fills it past the line indicating it is full. But of course that will be super-linear in its effect, and my safety margin is way less than a factor of 2.
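One way the super-linearity can arise (a toy model of my own, assuming the bucket tapers outward so the radius grows with fill height):

    # Water volume vs. fill height h for a tapered bucket with
    # radius r(t) = r0 + k*t; integrating pi*r(t)^2 from 0 to h gives
    # V(h) = pi * (r0^2*h + r0*k*h^2 + k^2*h^3/3), super-linear in h.
    import math

    def volume(h, r0=12.0, k=0.2):
        return math.pi * (r0**2 * h + r0 * k * h**2 + k**2 * h**3 / 3)

    print(volume(30) / volume(15))   # ~2.49: doubling the fill height
                                     # more than doubles the handle load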


Obligatory Calvin and Hobbes comic: https://www.gocomics.com/calvinandhobbes/1986/11/26


Calvin's dad is my role model as a father. My 7-year-old children believe that in the past the world used to be black and white.



Uh. I stopped at the Shuttle example. It was expensive because it was a bad design (or a good design meeting bad requirements).


ISO 9001


Apropos of nothing really but I always loved this story because it tells of building something so that all of its components were perfectly matched in longevity:

http://holyjoe.org/poetry/holmes1.htm

The Deacon’s Masterpiece

or, the Wonderful "One-hoss Shay":

A Logical Story

by Oliver Wendell Holmes (1809-1894)

Have you heard of the wonderful one-hoss shay, That was built in such a logical way It ran a hundred years to a day, And then, of a sudden, it — ah, but stay, I’ll tell you what happened without delay, Scaring the parson into fits, Frightening people out of their wits, — Have you ever heard of that, I say?

Seventeen hundred and fifty-five. Georgius Secundus was then alive, — Snuffy old drone from the German hive. That was the year when Lisbon-town Saw the earth open and gulp her down, And Braddock’s army was done so brown, Left without a scalp to its crown. It was on the terrible Earthquake-day That the Deacon finished the one-hoss shay.

Now in building of chaises, I tell you what, There is always somewhere a weakest spot, — In hub, tire, felloe, in spring or thill, In panel, or crossbar, or floor, or sill, In screw, bolt, thoroughbrace, — lurking still, Find it somewhere you must and will, — Above or below, or within or without, — And that’s the reason, beyond a doubt, A chaise breaks down, but doesn’t wear out.

But the Deacon swore (as Deacons do, With an “I dew vum,” or an “I tell yeou”) He would build one shay to beat the taown ’N’ the keounty ’n’ all the kentry raoun’; It should be so built that it couldn’ break daown: “Fur,” said the Deacon, “’tis mighty plain Thut the weakes’ place mus’ stan’ the strain; ’N’ the way t’ fix it, uz I maintain, Is only jest T’ make that place uz strong uz the rest.”

So the Deacon inquired of the village folk Where he could find the strongest oak, That couldn’t be split nor bent nor broke, — That was for spokes and floor and sills; He sent for lancewood to make the thills; The crossbars were ash, from the straightest trees, The panels of white-wood, that cuts like cheese, But lasts like iron for things like these; The hubs of logs from the “Settler’s ellum,” — Last of its timber, — they couldn’t sell ’em, Never an axe had seen their chips, And the wedges flew from between their lips, Their blunt ends frizzled like celery-tips; Step and prop-iron, bolt and screw, Spring, tire, axle, and linchpin too, Steel of the finest, bright and blue; Thoroughbrace bison-skin, thick and wide; Boot, top, dasher, from tough old hide Found in the pit when the tanner died. That was the way he “put her through.” “There!” said the Deacon, “naow she’ll dew!”

Do! I tell you, I rather guess She was a wonder, and nothing less! Colts grew horses, beards turned gray, Deacon and deaconess dropped away, Children and grandchildren — where were they? But there stood the stout old one-hoss shay As fresh as on Lisbon-earthquake-day!

EIGHTEEN HUNDRED; — it came and found The Deacon’s masterpiece strong and sound. Eighteen hundred increased by ten; — “Hahnsum kerridge” they called it then. Eighteen hundred and twenty came; — Running as usual; much the same. Thirty and forty at last arrive, And then come fifty, and FIFTY-FIVE.

Little of all we value here Wakes on the morn of its hundreth year Without both feeling and looking queer. In fact, there’s nothing that keeps its youth, So far as I know, but a tree and truth. (This is a moral that runs at large; Take it. — You’re welcome. — No extra charge.)

FIRST OF NOVEMBER, — the Earthquake-day, — There are traces of age in the one-hoss shay, A general flavor of mild decay, But nothing local, as one may say. There couldn’t be, — for the Deacon’s art Had made it so like in every part That there wasn’t a chance for one to start. For the wheels were just as strong as the thills, And the floor was just as strong as the sills, And the panels just as strong as the floor, And the whipple-tree neither less nor more, And the back crossbar as strong as the fore, And spring and axle and hub encore. And yet, as a whole, it is past a doubt In another hour it will be worn out!

First of November, ’Fifty-five! This morning the parson takes a drive. Now, small boys, get out of the way! Here comes the wonderful one-hoss shay, Drawn by a rat-tailed, ewe-necked bay. “Huddup!” said the parson. — Off went they. The parson was working his Sunday’s text, — Had got to fifthly, and stopped perplexed At what the — Moses — was coming next. All at once the horse stood still, Close by the meet’n’-house on the hill. First a shiver, and then a thrill, Then something decidedly like a spill, — And the parson was sitting upon a rock, At half past nine by the meet’n-house clock, — Just the hour of the Earthquake shock! What do you think the parson found, When he got up and stared around? The poor old chaise in a heap or mound, As if it had been to the mill and ground! You see, of course, if you’re not a dunce, How it went to pieces all at once, — All at once, and nothing first, — Just as bubbles do when they burst.

End of the wonderful one-hoss shay. Logic is logic. That’s all I say.


`null`?


"First rule in government spending: why build one when you can have two at twice the price?"


Ok, let's wing the entire thing from cardboard then!

* "poorly representative material test data available" that's 5+

* "extremely challenging environment" 5+ again

* "models are crude aproximation" is another 5+

So we should be able to get a cardboard space shuttle if we only use a safety factor of 125+ (5 x 5 x 5)! Moar cardboard! Great job, team!


You’re joking, but in aerospace they use margins of not 10x or 2x but 10%. On a 30m-high rocket. Just 10%. Here’s a tour with the CEO of ULA, as a bonus: https://youtu.be/OdPoVi_h0r0


Sounds like a few software projects I've been involved in over the years D:


This analysis is great. It makes for a great blog post, university lecture, or similar. But unfortunately you can't have these kinds of discussions in an engineering meeting or other shared context, because anything that could be perceived as arguing for less safety will attract opposition: there are tons of people who want the cheap virtue points and ass-covering that go with being the guy who's always in favor of more safety.


If you use such an argument in a real design meeting, you might be asked to leave. Either you have data to support changing the parameters of the assignment, or you keep your silence. Accusing everyone else of virtue signaling is an ad hominem attack that brings nothing to the table.


I think it tends to come up more in fields, such as aeronautical engineering, where there is a safety tradeoff. If you make the plane heavier, it may be safer from material failure, but now there may be less margin for error by the pilot because the plane does not respond the same. You have traded one kind of risk against another. I remember being present when a friend who was a civil engineer heard that they used a safety factor of only 1.5 in aeronautical engineering, and she was kind of shocked it was so low; when you don't have to fly the thing, you can afford to make the factor significantly higher.



