I rip the FAA for that. The level of air safety we have is because we have traditionally protected it fiercely. Within minutes of the second plane going down, the type should have been grounded until the FAA could figure out whether the crashes were related. If they turn out not to be, the planes can be ungrounded later.
There is a good reason we teach kids the allegory about the boy who cried wolf. If the FAA panics that easily, they will destroy their credibility just as quickly as if they take too long. If they really did get new evidence yesterday and it prompted their decision, then I think they deserve more credit than the other world agencies that freaked out in a matter of hours. I want our regulators to be cold and analytical.
A statistically unlikely pair of crashes of the same model of plane, in the same phase of flight, with the same consequences and similar altitude variations would seem to be justification enough for a prudent grounding.
If a company releases a new version of their software and it crashes twice in similar ways (much more frequently than the previous version did), that is also reason enough to roll back the release and investigate, especially if the software has any critical use cases.
If you release a software update and only 2 out of thousands of nodes are experiencing problems, would you honestly take the whole application offline over that? Most software companies would not, in my experience. They would take a close look at those two nodes first.
Obviously commercial aviation has much higher standards than software vendors generally do. Airline tickets don’t come with 5,000-word EULAs full of all-caps provisions like “THIS SOFTWARE IS NOT FIT FOR PURPOSE AND SHOULD NOT BE USED FOR ANY REASON AND YOU ASSUME ALL RESPONSIBILITY FOR ANY NEGATIVE OUTCOMES OF ANY KIND” etc.
Two crashes in close succession seem like an obvious abnormality. But because standards are so high, commercial aviation providers and regulators treat every single crash as a significant abnormality. So the question is: why don’t they ground all planes of a certain model after a single crash? Once you understand that, the same logic applies to two crashes.
The first fallacy in this is that (as I understand it) you only think about probabilities, when in fact what matters is the probability of something bad happening times the cost of that bad thing.
Clearly, the chance of a bug losing you $300 is much less of a problem than the chance of a bug killing 300 people.
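To make that concrete, here's a back-of-the-envelope expected-loss sketch in Python. All the numbers are made up for illustration; the $10M figure is a rough "value of a statistical life" of the kind US regulators use:

    # Expected loss = P(failure) x cost(failure). All numbers are hypothetical.
    p_bug = 1e-4                    # assumed chance the bug fires on a given run

    cost_software = 300             # dollars lost per software failure
    cost_crash = 300 * 10_000_000   # ~300 lives at a rough $10M statistical value each

    print(p_bug * cost_software)    # 0.03      -> negligible expected loss
    print(p_bug * cost_crash)       # 300000.0  -> same probability, enormous expected loss

The same probability of failure can be acceptable in one case and intolerable in the other; only the product of probability and cost tells you which.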
The second thing is that a software update may render previous knowledge useless.
Even if the old software ran successfully on 100k nodes, it might still be the case that 2 of the first 1,000 nodes running the new version fail, giving the new version a 0.2% failure rate. That doesn't mean you should take out all 100k nodes, but it does mean you might want to stop the 1k nodes running the new version, in accordance with the principles laid out above.
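A quick sketch of why those 2 failures in 1,000 nodes can already be damning, using hypothetical numbers (say the old version showed 1 failure across its 100k nodes) and a simple Poisson model:

    import math

    # Hypothetical track records.
    old_rate = 1 / 100_000              # assumed failure rate of the old version
    new_failures, new_nodes = 2, 1_000  # what the new version has shown so far

    print(new_failures / new_nodes)     # 0.002 -> 0.2%, 200x the old rate

    # If the new version were as reliable as the old one, failures among 1,000
    # nodes would be roughly Poisson with mean lam = 0.01. P(>= 2 failures):
    lam = old_rate * new_nodes
    print(1 - math.exp(-lam) * (1 + lam))   # ~5e-05 -> very unlikely to be coincidence

So the old version's 100k-node track record doesn't rescue the new version; the new version's own small sample is already strong evidence against it.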
Finally, I am not even sure why this is a yes/no question. How about: you're allowed to use the plane, but you cannot rely on a defense of "the FAA said it was ok" if something happens, so talk to your insurer first.
You're missing the point. It's not just about the fact that two crashes happened. It's that the crashes look suspiciously similar.
So this is both statistically suspicious and points at a possible common cause, making it all the more worrying that something is seriously broken in this design.
This doesn't really work: those 2 nodes out of thousands also have to kill a little over 300 people between them when they go bad, whereas other software releases haven't killed anyone.
Risk management is not rational at all times. If you have a reasonable expectation that the risk has increased, doing nothing is far worse than crying wolf with good intentions, especially if you are responsible for the consequences afterwards.
Imagine the story went like this: a girl hears the story about the boy who cried wolf. Upon seeing a pack of wolves, the girl doesn’t cry wolf, because she isn’t 100% sure it isn’t in fact a pack of stray dogs. Villagers die.
The original story teaches kids not to lie to gain attention, because nobody will believe you when you finally do face a serious risk. It has nothing to do with how we should treat real risks, where we have real indicators of increased danger.
Climate change is the ultimate example of such a risk. Better safe than sorry, because being sorry could easily mean the end of the species. An airplane failing is arguably a tiny thing in comparison, but once you have been warned about a risk you can mitigate and you do nothing, you own it. Living is dangerous, and everything carries a residual risk that remains even after every reasonable mitigation.
But this isn’t about totally ruling out all remaining risks; it is about ruling out risks that are easy to rule out.
If somebody told you that both brakes on your bicycle were broken, you probably wouldn’t lend it to anyone until you had convinced yourself of their condition. You would do this despite the fact that cycling can be dangerous even with a fully functional bicycle.
I think a shorter way to express the concept is that there are "known unknowns" (risks we can quantify) and "unknown unknowns" (risks we can't). When people become aware of the possibility of the latter, they may panic. Of course, sometimes that proves to be the right decision and sometimes not. There is no "rational" solution.
The first accident involving a 737 occurred more than 4 years after introduction. This is 2 fatal crashes in the first year with only a few hundred aircraft delivered.
And that was in the early 1970s, when crashes were much more common.
The first crash of the 777 occurred 15 years after it was launched with 1500 aircraft delivered, and even in that crash foul play is suspected.
The first crash (ignoring hijacking) of the 767 occurred almost a decade after it was launched, and the second was almost two decades after launch, and it's still not clear if it was suicide.
You're right. I was thinking of MH370, and only considered full-scale crashes where everyone, or almost everyone, on board dies. The major incidents were:
Two crashes this close together, with so few examples of the type flying and both aircraft so new, are a statistical anomaly. The Concorde was grounded (and its career ended) after one crash. The 737 Max has the second-worst fatality rate of any passenger aircraft, after the Concorde.
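As a rough illustration of just how anomalous, here's a hedged Poisson estimate. Both inputs are guesses for the sake of the sketch: a typical modern-type fatal-accident rate of ~0.2 per million flights, and ~500k MAX flights before the grounding.

    import math

    # Hypothetical inputs, for illustration only.
    rate_per_flight = 0.2 / 1_000_000   # assumed fleet-typical fatal-accident rate
    flights = 500_000                   # assumed MAX flights before grounding

    lam = rate_per_flight * flights     # expected fatal crashes: 0.1
    print(1 - math.exp(-lam) * (1 + lam))   # ~0.005 -> chance of >= 2 crashes

If the type were as safe as its peers, two fatal crashes that early would happen less than half a percent of the time, and that's before you condition on the crashes looking alike.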
I agree. I have qualms about the process, but I'm always nitpicky about various things. Boeing was far too involved in this decision IMO, for example, but that's mostly an issue of optics.
Overall, I'm pretty happy with what I saw out of the FAA. It just seems like the FAA may have gotten the data last. Canada got it before we did. The EU got the black box data first, so once they looked at it, they banned the plane. China's response may have been preemptive, since there's no way they could have gone through the data that fast.
Perhaps we should be pissed off at getting the data last, but that's hardly the FAA's fault (maybe a State Department thing). In the future, Boeing needs to at least appear more neutral and less like a lobbyist when these events occur. There's significant distrust in our system these days; our regulators need to understand what it looks like when they're talking with the company that makes the plane before getting the black box...