Boeing has ~150k employees. Given that pool of people, I'm pretty sure you can come up with one that has any opinion you want. Just decide first what you want your story to portray, then go find your employee.
This might be wrong, please say if I'm wrong. I think it's easier to dislike your tech product when you work inside the company. My company produces mission-critical software/hardware (not something that'd kill people, but maybe harm them) and sometimes I feel very anxious since I know all the dirty, dirty hacks that are put into our systems. At the end of the day, we all try to do our best, and our systems work, but it's still easier to be cautious when you know what's going on behind the scenes. I don't think engineering can ever be 100% flawless, and this makes me anxious. When I'm just a consumer and not an engineer, I don't know what's going on behind the scenes, so this gives me a sense of security, since things seem to work.
I used to code for Medical Devices. One of the rules of thumb we used for our level of comfort with a design (apart from formal Failure Analysis, etc...) was whether or not we'd be OK with having it used on one of our kids.
When you review someone's code (or mechanical design, or electronics) and you have to decide if you're OK with it being used to treat or diagnose your loved ones, it forces you to confront the immediate reality of the impact of your work. Suddenly that hack might not seem like such a good idea after all.
This is a really good point and it almost feels a little Kantian, so it seems to make sense to me. But the trouble is, when I speak of hacks, I don't mean things like defining rand() = 6. I mean things that seem really weird but, after some thought, turn out to be correct. E.g. you had to add a bunch of if cases to an algorithm due to some weird library requirement, and after hours of debugging the reason starts making sense, and after hours of stress-testing, your algorithm produces the correct result. It's always possible you need to add more ifs and you're missing something, OR the fact that you had to add ifs indicates your algorithm is not correct, etc. Even after you feel comfortable a hack is good to go, that hack can still be a bug. Who knows? Engineering is all about the wisdom to decide whether something works or needs more work. It's truly amazing how stable most things are; but when you're familiar with the inner workings of a system, you're MUCH more conscious of possible catastrophic failures. Like when you start reading about diseases on Wikipedia, you suddenly start checking yourself for those symptoms...
I think kludge is the more appropriate term here. A hack isn't necessarily a bad thing, but kludgey code is certainly not something I'd like sicked on my family.
Which is the basis of one of my arguments to anti-vaxxers: the doctors and scientists have decades of education and experience, and they take this vaccine themselves, and give it to their own children. That's the most reputable recommendation.
I don't think this is a very good argument. Anti-vaxxers care about their children very much, and therefore they DON'T vaccinate them, since they think it's bad for them. From their point of view, it's the doctors who are insane.
I think nielv's point is that the doctors who prescribe and the scientists who design these drugs have many years of professional education and generally at least a decade of experience in their fields. They have all of that domain knowledge about the vaccine, AND they care about their kids, AND they decide to vaccinate their children. They care just as much as the anti-vaxxer parents, but they also have the medical understanding to make a reasonable decision, while the anti-vaxxers generally don't have medical or bio-engineering degrees or experience with the vaccines.
I'm pro-vaxx, but I feel the need to point out the flaw in this argument from an anti-vaxx perspective:
The same could be said about 1940s German "racial scientists", who had years of training convincing themselves about the genetic superiority of their people and the dangers coming from the "Jewish race".
I work in medical devices and I am also always surprised how many "hacks" are in the system. We haven't had any field issues so I assume we are doing pretty well.
I guess most shiny things of any reasonable complexity look pretty dirty from the inside. There is just no way around it.
There's a couple of lines of code I wrote in a medical instrument that as far as I'm concerned are a dirty hack. Now, it's a documented dirty hack -- my boss called me into his office to laugh at the comment I wrote about it being the only way to accomplish what I needed due to the stupidity of the library I had to use -- but it's a dirty hack nonetheless.
The device has been performing admirably (as far as I know) for over 12 years in the field with no issues due to those few lines of code.
Hacks are everywhere in engineering. As long as the analysis behind it is rigorous and the hack is provably correct, I'm OK with it.
I realize I'm over-defining a silly term, but I'd consider that a "hack", not a dirty hack.
You understood exactly what was going on and were wholly confident it would be reliable. And more to the point, you wouldn't have counted it as a mark against the product's reliability.
That really depends on the hack. Maybe it's fragile because it is built on top of a bug, and if the library changes, the code will have to be updated. Or perhaps it's there because of bad behavior of a chip, and a hardware update could change the nature of the issue (say, it relies on a particular instruction that's generated by the particular compiler in use to happen just before the code in question).
A hack could be completely reliable (for the same hardware, library versions, and compiler), but still be a "dirty hack". And whether a hack is "dirty" is very subjective and, IMO, based more on how obvious the fix is (e.g. radians -> degrees -> function -> back to radians because the library documentation is wrong is less bad than a non-obvious hack like multiplying by 1 or something to avoid a compiler codegen bug because of a hardware bug).
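To make the radians/degrees example concrete, here's a minimal Python sketch; the library function is hypothetical, purely to illustrate the shape of that kind of hack:

```python
import math

def lib_compute_heading(angle):
    """Hypothetical third-party function: its docs claim the argument is in
    radians, but in practice it behaves as if the input were degrees."""
    return (angle + 90.0) % 360.0  # stand-in for whatever the library really does

def compute_heading(angle_rad):
    """Wrapper hiding the documented-vs-actual unit mismatch.
    HACK: convert radians -> degrees before the call, then convert the result
    back to radians, because the library documentation is wrong."""
    result_deg = lib_compute_heading(math.degrees(angle_rad))
    return math.radians(result_deg)
```

The hack is obvious and self-documenting, which is part of why it feels less "dirty" than, say, multiplying by 1 to dodge a codegen bug.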
I think this horse has been thoroughly beaten, so I'll leave it here.
While a well-documented hack is OK, a system, by design, shouldn't need hacks or be built around them. The MAX's nose-dive issue came from a hack added to counter its pitch-up tendency, and that hack proved fatal.
That's often times not an option. [Hypothetical] Just because I don't like the sensitivity of the airbag deployment sensor in the new Acura I'm designing doesn't mean I have the authority to switch it out. I can complain about it to management, but at the end of the day all I can do is modulate the signal going into it to keep it within industry-level safety specs.
This is a common argument that sounds reasonable but is actually based on faulty logic.
Airbus pilots can't safely fly their jets at cruising speed without software (envelope protection, etc) assisting them. Air France 447 crashed, killing 228 people, when that software became disabled due to bad values from frozen sensors. The pilot flying thought the software would save him from stalling but that protection was disabled, so he crashed into the ocean.
So does that mean it's a bad idea to make flight software safety critical? Probably not.
It probably just means that safety-critical software must be extremely good. MCAS v1 was not good software, but if MCAS v2 is good software then it might function safely for decades. And, if MCAS v1 had been good, it would have been lauded as a great solution.
Engineering is about trade offs and maybe Boeing made the wrong trade off here. Ideally, you would have as little safety critical software as possible. But the fundamental concept of software correcting for flight characteristics seems entirely sound.
But this software hack was added only to save a small amount of fuel (and the money it would take to retrain the pilots or redesign the airframe). Maybe it is better to have a slightly more expensive, but safer, plane?
And what if a totally redesigned successor to the 737 started crashing due to a technical issue, or training issue?
Second System Syndrome is a well known engineering pitfall. If the only thing wrong with the 737 MAX is that the MCAS v1 software is bad, and the MCAS v2 software is good, then the 737 MAX should fly perfectly well for decades.
If you look as far back as the '60s, there was a lot of resistance from pilots with regard to unsafe airframes where various forms of trickery were employed to cover up problematic behavior. Check the D.P. Davies interview by the Royal Aeronautical Society with regard to the 727. Even plastering over minor behavior near stall was a controversial move, due to fears among pilots at the time of starting down a slippery slope where more and more non-regulation-compliant airframes would be allowed to be certified because there would be some system X that would compensate for the aircraft's bad behavior. The fear was that eventually there would be so much automation between the pilot and the end behavior of the plane that you might as well have a dog in the cockpit to bite the pilot should he try to mess with anything; and that should anything go wrong, the pilot would be helpless, the plane only really being airworthy to his level of skill with the automation in place.
Boeing historically built planes flown by pilots. Every aspect of the plane's behavior had a way for the pilot to control it. A Boeing plane would follow a pilot right out of the flight envelope, at least maintaining a 1:1 binding of pilot intent to plane output behavior.
Airbus makes planes flown by automation, directed by pilots; should that automation fail, it takes a significant investment in skill maintenance to know what not to do at that moment. This makes a common UI for the pilot easier to implement, but it comes with discipline degradation: you have a computer that will largely ignore your daft commands until it suddenly has no way to measure your daftness, leaving you with a hodgepodge of automation to reason through as to whether or not the plane will do something daft if you order it to.
History may show who won that argument (Airbus does exist), but the fact remains today that an aircraft one wants to certify as flying just like another aircraft (the MAX flying like a 737) should actually do so. The MAX doesn't; the technological fix intended to make it so was not developed to a sufficient standard of rigor, or communicated to pilots clearly enough, to transparently account for that aerodynamic divergence.
The problem is multi-faceted, yes; but the statement that the MAX suffered from a fundamental design flaw is absolutely accurate.
The flaw exists aerodynamically, and in the fundamental engineering process through which it was certified.
But my understanding is that other jets have similar pitch-up characteristics and no one considers them to be fundamentally flawed. That the 737 MAX design creates a new problem vs. the 737-NG does not necessarily mean it's fundamentally flawed.
I'd only consider it a fundamental flaw if MCAS can't actually be designed to compensate for the problem. An unsolvable problem due to bad design = fundamental flaw.
Otherwise, having to stick a windshield on the front of an airplane could be considered a fundamental flaw. But it's not considered one because very strong laminated materials compensate for the problem. And yet it's still a huge potential risk. Windshields have cracked, broken, clouded up, etc.
The behavior held common with the other 737 variants is a tendency to pitch up with the application of power. All aircraft with underslung engines share this characteristic.
The MAX, however, has additional problems: its flyability is adversely impacted when flying at high angles of attack, when the engine nacelles (positioned higher and in front of the wing) create extra lift in front of the center of mass of the aircraft. This creates divergences from the normal "control feel" of the older aircraft.
When at low power and high AoA (such as might be experienced on descent), sudden applications of power can cause pitch-up, which can put the plane at an AoA that experiences this handling divergence. MCAS, in its second form, was intended to induce a controlled mistrim to compensate for both its original purpose (high-speed, high-AoA divergence from 737 NG handling) and a low-speed, high-AoA handling divergence.
It was the modification to handle the low-speed case that removed an extra G-load-based cross-check and greatly increased the authority the MCAS system commanded, due to the larger deflections required to induce attitude changes in the low-speed portions of the flight envelope.
The fundamental flaw I mention is several-fold.
Requirements-wise:
A) The MAX had to be developed yesterday to allow Boeing to maintain dominance/market share in narrow-body civil transport against the A320neo. (They needed it fast.)
B) It had to be cheap. They needed minimum overhead cost to appease airlines, and comparable fuel efficiency to the A320neo. This meant no blank slate redesign. This locked them into the 737 type cert.
This meant (to an engineer):
C) It had to fly exactly like a 737... without being one. (They needed it to be right.)
In engineering, there is a common saying:
You can get it done cheap.
You can get it done fast.
You can get [everything you want] done right.
Pick two.
For success, they needed all three, and delivering the third was not allowed the flexibility to intrude on either of the other two points of the triad.
The only way this project could have ended was with a breakdown somewhere in the process. That breakdown was in communication and in propagation of the significance of the aerodynamics change. That failure in communication was bundled into the "fast and cheap" directive. The "right" directive required the aerodynamics divergence, and the software control system reconfiguration to compensate for it. The "cheap and fast" also required that the software solution not be driven by a dual-sensor system, as that wouldn't have been allowed through without simulator training by the FAA, as testified to by a whistleblower and reported in the Australian 60 Minutes exposé.
The whole thing is just a Greek tragedy in an engineering project's clothing. Hubris, greed, equivocation, catastrophe... It's all there.
Thanks for the reply. I agree that Boeing clearly made a huge mistake by rushing the process, and in doing so, did a bad job.
I'm still not yet aware of any reason that they won't be able to eventually make the MAX design safe and functional.
It's a very interesting example of engineering/business dynamics. I'll be happy whether the MAX flies successfully or gets totally scrapped, as long as no one else dies in a MAX from negligence.
It wasn't a structural flaw. The plane flies fine. It's a regulatory compliance flaw. It doesn't fly enough like the old one without the software.
And before anyone says "hurr durr it pitches up" (or something like that): the amount it pitches up when wide open at low speed is perfectly fine from a performance standpoint, but it is not similar enough to the old plane to get away without retraining pilots, hence the software.
> To handle a longer fuselage and more passengers, Boeing added larger, more powerful engines, but that required it to reposition them to maintain ground clearance. As a result, the 737 can pitch up under certain circumstances. Software, known as the Maneuvering Characteristics Augmentation System, was added to counteract that tendency.
I'd call that a change in aerodynamic or flight characteristics as compared to earlier 737s rather than a structural flaw. That might seem like a picky, semantic difference, but I think that, especially with aircraft, precise language is appropriate. The aircraft is not structurally flawed, or at least no structural flaws have been revealed yet. It is perhaps a design flaw but, were there no earlier 737s to which it could be compared, it'd probably just be a flight characteristic.
It's a design flaw, because they're hacking an old design to have larger engines it was never designed for. If they did a clean-sheet new design, they'd never design it this way.
The airframe should be retired and they should make a new one. To push this, the FAA and other certifying bodies should treat the plane as an all-new plane with no commonality with previous 737 planes, requiring all new training for pilots and a complete, new certification process for the plane, just as if it had been a clean-sheet new design.
Also, when you read the article, it's a pretty lukewarm statement the person made: "I would get on it, but I don't know if I would put my family on it". That is a lot different than "THIS IS UNFIT TO FLY, DO NOT GET ON".
> "I would get on it, but I don't know if I would put my family on it"
If you're going to call out a misrepresentation then maybe don't do it yourself? He didn't say he "doesn't know" if he would do that. He said "No. Not in a million years."
He says both of those things, with the "No, not in a million years" qualified by "Would I send my family on a flight right now?", while they are doing testing flights. Is putting family on testing flights for a plane you don't work on routine behavior?
The reporting from myNorthwest is pretty shoddy, it's mostly a right-wing digital tabloid run by a talk radio station. Equivalent to the NY Post.
The other thing that annoys me about the Max coverage are those polls where "X people say they would never fly on one," as if even 1% of people actually factor in the plane model when buying their ticket.
That's underestimating the impact the Max 8 problems have had on peoples' willingness to board these planes. The two flight attendant unions that have to travel on Max 8s demanded the planes be grounded and one threatened to refuse to board the planes if they weren't grounded.[1]
It's hard to measure the impact of sentiment alone since they grounded the planes pretty quickly and numbers are reported quarterly, but it's very likely we would have seen travelers choosing airlines that don't fly the Max 8.
Maybe not 99% of all customers, but probably a higher percentage of frequent flyers and people who buy more expensive tickets, as these travelers tend to have preferences for plane types / airlines / seats.
I have those preferences, but they tend to be "Goddammit, one of the Virgin A320s. Hate the exit row on those..." not "Well, I'm going to take another flight."
I find the mechanism for the door intrudes further into the cabin than a lot of others, which means the window-exit seat feels more crowded, rather than less.
It was enough of a big deal that Southwest Airlines said they would accommodate any passenger who didn't want to fly on the Max. My mom, not a frequent flyer, kept emailing me about the problems with it before the planes were taken offline.
I am sure way more than 1% of the population cares if they fly on this plane.
Yea, I have never looked at equipment when buying a ticket and still wouldn't. Price, departure, arrival, and duration - roughly in that order - are the only things that matter to me.
I always prefer the turboprops. There are fewer people in the cabin, and the flight is usually more scenic as the captain will often choose a lower cruising altitude than the jet jockey would. Additionally, there are more quality window seats, because the turboprops are normally high-wing, which means there is no wing between any of the windows and the ground, allowing a larger range of vista viewing.
Not the op, but for me the turboprops I've flown on [0] have a deeper, louder droning noise that really gives me a headache after a while. I think it's personal though, I've spoken about hating them with travelling companions and no one else shares my dislike. I've never actively tried to avoid them though, mainly because I rarely have a choice of time or route when I fly on the airlines I know use them.
Genuine academic curiosity. I wish that I hadn't asked the question because I now realize that I was dumb. I totally misinterpreted what the commenter meant. When they said "the one that was the jet over the turboprop", I somehow thought they meant "the plane with the turboprop engine having the intake on the top, as opposed to bottom, of the propeller". No idea why I thought that... Now, though, I wonder if those exist...
Same for me and most travelers, but previously there hasn’t been much of a reason to care.
If nothing else, now I’m glaringly aware that Southwest and AA are the primary U.S. carriers that fly Maxes. Chances are it would at least be a passing thought if either had a good option on a metasearch site.
I fly reasonably frequently and factor the plane if its going to be longer than 2 hours or I am flying with kids. I've taken somewhat more expensive tickets when flying long distance with kids because it makes a huge difference when they are comfortable. On long flights, not just the equipment but the airlines makes a difference too. Under a 1 hr flight, I wouldn't care if I had to stand all the way.
I highly doubt they interviewed all 150k employees, in a setting where they felt comfortable to speak freely.
The fact that they got this quote from an employee in their sample size of "people we talked to" is notable, IMO. Not a sign that the sky is falling, but notable.
Edit: The alternate scenario, in which one of the 150k employees actually reached out to a journalist of their own volition, is notable too! If they felt strongly enough to actually contact the press, then something is wrong.
Boeing has about 80K employees in the Seattle area (where this publication is located), and it's pretty easy to meet someone who works at Boeing, the same way it's pretty easy to meet people there who work at Amazon or Microsoft.
Depending on the journalists' circle of friends, or friends' friends, or source pool, I'd imagine it'd be pretty hard to have all the Boeing employees you know toe the company line, in a manner similar to the Birthday Paradox.
I see people complaining about the fact that it is just employee opinion.
I agree that just one employee's opinion is not important or especially relevant by itself.
But, for most people, a human face on a problem makes the problem easier to understand and to sympathize with. So it seems the journalist decided to give a voice to Stuart as a stylistic choice. HN readers may or may not like this choice. But it does not invalidate the content of the article.
An example does not invalidate the rule. It would be interesting to discuss the content more than the style, even if that's also a valid discussion.
My takeaway from the article is that lack of trust in upper management is part of the problem.
> “I want to think that I work for one of the best companies in the world. I want to think that when I come home from a 10-12 hour shift that I’ve done something good. But I don’t know because I see the lies. They’re going back on everything that they’ve told us. So it’s really difficult for me to feel good about any of it.”
Most people see themselves as good people. Most people want to do a good job. When your company leadership fails you, there is a conflict between that belief and reality. That is why company leadership is so important, and why it should respond not only to shareholders but to all stakeholders in the company, including customers and employees.
It's a problem when society keeps telling people that the only job of companies is to increase shareholder value. Recently there have been more and more people pointing out what a stupid idea it really is [0], but there are still a lot of companies focusing way too much on shareholder value to the exclusion of all other values, and there are still people repeating this harmful mantra.
Companies that only care about shareholder value run the risk of becoming parasites preying on society, looking for any way to extract value from society.
A healthy company should balance the needs of their shareholders, their employees, their customers, and society at large. Only if you balance those 4 concerns can you have a healthy, sustainable business that's a boon to society rather than a bane.
The real problem is the focus on short-term, quarterly financial reporting and how that relates to share price. There is nothing wrong with focusing on maximizing value for the owners of a company, but if you focus on short-term measures that look good to securities analysts, you'll actually undermine longer-term value. The companies that build the most value ALSO have satisfied employees, happy customers, and are respected by those outside in society. Neglecting the other factors might work for a while, but at a very high cost.
There are different values and styles in reporting and each serves a different purpose. When they are conflated they cause confusion.
- Interviewing one person with the necessary data and analysis to back up a certain conclusion. This would be a consultation with a Subject Matter Expert. Opinions backed by data. E.g. an interview with a climate scientist on the impact of global warming
- Interviewing one or more people to put a face to a story. It has the effect of humanising an issue. To be viewed critically and with the realisation that it’s an emotional appeal. To be used to inform yourself only if it’s backed by data in another way/form
These two are not the same when it comes to making informed opinions.
For an opposite view: I have no relationship with Boeing, but I fly about once a month and have no issues flying on 737 Max, alone or with a family.
Current aircraft and flying is very safe. Even with the recent 737 fiascos, flying those is by far safer than driving to work. If I am OK with taking a road trip on vacation I am OK with flying Max.
This is not to say that improvements, re-certification and additional pilot training are bad. But from a purely practical point of view (which I try to practice for routine decisions), if we want to improve safety, there are better areas to spend $$ and/or hours on than endless re-polishing of a system that is super safe already. Life is dangerous; estimate your micro-deaths and extract the most utility and fun from each of your 1e6 microlives. My 2c.
Roughly: There have been about 400 737 MAXes produced over two years, average time in service 1 year, so figure 400 plane-years. Figure about 8 flight hours per day, so 365 × 8 × 400 ≈ 1.2 million flight hours, which works out to roughly 600,000 flight hours per fatal accident, or about 1.7 fatal accidents per million hours.
The U.S. motor vehicle accident rate is about 1.2 fatalities per 100,000,000 vehicle miles travelled. Figure a car's average speed is 30mph so 1.2 fatalities per 3.3 million hours = 0.36 fatalities per million hours. Therefore on an hourly basis, the 737MAX is very roughly 5x more dangerous than driving. OTOH the 737MAX averages about 400mph, so per mile it's probably something like 2-3x safer.
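For anyone who wants to check the arithmetic, here is the same back-of-the-envelope comparison in Python; the inputs are the rough figures quoted above, not official statistics:

```python
# Back-of-the-envelope comparison using the ballpark figures quoted above.

# 737 MAX side: ~400 aircraft, ~1 year average in service, ~8 flight hours/day
flight_hours = 365 * 8 * 400
fatal_crashes = 2
max_per_million_hours = fatal_crashes / flight_hours * 1e6

# Driving side: ~1.2 fatalities per 100 million vehicle-miles, ~30 mph average speed
vehicle_hours_per_100m_miles = 100e6 / 30
car_per_million_hours = 1.2 / vehicle_hours_per_100m_miles * 1e6

print(f"737 MAX: {max_per_million_hours:.2f} fatal accidents per million flight hours")
print(f"Driving: {car_per_million_hours:.2f} fatalities per million vehicle hours")
print(f"Hourly ratio (MAX / driving): {max_per_million_hours / car_per_million_hours:.1f}x")

# Per-mile comparison, assuming ~400 mph average for the MAX
max_per_billion_miles = max_per_million_hours / 400 * 1e3
car_per_billion_miles = car_per_million_hours / 30 * 1e3
print(f"Per billion miles: MAX {max_per_billion_miles:.1f} vs car {car_per_billion_miles:.1f}")
```

This prints roughly 1.7 vs 0.36 per million hours (about 5x worse per hour) and roughly 4.3 vs 12 per billion miles (about 3x better per mile), consistent with the estimate above.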
There might be other differences. On the road, a lot depends on your style of driving and whether you are careful, but in the case of a plane, you cannot do anything to make the flight safer.
Yes, but on the other hand you can't do very much to make it more dangerous either. If you show up to your flight drunk, you're no more likely to crash.
The most dangerous parts of flight are during takeoff and landing, something Boeing didn’t seem to accurately account for in the design of the 737 Max’s safety systems.
If you’re over 10K feet in the air there is some room for recovery procedures, but if you haven’t reached cruising altitude you have much less time before a plane finds itself impacting the ground.
Per-mile travelled isn’t an adequate standard to compare aircraft with other modes of transit when talking about safety.
I don't think it should, because if I didn't fly to the other side of the world for a holiday then I wouldn't replace that journey with a drive of the same number of miles.
Besides, when people describe flying as "safer than driving", the implication is that you're more likely to end up dying one way than the other. So any measure of "deaths per x" (x is miles, hours, or whatever) needs to be followed with "typical x per lifetime".
> endless re-polishing of a system that is super safe already
Scrolling with my mouse is super safe – a system that can crash you into the ground at 500+ mph and that relies on a single (possibly faulty) sensor is not super safe.
It's not just probability and expectations. People also think, if I'm gonna die, it better be due to my own faults, not Boeing's greed. Just because I might be willing to die at my own hands that doesn't mean I should be okay with you killing me.
> the 737 Max plane remains grounded as the company scrambles to develop a fix for software problems that are blamed for two deadly crashes within a year.
Great, so they've managed to convince people this was a software problem...
It's not a software issue, because the software was working exactly as specified. There is no bug.
And it's not really an aerodynamic issue either.
Boeing have a massive procedural issue where they didn't do proper safety analysis of how the system works as a whole. The fact that they didn't identify this failure case as a major issue makes you wonder what other failure cases (on other planes too) that they missed.
The original version of MCAS only moved the stabilizer by 0.6 degrees, and only in situations with high angle of attack AND abnormally high g-loading. That's the version they ran the safety analysis on; it relies on two types of sensors that shouldn't fail simultaneously. Later they modified MCAS to move the stabilizer by much larger amounts (2.6 degrees, 4x larger) and they removed the high-g-loading restriction so it would operate in more situations. This also meant it was now relying on only a single sensor.
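To make that difference concrete, here is a purely illustrative Python sketch; the thresholds and structure are hypothetical assumptions, not Boeing's actual logic:

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    aoa_deg: float   # angle of attack from a single AoA vane
    g_load: float    # normal acceleration from the inertial sensors

# Hypothetical thresholds, purely for illustration
AOA_LIMIT_DEG = 14.0
G_LOAD_LIMIT = 1.3

def mcas_v1_should_activate(frame: SensorFrame) -> bool:
    # Original design: requires BOTH high AoA and abnormally high g-loading,
    # so two independent sensor types have to agree before the system acts.
    return frame.aoa_deg > AOA_LIMIT_DEG and frame.g_load > G_LOAD_LIMIT

def mcas_revised_should_activate(frame: SensorFrame) -> bool:
    # Revised design as described above: the g-load cross-check was dropped,
    # so a single AoA sensor can trigger much larger trim commands on its own.
    return frame.aoa_deg > AOA_LIMIT_DEG
```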
Exactly. The whole thing is a gigantic failure of systems engineering, and if they made this big an error there, what similar errors are lurking in their other planes? Then, to make matters worse, their response to the problem showed that they simply cannot be trusted.
Great link. I strongly agree that calling it a "software problem" undercuts the major engineering defects the aircraft has.
The 737 Max doesn't even have enough physical sensors to supply the software with enough data to conduct true voting logic (something typically found on "safety critical" systems). The spec also didn't even have the software use both sensors it did have to detect defective inputs.
The aircraft design was faulty before even one line of code was written. This wasn't a software bug. It wasn't a software design defect. The actual specifications and the physical design of the aircraft itself were flawed, and software was poorly used to plug the gaps.
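For context, here is a minimal sketch of the kind of 2-out-of-3 voting logic typically found on safety-critical systems; the tolerance and structure are assumptions for illustration, not anything from the 737 MAX:

```python
def vote_2_of_3(a: float, b: float, c: float, tolerance: float = 5.0):
    """Return (value, status): a trusted value if at least two of the three
    readings agree within `tolerance`, otherwise a failure flag.

    With only two sensors you can detect a disagreement but can't tell which
    one is lying, which is why three or more are typically needed for voting.
    """
    pairs = [(a, b), (a, c), (b, c)]
    agreeing = [p for p in pairs if abs(p[0] - p[1]) <= tolerance]
    if not agreeing:
        return None, "FAIL: no two sensors agree"
    x, y = agreeing[0]
    return (x + y) / 2.0, "OK"

# Example: one AoA vane stuck at a wildly wrong value gets outvoted
print(vote_2_of_3(5.2, 5.6, 74.5))   # -> (~5.4, 'OK')
print(vote_2_of_3(5.2, 40.0, 74.5))  # -> (None, 'FAIL: no two sensors agree')
```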
There isn't a pilot alive that isn't now fully aware of MCAS. Is there still a risk of MCAS causing another crash?
I think the issue Boeing has now is credibility, and the idea that there may be other MCAS-type issues lurking.
> we were told certain things such as ‘the companies that bought these planes, a lot of them, their countries didn’t require them to go through the test flight process that needed to happen.’ So we were told, ‘Hey, that’s not on us, that’s on them. We have this program, they’re suppose to take it, they don’t have to take it, that teaches them how to use this thing.’”
To me (as a layperson), it seems pretty obvious that Boeing specifically didn't mention MCAS in the flight crew operations manual because it would have meant extra training/certifications to fly the MAX, and there is a pretty clear profit motive to avoid that. By attempting to weasel out of this by saying things like "a pilot should never see the operation of MCAS in normal flying conditions" and "not a separate system to be trained on" [1], as well as the above paraphrasing, it doesn't exactly inspire confidence that Boeing is really putting passenger safety above all else, which is what I as a non-stockholding potential passenger would prefer.
> Is there still a risk of MCAS causing another crash?
Yes, because it's difficult to recover when MCAS goes wrong, even when you know about it. The pilots of Ethiopian 302 turned MCAS off and still crashed[0].
They even tried reproducing it in simulators, and had to add 1,000 feet (relative to the Ethiopian aircraft) in order for the aircrew, using the recovery procedure (nose-down pitch), to be able to move the horizontal stabilizer with just the manual handle.
At low altitude, even with full knowledge, MCAS can make the aircraft unrecoverable. The aircraft should not fly without it being corrected. The Ethiopian crew may have had knowledge of MCAS.
The corporate culture seems to be a disaster across the board. They’re building planes transporting millions and you’ve got quotes saying workers no longer care and don’t know who their manager even is. That’s terrifying
Wouldn’t you have said that after the first MAX crash? I mean if you’re a pilot and a 737 crashes, wouldn’t you be ALL OVER every detail of that crash?
The 737 Max issues are complicated, real and should be the object of study for generations of engineers and business students. That said, this article is useless aside from anecdotal (and not surprising) morale issues in Boeing.
Speaking as a Seattle resident, heavy skepticism should follow any mynorthwest.com article. It basically exists as a foil to Seattle's predominantly progressive culture.
Unfortunately, I believe that more value can be extracted from grinding the lessons into future management types, government representatives, and regulators.
All the engineers in the world can't stop a bunch of executives capable of creating an environment so full of confusion, second-guessing, and clouded communication that these types of tragedies are allowed to occur.
The mechanics of the problem, laid bare, were simple. The problem ended up being the levels of obfuscation and the lack of coordination/miscommunication surrounding the certification process that allowed the aircraft to be certified and flown with a clearly uncategorized avionics system, and insufficient communication to pilots; better communication, in the end, could have negated the technical need for a more robust system, if they'd only known to look out for it.
Honestly, I think it's crappy engineering, too. You can't throw this on the feet of generic execs or regulators. The behavior with AoA sensor disagreement (and the treatment of malfunctioning sensors) seems wrong. MCAS should have had failsafes around the number of trim adjustments it could do.
You can talk all you want about how regulators should have required recertification, how Boeing shouldn't have just strapped a couple of higher-bypass-ratio engines onto the bottom of a 737 in the first place, or how information about MCAS should have been presented... but in the end, it's possible there would not have been accidents (or maybe there would have been more of them because of unexpected stalls, we'll never know) if there had been sanity checking of response values from either of the AoA sensors and if MCAS had capped how many times it could trim, roughly the kind of checks sketched below.
edit: it sucks to work on critical systems. In case it comes off differently, I feel badly for those involved. Nobody gives them credit the thousands (or more?) of times a year an automatic flight control system engages and saves hundreds of lives without any passengers knowing. They ended up making MCAS because of decisions made outside the realm of their control.
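A minimal sketch of the kind of failsafes being described, assuming hypothetical limits (this is not the actual MCAS fix):

```python
# Hypothetical limits, purely for illustration
AOA_PLAUSIBLE_RANGE = (-20.0, 30.0)   # degrees; readings outside this are bogus
MAX_DISAGREEMENT_DEG = 5.5            # allowed spread between the two AoA vanes
MAX_AUTO_TRIM_CYCLES = 1              # cap on repeated automatic nose-down trim

def aoa_is_sane(left_aoa: float, right_aoa: float) -> bool:
    """Reject obviously bogus or disagreeing angle-of-attack readings."""
    lo, hi = AOA_PLAUSIBLE_RANGE
    in_range = lo <= left_aoa <= hi and lo <= right_aoa <= hi
    agree = abs(left_aoa - right_aoa) <= MAX_DISAGREEMENT_DEG
    return in_range and agree

def allow_auto_trim(cycles_so_far: int, left_aoa: float, right_aoa: float) -> bool:
    """Permit an automatic nose-down trim command only if the sensor data
    passes the sanity check AND the system hasn't used up its trim budget."""
    return aoa_is_sane(left_aoa, right_aoa) and cycles_so_far < MAX_AUTO_TRIM_CYCLES
```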
>Honestly, I think it's crappy engineering, too. You can't throw this on the feet of generic execs or regulators. The behavior with AoA sensor disagreement (and the treatment of malfunctioning sensors) seems wrong. MCAS should have had failsafes around the number of trim adjustments it could do.
Watch the Australian 60 Minutes Expose on the MAX. A whistleblower has come forward and stated that they had to pipe in only a single sensor's input to avoid costly simulator training, which management pushed as an absolute necessity to avoid.
I love engineering. I love doing things right. I can't deny, though, that when you've got business breathing down your back and everyone seems more interested in getting what they want instead of what actually solves the problem in a sound way, it's a wonder things never degenerate to the point where the engineering teams throw up their hands and say "Eff it! Take your plane and choke on it!"
This is a textbook case of toxic engineering culture yielding toxic engineered product. Richard Feynman said it best after the Challenger disaster, and his words ring as true now as they did back then.
"For a successful technology, reality must take precedence over public [or customer] relations, for Nature cannot be fooled."
The thing is, I feel like if both of those planes had crashed on US soil, each full of Americans, then it could have very well been the end of Boeing as a corporation.
Since they happened halfway around the world, it seems like we judge it differently?
"“The way management kind of works is we never really know who our managers are sometimes,” he said. “We get shuffled around so much that our job codes, our job titles, everything changes. Because they are trying to make progress. With the Max being down, they are bringing other people down, trying to correct the issues and trying to make it a better place.”"
This is a huge management mistake. Teams need a direction, consistency, and time to execute. During that time managers often need to keep things on track and keep up team morale, but also need to know that they have to step back and be more passive while the team is executing. Constantly changing things means the teams are distracted, probably a little frightened, and not operating where they could be.
Who could? They are all grounded. Once relevant authorities consider them safe I’d fly them again (and they won’t fly until then). So my answer is, I would as soon as I can.
As for choosing: Almost every single time I fly there is zero choice of plane if you fix the departure and arrival city, date, and approximate time. In the cases where there are multiple airlines it’s not uncommon to see late plane changes.
Especially between 737-800 and MAX8 because they are commonly operated by the same carriers and the swap is simple from a seating perspective.
Airbus didn't even have the chance to bid on that order, so chances are the order was cooked up between the UK and US governments. Walsh says "I believe the aircraft is safe," though all of them are grounded and Boeing has yet to fix it...
Willie Walsh is a former pilot who has flown in a MAX 8 simulator with MCAS to understand it himself. It's true he is an executive, but he has a unique position here that I thought was relevant to add.
(I think we later learned the simulators are buggy around MCAS, but his intentions were in the right place...)
I was in Hawaii last month and had an interesting chat with a former Boeing engineer. He was involved with ramping up the South Carolina plant which built the Dreamliner. I asked him if he would fly Boeing right now and he also said that he would not.
Most Boeing jets that you can book a flight on today are some of the safest the world has ever seen. For example, the 777 has been in service 25 years with only a few serious accidents we can conclusively say are related to the aircraft design or operation. The 737-NG (precursor to the MAX and also in service ~25 years) experiences one hull loss incident for every 4 million or so departures. Even their other fairly new jet, the 787, has never experienced a hull loss or fatality in almost 8 years of commercial service.
The structure of the plane is solid and it will fly. The issue I have, and obviously the FAA has a similar one, is the fact that buggy software is driving it.
It's one thing when the system offers assistance, but when it takes over to a degree you cannot fix and lives are in danger, it needs to be reworked.
I don't even think it is a bug. It was intended to work this way. The real issue is that Boeing flat-out LIED and claimed it flew exactly the same as any other 737 so they could avoid having to factor in re-training pilots on this plane. It was all caused by greed, and people should be going to jail.
Last I read, the structure of the thing actually is not meant to fly, and they had to add a software workaround to make it fly (hence the MCAS), given the way they placed the turbines and due to their size.
Look, I am an ultralight pilot and fly experimental planes a lot. The FAA requires, and doesn't F around with, that all airframes must be aerodynamically stable. So let's throw that out right now: the airframe is not the issue.
The engines push the CoG forward, which causes more lift at high thrust. This is not an airframe issue but a loading issue. The FAA requires load limits and practical methods to check and resolve loading issues. Let me pause here. We can simulate the 737 MAX design by taking an older 737, filling it with 1/4 of its passengers, and loading them all in the front of the plane. Ta-da, we have a forward-CoG scenario. Could the plane fly? Sure, but the plane will want to nose up more than usual. What does the flight manual say? Reseat passengers to fix a CoG over limits, or limit throttle inputs if not over the limit. So would a properly designed MCAS system solve this problem? Yes. But Boeing didn't properly design MCAS the second time around.
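For what it's worth, here is a toy Python version of that thought experiment; the weights and seat stations are entirely made up (not real 737 data), just to show how seating the same number of passengers up front shifts the CoG forward:

```python
# Hypothetical numbers purely to illustrate the forward-CoG thought experiment.
EMPTY_WEIGHT_LB = 90_000
EMPTY_CG_IN = 650.0          # inches aft of the reference datum (made up)
PASSENGER_WEIGHT_LB = 200
ROWS = list(range(1, 33))    # 32 rows, 6 seats each (made up layout)
ROW_STATION_IN = {row: 300.0 + 30.0 * (row - 1) for row in ROWS}

def cg_with_passengers(filled_rows, seats_per_row=6):
    """Weighted-average CoG of the empty aircraft plus passengers in the given rows."""
    total_weight = EMPTY_WEIGHT_LB
    total_moment = EMPTY_WEIGHT_LB * EMPTY_CG_IN
    for row in filled_rows:
        w = seats_per_row * PASSENGER_WEIGHT_LB
        total_weight += w
        total_moment += w * ROW_STATION_IN[row]
    return total_moment / total_weight

# 1/4 of the seats filled: all in the front 8 rows vs. spread over every 4th row
front_loaded = cg_with_passengers(ROWS[:8])
spread_out = cg_with_passengers(ROWS[::4])
print(f"front-loaded CoG: {front_loaded:.1f} in, spread CoG: {spread_out:.1f} in")
```

With these made-up numbers the front-loaded case comes out roughly 30 inches further forward than the spread-out case, which is the gist of the analogy above.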