> I bet the engineers of MCAS didn't even have a view on the overall system
By "system" do you mean the MCAS, the airplane, or Boeing? I don't think you're correct if you mean the MCAS or how it integrated with the airplane. You don't think they had any sort of integration tests established? Also, one of the failings was that the system relied on input from a single sensor, so that failure should have been extremely obvious; see the sketch after this comment for the kind of cross-check a second sensor enables.
Unless by system you mean Boeing as a whole. I'm sure that as part of the requirements process they were told that the system could be deactivated (which it could be), and they were quite possibly unaware that pilots were not being trained to disable the MCAS. That makes those technical failures still obvious, but nonetheless understandable and maybe even acceptable in that context.
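For illustration, a minimal sketch of the cross-check a redundant sensor pair enables; the names and thresholds are my own assumptions, not the actual MCAS design:

```python
# Hypothetical sketch of a two-sensor cross-check before automatic trim.
# Names and both thresholds are illustrative assumptions, not real figures.

DISAGREE_THRESHOLD_DEG = 5.5  # assumed limit; a real value comes from analysis
STALL_AOA_DEG = 14.0          # assumed stall angle of attack, illustrative

def usable_aoa(left_deg: float, right_deg: float) -> float | None:
    """Return an AoA value to act on, or None if the sensors disagree."""
    if abs(left_deg - right_deg) > DISAGREE_THRESHOLD_DEG:
        return None  # disagreement: inhibit automatic trim, alert the crew
    return (left_deg + right_deg) / 2.0

def command_nose_down(left_deg: float, right_deg: float) -> bool:
    """Trim nose-down only when both sensors agree the AoA is too high."""
    aoa = usable_aoa(left_deg, right_deg)
    return aoa is not None and aoa > STALL_AOA_DEG
```

With a single sensor there is simply nothing to compare against, which is why that design choice stands out.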
I meant to say that few things are that obvious if they aren't specified. There should be a point in the risk analysis that covers sensor errors, their possible consequences, and the mitigations for them; if that wasn't specified, the error starts there. Go up the chain from there.
I doubt that software engineers necessarily know much about the reliability of sensors; you mostly learn that from experience, which is why you should include as much experience in the analysis process as possible. A pilot or an aircraft technician would probably have mentioned the tendency of sensors to freeze or malfunction in some other way.
Only then can the softies develop a fault-tolerant system. After a failure everything is obvious to everyone, but as you said, perhaps they just assumed the pilots would notice the error and override the system. That, too, is an assumption that should be documented as a risk mitigation.
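For what it's worth, here is a minimal sketch of the kind of plausibility check that this sort of field experience suggests; every name and threshold is made up for illustration:

```python
# Hypothetical "frozen sensor" check: a live analog sensor always shows some
# noise, so a reading that stays bit-for-bit identical for too long is
# suspect. The sample threshold is illustrative, not a real avionics figure.

class FreezeDetector:
    def __init__(self, max_identical_samples: int = 50):
        self.max_identical = max_identical_samples
        self.last_value: float | None = None
        self.identical_count = 0

    def healthy(self, value: float) -> bool:
        """Feed one sample; return False once the reading looks frozen."""
        if value == self.last_value:
            self.identical_count += 1
        else:
            self.identical_count = 0
            self.last_value = value
        return self.identical_count < self.max_identical
```

The point is not this particular check, it's that you only know to write it after someone with line experience tells you sensors freeze.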
No risk analysis is perfect, but after these accidents we can, in my opinion, expect some diligence and discipline, and hopefully they lead to a review of these processes instead of a search for fall guys.
A thorough integration test would probably also have detected the issue at some point, so maybe there are faults there as well.
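As a sketch of what such a test might look like: every class below is an illustrative stub of my own, and the test deliberately fails against the naive single-sensor design, which is exactly what an integration test is for:

```python
# Hypothetical integration-test sketch: inject a stuck, implausibly high AoA
# reading and check that automatic trim never runs away. All stubs are toys.

class FakeAirplane:
    """Minimal stand-in for a flight model: one AoA sensor, one trim axis."""
    def __init__(self):
        self.aoa_deg = 5.0            # healthy cruise reading
        self.trim_units = 0.0         # stabilizer position, nose-down positive
        self.recoverable_limit = 2.5  # assumed limit of crew recoverability

    def fail_aoa_sensor(self, stuck_value_deg: float):
        self.aoa_deg = stuck_value_deg  # sensor now reads a constant bad value


class NaiveTrimSystem:
    """Trims nose-down whenever the single AoA reading looks too high."""
    def __init__(self, plane: FakeAirplane, stall_aoa_deg: float = 14.0):
        self.plane = plane
        self.stall_aoa_deg = stall_aoa_deg

    def step(self):
        if self.plane.aoa_deg > self.stall_aoa_deg:
            self.plane.trim_units += 0.05  # keeps trimming, with no limit


def test_stuck_sensor_does_not_run_trim_away():
    plane = FakeAirplane()
    system = NaiveTrimSystem(plane)
    plane.fail_aoa_sensor(stuck_value_deg=74.5)  # injected fault
    for _ in range(600):  # ten seconds at 60 Hz
        system.step()
    assert plane.trim_units < plane.recoverable_limit  # fails for this design


if __name__ == "__main__":
    test_stuck_sensor_does_not_run_trim_away()
```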
One of the main selling points of the plane was the lack of need to retrain pilots. So, Boeing was acutely aware of the lack of training regarding MCAS.
The need for training was there; the selling point was a certification saying there was no need. In part that's also a fault of each purchasing airline that didn't provide extra training.
Regardless of whether Boeing were willing to certify the system as not requiring training, those responsible for pilot competence at the purchasing airlines should still have ensured pilots were trained before using the system.
If a garage sells you a roadworthiness/MOT certificate when a vehicle isn't roadworthy, and you know it isn't, then you're both at fault.
Depends how the engineering teams are partitioned really. MCAS could be built as a closed system with a set of interface specs that the avionics teams would then have to integrate and test. That said, someone definitely should be doing integration tests.
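A sketch of what such an interface spec could look like in code, with hypothetical names standing in for whatever the real contract specified:

```python
# Hypothetical interface spec for a closed trim-augmentation subsystem. The
# avionics teams would integrate and test against this contract; the names
# are illustrative, not the actual MCAS interface.

from typing import Protocol

class TrimAugmentationSystem(Protocol):
    def update(self, aoa_deg: float, airspeed_kts: float,
               flaps_extended: bool, autopilot_engaged: bool) -> float:
        """Return a stabilizer trim rate command (units/sec, nose-down positive)."""
        ...

    def inhibit(self) -> None:
        """Crew or other systems can disable automatic trim at any time."""
        ...
```

The narrower the contract, the easier it is for the subsystem team to miss airplane-level failure modes, which is why the integration tests still have to exist somewhere.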