Not really. If an instruction is unclear, the pilot will ask for it to be repeated. And if the instruments say everything is normal but the plane doesn't feel right, the pilot will correct it; a computer will not.
Think of the 737 MAX, whose computers crashed two planes because they were being fed incorrect sensor data. The pilots tried to save those aircraft while the planes kept following their own flawed logic.
> If an instruction is unclear, the pilot will ask for it to be repeated.
Except when they don't, and instead confirmation-bias themselves into doing the wrong thing. The most likely cause of a plane crash is pilot error, and runway incursions happen shockingly often [1][2][3][4]. The only reason there hasn't been another Tenerife-style disaster is sheer luck.
Look, incidents happen all the time in every industry. And for the five examples of human error you cite, there are thousands of examples where humans performed very well.
Computers are not infallible either. So while they add a lot of safety features and ease the pilots' workload, they should not have final authority over pilot actions. The 737 MAX is a prime example of why.
The 737 MAX is what happens when a bad aircraft design comes out of bad regulation, and corporate executives deliberately keep a flight-control system secret from pilots so they won't need retraining. An incident like that should result in prison time, if not outright execution. It is not a good example of the downside of an added safety feature: MCAS wasn't a safety feature at all; it was much more like the emissions-cheating "feature" in VW's "dieselgate" scandal.
Why? Human-to-human communication leaves room for error and misinterpretation.