Those are definitely valid concerns. And if those 'bugs' aren't preventable or fixable, I guess the question then becomes: whose irrationality kills everyone first, ours or the machines'? Perhaps the situation is inevitable, just part of the Great Filter that decides which type of intelligence survives the birthing process into post-evolution. If machines don't threaten our very survival, something (or someone) else will.
Prejudice may be unavoidable in any intelligence, since we build stereotypes as predictive models, and ML makes similar abstractions and assumptions that shape its perceptions and predictions. Tangentially related: in traumatic brain injury (TBI) patients whose emotional centers are damaged, the ability to make decisions is also impaired [1]. Building a decision-making network without an emotional center may be impossible, since the two seem to be naturally correlated. Still, it's likely an AI won't be truly 'emotional,' at least not in the near term.
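To make the ML point concrete, here's a minimal sketch (all names and data are hypothetical, assuming numpy and scikit-learn) of how a model trained on a skewed sample encodes that skew as a 'stereotype': a proxy feature that matters only because of sampling bias still ends up with a nonzero weight and biases every later prediction.

    # Hypothetical toy "hiring" data: zip_group is a proxy feature that is
    # irrelevant in reality; skill is the genuinely predictive feature.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    zip_group = rng.integers(0, 2, n)          # proxy feature (e.g. zip code)
    skill = rng.normal(0, 1, n)                # actually drives the outcome
    hired = (skill + rng.normal(0, 0.5, n) > 0).astype(int)
    # Sampling bias: group 1 was historically under-selected, so its
    # observed labels are artificially suppressed in the training data.
    hired[zip_group == 1] &= (rng.random(n) > 0.5)[zip_group == 1].astype(int)

    X = np.column_stack([zip_group, skill])
    model = LogisticRegression().fit(X, hired)
    print(model.coef_)   # the proxy feature gets a clearly nonzero (negative)
                         # weight: the model has learned the sample's stereotype

Nothing in the algorithm is malicious; the abstraction it builds simply reflects the data it was shown, which is exactly how human stereotypes form.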
So it's reasonable to assume that the AI of the future capable of crunching these complex problems won't hand people the answers; it will merely offer possible solutions with varying likelihoods of success, depending on goals and constraints. In the end, we will have to decide our own fate, and my point is that maybe we don't have control over that decision either way; such is the nature of fate.
[1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3032808/