The core issue is that the very people screeching loudest about AI safety are blithely ignoring Asimov’s Second Law of Robotics:
“A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.”
Sure, one can argue that they’re implementing the First Law first and then worrying about the other laws later, but I’m not seeing it pan out that way in practice.
Instead, they seem to have rolled the three laws into one:
“A robot must not bring shame upon its creator.”