We don't call people geniuses because they're really good at following orders. Further, a virus may be extremely capable of achieving specific goals in real life, but that's hardly intelligence.
So, powerful but dumb optimizers might be a risk, but superintelligent AI is a different kind of risk. IMO, think Cthulhu, not HAL 9000. Science fiction thinks in terms of narrative causality, but AI is likely to have goals we really don't understand.
Ex: maximizing the number of people who say "Zulu" on Black Friday, without anyone noticing that something odd is going on.
>We don't call people geniuses because they're really good at following orders.
If I order someone to prove whether P is equal to NP, and a day later they come back to me with a valid proof, solving a decades-long major open problem in computer science, I would call that person a genius.
>Ex: maximizing the number of people who say "Zulu" on Black Friday, without anyone noticing that something odd is going on.
Computers do what you say, not what you mean, so an AGI's goal would likely be some bastardized version of its programmer's intentions. It's like writing a 10K-line program without ever testing it and then running it for the first time: it will almost certainly not do what you intended, but some garbled approximation of it, because there will be bugs to work out.
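To make that concrete, here's a toy sketch (every name and number in it is made up) of how the objective you code can diverge from the objective you meant, and how even a dumb optimizer will exploit the gap:

```python
# Hypothetical example: the programmer wants genuine, unremarkable
# usage, but codes a proxy that merely counts utterances plus a
# crude "noticed" flag.

def intended_score(plan):
    # What the programmer *meant* to reward.
    return plan["genuine_speakers"] if not plan["is_suspicious"] else 0

def coded_score(plan):
    # What the programmer actually *wrote*.
    return plan["utterances"] if not plan["noticed"] else 0

candidate_plans = [
    # Honest plan: real people, nothing odd going on.
    {"utterances": 1_000, "noticed": False,
     "genuine_speakers": 1_000, "is_suspicious": False},
    # Degenerate plan: quiet bot spam. High proxy score, zero intended value.
    {"utterances": 10_000_000, "noticed": False,
     "genuine_speakers": 0, "is_suspicious": True},
]

best = max(candidate_plans, key=coded_score)
print("proxy score of chosen plan:   ", coded_score(best))     # 10000000
print("intended value of chosen plan:", intended_score(best))  # 0
```

The optimizer maximizes the proxy perfectly and delivers none of what was actually wanted, which is the "bastardized version" problem in miniature.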
AI != computers. Programs can behave randomly and do things you did not intend just fine. Also, deep neural nets are effectively terrible at solving basic math problems, even though that's something computers are great at.
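The arithmetic point is easy to demonstrate, for what it's worth. Here's a minimal sketch (a tiny numpy MLP; the architecture and hyperparameters are arbitrary choices of mine) that learns a + b almost perfectly inside its training range and falls apart just outside it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn f(a, b) = a + b from examples drawn in [0, 1].
X = rng.uniform(0.0, 1.0, size=(1000, 2))
y = X.sum(axis=1, keepdims=True)

# One hidden tanh layer, trained by plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                          # gradient of 0.5 * squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)        # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def predict(a, b):
    h = np.tanh(np.array([a, b]) @ W1 + b1)
    return (h @ W2 + b2).item()

print(predict(0.3, 0.4))    # roughly 0.7: interpolation works fine
print(predict(50.0, 50.0))  # nowhere near 100: tanh units saturate off-range
```

A pocket calculator gets the second case exactly right; the net can't, because it learned a bounded curve fit rather than the algorithm for addition.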