"how existing gender and racial biases are being encoded in our algorithms" sounds like a great first problem to work on for those worried about super-intelligence, as it's basically a concrete instance of the value-alignment problem.
IMO, as long as you accept that human brains run on physical processes (no souls) and that computers will continue to improve, super-intelligent AI is inevitable; but it's reasonable to think it's more like 150 years away than 10. Given the magnitude of the consequences, it's still worth spending some resources on it now.