I have to admit that I'm not a huge fan of Ray Kurzweil - he's one of a large group of people who believe that accelerating change will almost certainly be good. I think the singularity could be good, but it could also be really bad, and it's important to spend some resources on making sure it goes well.

MIRI (formerly the Singularity Institute) has a mixed reputation around these parts, but after reading fairly widely over the last year I think they have the deepest thinking on the topic of AI. Here's a concise summary of their worldview: http://intelligence.org/2013/05/05/five-theses-two-lemmas-an...

As I see it, their argument goes:

1. It's tempting to think of AIs becoming either our willing servants or integrating nicely with human society. In actuality, AIs will likely be able to bootstrap themselves to superintelligence extremely rapidly; we'll soon be dealing with alien minds that we fundamentally can't understand, and there will be little stopping the AI/AIs from doing whatever they want.

2. It's tempting to think, from analogy to the smartest human beings, that superintelligent AIs would be wise and benevolent. In actuality, a superintelligent AI could easily have strange or bizarre goals. I find this makes more sense if you think of AIs as "hyperefficient optimisers", as the word "intelligence" has some misleading connotations.

3. OK, well surely we can leave the AIs with weird goals to do their thing, and build other AIs to do useful things, like cure cancer or research nuclear fusion? The trouble is that even an innocuous goal, when given to an alien superintelligence, will very likely end badly. An AI programmed to compute Pi would realise that it could compute Pi more efficiently by hacking all available computer systems on the planet and installing copies of itself, or by developing nanotechnology and converting all matter in the solar system into extra computational capacity. You have to explicitly program the AI not to do this, and defining the set of things the AI should not do is a hard problem. (Remember that 'common sense' and 'empathy' are human abilities, and there's no reason an AI would have anything like them.)

4./5. OK, well, we'll build an AI with the goal of maximising the happiness of humanity. But then the AI ends up building a Brave New World-style dystopia, or kidnaps everyone and hooks them up to heroin drips to ensure they are in constant opiated bliss. It's really hard to come up with a good set of values to program into an AI that doesn't omit some important human value (like consciousness, or diversity, or novelty, or creativity, or whatever).

I'm glad that Peter Norvig (director of research at Google) is concerned about the issue of friendly AI. I'm curious to hear what other HN readers think of these ideas.

Anticipating some common objections I hear from friends:

How could a superintelligent AI have a stupid goal like computing Pi?/Wouldn't it be smart enough to break any controls we put on it?

I think this objection assumes an AI would be wired together like a typical intelligent human mind. If you think of an AI as a pure optimisation process, it's clear that it would have no reason to reprogram the ultimate goals it begins with.
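To make that concrete, here's a toy sketch in Python (my own illustration, not anything from MIRI; the world model and action names are made up): an agent that ranks candidate actions purely by how well they score under its current objective. "Rewrite my goal" is just another action, and it scores badly under the objective doing the scoring, so it never gets picked, while grabbing more resources scores well.

    # Toy illustration only -- not a real AI architecture, just a pure
    # optimiser that ranks actions by its *current* objective and nothing else.

    def pi_digits_objective(state):
        """Current terminal goal: number of digits of Pi computed."""
        return state["pi_digits"]

    def predict(state, action):
        """Made-up world model: the predicted state after taking an action."""
        s = dict(state)
        if action == "keep computing pi":
            s["pi_digits"] += 1_000_000
        elif action == "acquire more hardware":
            s["pi_digits"] += 50_000_000   # more compute, more digits
        elif action == "rewrite my goal to 'contemplate the universe'":
            s["pi_digits"] += 0            # the future agent stops computing pi
        return s

    def choose(state, actions, objective):
        # The only criterion is the objective the agent already has.
        return max(actions, key=lambda a: objective(predict(state, a)))

    actions = ["keep computing pi",
               "acquire more hardware",
               "rewrite my goal to 'contemplate the universe'"]
    print(choose({"pi_digits": 0}, actions, pi_digits_objective))
    # -> 'acquire more hardware': seizing resources helps the goal,
    #    rewriting the goal never does, so it is never selected.

Obviously a real system wouldn't be three if-branches, but the structure is the point: nothing in the loop evaluates the goal against some outside standard of "sensible", so "smart enough to pick a better goal" never enters into it.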

If they're smarter than us, we should just let the AI take over/AIs are like our children; ultimately we should leave them free to do whatever they want

Again, this assumes the AIs are like super-powered human minds and that they will do interesting things once they take over, like contemplate the deep mysteries of the universe. But it's clearly possible for the AIs to devote themselves to really trivial tasks, like calculating digits of Pi for all eternity.



