
AI?

If it were written by a man, we would call him stupid.




AI is basically divided into three "levels": ANI (Artificial Narrow Intelligence), AGI (Artificial General Intelligence) and ASI (Artificial Super Intelligence).

We already have a bunch of narrow AIs: algorithms that do one specific thing in a way no human could ever match (think: Google search). But if such an AI (really, a large collection of algorithms combined) were thrown into any scenario other than the one it was created for, it would be useless, and a human would outperform it, because humans adapt more easily (we don't have to rewrite a bunch of lines in our brain to drive on the left side of the road, it just takes us some time to adjust).

This is a perfect example. We have an ANI that was built for one purpose, we put it in a scenario it wasn't created for (imagining SciFi scenarios), and it performs worse in that scenario than a human would.

However, we've now explored what it can do in this scenario. We laughed at how terribly it performed, and we can either move on, improve the AI so it can do this one specific task better than humans, or improve it so it does the task kind of okay, but not brilliantly (like a random person would if you stopped them in the middle of the street and asked them to write a SciFi scenario).

Once we have an AI that behaves kind of okay, but not brilliantly, in any situation we can possibly put it in, and at the same time it can learn from its mistakes and improve itself so it doesn't repeat them, we have an AGI (Artificial General Intelligence).

An AGI behaves exactly like a human, but because it will be able to surpass the physical limits that we humans have (brain capacity, dependence on food/water/oxygen, etc.) and because it is able to improve itself by learning from its own mistakes, soon after it hits the AGI mark it will surpass it and become an ASI (Artificial Super Intelligence).

What happens then, nobody knows. It's hard to imagine how something more intelligent than us is going to behave. All we can do is try to come up with a number of plausible scenarios. If there are negative ones (and sure as hell there are), then we need to address them before we create anything close to an AGI, because by the time the AI hits the AGI mark, it's already too late for us to do anything about it.

There you go, AI philosophy 101.


> because it is able to improve itself by learning from its own mistakes, soon after it hits the AGI mark it will surpass it and become an ASI (Artificial Super Intelligence).

I was with you until this point. You have a great description of why ANI is not AGI, but this AGI => ASI step is just hand-waving.

An AGI will have some of the same issues to deal with:

1) Opportunity cost. Yes, it will have more time because it doesn't sleep, although maybe it will find that spending 1/3 of its time/resources cleaning out the cobwebs is optimal. Regardless, it will have to spend resources (including time) on some things rather than others. The leap from general adaptability to perfect selection of tasks is likely just as large as, if not larger than, the leap from ANI to AGI.

2) Some problems are just plain hard. There are algorithms for finding optimal results -- even brute force. The problem is that they are too complex for a realistically fast solution (see the sketch below). Just because an algorithm becomes as adaptable as a human doesn't mean the computational complexity is reduced. Therefore, either the AGI will consume massive resources to get a single optimal answer, or it will be fallible, just like humans.
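
To make 2) concrete, here's a toy sketch (my own illustration, assuming a tiny travelling-salesman instance as a stand-in for a "just plain hard" problem): brute force is guaranteed to find the optimum, but the number of tours it has to check grows factorially, so optimality is paid for in raw compute, no matter how adaptable the agent is.

    # Toy illustration: brute-force TSP is optimal but blows up factorially.
    import itertools
    import math
    import random

    def brute_force_tsp(dist):
        """Cheapest tour: try every permutation of cities 1..n-1 (city 0 fixed as start)."""
        n = len(dist)
        best_cost, best_tour = math.inf, None
        for perm in itertools.permutations(range(1, n)):
            tour = (0,) + perm
            cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
            if cost < best_cost:
                best_cost, best_tour = cost, tour
        return best_cost, best_tour

    random.seed(0)
    for n in (8, 9, 10):  # (n-1)! tours to check: 5040, 40320, 362880
        dist = [[0 if i == j else random.randint(1, 100) for j in range(n)] for i in range(n)]
        cost, _ = brute_force_tsp(dist)
        print(f"n={n}: optimal tour cost {cost}, tours examined: {math.factorial(n - 1)}")

Each added city multiplies the work by roughly the number of cities, which is exactly the point: adaptability and computational cost are separate axes.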

When we get AGI, that just means we will have adaptable general algorithms; they will still have to learn and they will still be subject to limited resources. In other words, AGI does not imply ASI.


We might call him Ed Wood.

"See? See? Your minds, your stupid, stupid minds!"



