
There's no reason to expect that such advances will get us closer to a true AGI. I mean, it's not impossible, but there's no coherent theory or technical roadmap. Just a lot of hope and hand-waving.

I do think that this is an impressive accomplishment and will lead to valuable commercial products regardless of the AGI issues.




> true AGI

What is that? Most humans have general intelligence, but do other apes? Do dogs? A quick Google search suggests that the answer is yes.

If that’s the case, then this approach may indeed yield an AGI but maybe it’s not going to be much better than a golden retriever. Truly excellent at some things, intensely loyal (I hope), but goofy at most other things.

Or maybe just as dogs will never understand calculus, maybe our future AGI will understand things that we will not be able to. It seems to me that there’s a good chance that AGI (and intelligence in general) is a spectrum.


Yep, and that’s rather terrifying, is it not? Is there any good reason to assume that future AGI will share our sense of morality, once it is smart enough to surpass human thought?


> Is there any good reason to assume that future AGI will share our sense of morality

I think it would be surprising if it did. Just as our morality is shaped by our understanding of the world and our capabilities, a future AGI's morality would be shaped by its understanding of the world and its capabilities. It might do something that we think is terrible but isn't, because we lack the capacity to understand why it's doing what it's doing. I'm thinking about how a dog might think going to the vet is punishment when we are actually doing it out of love.


There’s a wonderful novella by Ted Chiang called “The Lifecycle of Software Objects” that addresses your thoughts exactly. Highly recommended.


Thanks. I’ll check it out.


Intelligence in general seems to be a spectrum for animals. Future AGI may be on an entirely different spectrum which isn't directly comparable. We won't know until someone actually builds one and we have a chance to evaluate it.


"True AGI" is often used in a way that means "human-like intelligence, but faster, more consistent and of greater depth". In that case, knowing that embodied agents are the way forward is quite trivial. We've known for a long time that the development of a human brain is a function of its sensory inputs - why would this be any different for an artificial intelligence, especially if designed to mimic/interface with a human one?


That's not the right question to ask. You can construct all sorts of hypotheticals or alternative answers but all of that is meaningless until someone actually builds it.



