“If we define understanding as human understanding, then AI systems are very far off,”

This took me down the following line of thought. If we want AGI, we should probably give these neural networks an overarching goal, the same way our intelligence evolved in the presence of overarching goals (survival, reproduction...). It's these less narrow goals that allowed us to evolve our "general intelligence". It's possible that by trying to construct AGI through the accumulation of narrow goals, we are taking the harder route.

At the same time, I think we should not pursue AGI the way I'm suggesting is best; there are too many unknown risks (the paperclip problem...).

Of course, all this raises the questions of what AGI is, how we define a good overarching goal to prompt it, and many more...




I think the best concise definition I've run across is what I heard Yann LeCun say in his recent interview with Lex Fridman.

“The essence of intelligence is the ability to predict.” -Yann LeCun
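As a toy illustration (my own sketch, not anything from the interview): the "prediction" framing is quite literal in modern ML, where a model is trained only to predict what comes next and everything else is emergent. A minimal character-level bigram predictor in Python:

    # Toy sketch: "intelligence as prediction" reduced to its simplest form,
    # a bigram model that predicts the next character from observed frequencies.
    from collections import Counter, defaultdict

    def train(text):
        """Count how often each character follows each other character."""
        counts = defaultdict(Counter)
        for prev, nxt in zip(text, text[1:]):
            counts[prev][nxt] += 1
        return counts

    def predict_next(counts, prev):
        """Return the most frequent successor of `prev` seen during training."""
        if prev not in counts:
            return None
        return counts[prev].most_common(1)[0][0]

    corpus = "the quick brown fox jumps over the lazy dog. the end."
    model = train(corpus)
    print(predict_next(model, "t"))  # 'h', learned purely from predicting what comes next

Scaled up from bigram counts to transformers over trillions of tokens, that same next-token prediction objective is what today's large language models are trained on.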


That's just the current leading idea of how the brain works: predictive processing. As our understanding advances, perhaps this will be seen as only one facet of intelligence. For instance, where does creativity fit into this definition?



