“If we define understanding as human understanding, then AI systems are very far off,”
This took me down the following line of thought. If we wanted AGI, we should probably give these neural networks an overarching goal, the same way our intelligence evolved in the presence of overarching goals (survival, reproduction...). It's these broader, less narrow goals that allowed us to evolve our "general intelligence". It's possible that by trying to construct AGI through the accumulation of narrow goals we are taking the harder route.
At the same time, I don't think we should actually pursue AGI the way I'm suggesting is best; there are too many unknown risks (the paperclip problem...).
Of course, all this raises the question of what AGI is, how we would define a good overarching goal to prompt AGI, and much more...
That's just the current idea of how the brain works: predictive processing. As we advance our understanding, perhaps this will come to be seen as only one facet of intelligence. For instance, where does creativity fit into this definition?