>> the persistent but deeply flawed idea that an intelligent system must somehow model the world by creating internal representations of things in the world.
OK, we don't need internal representations to reason about the world.
How do we store knowledge for reuse then? Do we have to relearn what the entire world means every time we need to form a thought about anything? That sounds a bit... inefficient.
Is it such a big philosophical problem to accept that maybe the brain does whatever is best in each situation? Maybe there are situations that are best served by an internal model: some background knowledge of a range of well-understood tasks. Maybe there are situations where it's best to intuit and fly by the seat of your pants, and a stored model of the world would just slow you down. Why does the brain have to be a one-trick pony, like most of our AI?
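To make that "not a one-trick pony" idea a bit more concrete, here's a rough Python sketch (every name in it is hypothetical, not from any real system): an agent that consults a stored internal model only for situations it has seen often enough to trust, and otherwise falls back to a cheap reactive policy.

```python
# Hypothetical sketch of a hybrid agent: model-based for familiar,
# well-understood situations; fast and model-free everywhere else.
from collections import defaultdict

class HybridAgent:
    def __init__(self, familiarity_threshold=5):
        self.model = {}                 # stored knowledge: situation -> best known action
        self.visits = defaultdict(int)  # how often each situation has been seen
        self.threshold = familiarity_threshold

    def react(self, observation):
        """Fast, model-free fallback: act on the raw observation alone."""
        return "explore" if hash(observation) % 2 else "retreat"

    def plan_with_model(self, observation):
        """Slower, model-based path: reuse stored background knowledge."""
        return self.model[observation]

    def act(self, observation):
        self.visits[observation] += 1
        if observation in self.model and self.visits[observation] >= self.threshold:
            return self.plan_with_model(observation)  # well-understood task
        return self.react(observation)                # fly by the seat of your pants

    def learn(self, observation, good_action):
        """Cache what worked so it can be reused instead of relearned."""
        self.model[observation] = good_action


agent = HybridAgent()
agent.learn("kitchen", "make coffee")
for _ in range(6):
    agent.act("kitchen")            # early visits still use the reactive path
print(agent.act("kitchen"))         # now familiar enough to use the stored model
print(agent.act("dark alley"))      # unfamiliar situation: reactive path
```

The point of the sketch is only that the two strategies aren't mutually exclusive; the dispatch rule (here a crude visit count) is the interesting open question, not the code.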