Making decisions is not that complicated, nor is it interesting. Your iPhone can decide to kill a background process that uses too many system resources. What's more interesting (and what seems to be the main function of the so-called homunculus) is being aware of your own location in space and time, as well as remembering previous locations. In other words, having some model of the world and knowing your place in it is what computers haven't achieved yet in any meaningful way.
How is map building AI? It's a pretty mechanical process. Start somewhere, make some measurements, move along, repeat. At what point is there any notion of intelligence involved?
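To make that concrete, here's a toy version of the loop (a minimal sketch; the names and the perfect range sensor are my own simplifications, since real SLAM has to cope with noise in both odometry and measurement):

    # Toy "mechanical" map building: measure, record, move, repeat.
    # Assumes perfect dead reckoning and a perfect sensor; real SLAM
    # exists precisely because neither assumption holds.

    def build_map(pose, steps, sense, move):
        """pose: (x, y); sense(pose) -> set of obstacle cells; move(pose) -> next pose."""
        occupied = set()
        for _ in range(steps):
            occupied |= sense(pose)   # make some measurements
            pose = move(pose)         # move along
        return occupied

    # A fixed 1-D world and a robot that walks east one cell per step.
    world = {(2, 0), (4, 0), (7, 0)}
    sense = lambda p: {c for c in world if abs(c[0] - p[0]) <= 1 and c[1] == p[1]}
    move = lambda p: (p[0] + 1, p[1])
    print(build_map((0, 0), 8, sense, move))  # {(2, 0), (4, 0), (7, 0)}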
... but I agree that there's nothing in particular that distinguishes it from other problems.
My comment was satirical. SLAM is an interpretation of what the parent comment described:
"[B]eing aware of your own location in space and time, as well as remembering previous locations. In other words, having some model of the world and knowing your place in it[.]"
There is a general pattern of statements of the form "We'll only really have AI when computers X", followed by computers being able to X, followed by everyone concluding that X is just a simple matter of engineering like everything else we've already accomplished. As my AI prof put it, ages ago, "AI is the study of things that don't work yet."
Or it could be that a system capable of reasoning its way to doing X would be intelligent, but you can also teach to the test, so to speak, and build a system that does X without generalizing, thus satisfying X without being intelligent.
> Or it could be that a system capable of reasoning its way to doing X would be intelligent, but you can also teach to the test, so to speak, and build a system that does X without generalizing, thus satisfying X without being intelligent.
Which is exactly what we do with many kids today; it makes you wonder how many times we might invent AI and not know it, because we didn't raise it correctly and it appears too dumb to be considered a success.
That's where a lot of people have arrived, to be sure: distinguishing the notion of an "artificial general intelligence" from the other things in the AI bucket.
I don't think the iPhone has a model of the world with its own body in it. You could program that too, and once you also add the ability to move and act toward a goal, you have an intelligent agent. However, making that system actually useful is what takes AI to levels of complexity we can't reach yet. Compare ants and dogs: the former is achievable as a simulation but not interesting; the latter could be genuinely useful but is already too complex for us to implement.
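Roughly what I mean, as a sketch (the grid world and all names are made up for illustration; the hard part is making the model and the goal rich enough to be useful, not the loop itself):

    # Minimal goal-directed agent: keeps a model of the world that
    # includes its own body, and picks the move closest to a goal.

    def agent_step(model, goal):
        x, y = model['self']
        moves = [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]]
        return min(moves, key=lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1]))

    model = {'self': (0, 0)}   # the agent's model of the world, itself included
    goal = (3, 2)
    while model['self'] != goal:
        model['self'] = agent_step(model, goal)  # act, then update the self-model
    print(model)  # {'self': (3, 2)}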
Control theory: I worked with some smart folks designing plant control systems. All of it was human labor. There was zero intelligence on the part of the tools, and every parameter had to be figured out by the person designing the control strategy.
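For instance, even the workhorse PID loop is pure arithmetic; the gains below are exactly the kind of parameters a human has to pick by hand (mine are invented for illustration, along with the toy plant):

    # Classic PID loop: the tool just does arithmetic; the "intelligence"
    # is entirely in whoever chose kp, ki, kd for the plant at hand.

    def pid_controller(kp, ki, kd, dt):
        integral, prev_error = 0.0, 0.0
        def step(setpoint, measured):
            nonlocal integral, prev_error
            error = setpoint - measured
            integral += error * dt
            derivative = (error - prev_error) / dt
            prev_error = error
            return kp * error + ki * integral + kd * derivative
        return step

    # Hand-tuned gains (invented for illustration) driving a toy first-order plant.
    control = pid_controller(kp=2.0, ki=5.0, kd=0.1, dt=0.1)
    level = 0.0
    for _ in range(50):
        u = control(setpoint=1.0, measured=level)
        level += 0.1 * (u - level)  # toy plant dynamics
    print(round(level, 3))  # settles near the setpoint of 1.0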
Causal induction: sounds interesting until you dig in and realize everything is non-computable.