That unwavering respect for individual liberty and personal sovereignty is superior to all other systems by which man can relate to man.
That no end can be moral if the means to that end require using violence and force against a peaceful individual who has not aggressed against his neighbor.
That a free society is a strong and secure society.
That only freedom can breed sustained innovation and growth.
Human intelligence and free will are largely the result of quantum interactions that are more complex than almost anyone suspects. True artificial intelligence will never be possible on transistor-based computers, regardless of Moore's law or software advances.
I was actually thinking about this question an hour or so ago, in the shower. I believe this is pure coincidence.
I also believe that, given a certain amount of liberty, motivation is the single most important factor in determining an individual's success. This depends on the definition of success, but the claim is reasonably robust. In particular, it covers wealth and happiness (commonly used to judge other people's success and our own, respectively).
Too many! But I like the second one's (Stanislas Dehaene's) idea that we can mobilize our old areas [of the brain] in novel ways.
I think so too. It reminds me of a study that showed that when we use tools (e.g., drive a car), we use the same brain areas as when we move our own bodies. We literally use tools as an extension of the body. That's deep sensory and motor flexibility, which seemingly could exist without intelligence.
I too have been thinking about this same phenomenon, and I believe that some day (soon) BCIs + exoskeletons/artificial wings will give us the true ability to fly like birds.
I don't really believe anything I can't prove (in the philosophical sense; I do believe reasonable things that I haven't personally verified, for example that the moon is not made of cheese).
But there are things I'd like to believe but don't know how to "prove"/justify:
- That we have free will (in the sense that requires the universe to be nondeterministic).
- That the universe consists of matter, albeit strangely behaving matter, and that everything we see in the universe, no matter how wonderful, can ultimately be reduced to the interactions of this matter.
I have pondered this for a while. When we say human intelligence, or "true intelligence" as you call it, the fact is that it is full of irrationality and errors. I believe that computational AI will go farther than we can imagine, for the simple reason that we cannot process information the way a computer can; it's relentless.
Insisting that computer intelligence resemble human intelligence is probably the characteristic that the future will mock the most, like the way we mock primitive overdecorated Victorian machines.
I too have been pondering this, and I think the field is lacking the concept of artificial _imagination_. Maybe intelligence = knowledge + imagination, and since we already have massive artificial knowledge (Wikipedia, for example), AI is just missing the imagination to comprehend all this data.
Hmm, okay. I don't know why you wouldn't consider bottom-up models to be computational (and it doesn't seem to be a standard term; Google's second result is this comment thread), but at least I see what you meant now.
We can't be so special that, in all of the universe, we're the only living organisms.