
I always welcome ambitious goals - the fact that a problem has not been solved for half a century is absolutely no reason to call it quits. However, in the interest of spending money wisely, it's worth it to at least sit back for a moment and think about what went wrong before, and what we can do to steer ourselves towards the right path.

The article mentions revisiting fundamental assumptions, but doesn't mention a single specific thing that the team will do differently. I've studied AI on my own for some time, and the biggest problem (as far as I can tell) is that researchers can't agree on what "intelligence" means. AI research for the past fifty years hasn't been about building an artificial intelligence machine; it's been about precisely defining what "intelligence" means. Every time there was a breakthrough, after a bit of hype people realized that the program was actually pretty dumb, that it was a testament to the intelligence of the programmer rather than the machine, and that the bar for "intelligence" had simply shifted a bit higher up.

So, what exactly is this team doing differently? How do they define "intelligence", and what do they intend to build?




It seems (to me) that a good starting point for intelligence would be "the average person."

Refining this definition would involve researching all relevant objective tests that can be measured accurately, then forming some sort of matrix of comparison.

Initially I would pursue the testing through a single interface - a black-box approach with a text interface - then later extend it to other modalities (such as movement / navigation, visual, aural, etc.)

If the AI can function on par with the average control, then it could be said that it is averagely intelligent based on the test matrix.

NOTE: This is just a top-of-the-head idea, I know it is a lot more complex than I make out (how do you define learning?) but it seems a logical starting point to me. Use current tests and results - just be careful not to feed the AI the original data.
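
As a rough illustration of the comparison-matrix idea, here is a minimal sketch in Python. The test names, human baselines, and AI scores are invented placeholders rather than real data; in practice each entry would come from administering the same standardized test over the black-box text interface and to a human control group.

    # Placeholder battery of tests: all scores are made up for illustration.
    human_baseline = {          # mean scores of "the average person"
        "vocabulary":   0.70,
        "arithmetic":   0.85,
        "analogies":    0.60,
        "reading_comp": 0.75,
    }

    ai_scores = {               # hypothetical scores of the system under test
        "vocabulary":   0.72,
        "arithmetic":   0.95,
        "analogies":    0.40,
        "reading_comp": 0.55,
    }

    def comparison_matrix(ai, baseline):
        """Per test, the AI's score relative to the human baseline."""
        return {test: ai[test] / baseline[test] for test in baseline}

    for test, ratio in comparison_matrix(ai_scores, human_baseline).items():
        verdict = "at/above average" if ratio >= 1.0 else "below average"
        print(f"{test:12s} ratio={ratio:.2f} ({verdict})")

"Averagely intelligent" on such a battery would then just mean every ratio comes out at or above 1.0.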


I define intelligence as the ability to create information, which means selecting from a low-probability portion of a distribution. One precise description of this is:

1. You have a problem domain P to which the No Free Lunch Theorem applies, at least approximately. Thus, there are hard bounds on how well one algorithm can do compared to any other according to some metric.

2. Information is produced when an agent can perform significantly better than is algorithmically possible. One such metric is the compressibility of its search history.

Why is this my definition? Well, it is linked to our intuitive notion of learning and intelligence. As informally described by Hofstadter, it is an inherent ability to "step outside of the system." This means, at some time t I am behaving according to some rule set r, but at time t+c I understand the rule set and can reason about r instead of just being subject to r.

One specific result of being able to reason about a rule set is that I can take some well-formed sentence, realize it can't be generated by the rule set, and use a simpler rule set to generate it. When framed in terms of Kolmogorov complexity, I'm exhibiting a general compression capability, which implies a general (though not total) capability to solve the halting problem.

Since a problem domain with structure can be compressed, this relates to my first point in that if an agent has a generally much more compressible history than mathematically expected in an (almost) No Free Lunch domain, it is exhibiting the ability to step outside of its environment's rules, reason about them, and thus compress them.
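
As a concrete (if toy) version of that measurement: true Kolmogorov complexity is uncomputable, so the sketch below uses zlib's compressed size as a crude upper-bound stand-in, and the two "search histories" are synthetic placeholders rather than output from any real search algorithm.

    import random
    import zlib

    def compression_ratio(history: bytes) -> float:
        """Compressed size / original size; lower means more structure was found."""
        return len(zlib.compress(history)) / len(history)

    # A blind (random) searcher's history should look essentially incompressible...
    random.seed(0)
    random_history = bytes(random.randrange(256) for _ in range(4096))

    # ...while an agent that exploits the domain's structure should leave a
    # highly regular, compressible trail.
    structured_history = bytes(i % 16 for i in range(4096))

    print("random searcher :", compression_ratio(random_history))      # close to 1.0
    print("structured agent:", compression_ratio(structured_history))  # far below 1.0

Whether any agent can actually do this "better than is algorithmically possible" is, of course, exactly the point under dispute; the sketch only shows how the compressibility metric itself could be computed.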

So, you can see that my definition of intelligence as the ability to create information specifies an unambiguous and measurable capability, which also happens to be something algorithms are mathematically incapable of doing. Thus, I have both defined intelligence and disproven the logical possibility of such an AI in one fell swoop.

BTW, this is not an original thought of mine. It is a direct result of intelligent design theory, the progeny of the absolutely brilliant William Dembski.


For one, to do things differently, I'd start without trying to define 'intelligence'. Nobody can define what a game is, and still we learn, understand and say plenty of interesting things about games. Nobody can define 'water' in a way that captures all the mental images people have when they hear that word, and still we can say lots of relevant things about water. Trying to capture something like 'intelligence' with words is a foolish endeavour.


I agree with you in general (you can't get wet from the word "water"). In my experience, though, if you're building something fuzzy, it usually turns out to be a complete waste of money with no results. Human beings seem to need a reasonably narrow scope to produce something useful. So while the team doesn't need to waste time on formally defining "intelligence" in a way that completely captures all its aspects, they do need to precisely define what they're building, to some reasonable degree.

Without a scope definition they can't budget their funds, their time, and their human resources. Projects like these usually result in a waste of money with nothing to show for it. Of course if they try to define what they're building, it will be intimately linked to the definition of "intelligence" (assuming they claim they're building an intelligent machine). Then someone will come along and propose a counterexample that demonstrates how the machine likely isn't intelligent at all, and cannot perform well on some problem where humans do spectacularly, thereby shifting the team's scope and definition. And so, they'll be back to square one.


According to this talk, intelligence is a measure of how good our ability to predict is: http://www.ted.com/talks/lang/eng/jeff_hawkins_on_how_brain_...


On how dictionaries circularly define words using other words, and how humans might learn by progressively expanding analogies, starting with simple 'axioms' of body sensations like up vs. down (more on other pages): http://members.cox.net/deleyd/politics/cogsci5.htm


Wow, these three comments have just mirrored my own recent thoughts to the greatest extent I can remember. I present to you the Infinite Curiosity Loop:

http://funnylogic.com/times/txt/2009-11-infinite-curiosity-l...

And for what it's worth, the definition of intelligence has been on my mind a lot.


If you'd like to explore this subject in a structured way, then there's a large library of philosophical writings on the subject :). I think Hilary Putnam's essay on "Brains in a vat"[1] may be a nice starting point, from which you can explore both earlier and later work.

[1] http://evans-experientialism.freewebspace.com/putnam05.htm


The letter 'g' is a completely arbitrary shape. Similarly, an entire word like 'everything' is just an arbitrary shape (though subdivided into constituent organized blobs of arbitrary shape). Letters and words seem to be symbols/code that stand for something else, and that something else might be raw sensory neuron patterns or something (imagining an apple with your eyes shut might be a trick to trigger that raw data without needing external stimulus to do so, such as seeing a real apple). Even the "Sesame Street" concept of up/down might require some sort of raw inner-ear balance and sight sense to experience.



