I find it very interesting that to a layperson, the idea of a computer being able to beat a human at a logic game is pretty much expected and uninteresting.
You try to share this story with a non-technical person and they will likely say "Well, duh... it's a computer".
I'd say, if anything, the average person's perception of what AI can do has shifted in the opposite direction. In the mid-twentieth century, the idea of a near future involving computers/robots that thought and interacted much like humans but never made mistakes was pretty mainstream. People have rather dialled back their expectations of feasible computers since then, to the point that the average layman thinks a hardcoded humorous easter-egg response in Siri is impressive: although talking to Siri is just a more error-prone alternative to using the keypad, the response sort of seems like how a human would handle the question.
The average person isn't impressed by computers winning at Go because they vastly underestimate the complexity and open-endedness of the game, and wonder why it's really any more complex than chasing Pac-Man through a maze, which their computers were doing quite happily, and even with apparent personality, in the 1980s.
That's why task-based notions of general intelligence are rubbish.
Here is what humans can do: when presented with pretty much any task, specified however poorly, they can design hardware and algorithms that beat them at that task.
That's a decent measure of intelligence. A decent measure of creativity is coming up with tasks that make the intelligence part as interesting as possible.
Optimizing compilers do this already, and they aren't 'intelligent'. IIRC AlphaGo is based on the same algorithm that could beat Atari games sight unseen, satisfying the 'novel situation' requirement you set forth.
That's a point of concern for using the Turing Test to assess the intelligence of a computer, but only because the computer also has to be trained to exhibit human-like traits that are considered unintelligent.
But on the other hand, in order to have a shot at passing the Turing Test, a computer first has to be able to understand and speak human language. And that's pretty damn hard, actually.
Yes, we can deploy statistical methods, deep learning or what have you for classification, feature extraction and so on, but natural language is context-rich and ambiguous. NLP isn't a conscious process for us either, yet our brains are capable of disambiguating effortlessly, and a big part of that is the rich (human) experience we gain while growing up. Being able to speak natural language is the one big trait that separates us from animals. It's why we can cheat death: our children can learn from the acquired knowledge of all of their ancestors.
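To make the ambiguity point concrete, here's a toy Python illustration (my own example, nothing from this thread): two sentences with identical surface statistics but very different meanings. Purely order-free feature extraction can't tell them apart; disambiguation needs structure, context and world knowledge.

    # Toy example: identical bag-of-words features, different meanings.
    from collections import Counter

    def bag_of_words(sentence):
        """Order-free word counts, a classic shallow text feature."""
        return Counter(sentence.lower().split())

    a = bag_of_words("man bites dog")
    b = bag_of_words("dog bites man")
    print(a == b)  # True -- the features can't distinguish who bit whom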
This is different from playing chess, at least. With chess you can deploy smart algorithms, but in the end it's still a raw search for good moves in the space of all possible moves, giving up on branches with a bad score. Raw search is what computers have always been good at. That's not really possible with NLP, because with chess your vocabulary is very limited, and calculating good moves doesn't require extensive knowledge about the world we live in.
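For the curious, here's a minimal sketch of that "raw search" idea in Python: plain minimax with alpha-beta pruning over a hand-made toy tree. Real chess engines layer careful evaluation functions and move-ordering heuristics on top, but the core really is searching the move space and abandoning branches that are provably worse than one already found.

    import math

    def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
        """node is either a leaf score (a number) or a list of child nodes."""
        if isinstance(node, (int, float)):   # leaf: position evaluation
            return node
        if maximizing:
            best = -math.inf
            for child in node:
                best = max(best, alphabeta(child, False, alpha, beta))
                alpha = max(alpha, best)
                if alpha >= beta:   # "giving up on branches with a bad score"
                    break
            return best
        else:
            best = math.inf
            for child in node:
                best = min(best, alphabeta(child, True, alpha, beta))
                beta = min(beta, best)
                if alpha >= beta:
                    break
            return best

    # Toy tree: inner lists are choice points, numbers are leaf evaluations.
    tree = [[3, 5], [6, [9, 8]], [1, 2]]
    print(alphabeta(tree, maximizing=True))  # -> 6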
Computers will surely be able to do that in the future and that will be a major milestone towards "true AI", but for now computers cannot do it.
On the Turing Test: some features considered unintelligent, like the tendency to lie or being sensitive to insults, are actually evolutionary traits that have arguably helped humans survive. So while emulating some human traits will be counterproductive, a "true AI" will be concerned with survival and, as a consequence, will end up doing whatever it takes.
So while passing the Turing Test may not be enough, not passing the Turing Test is a sign that the computer is unintelligent.
I know this is a semi-joke, but it's worth discussing. While such people are less smart than average, or have a deficiency in their education, or both, unless you're talking about mentally handicapped people with a medical diagnosis, all normal humans are conscious, are capable of reasoning, can understand complex symbols, can entertain complex thoughts, are very good pattern-matching machines, can speak natural language and can learn and acquire new skills, which makes them intelligent.
I know the kind of people you're talking about. I have a family member like that. She's not that smart, she failed her baccalaureate, she's semi-illiterate and she probably suffers from ADHD, though a lot has to do with her upbringing. But getting answers like that from such a person means that you're not asking the right questions or the incentive to answer is not there. Give a cash reward to a cash-strapped person and you will never get an answer like "im so bored this test sux".
Yes, but you also have a database of answers from people, so even if you ask things like "what color is the sky" you're still getting the right answer.
It jokes with you, it gives vulgar answers sometimes, etc.
It's NOT good enough to fool someone, but what if you just made it sound like a person with a disability? Is that still beating the Turing test? Is talking to a "five-year-old" still beating the Turing test?
Or does it have to be a 100-IQ adult fluent in English? In that case it's just a matter of improving the bot a bit. Making it a funny and charming bot would require even more effort, but that's still just slowly improving the state of the art with techniques like reinforcement learning or neural networks.
When a bot actually beats the Turing test, it won't be big news, because it will just be a slightly better Jabberwacky.
> So while passing the Turing Test may not be enough, not passing
> the Turing Test is a sign that the computer is unintelligent.
You could have an intelligence that's just not smart in the human sense. Consider running into an alien intelligence evolved from our equivalent of octopuses: you ask it a question, but it only communicates via color changes on its body.
Similarly, you can conceive of an AI that's smart, self-aware and intelligent but just hasn't been developed to talk to humans.
The Turing test is a fine test to figure out whether your AI is conversational with humans, but the OP I was replying to was suggesting it as a general AI intelligence test. It's not meant for that, and will give you both false positives and negatives.
I've encountered this many times over the past couple of days, haha. I then have to explain that previous AIs, like those for chess, largely used brute-force computation to simulate each move, while these AIs "actually learn, similar to our brain". Probably not the most scientific explanation, but I share your sentiment.
They use Monte Carlo methods for looking around; the space is incomprehensibly large, so it's more like a tiny sampling. They have both value and policy (deep) neural nets: one is trained to get a sense of good/bad individual board states (value), and one to get a sense of good/bad trajectories (policy, which evaluates chains of moves).
Chess has a fairly straightforward, well-established ranking of piece power. A knight is more or less always a knight.
In Go, you kind of make up your "pieces" from scratch, and there isn't any single common ranking. This makes even evaluating the board difficult (value), which is a sub-step of making your higher-level plans (policy).
The outputs of the two deep neural nets help cut off, or prevent sampling of, bad game states, and encourage looking into the fruitful areas; the sketch below shows roughly how.
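Here's a rough toy reconstruction in Python of PUCT-style tree search in the spirit of AlphaGo. To be clear, this is my own sketch, not DeepMind's code: policy_net and value_net are stand-in stubs on a made-up three-move game, and a real two-player version would also flip the value's sign between plies, run rollouts, batch the network calls, and so on.

    import math

    def legal_moves(state):
        # Toy "game": build a sequence of up to three digits; no real rules.
        return [0, 1, 2] if len(state) < 3 else []

    def policy_net(state):
        # Stub prior: uniform over legal moves (empty at terminal states).
        moves = legal_moves(state)
        return {m: 1.0 / len(moves) for m in moves}

    def value_net(state):
        # Stub evaluation in [-1, 1]: pretend bigger digit sums are better.
        return sum(state) / 3.0 - 1.0  # max digit sum is 6

    class Node:
        def __init__(self, prior):
            self.prior, self.children = prior, {}  # children: move -> Node
            self.visits, self.value_sum = 0, 0.0

        def q(self):
            return self.value_sum / self.visits if self.visits else 0.0

    def puct(parent, child, c_puct=1.5):
        # Exploit high average value (q); explore moves the policy prior
        # likes but that have few visits so far (u).
        u = c_puct * child.prior * math.sqrt(parent.visits + 1) / (1 + child.visits)
        return child.q() + u

    def simulate(root):
        state, node, path = [], root, [root]
        while node.children:                          # 1. selection
            move, node = max(node.children.items(),
                             key=lambda mc: puct(path[-1], mc[1]))
            state.append(move)
            path.append(node)
        for move, p in policy_net(state).items():     # 2. expansion (policy)
            node.children[move] = Node(prior=p)
        v = value_net(state)                          # 3. evaluation (value)
        for n in path:                                # 4. backup
            n.visits += 1
            n.value_sum += v

    root = Node(prior=1.0)
    for _ in range(200):
        simulate(root)
    best = max(root.children.items(), key=lambda mc: mc[1].visits)[0]
    print("most-visited first move:", best)  # 2, given this toy value net

The visit counts concentrate where the policy prior and the value estimates agree the position is promising, which is exactly the "cut off bad states, look into fruitful areas" behaviour described above.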
Well, it's a bit more complicated than that. Neither chess engines nor Go engines use brute force (that is, exhaustive search). Although Go does have a much higher branching factor, and that does affect the search algorithm used, the biggest challenge is being able to write good evaluation and move-guessing heuristics.
Go is a lot more psychological and emotional than playing tic-tac-toe. It's a strategy game with undecided outcomes, where one's style and vision change how you play and see the game.
The computer was able to overcome the computational difficulty that humans compensate for with abstractions and strategic concepts. We still use those to understand what's going on in the game, but the computer is oblivious to them and takes a purely tactical view.
To me, "creative" is not the right word. Maybe try "intuitive".
As Radim says, of course, intuition is only relevant where logic fails. However, no computer, including AlphaGo, has sufficient processing power to take the logic-only approach to the, uh, logical extreme (more board states than atoms in the universe, etc.).
So both humans and computers play by a combination of logic and intuition. Surely Lee Sedol must be incredibly good at both. Perhaps AlphaGo is better than Lee Sedol at the logic part (or perhaps not, but just suppose for the sake of argument).
In this light, what is interesting is that AlphaGo is sufficiently good at intuition, a domain we might have considered uniquely human[0], to complement its ferocious logical power.
[0] I find it foolish to consider anything uniquely human, but a humanist essentialist might make such a claim.
The other interesting bit is the combination, the intuition guiding (and evaluating the results of) the logical searching.
I'm still holding out for the discovery of a proof of a winning strategy, as with other combinatorial games. That, to me, is the logical extreme, not magic evaluation of all states. There are two interesting books on analyzing Go with combinatorial game theory; they found a neat system of scoring the endgame, and can construct example boards with a single best move that can be found by mechanically applying their system but that stumps 9p players (who use whatever systems they use).
It's a creative game only to the degree your logic (reading) is not strong enough :)
I mean, deep down, Go is a perfect-information, zero-sum, discrete, two-player game.
I think a more successful pitch is showing that the strategies and skills coming from Go (human or robot) can be useful outside the game, as I argue here: http://rare-technologies.com/go_games_life/