The question (as Ke Jie says, but the headline hides) is not if AlphaGo will beat him, but when.
I was taught a computer would not beat a professional player in my lifetime. Now, there is maybe one player who can beat AlphaGo. I guess this won't be true for too long. When? That would be an interesting bet.
You're reaching. A constrained AI singularity isn't a thing. Either it's a singularity or it isn't.
AI is definitely getting better, but it is all application specific. AI is not at the point of setting its own goals or fixing all of its errors. We still need humans for that.
Bah, replace "singularity" with "superhuman", and we're set. We've had superhuman arithmeticians since the dawn of computing and superhuman chess players since Deep Blue or so; superhuman Go players and superhuman drivers are not too far off…
Superhuman programmer with intelligence explosion capabilities… yeah, that's a whole 'nother game.
It sounds less weird if you think: "AI has automated some jobs, and eventually it may automate away most of them, in the same way (pending AlphaGo victory) it's now automated winning at most board games... So what happens if the job of programmer gets automated away?"
(I'm convinced that programmers will take the AI-automating-all-the-jobs idea more seriously once it's their own jobs that are on the line)
There are a few ways I can think of to object to this line of reasoning:
1) You could argue that programming will be a job that never gets automated away. But this seems unlikely--previous intellectual tasks (Chess, Jeopardy, Go) were thought to be "essentially human", and once they were solved by computers, got redefined as not "essentially human" (and therefore not AI). My opinion: In the same way we originally thought tool use made humans unique, then realized that other animals use tools too, we'll eventually learn that there's no fundamental uniqueness to human cognition. It'll be matter & algorithms all the way down. Of course, the algorithms could be really darn complicated. But the fact the Deepmind team won at both Go and Atari using the same approach suggests the existence of important general-purpose algorithms that are within the reach of human software engineers.
2) You could argue that programming will be automated away but in a sense that isn't meaningful (e.g. you need an expensive server farm to replace a relatively cheap human programmer). This is certainly possible. But in the same way calculators do arithmetic much faster & better than humans do, there's the possibility that automated computer programmers will program much faster & better than humans. (Honestly humans suck so hard at programming https://twitter.com/bramcohen/status/51714087842877440 that I think this one will happen.) And all the jobs that we've automated 'til now have been automated in this "meaningful" sense.
> (I'm convinced that programmers will take the AI-automating-all-the-jobs idea more seriously once it's their own jobs that are on the line)
I assure you we do not. We would all love to be the one to create such a program. But don't worry, it isn't happening anytime soon in the form of a singularity.
Just like other people in other jobs, we sometimes come up with more efficient ways of doing things than our bosses thought possible, and then we have some extra free time to do what we like.
"But don't worry it isn't happening anytime soon in the form of singularity."
Probably not soon. Worth noting, though, that the Go victory happened a decade or two before it was predicted to.
(To clarify, I was not trying to establish that any intelligence explosion is on the immediate horizon. Rather, I was trying to establish that it's a pretty sensible concept when you think about it, and has a solid chance of happening at some point.)
>Just like other people in other jobs, we sometimes come up with more efficient ways of doing things than our bosses thought possible, and then we have some extra free time to do what we like.
Yes, I'm a programmer (UC Berkeley computer science)... I know.
> (To clarify, I was not trying to establish that any intelligence explosion is on the immediate horizon. Rather, I was trying to establish that it's a pretty sensible concept when you think about it, and has a solid chance of happening at some point.)
Well first you were saying programmers may hesitate to create an AI singularity because it may cost them their job. I said we would love to but probably won't anytime soon. Now you're saying that it's likely that it will happen some day. I'm not sure these points follow a single train of thought.
>Well first you were saying programmers may hesitate to create an AI singularity because it may cost them their job.
The line about programmers fearing automation only once it affects them was actually an attempt at a joke :P
The argument I'm trying to make is a simple inductive argument. Once something gets automated, it rarely, if ever, gets un-automated. More and more things are getting automated/solved, including things people said would never be automated/solved. What's to prevent programming, including AI programming, from eventually being affected by this trend?
The argument I laid out is not meant to make a point about wait times, only feasibility. It's clear people aren't good at predicting wait times--again, Go wasn't scheduled to be won by computers for another 10+ years.
The fact that programming is an exceptionally ill-defined task. Computers are great at doing well-specified tasks. In many ways, programming is the act of taking something poorly-specified and making it specific enough for a computer to do. Go, while hard, remains very well defined.
I hope for more automation in CS. It will help eliminate the boilerplate and let programmers focus on the important tasks.
> In many ways, programming is the act of taking something poorly-specified and making it specific enough for a computer to do.
Software development in the broad sense is that, sure; I'm not sure I'd say programming is that. Taking vague goals and applying a body of analytical and social skills to gather information and turn it into something clearly specified and unambiguously testable is the requirements gathering and specification area of systems analysis. That's certainly an important part of software development, but it's a distinct skill from programming (though, given the preference in many modern methodologies for a lack of strict functional distinctions within software development teams, it's often a skill needed by the same people that need programming skills).
There is one uniqueness to human cognition: the allowance to be fallible. AI will never be able to solve problems perfectly, but whereas we are forgiven that, they will not be, because we've relinquished our control to them ostensibly in exchange for perfection.
It may interest you to know that machine learning algorithms often have an element of randomness. So they are allowed to explore failures. Two copies of the same program, trained separately and seeded with different random numbers, may come up with different results.
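For a concrete (toy) illustration of that, here's a made-up hill-climbing "training" run -- not any real ML library -- where the only difference between two runs is the random seed:

```python
# Toy illustration: same training procedure, different seeds, different results.
# The objective and all numbers are invented for the sketch.
import random

def train(seed):
    rng = random.Random(seed)
    x = rng.uniform(-2, 2)                    # random initialization
    score = lambda v: -(v * v - 1) ** 2       # two equally good peaks, at -1 and +1
    for _ in range(1000):
        step = rng.uniform(-0.1, 0.1)         # random exploration
        if score(x + step) > score(x):        # keep the "mistake" only if it helped
            x += step
    return round(x, 1)

print(train(seed=1), train(seed=2))  # may print e.g. -1.0 and 1.0
```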
I'm not saying there will or won't be AI some day, I just thought that point was relevant to your comment.
Sorry, I found that a bit difficult to follow... it sounds like you think human programmers will beat AIs because we can make mistakes?
Well here's my counter: mistakes either make sense to make or they don't. If they make sense to make, they aren't mistakes, and AIs will make the "mistakes" (e.g. inserting random behavior every so often just to see what happens--it's easy to program an AI to do this). If they don't make sense to make, making them will not be an advantage for humans.
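For example, epsilon-greedy exploration -- a standard trick, sketched here with toy values -- is exactly "insert random behavior every so often just to see what happens":

```python
# Sketch of an AI making "mistakes" on purpose (epsilon-greedy selection).
import random

def choose_action(q_values, epsilon=0.1, rng=random):
    if rng.random() < epsilon:
        # Deliberate "mistake": try a random action just to see what happens.
        return rng.randrange(len(q_values))
    # Otherwise exploit the current best estimate.
    return max(range(len(q_values)), key=lambda i: q_values[i])

print(choose_action([0.2, 0.5, 0.1], rng=random.Random(42)))  # usually 1
```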
At best you're saying that we'll hold humans of the future to a lower bar, which does not sound like much of an advantage.
You ignored the "setting its own goals" part, which we haven't even started on yet. Humans define the arithmetic problems. Humans define what it means to win at chess and Go. Humans choose a specific destination to drive to.
Once humans have defined and programmed what to do, the machine happily does it much better than any human could (depending on the domain).
One day we'll be tempted to write a more serious, more complex "game", with real-world consequences. For those, we'd better specify our goals precisely, lest we face unforeseen unfortunate consequences -- there will be reasons why the machine does it better, ranging from faster computation to exquisite motor control to a total lack of ethics, assuming ethics wasn't part of the programming.
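To make that concrete, a contrived sketch (the plans and all numbers are invented) of a goal specified imprecisely: the optimizer picks the literal optimum, not the intended one.

```python
# Hypothetical "game" objective: get there fast. Nothing else was specified.
plans = [
    {"name": "drive safely",    "time": 30, "damage": 0},
    {"name": "run every light", "time": 12, "damage": 9},
]

best = min(plans, key=lambda p: p["time"])   # optimizes exactly what we asked for
print(best["name"])  # "run every light" -- ethics wasn't part of the programming
```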
Next thing you know, you need to solve the Friendly AI problem.
Well, actually, this neural network style of machine learning is not all that application-specific. You create a general architecture and throw lots of data at it; it could be Go, it could be recognising pictures. You will need different general architectures, but the point is that this is a fundamentally different approach from the old-school chess algos.
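In miniature (a toy stand-in, nothing like the actual AlphaGo networks), the "general architecture, throw data at it" idea looks like this: one model definition and one training loop, reused unchanged across two unrelated "tasks".

```python
# Toy model + generic training loop; only the dataset differs per task.
class TinyModel:
    def __init__(self):
        self.w = 0.0
    def predict(self, x):
        return self.w * x
    def update(self, x, y, lr=0.01):
        # One gradient step on squared error; nothing here is task-specific.
        self.w -= lr * 2 * (self.predict(x) - y) * x

def train(dataset, epochs=100):
    model = TinyModel()
    for _ in range(epochs):
        for x, y in dataset:
            model.update(x, y)
    return model

# Identical code "learns" two different tasks; only the data changed.
doubler = train([(1, 2), (2, 4), (3, 6)])
negator = train([(1, -1), (2, -2), (3, -3)])
print(round(doubler.w, 2), round(negator.w, 2))  # ~2.0 and ~-1.0
```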
Yeah I get that. I studied and have worked in machine learning. Neural networks are more general than previous approaches but they still need to be customized by humans for different applications. And none of these programs are going off and learning how to play other games on their own. They need to be led.
> Scientists tested Deep Q’s problem-solving abilities on the Atari 2600 gaming platform. Deep-Q learned not only the rules for a variety of games (49 games in total) in a range of different environments, but the behaviors required to maximize scores. It did so with minimal prior knowledge, receiving only visual images (in pixel form) and the game score as inputs.
Sure, the problem space is still fairly limited, but the AI did learn new games without much guidance at all.
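A toy, runnable stand-in for that setup (far simpler than Deep-Q, with a made-up 3-cell "game"): the agent is given nothing but its observation and a score signal, and still learns which action maximizes the score.

```python
import random

def step(state, action):                    # hypothetical 3-cell corridor game
    nxt = max(0, min(2, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == 2 else 0.0)  # score only at the far end

Q = [[0.0, 0.0] for _ in range(3)]          # learned value estimates
rng = random.Random(0)
for _ in range(200):                        # episodes
    s = 0
    for _ in range(20):                     # explore half the time while learning
        a = rng.randrange(2) if rng.random() < 0.5 else max((0, 1), key=lambda i: Q[s][i])
        s2, r = step(s, a)
        Q[s][a] += 0.5 * (r + 0.9 * max(Q[s2]) - Q[s][a])  # Q-learning update
        s = s2

print(max((0, 1), key=lambda i: Q[0][i]))   # learned start action: 1 (move right)
```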
We should rejoice in that fact. We are woefully unprepared, as a species, for true learning programs. Let us hope that between now and the time we do manage to create one, we mature to the point where we don't create these thinking entities for malicious purposes.
> Let us hope that between now and the time we do manage to create one, we mature to the point where we don't create these thinking entities for malicious purposes
That's a long way off and we'll face a lot of other problems before then.
For instance, fear mongering of a looming AI. We're better off focusing on teaching kids computer science and allowing them to see for themselves how theoretical and unscary true AI remains.
A singularity is a point where things are advancing so fast on different fronts it's impossible to make predictions about how fast things are going or where the next advance will come from. I think we're about there with "constrained AI", because everyone was looking at Facebook's bot which was playing with a four-stone handicap when Google skipped straight to champion-beating.
That is not the general understanding of singularity...
> The technological singularity is a hypothetical event in which artificial general intelligence (constituting, for example, intelligent computers, computer networks, or robots) would be capable of recursive self-improvement (progressively redesigning itself), or of autonomously building ever smarter and more powerful machines than itself, up to the point of a runaway effect—an intelligence explosion[1][2]—that yields an intelligence surpassing all current human control or understanding.
I hear there's still lots of work to be done for real-time strategy computer games such as StarCraft (and even those rely on a great deal of easy-to-compute mechanical skills).
How much time it will take however, I cannot begin to guess.
80% of StarCraft is the APM - actions per minute. Strategy may exist, but it's nothing a computer can't replicate with human-coded heuristics. If a computer doesn't have to worry about a limit on APM, I'm sure it would obliterate any human StarCraft player.
I'm just saying that the APM of the best players is also in the top percentile. It's a prerequisite that already culls people who might be better tacticians.
I don't believe that Deep Blue's victory in Chess has any similarity to AlphaGo's wins. DB was just brute-forcing its way through hundreds of millions of positions; AlphaGo uses some learned strategy behind its moves.
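The brute-force idea fits in one toy function (illustrative only; Deep Blue layered custom hardware, pruning, and a hand-tuned evaluation on top of this): enumerate the game tree and pick the move that survives best play by the opponent.

```python
# Minimax over a tiny invented game tree; leaves are position evaluations.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):       # leaf: evaluated position
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Every line of play is enumerated; nothing is "learned".
tree = [[3, 5], [2, [9, 1]]]
print(minimax(tree))  # 3: best guaranteed outcome against best replies
```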