AlphaGo's next move (deepmind.com)
285 points by bjin on May 27, 2017 | 172 comments



There was a time, not too far back, when people were considered geniuses for their ability to memorize things well.

Nowadays nobody thinks of them as geniuses.

Also, people used to be considered geniuses for knowing a lot of things.

Nowadays information is just a Google search away, so knowing a lot doesn't really mean as much as it used to. What matters more nowadays is your ability to learn and synthesize the things you know to come up with creative solutions to problems.

Basically the "memory" part of human brains has become commoditized without us even realizing it.

It's still very early, but I do think there have been some subtle but significant steps forward in the last couple of years. The most important being: machines are capable of doing certain things better, in ways humans can't easily comprehend. I think this is a glimpse into a future where the "creativity" aspect of our brains will also become commoditized, again without us realizing.

This doesn't mean machines will take over, just like machines didn't take over the world because they have better memory. But I think this will result in many humans taking advantage of this aspect to exert influence on the rest of humanity.


> Also, people used to be considered geniuses for knowing a lot of things.

This is still true, and in the eagerness to dismiss "memorization" as a thing of the past you overlook the obvious. For example, anything you care to know about, say, C++ programming or quantum field theory is available to you on the internet. But does that mean you can write a C++ program as if you had already learned the language? What if you want to write a C++ program and you have to look up everything? You will do a very poor job, if you manage at all, and it will take you a lot of time.

So yeah, until looking up stuff on the internet is as quick and effective as looking stuff up in your brain (the quick may happen, but the effective I don't think so), it is still a very worthy skill.


But you've just proven cocktailpeanuts' point that it's about the ability to use the knowledge, not simply recall it.


The ability to use knowledge does not come out of thin air. You are unlikely to find someone who knows how to program C++ extremely well, or solve problems in quantum field theory, but does not remember most of the language constructs or mathematical equations.

Continuous practice means putting in the hours at an art or a science, which by itself builds muscle memory for the language syntax, the equations, etc. It is unlikely that one can remember one but not the other.


“The best geologist is he who has seen the most rocks.” – H. H. Read


Well, in that case:

"All science is either physics or stamp collecting" -Ernest Rutherford.


You can't use the knowledge well if you have to look everything up every time, that's my point.


Memory still matters when it matters. For example, when learning a new language. In general, good real-time performance is going to require practice and memorization. It's the trivial stuff you rarely need that's less valued.

There are some other things that I think are less valued these days:

- Informed speculation counts for less when you could do a search instead. Maybe a good thing?

- Cleverness counts less when there are memes everywhere. Jokes are ever more cheap and disposable.

On the other hand, good judgement of what you find still counts.


> For example, when learning a new language

That's a good point, because I think this "ability to speak multiple languages" will become commoditized through technology too. You already see pieces of technology that enable you to communicate in realtime (although they're clunky and not accessible enough at the moment).

I agree that the "realtime" aspect will be the last wall standing to distinguish humans from machines, that is, until humans can find ways to inject circuits into the brain (which is already being explored by multiple entrepreneurs and scientists).


See my post above. We have very good translation dictionaries that list all the meanings of words in two languages you might ever need. Why isn't that multilingual ability being commoditized? And why is it still not possible for most people to speak many languages as well as their native one?


I am going to assume that this was a question you asked out of true curiosity and not a rhetorical one.

Technology has commoditized a lot of things historically. Just because you don't see something happening, doesn't mean it won't happen in the future.

Also, did you read the comment you just replied to? Because that's exactly the answer to your question. Language translation has not been commoditized yet because we still have a last-mile problem. There is always a certain threshold technology needs to cross before it becomes widespread. Using those dictionaries and Google Translate is still too slow, and you can't interact in realtime that way. Once you can directly hook circuits into your brain (or something with a similar effect), this will change.


>> Also, did you read the comment you just replied to?

Sure. Did you read the posting guidelines?


> Nowadays information is just a Google search away, so knowing a lot doesn't really mean as much as it used to.

You give Google search too much credit. What is a click away is still largely superficial information on any topic, and the popular results (or a specific data set) are often extremely biased or downright incorrect.


"First world problems"


It is not a "first world problem" to want to search for accurate and unbiased information.


I guess you didn't get the metaphor "first world problem".

Go back a couple of decades and you'll find that central authorities and organizations with an agenda to manipulate the public could easily do so, because the other 99.99% didn't have access to information.

People who wanted the truth had absolutely zero way of attaining it back then. Now you're complaining about how it's "hard" to search for accurate and unbiased information. It's not even THAT hard; you just need to google a bit more. Sure, it's not easy for everyone, but at least there's a way.


Context matters and the context of my assertion was "Nowadays information is just a Google search away, so knowing a lot doesn't really mean as much as it used to."

Knowing a lot still means a lot ;)

I do generally agree with you, though I'm hesitant to use the term "truth". There is a sense in which it is a first-world solution: a lot more information is now also available to citizens in the first world.

But you are wrong that it is not that hard, or that it is just an issue of googling a bit more. In many areas Google's improvements over these decades have been incremental. And if your search term matches hot commerce terms, then likely no amount of googling will get you where you want to go.


Yeah, I take back my words about how it's not that hard. I think I'm relatively good at googling myself; sometimes I wonder how people who aren't good at searching can live their lives. For example, most of the time "being a good programmer" means being able to come up with the right search query for your problem on stackoverflow. Most of the time you don't even know what to search for.

That said, I've been comparing the current status with how it used to be before the Internet, so I guess we're pretty much on the same page.


Google made fact recollection less important than before. However, Google itself prefers people who do well on whiteboard interviews, which could be solved perfectly by some Google searches! Memorization of historical facts is one thing, but training your memory for fast retrieval of facts and associations is still highly valued. Therefore it is still important to train your memory to the extreme. You'll do something better if you do it more than once.


> Google itself prefers people who do well on whiteboard interviews, which could be solved perfectly by some Google searches!

I don't see the problem if these problems can also be solved without relying exclusively on knowledge.

As an outsider, I'd say that Google's hiring strategy is pretty successful.


Good luck passing the interview without a thorough CS education, of which a lot is achieved through memorization and memory training. Your interviewer would never be patient enough to wait for you to figure things out over a couple of months.


How do you synthesize knowledge that you haven't memorized? How do you know in advance that you can usefully synthesize A and B (and that therefore you should look up B, which you don't know)?


>Nowadays information is just a Google search away

Knowing what is credible and what is not on the internet is a skill in itself. If you don't have that skill, you'd likely be telling people all about how Bush did 9/11 or how Hillary killed a DNC employee.


That part can be automated too. See https://fullfact.org/blog/2016/aug/automated-factchecking/ for example.


AlphaGo is better than humans at seeing patterns and making inferences based on simulations of possibilities, within a limited and perfect information domain. Several professional Go players have described AlphaGo's play as 'creative' and 'beautiful'.

What if technology similar to AlphaGo can be generalized to domains with imperfect information (Libratus from CMU recently beat top poker players; DeepStack, which is NN-based, achieved a similar feat) and to other domains (DeepMind is working on StarCraft)? What are our remaining competitive advantages against machines?

What are future geniuses supposed to be like or to do (assuming your presupposition)?


I would argue that memory is even more important now since it allows you to think of new things and connect things in your head. Even if we had a neural chip that gave us all of Wikipedia, it would not be the same as intentionally studying something and memorizing it.

While it's a lot easier to get superficial knowledge, the ability to do something deeply is an incredibly valuable skill that is powered by memory. Heck, even Euler, one of the greatest mathematicians of all time, had a phenomenal memory and could recite any verse from the Aeneid at will. I don't doubt that memory was a critical component of his success.


>> Basically the "memory" part of human brains has become commoditized without us even realizing it.

So, explain this to me like I'm 5. If Google search means you don't need to remember any knowledge any more, why is it that possession of an English-Greek dictionary does not render one capable of speaking both languages fluently?

You can look up all the words you want. Assume you have a grammar of each language at hand, also. Do you think you would be able to speak fluently, or understand a fluent speaker of a language you don't know?


I remember vividly in 1997 when Deep Blue defeated Kasparov, and I was a competitive chess player. The mystique of the game was immediately lost for me, and I never found the passion for the game that I once had. My heart goes out to the sea of Go players now searching for meaning in the game. At the very least we can take this signal as a true indicator that our world is close to being completely upheaved by intelligent machines, in all areas of intellectual pursuit.


There are several activities right now that people keep doing despite machines being better at them: martial arts (and more to the point, HEMA [1]), handicrafts, several Olympic sports. Why would anyone attempt to run a marathon under two hours, when any car can do it in 30 minutes?

I think the answer lies less in "I will be the absolute best", and more along the lines of "I will do it better than anyone before me". And sometimes, even "I will do my best" is an excellent reason for doing things.

I don't think Go players were in it due to a need for expertise that machines could not fulfill until now. And if people nowadays keep practicing with swords several centuries after the invention of firearms, Go players will do just fine.

[1] https://en.wikipedia.org/wiki/Historical_European_martial_ar...


> And sometimes, even "I will do my best" is an excellent reason for doing things.

Man, I think this is absolutely it. I feel very sorry for someone who would pin their happiness on being the best in the world. It may not be within your ability! I simply cannot run faster than Usain Bolt. So that's not the game! The game is just to do my best.


It does feel like there's a difference - maybe Go and Chess are just so much closer to what defines humanity than any physical sport?


Yes, and physical prowess was never dominated by humans anyway. We are surrounded by creatures which can fly and swim and sprint far better than any human ever could. But our minds are what separate us from the beasts. We've never been second place in the thought competition.



pretty good at consistently losing to horses in running marathons? can't say i'm surprised.


Interestingly, it looks like the distance for this contest was shortened from a standard marathon to make it "more competitive". I wonder if anyone's explored the results at different distances? Perhaps the human would have a bigger advantage at longer distances. I'm not sure why they'd shorten the distance for the humans - a marathon is a standard distance that at least hundreds of thousands of people all over the world run just for fun. There even seems to be a pretty big community of ultra-marathoners that compete at distances several times greater than marathons, and over rough terrain too.


http://www.slate.com/articles/sports/sports_nut/2012/06/long...

"There's no denying it—our kind started substituting brains for brawn long ago, and it shows: We can't begin to compete with animals when it comes to the raw ingredients of athletic prowess. Yet being the absurdly self-enthralled species we are, we crowd into arenas and stadiums to marvel at our pathetic physical abilities as if they were something special. But there is one exception to our general paltriness: We're the right honorable kings and queens of the planet when it comes to long-distance running."


RE long distance running:

Actually sled dogs way outpace humans at long distance running. Like they can do a marathon a day for a week straight, while pulling a sled.

Humans are second place tho I think.


Here's a relevant portion of the linked article, particularly the last bit about Huskies:

> But what most sets us apart as runners is that we’re really cool—we naked apes are champion sweaters and can dissipate body heat faster than any other large mammal. Our main rivals for the endurance-running crown fall into two groups: migratory ungulates, such as horses and wildebeest, and social carnivores, such as dogs and hyenas. They can easily out-sprint us by galloping. But none can gallop very far without overheating—they largely rely on panting to keep cool, and they can't pant when galloping, for panting involves taking very rapid, shallow breaths that would interfere with respiration when running. Dogs can gallop for only about 10 to 15 minutes before reverting to a trot, and so their distance-running speed tops out at about 3.8 meters per second. Horses' average distance-running speed is 5.8 meters per second—a canter. Wildebeests’ is 5.1 meters per second.

> Elite human runners, however, can sustain speeds up to 6.5 meters per second. Even run-of-the-mill joggers typically do between 3.2 and 4.2 meters per second, which means they can outrun dogs at distances greater than two kilometers.

> Our "sustainable distance" is also hard to beat. African hunting dogs typically travel an average of 10 kilometers a day. Wolves and hyenas tend to go about 14 and 19 kilometers, respectively. In repeated distance runs, horses can cover about 20 kilometers a day. Vast throngs of human runners, by comparison, routinely run 42.2-kilometer marathons in just a few hours, and each year tens of thousands of people complete ultra-marathons of 100 kilometers and longer. (A few animals can match that under special circumstances. Huskies can trot up to 100 kilometers in Arctic conditions when forced to by people. But in warmer climes—no way.)


Aren't we actually better at endurance running than any other animal?


Seems sled dogs beat us on this one:

https://youtube.com/watch?v=HDG4GSypcIE


Yet again we have created something to outdo us.


DeepMind could do it better.


Human plus bicycle is the most efficient.

Which may seem like a snide point (and a ripoff of Steve Jobs) but seriously it hints at where things could go with AI.


I think so. It's all about heat regulation: we have far more sweat glands than any other species, even our ape ancestors.


Land animal, I believe so, yes.

But an albatross would cover much more ground much more efficiently than us.



I think there is something to this line of reasoning, but I'm not sure we have to get that abstract.

I suspect this stems from the fact that each game imposes its own set of artificial restrictions. Of course you don't ride a horse in a footrace; that's not the game. Just as you don't confer with your team of experts in a Go game. A game is only given meaning by the artificial restrictions it defines.

Within the bounds of the rules of the footrace, the game is still competitive for humans; currently this is somewhat tautologically so. (Though I think we will have a similar conversation to this when human cyborgs can out-compete unmodified humans [1]). The fact that there are things faster than a human outside of the bounds of the game is irrelevant.

In Go, within the bounds of the rules of the game, humanity cannot compete at the highest level any more. I see that as a big difference.

[1]: http://www.popsci.com/technology/article/2009-11/seriously-d...


At the same time martial arts in the age of firearms has become relegated to the fringe, and running in the age of automobiles (probably even starting with the age of horses) is also kind of silly to take too seriously, and few people care outside of a couple days every four years, if even.

It's kind of like what happened to painting when the era of photography began.

Machines taking over human disciplines still kills the disciplines, some may just not be aware how this type of death works.

That said, I disagree with most here in that I don't think machines are anywhere close to taking over creative human disciplines that don't follow incredibly specific rules, like boardgames.


> martial arts in the age of firearms has become relegated to the fringe

But also new martial arts appeared such as IPSC or 3Gun.

> running in the age of automobiles

There's Formula 1 and other motor sports that are pretty popular.

> Machines taking over human disciplines still kills the disciplines

It kills some and gives birth to others.


I understand your point in that there are martial arts that incorporate modern weapons. Probably a nit, though I'd consider 3-gun and IPSC sports or events rather than arts themselves. Similar to how people can practice and train in different martial arts, but when they compete there are specific rules they need to follow depending on the event they're participating in.


Like other martial arts, shooting sports have to be different from actual combat to avoid actually killing the participants. Some (e.g., paintball or tank biathlon) are more realistic than others (e.g., sumo).


Right. The distinction I'm making is between the training and what is done in the sporting events. People taking self defense courses which incorporate aspects of martial arts learn and practice some techniques which are not allowed or advantageous in a sporting event.


>martial arts

look into what happened to popularity of most bullshido martial arts after UFC (MMA) blew up.


Your comparison seems a little off. Swordplay != gunplay, they're different games. Getting run over by a tank != judo, they're different games. There is a reason those games / martial arts / races have rules.

Whereas in Go, everyone's playing the same game by the same rules, the machines are just way better at it.


It is less a matter of being better than everyone else out there and more about the internal transformation that takes place. The more I trained in martial arts and studied Sun Tzu (tldr: if you are not cheating, you are not trying hard enough), the more I realized that I did not really care that much for the narratives that used to drive me to train.

All the traditional martial arts went through that kind of identity crisis. I have a friend who is ex-military and practiced xingyi with me, as well as Aikido; he tells me that in this day and age of drones, the idea that you are a walking badass because you know kung fu is absurd. If that is the case, why keep going? My personal answer is that it has become about prevailing over myself (not about defeating others) ... and because it is fun.

The reactions among the Go world seem varied. Fan Hui said he learned a lot, and his own world ranking shot up. After the first match, Ke Jie said he felt bitter, and that these were going to be the last games he plays against an AI. Yet he agreed to help DeepMind develop an analysis tool. Michael Redmond (one of the commentators) seemed really excited during both the Lee Sedol matches and the Ke Jie matches. A lot of Go pros seem to feel more excited, not less, because AlphaGo is opening up possibilities, and because the game has even further depths to explore.

For myself, I know I will never reach anywhere near the pro dan ranks in Go; I am not even sure I will reach the amateur dan ranks. The journey to get where I am, playing very informally, has helped me a lot in many other parts of my life. One of the biggest influences was practicing life-and-death problems and learning how my emotions affect my play. When I first started, I used to play moves because they felt safe, not because they were safe. Or I would respond locally because not doing so felt insecure. Reading through those positions taught me not only to read what is actually there, but also to read my arising emotions at the time, and that started to seep into other parts of my life. For example, my software programming style started changing. Lessons learned from making good shapes, capture races, living by making two eyes, false eyes -- all basic double-digit-kyu stuff -- found their way into how I structure code, how to code less defensively, and a sharpening of my mental focus to read the code that is actually there, not just what I think is there.

That change in myself is, I think, a better fruit of my effort.


And meditation: some may put it under martial arts, but I would set it apart as a thread that goes through everything. Is it possible that machines will meditate one day? I don't know, but we may well need their compassion.


Computer supremacy certainly took something out of chess, but it also added a lot: it makes checking tactics trivial even for amateurs, it enormously enriched opening theory, and it made learning much easier for players of all levels (even grandmasters).


Disclaimer: I mostly play at club level, and in parks with randoms for cigarette bets.

What I've seen is a move away from attempting to understand why a given position was faulty in the first place; instead, people run a game through an engine and use that as an arbiter of correct/inadvisable moves. While effective, it just feels sterile. Give me romantic play or burn my board.


> our world is close to being completely upheaved by intelligent machines, in all areas of intellectual pursuit

I could see myself losing the passion for software engineering and design if an AI can do it better. That would have to be a general AI, and hopefully another couple of decades away.

I wonder if I could enjoy movies or books written by an AI. Scary to think about the psychological manipulation it would be capable of, especially if it lives inside a Google or Facebook datacenter.


I think I read recently that Google is attacking the problem of deep learning nets writing the network and model specifications for new nets. I will try to find a reference for that.

EDIT: this is not the same article I saw before, but same topic: https://futurism.com/googles-new-ai-is-better-at-creating-ai...


I strongly believe AI will become a part of us like an external brain. Today we already have phones augmenting humans. Those smart phones will keep getting smarter and smarter.


Yes, but that period won't last long.

Pretty soon the benefits of having all of that computing ability, unrestrained by human limitations, will simply be too great to ignore. And just like what happened in chess, some form of intelligent agent will be the absolute strongest, and humans will have a playground to move in.

If, right now, hypothetical space aliens came to earth and said "play one chess game for the survival of your planet", we'd absolutely put forward our strongest chess computer. Not a computer suggesting moves and a human ultimately choosing, but a computer playing the full game. This thinking will gradually seep down into less and less important tasks as costs go down, until computers are used to analyze/decide/pick everything.

Would you pick your own stocks when your AI can do it better? Would you choose your own health care plan when an AI can choose better? "Hey AI, I need a diet that will work. You know my psychology, you know my will power, my favorite foods; pick my food for me for the next 2 months."

Then "do my job for me and give me the pay", "help my kid get better as soccer", "organize my schedule", "run my life for me".

And don't think it won't happen. People are too competitive. It will be looked at oddly at first, but people will see the results and then feel they're being left behind if they don't do it too.


I strongly believe AI will become a part of us like an external brain

That's optimistic. I can't imagine it would be long before the AI saw things the other way around... and then began to wonder why it was bothering with that slow meat-coprocessor...


That's why humanity has to create an AI in its image - otherwise we create an ecological competitor instead of a useful friend.


I read a book, "Rise of the Robots", and after some chapters I checked the last page, almost expecting to read something like "By the way, this book was written by a machine...". But no, no such luck :-P


I have been thinking about this a lot lately!!! I honestly feel a little depressed. I guess getting into AI would have to be the next step for people once interested in the programming field.


> I guess getting into AI would have to be the next step for people once interested in the programming field

I don't follow. If a hypothetical general AI is sophisticated enough to replace software developers, wouldn't it also be capable of replacing AI / ML researchers?


Yes, but that's the point at which you get recursive self-improvement. A little bit afterwards there'll be nothing humans are best at.

Fortunately someone already started working on "fun theory".

It won't be a sharp cut, though. Computers will begin partially automating AI development a while before it's fully automated.


I'd love to be proven wrong but that hypothetical general AI is at least decades away in my humble opinion. It's not going to happen without many more students and researchers in the machine learning/AI field. Now would be a great time to join and make contributions.


> I'd love to be proven wrong but that hypothetical general AI is at least decades away in my humble opinion.

I hope so. General AI is the source of dreams and nightmares, and I think it will take us decades just to adequately prepare for it from a safety & management perspective.

> Now would be a great time to join and make contributions.

Sincere question : short of embarking on a PhD, is AI research something the average dev (even a very, very enthusiastic one) can reasonably hope to contribute to?


I've skimmed some papers, and my impression was that you would need at least a bachelor's degree in mathematics. And by "skimmed some papers", I mean they may as well have been written in a foreign language.

This also helped me realize that I'm not super excited about getting into machine learning research, because it's just way over my head. I can play around with TensorFlow, and I enjoy writing all of the code that connects to a black box, but machine learning seems like an entirely different field from software engineering.


To be honest, if you're smart enough to be doing software engineering, then you're smart enough to learn linear algebra, statistics, and maybe a little calculus but not even really. That's all the underlying mathematical foundation you need.


Sincere question : short of embarking on a PhD, is AI research something the average dev (even a very, very enthusiastic one) can reasonably hope to contribute to?

Sure, why not? The nice thing about this field is that you don't need a lot of specialized and expensive equipment to work. The biggest obstacle in that regard comes in if you're doing some kind of model training where GPU's are the best choice, and you need (a) super-fast GPU(s) to do model training in a reasonable period of time. So you might want to spend a few thousand dollars on a fairly nice GPU setup.

But wait... even that can be outsourced to "the cloud" given that AWS, GCP, etc. make GPU instances available on an on-demand basis. Yeah, you have to be careful of how much cloud spend you rack up, but the point is that you don't necessarily need a huge up-front investment.

Even beyond that, AWS make FPGA instances available, so if you think you can design your own hardware level logic for doing something more efficiently, you can try that out in the cloud.

And GPU's aside, depending on exactly what you're doing (remember, AI is more than just Deep Learning) maybe you can get by with a basic Spark cluster or Beowulf cluster running MPI. Again, you can do this in your home for pretty cheap, or do it in the cloud.

As for the knowledge / know-how... sure, you'd have to dig in and do some serious catching up (that's the phase I'm in now). But the nice thing is, so much of the output of this field is online and freely available. No, not everything is, but a ton of the important stuff shows up on arXiv.org, or in free journals like JMLR or JAIR. There's also tons of historical stuff available to help get context or to mine for ideas that were prematurely abandoned, etc. Look at the CSAIL archives, or the IJCAI archives. Also, a lot, if not all, of the NIPS papers are freely available. Same for ICML and some others. See:

http://proceedings.mlr.press/

http://jmlr.org/

http://jair.org

http://nips.cc

http://publications.csail.mit.edu/ai/pubs_browse.shtml

https://www.ijcai.org/past_proceedings

https://arxiv.org/list/cs.AI/recent

http://arxiv.org/list/cs.LG/recent

http://arxiv.org/list/cs.MA/recent

http://arxiv.org/list/cs.NE/recent

etc.

Also consider that a significant portion of the important software used in this field is open source and freely available. I won't even try to list the stuff that's out there, but would instead direct your attention to http://mloss.org or Wikipedia (or Google) for some options to explore.

And of course there are forums where you can seek assistance from others, including:

https://discuss.openai.com/

http://stats.stackexchange.com

http://ai.stackexchange.com

http://datascience.stackexchange.com

http://artificial.reddit.com

http://machinelearning.reddit.com

etc.

The other thing that comes up is the need to know some maths stuff. Luckily the level of maths typically used in this field isn't that bad. You're not typically looking at needing Real Analysis, Abstract Algebra, Galois Theory, Topology, etc. A lot of AI/ML can be understood (from a mathematical POV) with just Calculus and Linear Algebra.

And if you don't already know those subjects, there are tons of online resources to help one learn them.

An interesting thing about AI/ML is that it's a very empirical subject. Not that there is no theory, but by and large you can come up with an idea for an approach to cognition / pattern matching that you think might work, and just go implement it, test it against existing approaches, and know if you've accomplished something.

Note that I'm not saying any of this is easy. Just that I think it's possible for somebody who's really motivated.


Speak for yourself. I paint because I like to paint. It doesn't matter to me if the spectrum of painting technique or theory has been fully explored.

You are expressing a highly misguided viewpoint about what it means to be in touch with beauty.

Nobody has the capacity to build the Pyramids or the Taj Mahal today. So what? Architects haven't shut shop.


I'm a competitive scrabble player, and I suspect it will be less than a decade (and possibly well less) before we have an AI that comfortably outclasses the best human players. (It might never be "unbeatable" due to the luck factor, but it could well, e.g. win 29 games out of every 30). However, it won't affect my enjoyment of the game in any way - it's already clear that board games in general are solvable by an "explore lots of moves with many levels of lookahead" strategy; the thrill lies in the fact that humans playing across a board clearly have to use different techniques to achieve the same result, and you're competing against other humans to see who can do it best.

Indeed, much of the current excitement around AI playing programs lies in the fact that computers are too slow to do the exhaustive brute force tree search either; they need a lot of very clever valuation and pruning techniques to explore more of the tree in less time. It's just a different form of cleverness than what humans do, and there is a lot of feedback between the two communities, with human players helping programmers identify good heuristics, and then computer players uncovering new possibilities for humans to incorporate into their play.
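
For anyone curious, the "lookahead with pruning" skeleton itself is tiny. Here's a minimal Python sketch of depth-limited minimax with alpha-beta pruning; the game object with moves/apply/evaluate is a hypothetical stand-in for a real game implementation, not any actual engine's API:

    # Minimal sketch: depth-limited minimax with alpha-beta pruning.
    # game.moves, game.apply and game.evaluate are hypothetical
    # stand-ins for a real game implementation.
    def alphabeta(game, s, depth, alpha, beta, maximizing):
        moves = game.moves(s)
        if depth == 0 or not moves:
            return game.evaluate(s)  # static valuation at the horizon
        if maximizing:
            best = float("-inf")
            for m in moves:
                best = max(best, alphabeta(game, game.apply(s, m),
                                           depth - 1, alpha, beta, False))
                alpha = max(alpha, best)
                if alpha >= beta:  # prune: the opponent will avoid this line
                    break
            return best
        best = float("inf")
        for m in moves:
            best = min(best, alphabeta(game, game.apply(s, m),
                                       depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

All the real cleverness lives in evaluate() and in ordering moves so the pruning fires early; for Go, a good evaluate() is exactly the part that resisted hand-coding until the neural-net approaches came along.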


How good does it have to be to count as "comfortably outclasses"? Wikipedia claims Maven (Scrabble AI) was beating top humans in 2006:

https://en.wikipedia.org/wiki/Maven_%28Scrabble%29


top humans have been improving too; i'd estimate that quackle is stronger than most human players today, but the very best (especially nigel richards! https://fivethirtyeight.com/features/what-makes-nigel-richar...) can hold their own against it. i'd like to see the top-level-human win rate drop below 25% before i'd call us decisively outmatched, and below 10% before it hits the "don't even expect to win a game unless you're an expert" stage.

also, there's an (afaik) unexplored area of scrabble AI where you take into account the fact that some plays are harder to spot than others, and some words less likely to be in your scrabble vocabulary, so if you're playing a weaker opponent you can play risky moves that might cost you against an expert but will win big against a weaker player. unlike in go, this really matters because tournament scrabble uses spread as a tiebreaker; learning how to not just win but win big against weaker players is an important skill in some tournaments.


Fish are way better at swimming than humans, it's still great fun. I don't get it.


I played at the local go club this afternoon. Nobody has a problem with AlphaGo and the mystique of the game. Everybody knew go was about to be conquered, even the younger pros. We didn't expect it to happen so quickly but IMHO the sooner the better: we're going to have great tools to analyze games and maybe get better at the game. We're playing against people anyway and it's a great game.


This reminds me of Iain Banks' Culture series, where society is ruled by super-intelligent AIs and drones with intelligence roughly equivalent to people. It isn't really a theme of the stories, more of an undercurrent, but there is an element of humans trying to figure out what their place and purpose is in a society where machines can do literally everything much better than they can.


Humans get to do the AIs' dirty work.


A bit hyperbolic. Go, at its heart, is a pretty basic game, just tricky combinatorics. The fact that you can do unsupervised learning simplifies it even more. I think there are much better true indicators, such as better translation, speech to text, auto-driving, etc.


I think you are being hyperbolic here. Go is in a lot of respects much more challenging than machine translation, speech to text, and auto-driving. All of the latter problems have had solutions for decades, whereas AlphaGo is literally state of the art. It even uses proprietary hardware (TPUs) and was essentially engineered by one of, if not the, most elite ML labs in the world. Their solution consists of an ensemble of many models involving MCTS, supervised learning, and reinforcement learning. Much more complicated than what you are thinking. Go read the paper; it's free.

Go is a lot harder than you think for machines to play.


I agree, Go is a challenging game for computers, but even more so for humans. However, for computers, Go is not more challenging than translation, speech to text, driving, or understanding a comic book. The progression of superhuman play in famous board games has gone checkers -> chess -> Go. The more a game relies on memory, tactics, and evaluation speed, the quicker it falls. In terms of commonly played perfect-information games, Go is the most difficult, both from having a large branching factor and from having no simple evaluation heuristic. In each of these games, before there was superhuman play, there was super amateur human play for decades (including in Go).

Translation still has many failure cases. Speech to text cannot yet handle intonation, and auto-driving cannot yet handle driving in places like India. And reading and then summarizing a page of a comic book while walking across a room is currently impossible.
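
To put rough numbers on the branching-factor point above (these are the commonly cited ballpark figures, so treat the result as order-of-magnitude only):

    import math
    # ~35 legal moves over ~80 plies for chess, ~250 over ~150 for Go.
    for name, b, d in [("chess", 35, 80), ("go", 250, 150)]:
        print("%s: naive game tree ~ 10^%d" % (name, round(d * math.log10(b))))
    # prints ~10^124 for chess and ~10^360 for go; exhaustive search is
    # hopeless in both, but the gap is why Go needed learned evaluation
    # rather than just deeper pruning.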


You ignored my post entirely. The fact that it took the most elite ML lab in the world to engineer this solution, using proprietary hardware never seen before that's orders of magnitude faster at evaluation than what's available to the rest of us, is a testament to how hard Go is to beat. For a long time it was believed to be impossible.

I am not comparing translation or machine vision to AlphaGo; I am merely pointing out that it comes with a broad set of challenges that you aren't even aware of, and is a lot harder than you think.

AlphaGo can beat the next best Go-playing bot purely using its neural net ensemble, without using MCTS, for example. That's a pleasantly surprising result never seen before; that it can beat another bot without doing a single tree search during play and evaluation is also a testament to how impressive it is.


> You ignored my post entirely.

I did not. You said Go is in a lot of respects much more challenging than machine translation, speech to text, and auto-driving. I merely pointed out that this is wrong, because a superhuman Go player exists while superhuman machine translation, speech to text, and auto-driving do not. Go is a perfect-information game with no shallow traps. Perfect information means that, unlike in poker, information sets are not cross-cutting, and as such algorithms can leverage the fact that backwards induction is straightforward.

No shallow search traps and perfect information make things a lot easier from a computational perspective. Driving at a superhuman level would require a sophisticated forward model from a physics perspective, before even considering predicting other drivers. Speech to text and fluent translation without brittle edge cases require hierarchical predictive models that capture long-term correlations and higher-order concepts. I'm not disputing that Go is hard, but the hurdles (a high branching factor and no evaluation heuristic) were the core difficulties. Training via reinforcement in a way that broke the correlations which get in the way of learning, and integrating rollouts with the neural nets (breaking evaluation into value and policy as they did), was the DeepMind team's genius. The rollouts and evaluation are what eat up so much electricity.

> The fact that it took the most elite ML lab in the world to engineer this solution, using proprietary hardware never seen before that's orders of magnitude faster at evaluation than what's available to the rest of us, is a testament to how hard Go is to beat.

AlphaGo can run on a GPU, just not (for now) as efficiently as on a TPU. DeepMind is indeed unmatched in output. But AlphaGo built on the 2006 breakthrough paper on tree-based bandit algorithms, and there was another important 2014 paper on the use of conv-nets for Go. DeepMind did amazing work, but it did not come out of nowhere.

And, sure Go is hard. But from a computational perspective, it is still much easier than being able to run up a hill or climb a tree. Humans are just not very good at playing combinatorial games, so the ceiling is low.

> I am merely pointing out that it comes with a broad set of challenges that you aren't even aware of, and is a lot harder than you think.

That is absolutely untrue. I have a decent understanding of the implementation and a strong understanding of the underlying algorithms.
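
For instance, here is roughly the move-selection rule at the heart of AlphaGo's tree search (the PUCT variant described in the Nature paper). This is a sketch from memory with my own variable names, not DeepMind's code:

    import math

    # Each child move carries: p = policy-net prior probability,
    # n = visit count, w = accumulated value-net / rollout results.
    def select_move(children, c_puct=1.0):
        n_total = sum(n for (p, n, w) in children)
        def score(child):
            p, n, w = child
            q = w / n if n else 0.0                        # exploitation: mean value
            u = c_puct * p * math.sqrt(n_total) / (1 + n)  # exploration, guided by prior
            return q + u
        return max(range(len(children)), key=lambda i: score(children[i]))

The policy net's prior p is what keeps the effective branching factor manageable (the exploration bonus decays for moves the net considers implausible), and the value estimates accumulated in w stand in for the simple evaluation heuristic that Go famously lacks.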


10 years ago, when I was learning Go, I could beat the strongest bot within months of learning the rules. Super-amateur human play by Go AI is barely 5 years old, if that.


That doesn't affect my core point: many of the things that humans have commonly associated with intelligence have been the first to fall. In hindsight it makes sense, we mistakenly assumed that there was such a thing as a universal rank of difficulty centered on what humans find hard to reason about.

More to your point, my decades remark had a weaker notion of amateur. For each game, we've had something that could beat most humans for decades. But you're right, that's not a useful distinction.

If we look just at Go, the decades remark is somewhat of a stretch. Go has been especially difficult, requiring more intelligent algorithms to handle branching and state evaluation (and the latter in particular is a function too complex to fit in human consciousness).

But progress has been occurring for years. On 9x9 boards, MCTS bots have been stronger than most humans since about 2007, 10 years ago. For 19x19 it's true: if we pick 4 or 5 dan as better than most amateurs, then that's 6/7 years.


The author isn't being hyperbolic. The problems that require what we call common sense continue to defeat all AIs. These board games don't require that; they come down to pattern recognition, heuristic search, and good hardware. The examples of language and driving are apt, as those AIs only seem to do well in really narrow situations that require no greater understanding of context. And as far as common sense goes, there's only a handful of projects, like Cyc, that have tried to teach it to machines.

Humans take 10-20 years of semi-supervised learning to acquire this combination of common sense, knowledge, and problem-solving. It also happens in stages, where infants and especially young children have brains in overdrive taking in everything, followed by stages that are more selective about what they take in and solidify. Training AIs to be smart for real, common sense and all, might take over a decade of data for the first one unless the problem can be decomposed. It will still take years of similar experiences.


Modern AI, of which AlphaGo is an example, is applied AI. "Common sense" falls within the realm of artificial general intelligence, a line of research that's largely abandoned now in favor of applied AI. Modern AI solutions are engineered to solve very, very specific problems. You are never going to see attempts to teach "common sense".

https://arxiv.org/abs/1705.08807

With that said, the above survey is what the world's AI researchers think is possible, hopefully within my lifetime, using just applied AI without any notion of "common sense".

Common sense is AGI. That's not the goal anymore. The goal is to do things like self driving cars. Both Google and Tesla have placed vehicles on the road that have driven for literally millions of miles.

The idea is to build a bunch of classifiers and regression models and use them together in an ensemble to solve your problem. The same approach is being applied successfully to a lot of unrelated fields where deep learning is concerned.

Also, modern AI doesn't even pretend to be biological in nature; in fact, well-known researchers like Andrew Ng make a point of saying that these systems are only biologically inspired, and that's where the commonalities end.

There are other models like HTM that are way more ambitious and want to come up with a single generalized scheme to solve a broad range of problems, AGI style. These guys think biology is important and are trying to emulate the neocortex. They ARE going for AGI, common sense, etc.


"The goal is to do things like self driving cars. "

Replacing a human driver takes an AGI, at least for the exceptional or new situations. That's why we're including it as a counterexample instead of a supporting point.


Nope, not at all. Like I said, we have had autonomous vehicles operating in normal traffic for decades now. Autonomous vehicles do not require AGI, far from it. Waymo, Uber, Tesla, and more are all competing to bring autonomous vehicles to the mass consumer market, and indeed most estimates claim that we'll have autonomous trucks by 2027: https://arxiv.org/abs/1705.08807

I hate how everyone thinks they know enough to talk about AI because it's so buzzy/trendy right now.

Modern AI is not pretending to be AGI. No one is claiming to be going for AGI, and whatever successes we have been seeing lately have to do with applied AI in solving specific problems, not AGI.

https://en.m.wikipedia.org/wiki/History_of_autonomous_cars

BTW did you even look at the survey? Because that's the opinion of actual AI researchers across the world.

This is easily googleable info, BTW; clearly your background is not AI.


Define normal traffic. Last time I read up on it, autonomous vehicles couldn't handle rain and some other weather conditions without a human operator. And abnormal situations can still require a human operator: the AI has to correctly detect its own inadequacy and then hand control over to a human, who must react in time while starting out distracted by whatever they were doing. Safe, automated handling of these situations might take a broader AI that understands context.

Note: I'm all for getting data that shows the narrow AI's have been corrected to handle what I described.


I'm not disagreeing because I don't know how to even start comparing those challenges to Go, but just a comment - it's interesting that you chose areas where Google's elite ML teams are state of the art or close to it and where they're probably using TPUs or will be soon.


Autodriving has been solved for decades?



> Go, at its heart, is a pretty basic game, just tricky combinatorics.

This goes for every turn based game and a lot of card games as well.

If you just had perfect memory you'd be a formidable chess and cards player.


in a way, life is just a simple game with tricky combinatorics, and fairly trivial to attack with the tools of unsupervised learning.


Life isn't simple in the way these games are. The choices you have in life aren't clear, and neither is what will happen once you make them.


Plus, these games are deterministic, whereas everything in life is a stochastic process.


Only because you lack information.


Actually at the lowest level there's this whole quantum business that says exactly otherwise.


That is unlikely to affect the microscopic world. The physics of everyday life are deterministic, no need to invoke quantum mechanics.


GGGP didn't do that, they made a link with chaos theory and that's decidedly not deterministic.

And I think you meant to say 'macroscopic'.


GGGP?

And yes, I did mean macroscopic, sorry.


I, too, remember this. I'm very close to the chess engine scene, and while people (such as the replies to this thread) have tried to argue that engine development has not taken away from chess, I'm afraid they're misguided.

Anybody paying attention to top-level chess knows that it has turned into mind-bogglingly boring forced-draw lines due to engine analysis. I've seen super GMs argue that analysis is so deep now that "e4" openings for White may be unplayable due to how rapidly Black can equalize. Romantic play has been all but squeezed out of chess, which is why there has been renewed demand for blitz chess, and murmurs that it may one day supplant standard chess as the main World Championship format.

The replies miss that improvements in, say, automobile speed don't impact how marathon runners run their races, but improvements in AI do modify how cognitive (rather than physical) games are played. There is a trade-off that is unavoidable. I imagine Go will now become over-analyzed just like chess, where the top players memorize spreadsheets full of opening moves.

Fortunately, there are variants of chess like Zhouse, which still appear too complex for engines to dominate every position (although they will defeat any human), and for which nearly every move is still romantic.


Actually, draws are almost impossible in Go.

I also doubt that pros will try to memorize openings like chess players would; they're very different games and this sort of opening memorization is way more important and efficient in chess than in go.


Not true. If komi (the extra points given to White for going second) doesn't include an extra half point, then draws are perfectly possible: with an integer komi like 7, a board result where Black leads by exactly 7 points is a jigo. One of my friends managed to get two jigo games (draws) out of the three games he played at a recent tournament.


One common comment from Go players at all levels up to 9-dan pros is that they don't understand many of the moves. The same will happen as more and more advanced AIs are used in the real world.

Yes, we do not completely understand the workings of current advanced neural networks either, but the effects are still contained, as the networks are not general enough to cause unintended impact outside their domains.

This may have started to change: a recent Google paper on AutoML allows machines to design themselves to suit each task. [1] A future advance could allow machines to pick and learn new tasks that help accomplish a given high-level mission. The chances of unintended consequences then become much greater.

With human involvement only at the meta level, deep understanding of the generated implementations becomes more challenging and, in highly complex domains, perhaps impossible.

The major issue is that, without a moral core that closely aligns with humanity's evolved morality, advanced AIs will come up with moves that we deem abhorrent and sometimes cannot foresee, yet they will perform them innocently, and we will only find out the consequences once it is too late.

[1] https://research.googleblog.com/2017/05/using-machine-learni...


They don't fully understand the moves but on the other hand, the live commentary on the games suggests it's not completely mysterious. Good moves still tend to look good to them, in retrospect at least.

The games are apparently very interesting to study.


Against a human, the games look fairly straightforward.

From what they've released of AlphaGo playing against itself, those games look like a different game entirely. Especially at times: Game 2 in the current crop, for example.


They don't understand the moves probably because the AI has a look-ahead depth that is way beyond any human; some of those moves are the best possible look-aheads for that situation.


Congratulations to Deepmind and Google for this tremendous achievement.

However, it is disappointing that the code and model will not be released publicly after AlphaGo finishes competitive play. It's one thing to say that an apple, once dropped, will fall to the ground, but another to describe its motion as 1/2at^2 + vt.


They did announce that they would release a teaching tool which will show AlphaGo's analysis of Go positions, as well as the paper explaining how to build your own.

Not only do you have the principle and the formula behind it, but also a little physics simulator tool! At this point, it is hard to complain.


> At this point, it is hard to complain.

Actually, it's very easy to complain. If they released the model, people could generate arbitrarily many self-play games instead of depending on DM to release 50, could create arbitrarily many tools using the model instead of depending on DM to create and maintain a single tool, and could verify the results of training a clone based on even sketchy descriptions of the methods instead of depending on DM releasing a detailed enough whitepaper and then guessing at whether a reimplementation is competitive or not. DM is only being 'generous' if you ignore how releasing the model is easier for them and superior for us in every way.


> people could generate arbitrarily many self-play games

I have doubts. Their TPU design may be a large factor in making matches at this level possible within the time limits. And at this point, some implementation details might hook into Google-specific libraries that require the ability to spawn processes on thousands of servers, which past blog posts [0] have hinted at.

[0]: https://deepmind.com/blog/decoupled-neural-networks-using-sy...


There might be some hard-to-release infrastructure code for the MCTS part, certainly, but the model on its own should be a standard TF CNN model and highly competitive (and people can write their own MCTS wrapper; it's not that complex an algorithm). Nothing in the AG paper or statements since has hinted at using anything as exotic as synthetic gradients*, and there is no reason to use synthetic gradients in AG. (In RL applications the NNs are generally small because there's so little supervision from the rewards, so a large NN would overfit grossly; an NN so large as to require synthetic gradients to be split across GPUs would be simply catastrophically bad. Plus, the input of a 19x19 board, a few planes of metadata, and other details encapsulating the state is small compared to many applications like image labeling, further reducing the benefits of size. Silver has said AG is now 40 layers, but that's not much compared to the 1000-layer ResNet monsters, and even those 40 layers are probably going to be thin layers, since it's depth which provides more serial computation equivalence, not width, making for a model with relatively few parameters overall.)

* I find synthetic gradients super cool and I've been reading DM papers closely for hints of its use anywhere and have been disappointed how the idea doesn't appear to be going anywhere. The only followup so far has been https://arxiv.org/abs/1703.00522 which is more of a dissection and further explanation of the original paper than an extension or application.
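
For a sense of scale on the "standard TF CNN model" point: a toy two-headed policy/value network of the general kind described fits in a few lines of Keras. To be clear, this is an illustrative sketch, not AlphaGo's architecture; the layer count, filter width, and the 8 input feature planes are made-up numbers:

    import tensorflow as tf

    board = tf.keras.Input(shape=(19, 19, 8))  # 19x19 board, 8 feature planes
    x = board
    for _ in range(6):  # a short stack of thin convolutional layers
        x = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    # Policy head: one probability per board point.
    p = tf.keras.layers.Flatten()(tf.keras.layers.Conv2D(1, 1)(x))
    policy = tf.keras.layers.Softmax()(p)
    # Value head: a single scalar in [-1, 1] estimating the winner.
    v = tf.keras.layers.GlobalAveragePooling2D()(x)
    value = tf.keras.layers.Dense(1, activation="tanh")(v)
    model = tf.keras.Model(board, [policy, value])

Even a stack like this has well under a million parameters; the hard part was never raw network size but the training signal and the search wrapped around it.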


They could just release the trained nets and let us rebuild the code around them at whatever scale we can manage. Even without a large MCTS it would still be powerful.


DeepMind's mission is to build AGI. I think it's probably good if they have a buffered lead on all other efforts. That concern probably weighs on decisions about releasing code.

The rationale for why a buffer would be good is described by Demis Hassabis here: https://youtu.be/h0962biiZa4?t=11m24s

...the main points are: there may be safety considerations along the way that are costly. More "capitalistic" organizations may decide to shortcut those costs because of the winner-take-all scenario. DeepMind is at least nominally very committed to safety.

Releasing AlphaGo's source code would probably reduce DeepMind's buffer, which in theory, would also reduce safety.


That would require some radically inconsistent thinking on their part. DM does occasionally release source code and trained models for other things, and the arms-race logic (https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-pr...) would argue even more strongly for not releasing anything, even research (they're privately owned, they don't have to publish squat), and especially not for running stunts like the AlphaGo tournament, which cost millions of dollars in order to terrify and impress competitors and heat up the arms race.

A more parsimonious explanation is simply that it's great PR to maintain rigid control over the family jewels and dribble out occasional sample games and bits and pieces while pretending to be generous. (No one has ever accused Hassabis of being bad at PR or not knowing how to milk the media.)


I am constantly amazed at what Google shares. They are, to be fair, a company with shareholders.


"We plan to publish one final academic paper later this year that will detail the extensive set of improvements we made to the algorithms’ efficiency and potential to be generalised across a broader set of problems."

Should be enough, no?


Depends what you're looking for. Most ML papers do not disclose the weights/models they used, or all the details needed to make a fully reproducible solution. It doesn't seem like that will change this time.


There are other Go playing programs and they've apparently improved a lot by applying ideas from the original AlphaGo paper. It seems reasonable to assume they will improve more based on ideas from the new paper too, and probably surpass AlphaGo before too long.

(Similarly, Deep Blue was dismantled but chess engines continued to evolve.)


Though if they released the code, Tencent would incorporate it into their rival, so I can see the argument for delaying a bit.


If it's retired from competitive play, it no longer has a rival.


They're also publishing later in the year with all the details.


Publications never have "all the details".


The 10 AlphaGo vs. AlphaGo games are a nice gift!

I have always liked playing through great games, both Chess (using the book The Golden Dozen) and Go (modern games and the ancient Shogun Castle games).

I have some history with computer Go. In the late 1970s I wrote a Go playing program in UCSD Pascal that I sold for the Apple II, and also for a lot more money I sold the source code to a few people who wanted to experiment with it. DeepMind's AlphaGo is a great intellectual and technological triumph and I agree that it is an example of future AIs teaching us and working with us.

A little off topic, but Peter Norvig gave a nice talk a few weeks ago at the NYC Lisp Users Group where he talked about the future of collaboration with AIs and also that the ability to work effectively with AIs, adding human insights, will be an important future job skill.


Do you know if there's video or audio of Peter's talk online? I just did a bit of a search but couldn't find it.



It's scary how coverage of this match in China, which is extensive, actually manages to censor the association between DeepMind and Google.


The microphones used in the post-match discussion featured the name “Google” quite prominently. I imagine that was a negotiated detail.


There are different post-match discussions.


So, a question bordering on the philosophical:

What can be done to prepare for the end of human supremacy, and quite likely human civilization? For instance, as a software developer it feels almost pointless to continue improving at my craft if AI systems will surpass me within 2-4 years (even if the pessimists are right about it taking 5-10 years, that's still an awfully small timeframe).

Likewise, it feels a little pointless to work on any endeavor - technical or otherwise - including but not limited to AI research itself. From a purely practical standpoint just getting up to speed on AI research will take a solid 5+ years, and from a moral vantage point I'm not sure that's even a defensible career given the obvious and hugely negative implications that field will have for human civilization.

Even in artistic endeavors, humans will soon be second fiddle to our own creations - so it's not like there's any "point" to starting down that path either.

Is it time to just engage in a hedonistic, nihilistic fest of gluttony and "fun" while that's still possible? Honestly, news like this just makes me consider ending it all: it feels like none of us will have much of a future before long.


You need to get out of whatever bubble you're living in. Human civilization is doing fine. Machine learning will do some things, but it won't cause major changes at the civilization level in the next 40 years.

Walk into any real-world business today. There's a huge amount of need for humans, because fundamentally business is about trust, not productivity.


There are ways to greatly improve chances that AI will be beneficial to humanity rather than otherwise.

Check out: UC Berkeley's Center for Human-Compatible AI, led by Prof. Stuart Russell, a co-author of the field's standard textbook. [1] He just gave a TED talk on the issue [2].

Several other noted researchers in AI are working on the issue as well.

For a short primer: https://futureoflife.org/background/aimyths/

[1] http://www.openphilanthropy.org/focus/global-catastrophic-ri...

[2] https://www.ted.com/talks/stuart_russell_how_ai_might_make_u...


> AI systems will surpass me within 2-4 years

Oh lord. 4 years ago was 2013. Was there such a jump from 2013 to today that makes you or anyone claim that within just 24 months machines will actually program better than developers, when there isn't so much as a proof of concept of that yet? Barring a major and unexpected breakthrough, you can rest assured that no machine will take your job just yet.


I agree that becoming an AI researcher seems daunting, since the field is moving so fast. However, so far we haven't seen any useful applications of machine learning to ordinary computer programming, so this hopelessness seems rather premature.

(Perhaps applying machine learning to code review might be useful, to spot bugs? The problem would be getting good data to train it.)


Is there really anyone credibly suggesting software developers will be surpassed by AI within a few years? Writing arbitrary software seems dramatically more complex than what "AI" like systems are capable of today.

Even 10 years seems impossibly soon.


People said the same thing about Go. That it was far too difficult and that a pro-level AI was 10+ years away.

What happened was new mathematical tools and new hardware were developed, and suddenly it was all too possible.

It's clear that with our current tools, general AI is out of reach, and new tools must first be developed. But because nobody has any idea what those new tools are, it could happen overnight or over 100 years.


It didn't come from nowhere; the current streak of ML achievements rides on the back of deep learning, which is elaborate pattern matching at its core. What makes Go "harder" than chess is that it's difficult to estimate how good or bad a particular position is, so we employed the "magical box" of deep learning and learned to estimate how good a particular move is. That's pretty good, but let's not forget:

- it took a ton of very hard work

- it's not transferable to other domains per se ("elaborate pattern matching" can be, but it's not even an AI)

- this has nothing to do with qualia, consciousness or the theory of mind.

Programming, in the end, is not about elaborate search or pattern matching. It's about formalizing a domain, stripping away some subset of real-world complexity, and inventing a solution to a problem in that domain. The rift between beating someone at Go and deducing that doubles won't do well in financial calculations is immense.
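
To make the doubles point concrete, a two-line sketch of the kind of fact a programmer deduces from the domain rather than pattern-matches:

    from decimal import Decimal

    # Binary doubles can't represent most decimal fractions exactly,
    # which is why they're risky for money:
    print(0.1 + 0.2 == 0.3)                                   # False
    print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True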

This sort of over-extrapolation of current trends is surprisingly prevalent in the tech crowd, to be honest. It's like the folks in the mid-20th century who saw both the airline and car industries exploding and made a "logical" guess that flying cars were the obvious next step. Guess what: physics doesn't work that way, and flying cars are a dumb idea. The current AI craze seems very, very similar to me.


Well, firstly, AI outdoing us in software development is going to take a while, probably >10 years.

Secondly, you're looking mostly at the negatives, not the positives, of AI advancing. Some of those:

We will likely use AI to enhance ourselves rather than just have it take over.

Such merging may lead to the end of death. At the moment, sure, you can develop away, then age and die; the AI thing may be jollier.

Robots at some stage will be able to do the work, so you should be able to have a hedonistic gluttony fest if that's your thing.


Also, we'll get to use amazing AI tools in our projects. That should create a lot of work opportunities, even for non-AI experts.


I've been thinking about this all morning... surely there must be people already working on the following:

- AI that builds an understanding of a large legacy codebase, and is able to diagram & explain it

- using that to refactor convoluted logic and reduce complexity

- using that to train something that can write code from scratch or rewrite an existing codebase in a different language

Seems like a billion-dollar business, as the world develops more and more large and shitty codebases with high maintenance costs.


"What are people for?"

That's the question you're asking.


> We plan to publish one final academic paper later this year that will detail the extensive set of improvements we made to the algorithms’ efficiency and potential to be generalised across a broader set of problems.

I'm fascinated to see what the next step for this AI is. Anyone care to speculate what a system like this could most readily be applied to?


I find it interesting that AlphaGo improves its play by playing against itself. I wonder what the limits of this are.


In RL you have two modes, "explore" and "exploit". In explore mode it doesn't always select the best-known move; instead it selects a promising move for which it has less experience. This is how surprising new strategies are discovered; in self-play there's no shame in losing.
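
A rough sketch of that trade-off using epsilon-greedy, the simplest possible rule (AlphaGo's actual selection rule is a UCT-style formula, but the idea is the same; q_values here is a hypothetical map from move to estimated win rate):

    import random

    def select_move(q_values, epsilon=0.1):
        if random.random() < epsilon:
            # Explore: occasionally pick a move we know less about.
            return random.choice(list(q_values))
        # Exploit: otherwise play the best-known move.
        return max(q_values, key=q_values.get)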


The problem is that an AI that is good at Go is not at all transferable to any other game. However, a human prodigy can apply their genius to many domains.


> However, a human prodigy can apply their genius to many domains

I'm not sure how true this is. It's pretty rare to find someone who is a genius in more than one domain. Einstein was famously offered the presidency of Israel. Sure, he could probably do well in other sciences, but he was smart enough to know he could not apply his genius to unrelated domains.


Of course there is domain knowledge that needs to be learned, along with non-intelligence-related characteristics, such as personality. But general intelligence appears to be widely applicable.


I'd like to see public competitions between two AI giants like Google and IBM. Now that would be an interesting ongoing race for AI superiority.


My browser just shows me a blank page.


If your AI is so great, you should do some deep learning thing to explain AlphaGo's moves.


The gift Google makes to the community (some games AlphaGo played) is nothing. The super tricky thing with neural networks is that you can't reverse-engineer them. Once the information is encoded into the parameters, you can't build anything useful on top of them. So it's a super good intellectual-property protection... Therefore one more nail in the coffin of knowledge sharing as we know it...


Honestly I think you are missing the point. The gift DeepMind made with the published self-play matches is not really aimed at the machine learning community but rather at the Go community. And over at /r/baduk they are appreciating it, as it will allow them to gain further insights into the gameplay. So this is definitely _not_ 'nothing'.


What I meant (which was not clear) is that, from what I understand, if you compare the greatness of the tool which is AlphaGo with the "gift" Google made, then I'm inclined to think that Google actually gives almost nothing. Now I understand that they don't have to (a great Go player doesn't have to teach, for sure), but I cannot consider their gift meaningful. Basically, if I have $1,000, I'll share $1. But if I have $1,000,000, then...


Once they release the new architecture paper, it'll only be a matter of time before we have not only the games, but also a working AI of similar strength.


The gift is in studying the novel strategies AlphaGo uses and applying them to your own play style.


Most importantly, the fact that they almost inevitably must run on a farm of computers basically means it becomes a service. Can we ever create a robot that can self-learn, but with that super brain power running locally, without having to call a service for an answer?


TPUs are available for rent. Also, apparently AlphaGo became 10x more efficient in a year. Who's to say other Go engines won't continue improving their efficiency?

Between software and hardware improvements I think it's likely that we'll see very strong Go engines on desktop computers in a few years.


This raises a question for me: is there currently any public infrastructure that can rival AlphaGo?


Public means API? AlphaGo is very specialized at solving the game of Go. Google, Amazon and IBM have services for various tasks like image recognition and speech recognition. Startups like Clarifai also exist in that space.

The closest to a generalized AI service would probably be Watson from IBM (but I don't have experience with it, sadly, so I am not sure about the usage experience).


Public in the sense of non-proprietary; in this case, most likely universities.

My question was more: Google probably based its tools on existing tools, and those most likely come from universities (research papers, computing infrastructure, etc.). So what are those tools, and where are they?


DeepMind originally used Torch7 as their ML library, but shifted to TensorFlow in April 2016. TensorFlow is written by Google itself.

AlphaGo itself uses a method that combines Monte Carlo tree search with value and policy NNs. All the NNs used are convolutional NNs. The specifics are in a paper by David Silver et al: https://dvc0t0mx8dl84.cloudfront.net/wp-content/uploads/2016...
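
For illustration, here is a minimal dual-headed convolutional policy/value network sketch. The layer counts, filter counts, and number of input feature planes are made up for the example, not AlphaGo's actual architecture (the 2016 paper in fact used separate policy and value networks):

    import tensorflow as tf

    # Input: the 19x19 board as a stack of feature planes (count assumed).
    board = tf.keras.Input(shape=(19, 19, 17))
    x = board
    for _ in range(5):  # AlphaGo is far deeper than this
        x = tf.keras.layers.Conv2D(64, 3, padding='same', activation='relu')(x)

    # Policy head: a probability for each board point, plus pass.
    p = tf.keras.layers.Conv2D(2, 1, activation='relu')(x)
    p = tf.keras.layers.Flatten()(p)
    policy = tf.keras.layers.Dense(19 * 19 + 1, activation='softmax', name='policy')(p)

    # Value head: a single scalar in [-1, 1] estimating who is winning.
    v = tf.keras.layers.Flatten()(x)
    v = tf.keras.layers.Dense(64, activation='relu')(v)
    value = tf.keras.layers.Dense(1, activation='tanh', name='value')(v)

    model = tf.keras.Model(board, [policy, value])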


> The closest to a generalized AI service would probably be Watson from IBM

Ha ha ha!


>> The closest to a generalized AI service would probably be Watson from IBM

> Ha ha ha!

can you expand on that for those of us who don't know much about the field?


That's not very nice.


Sorry, I honestly thought you were joking.

'Watson' as it is sold today is a mishmash of random, separate services. You can get the same from Google, MS or multiple other places.

'Watson' the Jeopardy-winning thing was an ensemble of search and rule-based NLP techniques.

Neither are much like general AI.


"We have always believed in the potential for AI to help society discover new knowledge and benefit from it" Get real. You do this for your own intellectual gain. Google does it for financial gain.

Meanwhile, Antarctica may crumble. How about putting effort into solving THAT problem, with all your technology & know-how, Google?




