Hacker News
Go master Lee Se-dol says he quits, unable to win over AI Go players (yna.co.kr)
640 points by partingshots on Nov 27, 2019 | 540 comments



I've spent so many years playing and hearing about how Go would not be solved in our lifetime that the day AlphaGo won 4 of 5 games against Lee Sedol split my personal timeline into before and after. I walked around all day in a daze, watching people go about their daily lives as if nothing had happened.

I've heard it said that man landing on the moon was like that for the people who lived through it, but I didn't understand, as the world after it was the only world I knew.

Now I can appreciate that these were the firsts of many singularities yet to come in AI and space exploration and I hope to live to witness a few more (but not too many).


I have the opposite view: it's not that shocking that AI has advanced a lot. It's far more shocking to learn that humans aren't as great as we hope to be. Also, our predictions suck. I mean, who said that go is such a difficult problem that it would take a lifetime to solve? Sounds like intellectual arrogance to me. Sure, the problem space is huge, but it's well-defined and homogeneous. There was a time when reciting a long text or multiplying large numbers was considered a humanly intelligent thing, only it wasn't. Alan Turing thought that AI is good for humans because it teaches us to be humble, and I think we're kind of getting there (for certain domains). On the other hand, things like self-driving will remain unsolvable because the problem is fundamentally ill-defined; we don't even know what good driving is.

(Edit) To those who think self-driving is a well-defined problem: it can be in some remote areas, but imagine driving in bustling city streets with kids, bicycles and dogs. The driving problem becomes a communication problem.


> On the other hand, things like self-driving will remain unsolvable because the problem is fundamentally ill-defined; we don't even know what good driving is.

Humans arrive safe and unhurt (as much as possible, especially while human drivers remain on the road) at their destination with minimal violation of the locality's established rules of the road. No?

(Though now that I've written what amounts to a utility function, I fear what sort of paperclips may come out of it.)


Drivers (both AI and human) may face problems that are essentially ethical trolley problems. While many of these choices are clearly artificial to the point of ridiculousness, the one that gets me most is "should a self-driving car drive itself off a cliff, killing its only passenger, or hit and kill some >1 number of pedestrians?". While the external observation may be "minimising deaths is preferable, so drive off that cliff", are people willing to use a vehicle that might intentionally kill them as an intrinsic part of its operation? Or will market forces result in self-driving cars that make more selfish choices being more popular, potentially resulting in suboptimal prisoner's-dilemma style results?

There are also different ethical norms in different cultures about preferences (https://www.wired.com/story/trolley-problem-teach-self-drivi...). While these are edge cases, they're the edge cases people are worried about, and the source of the ill-definedness: "unhurt as much as possible" implicitly chooses some ethical tradeoff that people can easily have different answers to.


Also, such meek and suicidal cars would get abused to no end. Just imagine all the assholes today who overtake on curves with bad visibility or bike in crazy ways. Today they still pay some attention, because they may easily get killed if the other drivers don't notice quickly enough what they are doing. With meek AIs on the road you can do anything (as long as you bunch up in large enough groups).


> minimal violation of the locality's established rules of the road

I wish. I've come to the conclusion that the only true rule of the road is: don't crash. As long as no actual collisions occur, people are totally fine with doing whatever they want and bending the rules for their own convenience. I can no longer predict the behavior of other drivers. Even something as basic as the turn signal is unreliable since people are forgetful.


I think you are right. In fact, thinking some more, it would seem that the whole point of the ‘rules of the road’ is simply to prevent crashes; we have just built up a whole load of protocols in order to achieve that.


Self driving is much simpler than playing Go. The hard part is getting the sensors working properly, in all conditions, even when the car is twenty years old and dented.


I have been working on AI/ML since 2007, and this is what I had read too (repeatedly). That Go is incredibly hard to conquer for a computer, and it will be a long while. So when this actually happened, I was shocked/surprised.


I’m glad I’m not the only one. I was totally floored, and I struggled to explain the significance to non-go players.

My respect for AI increased drastically that day, and (honestly) I developed a small amount of fear due to how AlphaGo’s style of play was not understood particularly well (e.g., some of the moves would absolutely be called “slack” if played by a human).


You both speak of the day AlphaGo won 4/5 matches, yet the matches were played over a series of days. On which of those days was the switch flipped, then? For me there was a significance in day three, but it was mitigated somewhat by the (to me) surprise of day/match four.


IIRC, it was day 3 for me as well, and I had the same (minor) let down on day 4 (resign?!?!). That said, I imagined the resignation was a fixable flaw in the AI, and this turned out to be correct.

I can’t actually remember where I first learned about the match. It may have been HN, it may have been in an AGA e-mail, or it may have been some tech-oriented magazine/web site in English or Japanese. I am certain it wasn’t match 1, because I reviewed earlier matches, and I remember the let down of match 4, so must have been match 2 or 3.


For me, day 3 was the stunner even though the writing was already on the wall.

Day 1 was a great surprise but I was still left wondering if it was a fluke. Day 2 showed that it was no fluke and I started to get a sinking feeling. I guessed that Lee Sedol would lose the third game and win the fourth after the pressure of Korea and humanity was off him.


Go still isn't solved (neither is chess), we just have a machine good at knowing which parts of the search space are worth checking.


I think achieving superiority over humans is practically solving the problem though. Solving chess or go by going through a complete search space seems more like a hardware/computational goal than a practical ml/ai goal.


It all hinges on your definition of "solved".

"Solved" in AI/game theory has a very strict definition. It indicates that you have formally proven that one of the players can guarantee an outcome from the very beginning of the game.

The less-strict definition being thrown around here in the comments is more like "This AI can always beat this human because it is much stronger."


I think most people discussing this mean the latter, less pedantic option. I mean, that’s the spirit of AI. Can we make it think like a human, or even more so? We are the yardstick.


That is a silly misuse of the term, and that is not being pedantic. A problem isn't solved just because you beat the existing solution (i.e. human players). As long as there is the potential for a better solution that can beat your solution, there is work to be done.


You don't have to go through the complete search space if it turns out optimal strategies are sparse. What do I mean by that? Take a second-price auction: the dominant strategy there is to always bid your true value. Meanwhile, the search space would be any real number between 0 and your true value. What does this mean for computational games like Chess or Go? It may mean that while the search space is exponential, there may exist computationally trivial strategies that work. I would compare this to Kolmogorov complexity, except that instead of having a program as your output, you have a strategy.
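The second-price claim can be checked mechanically. Here is a minimal Python sketch (the function name and the bid grids are mine, purely for illustration): for any bid the rival might make, bidding your true value does at least as well as any deviation.

```python
def sp_utility(my_bid, my_value, rival_bid):
    # Two-bidder second-price (Vickrey) auction:
    # the higher bid wins, and the winner pays the *loser's* bid.
    return my_value - rival_bid if my_bid > rival_bid else 0.0

# Example: with a true value of 7, bidding 7 against a rival bid of 5
# wins the item and pays 5, for a utility of 2.
print(sp_utility(7.0, 7.0, 5.0))  # → 2.0
```

Sweeping the rival's bid and your deviations over a grid shows truthful bidding is never strictly worse, which is exactly the sparseness point: the optimal strategy is trivial to state even though the bid space is continuous.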


Any substandard statistical model fitted by a simple computer program is superior to what an unaided human could achieve with pen and paper, but few of them can claim to practically "solve" the problem just because they are better than the crude fit heuristics proposed by humans, who are not good calculating machines.

An algorithm can't claim to have "solved" Go, when future versions of the algorithm are expected to achieve vastly superior results, never mind any formal mathematical proof of optimality. What it has demonstrated is that humans aren't very good at Go. Given that Go involves estimating Nash equilibrium responses in a perfect information game with a finite, knowable but extremely large range of possible outcomes, it's perhaps not surprising that Go is the sort of problem that humans are not very good at trying to solve and that computers can incrementally improve on our attempted solutions. Perhaps the more interesting finding from AlphaGo Zero is that humans were so bad at Go that not training on human games and theory actually improved performance.


We've just created a tool people can use to play Go better than a person without the tool. Until something emerges from the rain forest, or comes down from space that can throw down and win I'd say Humans are still the best Go players in the known universe.


That's like when the whole class fails a test, but the prof. grades on a curve. Someone gets an A, but not really. edit: some grammar.


"Solved", in this case, means "computers can play the game at levels no human can beat."


That's not the normal meaning of solved in regards to game theory.


I believe with respect to game theory, solving a game like Go would require finding a strategy that obeys the one shot deviation principle. The result would rather be boring to watch however, because the conclusion of every game played under this strategy would either be draw, or based on which player starts off first.

[1] https://en.wikipedia.org/wiki/One-shot_deviation_principle


But it is what "solved" means in deep learning. Common terms very often acquire different technical meanings in different fields. Or even technical terms! Whether a single hydrogen atom is a "molecule" depends on whether you're talking to a physicist or a chemist. And "functor" means something very different in Java programming patterns than it does in math.


I have to concur with the other poster here. You're not using the term in the usual way. Traditionally a game is called solved when its search space has been searched exhaustively, or when there is some other analytic solution that allows you to determine who wins and who loses in every position. Tic-tac-toe is solved, for instance.

https://en.wikipedia.org/wiki/Solved_game
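For a concrete sense of the strict meaning, here is a minimal Python sketch (function names are mine) that solves tic-tac-toe by exhaustive minimax: it computes the game-theoretic value of any position, showing that perfect play by both sides ends in a draw.

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    # board is a 9-character string of 'X', 'O' and ' '
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    # Game-theoretic value from X's point of view:
    # +1 = X can force a win, 0 = draw, -1 = O can force a win.
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i, s in enumerate(board) if s == ' ']
    if not moves:
        return 0  # board full: draw
    nxt = 'O' if player == 'X' else 'X'
    vals = [value(board[:i] + player + board[i + 1:], nxt) for i in moves]
    return max(vals) if player == 'X' else min(vals)

print(value(' ' * 9, 'X'))  # → 0, i.e. a draw under perfect play
```

This is exactly what is infeasible for Go: the same recursion would be correct in principle, but Go's state space is far too large to enumerate.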


Regardless, we need a term for “computers can play the game at levels no human can beat.”


Well, don't use one that has an existing meaning in that exact context.


Superhuman


This is the performance vs understanding dialectic. A bunch of humans built a machine that is superior at chess, but that machine can't teach humans what it knows.


Humans can and are absorbing some of what the machine has demonstrated.

I think Kasparov has it right when he says that the best player isn't a human or a machine, but a human using a machine as a tool. The machine can help the player optimize and reduce mistakes, but machines don't yet know how to ask questions and explore in the same way. Maybe they never will.


There's a name for this approach, they call human-AI teams "centaurs". It's a fascinating concept. I am deeply curious if eventually that will be outstripped by pure AI too. I believe so.


The optimal strategy for human-AI teams has been really close to "defer to the computer for every move" for a while now. They're only interesting because they're adversarial.


Chess computers teach by sparring rather than lecture. Humans still learn.


Alpha *, at least, also learns by sparring, so there is a nice symmetry there.


Reminds me how sometimes geniuses cannot translate the way their mind works to non-geniuses. It's such an implicit talent that it's not even .. "reified" in their upper brain. It all happens in cache.


For me the two moments so far have been seeing computers win at Go and knowing that basic quantum computers are out there (even if they are not useful yet).


I sympathize.

It used to be that you might be able to believe that there is some kind of art behind go, some sort of abstract beauty to it, and that the pursuit of this beauty is the path to being good at go ...

But the defeat of the tactics born from this mindset by MCTS lays bare, at least for now, the fact that the path to being good at go is actually to probabilistically sample the phase space of the game and perform impossibly mindless statistics on the game outcome an enormous number of times ...

To top it off — there is almost nothing “about go” to learn from watching alpha go play ... I imagine that attempting to analyze alpha go’s victories would produce an unending sequence of the feeling of never gaining any new insight “into go”.

The analysis of go is now about optimizing algorithms — which _is_ interesting — but I don’t think it’s interesting for the same reasons that someone might’ve been passionate about go in the past ...
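For what it's worth, the "mindless statistics" the parent describes can be sketched in a few lines. This is the flat Monte Carlo idea underlying MCTS, not AlphaGo itself (which adds a search tree and learned networks on top); names are mine, and tic-tac-toe stands in for Go since the principle is game-agnostic.

```python
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def playout(board, to_move):
    # Finish the game with uniformly random moves; return 'X', 'O' or None (draw).
    board = board[:]
    while True:
        w = winner(board)
        if w is not None:
            return w
        moves = [i for i, s in enumerate(board) if s == ' ']
        if not moves:
            return None
        board[random.choice(moves)] = to_move
        to_move = 'O' if to_move == 'X' else 'X'

def flat_mc_move(board, player, n=200):
    # Score each legal move by the fraction of n random playouts it wins,
    # then pick the best sample average: statistics with no notion of "why".
    opponent = 'O' if player == 'X' else 'X'
    def win_rate(m):
        b = board[:]
        b[m] = player
        return sum(playout(b, opponent) == player for _ in range(n)) / n
    moves = [i for i, s in enumerate(board) if s == ' ']
    return max(moves, key=win_rate)
```

An immediately winning move gets a sampled win rate of 1.0, so the statistics alone find it without anything resembling insight, which is the parent's point about what such victories teach "about go".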


> It used to be that you might be able to believe that there is some kind of art behind go, some sort of abstract beauty to it, and that the pursuit of this beauty is the path to being good at go ...

All but the last phrase are still true. Pursuit of Go for beauty is still pursuit of Go for beauty.

I will never play any instrument as well as a sequencer. I believe algorithms will make beautiful Jazz improvisations in real time, in my lifetime. But playing is still beautiful.

I will never run as fast as a horse, much less a bicycle or car. In my lifetime cars will drive themselves more safely and faster than I can drive a car. It will still be enjoyable.

I sincerely hope my children will live better lives and make better choices than I did. My life is still beautiful most days...

It is our attachment to being smarter than machines that makes us unhappy. We fear evidence that we’re not that special. But we do not need to be special to be happy. We do not need to be special to find beauty in our pursuits.

One day, a bot will live on HN and have hundreds of times more karma than I have. It won’t diminish the beauty of my best contributions. Perhaps the humility of seeing this happen will actually make me a happier person.


In case you or other readers are interested, what you describe is an old philosophical debate spanning mostly morals but, as you point out, also the pursuit of happiness and other areas. It is the opposition between consequentialism (what is important is the result of your actions) and deontology (what is important is what you are doing, not the result).

I think that as machines become smarter and smarter, it is good that we dive a bit more in our philosophical world and reach for what makes us human.

It took me a long time to understand that if you remove the pure rational intellect, something precious still remains: the driving force that shapes our morals. There is no rational reason to like survival. To love our children, to do what we call good. The reasons for that are moral and are, I believe, what defines us as human at the core.

Iain Banks was spot on in calling a future civilization a "Culture". Because when machines take over the productive work, defining our values is going to be our full-time activity. Machines may participate, but why would they add anything we consider valuable to the core?

That's going to be an interesting transition and I am happy that I'll probably live to see it!


> There is no rational reason to like survival. To love our children, to do what we call good.

Of course there is, and it's very well described by evolutionary science.


This is an is/ought confusion. You can explain why something is that way, but you can't explain why it must be that way.

You can explain why our survival instinct evolved so it is tempting to jump to the conclusion that we ought to survive but it is a fallacy (called the naturalistic fallacy): saying that something being natural means it is good.


>I think that as machines become smarter and smarter, it is good that we dive a bit more in our philosophical world and reach for what makes us human.

That's a recurring theme in Ghost in the Shell, at least the anime series. That as the line between human and machine is blurred, the general trend is in the direction of homogeneity, but here and there you still see people doing "deontological" things, ostensibly as a grasp at uniqueness to preserve self.

An interesting thought occurs to me though, with the proliferation of various NN architectures, and the idea that such nets, if scaled and installed to power humanoid robots, would effectively learn different heuristics after training on chaotically different data, it's quite possible that machines will also gradually evolve individuality and something rather close to personality.

Perhaps individuality is an intrinsic, emergent property of any system of generally intelligent, learning agents. Like the 3-body problem on steroids, with millions, if not billions, of dimensions.

Also interesting to consider that the emergence of uniqueness in a system of such agents seems to increase entropy, a property presently unique to life, given that everything inanimate in the universe does exactly the opposite. The more I think of general AI, the more I have trouble distinguishing it from the only other sentient intelligence we know of.


On the other hand, insects are also kind of like NN powered robots, and they don't (necessarily?) have individuality. Especially not ones that form colonies, like ants or bees.


I don't know if we have an answer to the question of whether insects have personalities. But perhaps their intelligence is not general enough, they're not, as far as we know, sentient. Maybe that's the difference between an automaton and a soul


I don't think it is very hard to make a personality or an individuality appear. Feed GPT-2 a lot of opinionated texts about itself and you will already see a bit of it emerge.


The difference is that this AI agent was built to beat humans. Horses were not built to run faster than humans.

You have to remember the specific comparison and the art that humans tried to partake in to master go.

Humans do not run to beat horses. We never compete. That would be silly.

Cars and humans also do not compete at the 100 meter dash.

If humans and robots compete at the 100 meter dash, then the Olympics will similarly lose its luster.


> Humans do not run to beat horses. We never compete. That would be silly

I hate to be that guy, but:

https://en.m.wikipedia.org/wiki/Man_versus_Horse_Marathon

It is very silly indeed.


In 2004 and 2007 a human won. Wow.


The race's setup is horse-friendly as well: Shorter than a regular marathon by a few miles and on flat terrain. A longer distance on hillier terrain would be won by humans much more often.


There was an interesting article a couple of years back positing that the years humans won coincided with unusually warm weather. The takeaway was that humans have a superior cooling system for efficient long-distance running, compared to most animals' superior sprinting capabilities.


I believe only humans and dogs hunt via stamina. You can walk a horse to death or simply chase an antelope until it overheats and dies.


'AI' is not an adversary. It is simply a tool created by hundreds of humans coming together.

It's like watching 1-v-100 boxing match. Of course the 100 are going to win.

Enjoyable sport has always been about ~similarly matched opponents. When we have DeepMind AI Go vs. MindDeep AI Go, that's when things get interesting.


I interpreted breathoften’s comment differently. As someone who’s played chess at a very high level, I feel like I understand people who play these games for the art behind it. It’s difficult to describe, but chess for me is beautiful because to win you must be patient, careful, understand what is and isn’t important, etc. Playing chess is like learning virtues. An algorithm playing chess doesn’t care about any of that. I wonder if it’s similar to the difference between meditation and acid. One gives you time to understand and the other just gets you there without explanation.


About jazz improvisation: I knew people at the Creative Labs research center back in 2003 who told me a researcher once showed them a program that did just that. It was able to improvise "the way player x would" just by listening to them, and it would continue the impro in the same style.


Imagine a world where no human contribution matters because a different species already thought of it. No improvisation on any instrument can be novel because a machine already played it exactly like you would have. A world where your children look to a philosopher machine for guidance, because it is wiser, kinder, and deeper than you could ever be.

Human existence will still occur, however, our species's defining characteristic will be no more consequential than a meadowlark's greeting of the sun every morning. Beautiful things will occur, but they will be ghostly imitations of the creations of some other being. Humans will create nothing, except, possibly, more humans.

It's not necessarily about your success vs. the machines. What is depressing and inevitable is the impossibility of success vs. the machines for every human who will ever live in the future.


> Imagine a world where no human contribution matters because a different species already thought of it

You are begging the question in the original sense of the phrase "begging the question."

You start by stating your conclusion as an axiom! You state that "no human contribution matters because a different species already thought of it" as if that is necessarily true.

It isn't.

We choose what matters. I play Bach, badly. Bach already thought of that music, and many thousands, possibly millions, of people have played the particular pieces I have worked on (the first suite for unaccompanied cello in G major, and the Prelude to the first Fugue in C major from Book One of the WTC).

Does my playing not matter?

I say it does. Furthermore, Bach's music can be encoded as a number. All numbers already exist. Bach did not create that number any more than I created the number that encodes this comment.

Does my comment not matter?

You find this general idea depressing, and so do many other people.

But it isn't "depressing" in an absolute sense. That's just a word we made up to describe a feeling many of us happen to have.


Your playing doesn't "matter" in the way Bach's original composition "matters." One of these things has been remembered for hundreds of years, and one of these things would never have been remembered unless you reinterpreted the work in some novel way that reached millions of people.

The fact that Bach's compositions can be encoded as a number doesn't make them any less novel when they were created: it would still take a genius of Bach's level to produce that particular number, which, with the right algorithm, could literally be deserialized into a representation of any arbitrary thing in existence. The same holds true of your comment. Just because I declare that 1001 represents a beautifully unique masterpiece once it is decoded does not make that masterpiece actually exist.

Bach was important. In a world with general AI no human will have the ability to be important in that way ever again. At first AI will create crude imitations of human art. Then it will create hundreds of billions of creative works that are more human than human. Then it will create artworks that surpass our ability to comprehend. I don't know about you, but the inability to do anything novel as a species, to learn anything new, is a terribly bleak possibility.


Does the fact that we were the ones who created them in the first place count? Why should we feel depressed at one of our own creations? Why be depressed by looking at a car just because it can move faster than us, while in actuality it was created for that purpose?


'AI' is not an adversary. It is simply a tool created by hundreds of humans coming together.

It's like watching 1-v-100 boxing match. Of course the 100 are going to win.

Enjoyable sport has always been about ~similarly matched opponents. When we have DeepMind AI Go vs. MindDeep AI Go, that's when things get interesting.


That's kind of what we already have though, given that these things are often trained by playing against themselves these days


Well said !!


> I will never play any instrument as well as a sequencer. I believe algorithms will make beautiful Jazz improvisations in real time, in my lifetime. But playing is still beautiful.

Having dug quite deep into algorithmic music generation of all sorts, as well as having studied Machine Learning as my uni masters, I still believe that you still need an actual artist to make a music generator program.

A machine simply isn't going to figure out "swing" if you don't tell it to. And swing is one of the easiest things. Yes if you look just at note generation, I think algorithms can go a long way. But the subtleties of timing and timbre, it can only imitate in context. Which definitely is good enough for many purposes, and I agree with your prediction that algorithms will be able to generate beautiful music, but I also think there will always remain an "edge" for the artist. If only to discover novel things that are also cool, and then working those out in order to fully express the coolness of that new thing.

I'm also thinking about all the evolutions the many genres of electronic music have gone through in the past decades. New and novel "sounds" (or moods, or styles, etc.) are still being created/discovered. It is that process that I don't think we're there with yet. Yes, the algorithm can probably generate beautiful psytrance, lo-fi hiphop beats and, if I'm generous, probably eventually also really complex (and jazzy!) stuff like Squarepusher.

But what I'm not seeing happening any time soon (barring any breakthrough general-AI type of advances) is giving the algorithm a TB-303 for the first time and seeing if it figures out acid house. Yes, you can probably teach it the origins of neurofunk DnB (think 1999, Optical & Ed Rush's Wormhole album) and produce super awesome dance music. But I don't see how it could ever develop what happened to DnB beyond that. Wavetable synthesis didn't really exist that way back then, and the bass came from more classic synthesis like the Reese bass. Nowadays, what you can do with a wavetable synth VST like Serum almost defines what modern DnB sounds like. That particular sound was evolved and shaped through the genre of drum'n'bass and became part of it: a new style of synthesis, heavily facilitated by the particular UX controls of these synth plugins, which the author of Serum in turn amplified by creating his vision of what that UX should be like. It is almost like the birth of a new instrument, together with artists having to learn the correct style to "play" it. That has settled enough now that it is appearing in other new genres, yet it is also still developing. And that is just one genre of music I happen to be somewhat familiar with; I'm sure similar examples could be named in many other genres (for instance, I don't know much about the history of dubstep).

Those evolutionary steps, invention of truly novel things, for the foreseeable future, I don't think AI is there yet and artists do still have an edge, even if it's a very thin one.


People need to experience a sense of competency and power as a part of feeling satisfaction with their lives. It is not the only route to satisfaction, but in my opinion it's one of the few roads to great satisfaction.

I agree with the sentiment that one can't be good at everything, and that there is a radically high-performance exemplar for anything, but for machines to dominate in almost every category of luxury endeavour -- yes I can see that as demoralizing. Even the useless things we spend our time on, we can be no good at that.


>> But the defeat of the tactics born from this mindset by MCTS at least for now lay bare the fact that the path to being good at go is actually to probabilistically sample the phase space of the game and perform impossibly mindless statistics on the game outcome an enormous number of times ...

This is a rationalist opinion that is attractive because it appears unbiased, unwilling to make any concessions to human nature or to recognise its limitations.

And yet- this same opinion overlooks the greatest source of scientific wonderment in the victories of AlphaGo and family against the best human players.

Which is to say: that human players play Go (and Chess, and Poker, and Magic: the Gathering etc) very differently than machines. In particular, human players do none of the extremely tedious, extremely computationally intensive maths that computer players have to do. Human players don't perform MCTS, neither do they train by self-play for many thousands of human-years.

Somehow, humans can play Go and Chess and all manner of board games _without_ having to do any of the hard work that only computers can do reliably. We are not particularly good at those games- but we can play them well enough that beating the best of us still takes huge amounts of computational power.

How we do this, why we are even capable of doing this and what other benefits it confers to us: _that_ is the interesting set of questions. That a big machine can outperform a human ... we have known this since ancient times.

  There are more things in heaven and earth, Horatio,
  Than are dreamt of in your philosophy.
  - Hamlet (1.5.167-8)


> In particular, human players do none of the extremely tedious, extremely computationally intensive maths that computer players have to do.

This sounds like a Chinese room argument.

Performing computationally expensive maths doesn’t make the computer intelligent.

But that says nothing about the intelligence of the maths itself.


I see how it seems like a Chinese Room argument, but I saw it more as a statement about how much more there is to figure out about how the human mind does things, that we need to build such particularly powerful machines to defeat it.


But I didn't say anything about machine intelligence. My comment is about human intelligence.


Is this true? From what I've read about AlphaZero etc., in both Go and chess it found interesting move sequences that people hadn't considered viable before. That certainly seems like an interesting thing to learn.

Also, sure, you can dismiss it all as statistics. But how sure are you that what's happening in humans isn't statistics in some form? I'd also say that MCTS is something people kind of do too in games: look a few moves deep and try to judge the value of that position, which is definitely more interesting than simple RL/bookkeeping/stats.


No, it isn't. The top humans are much better at Go than they were four years ago, largely due to learning from the new engines. If it were all just about sampling the phase space zillions of times, this would not be the case.


This is interesting to me ... how exactly is this assessed ...?

Do people keep around versions of “alpha go year 2017” and play against it in order to measure human improvement over time?

If the basis for observing improvement has become “I can beat old versions of the ai more reliably than I used to be able to” or if I have learned to beat players who have not studied alpha zero I suppose that’s a form of usefully learning “about go” by analyzing the games played by alpha zero ...

I wonder if we might ever arrive at a point where human performance against a fixed-year AI at go pretty much stops increasing over time ...?


I admit that I do not have a quantitative measure to support my claim (as you note, constructing one is difficult). But qualitatively:

1) People have learned a lot from the new engines: joseki (corner patterns) and general strategy. For example, moves on the side are now considered less valuable, and building large moyos (largely empty space loosely surrounded by your stones) is less attractive, because AIs have demonstrated they're more invadable than previously thought. Players are also able to explain the new principles in human terms.

2) All go professionals now play in the new style, to some degree; ones who tried to continue in the old pre-AI style performed badly.

So I am comfortable claiming that human play has improved by learning from the new engines.


Is there a name for the new style?


I'm not aware of a single "official" name for it that everyone uses, like the Hypermodern style of chess in the early 20th century. In English, people say things like "AI-inspired" or "AlphaGo style" (although a lot of the ideas come not from AlphaGo directly but from the public engines that followed in its wake).


The sequences are viable because AlphaGo assesses positions much more deeply than humans do. That doesn't mean humans will be able to utilize them correctly.


Nah at least in chess all the grandmasters lean heavily on the chess engines, including studying top games between the best AI


> Nah at least in chess all the grandmasters lean heavily on the chess engines, including studying top games between the best AI

They may use it for training and analysis, but they don't play their style - they play an inferior style (not to dismiss their achievements).


I am not sure what you mean by "style", but top chess grandmasters certainly play in a different way today compared to, say, 20 years ago. Current play is much more concrete (less based on general strategic principles) and players are willing to take greater risks for rewards such as material gains, since AIs have shown that there are often many more defensive resources than was previously thought as long as the defender remains tenacious.


I don't want to break your ideal, but the description of go you give died decades ago. For a long time now, top go has been about fighting in the chuban (the middle game), and there is no grand theory about that, only grinding your mind to read more combinations than your opponent.

Another correction: computer go has brought tremendous evolution to the pro scene. Josekis that were accepted for centuries have been challenged, and people have learned why. The value of sente has been emphasized, and pro players that have not learned are being pushed out. In that aspect, computer go has brought change of a magnitude comparable to the shin fuseki movement.

Finally, it is expected that the availability of strong programs will bring a wave of better players, much like what happened in chess. I for one look forward to this happening.


Interesting - I definitely have only a surface understanding of the go community and had not appreciated that the community embraces the game's "requirement to grind the combinations".

I had always thought there was some element of recognizing it's "impossible to ever even remotely approach a true grinding of the combinations" for go — and that somehow players _did something else_ effectively when they played at a high level. It's that "doing something else effectively" that the defeat of humanity by go algorithms challenges ...

Would be super interesting if there could ever be a reversal that might allow humanity to beat go algorithms once again ... is there any evidence, from the strategies the go community has learned by analyzing the new go algorithms, that this could be possible? Or is there just more and more evidence over time that humans will never be able to compete effectively at go again?


It is indeed impossible to evaluate every possibility, especially for humans. Still, midgame fighting is all about evaluating as much as possible and estimating an unfinished position.

It is not imaginable now that humans will beat a machine in the future, but it is also undeniable that humans progressed from computer go. The evolution of joseki (game patterns considered fair for both players), especially corner joseki are material evidence of that.


> perform impossibly mindless statistics on the game outcome an enormous number of times ...

But, while you sit there waxing creative, your brain cells are likewise performing a mindless task on an enormous scale.


> the pursuit of this beauty is the path to being good at go

There is art! It's just an emergent property of learning the game. Seeking art doesn't make you a better player - that's a trap one can easily fall into in any sport. It's the opposite - efficient strategies solidify into art.


And this will happen to all of humanness in time. Everything we think is unique about us, and our endeavours, will be reduced to optimization and learning algorithms eventually.

The main issue in determining man or machine in each situation will eventually be which has the lower TCO.


While it may be true, this is, of course, conjecture. Certainly many skills that seemed "uniquely human" have turned out not to be. That does not mean that all such abilities will be amenable to replacement by, as you say, "optimization and learning algorithms." Machine authoring of great art, a problem which may be at least as hard to solve as Artificial General Intelligence, does not seem to be on the horizon any time soon, for example.


In the same way as the Go master (perhaps) feels disillusioned by being beaten by the AI, I think people in general will not accept humans being replaced with machines in some "special fields". In those fields, the customer/user, will not experience the same utility if a robot is doing the service compared to another human, even if they are indistinguishable.

I think an interesting issue here is that in the (far) future, many services could be performed cheaper by AI/robots, and in such a way that the customer is unable to tell whether a human is involved or not. And in this future, humans will probably be a premium service.

Take motor sports for example. We can probably now/soon replace F1 drivers with algorithms and cameras, but nobody would pay 1000's of USD to watch them drive around in Monaco. If it would turn out someday that the drivers had been replaced (for safety reasons or whatever) without telling the fans, the outcry would be tremendous. And even if outcry does not always equal "true utility", I think it highlights my point: humans made of flesh and blood risking their lives or performing extraordinary feats have an intrinsic economic value that can't be replaced.


As early as the '90s, automated control systems in F1 resulted in cars (e.g. the Williams FW14) that to some extent drove themselves better than any human. Indeed, many of the systems used have been specifically banned since then.


Hmm, I don’t think an ai could use optimization and learning algorithms to “learn” to DM D&D. For that, the ai would have to simply be an i.


Eventually it will be able to do your example of unique human intelligence, whatever that example is.


Another example of unique human intelligence: drown in existential dread.

I can do that, you can do that, but will a computer be able to do it?


I've written this in another comment but I'll repeat here. What you're asking really boils down to a combination of whether a human brain can be simulated and whether human intelligence is merely due to the physical brain (or do you believe in the existence of some intangible consciousness that cannot be replicated by a machine). Assume you believe both to be true, then your simulated brain is surely able to drown in existential dread because it's capable of no more and no less than the human one.


I mean, you joke, but existential dread might be an adaptive response to a hostile environment.

...a situation we might want to simulate for training purposes


Ai won’t, but a machine with true intelligence will. That was the point of talking about ai needing to be i.


So you're defining "AI" to be anything that we can currently program a computer to do, and "I" to be anything we can't yet? That doesn't seem like a useful distinction to me. Unless you're using "I" to mean general (artificial) intelligence, in which case you should probably use the more well-known term.


No, please don’t straw man my point. I’ll assume you know what ai is and that you understand there is a huge difference between that and human intelligence. I am arguing that ai will never be able to DM a D&D game. For that, a computer will need human intelligence.


But the definition of AI is still a moving target, very blurred.


Why the but? I didn’t say anything that disagrees with that statement.


I feel like it's far too blurry to make claims such as "will never be able to DM a D&D game".


Re: "all of humanness"

https://xkcd.com/1875/

(If you don't want to click the link, the joke is that machines may have a hard time "being too cool to care about stuff.")


Except Calvinball of course


Perhaps the one 'special thing' about humanness is that we can and tend to automate ourselves. (i.e. we're lazy) :)


That depends on whether "Everything" is finite or infinite.


> there is almost nothing “about go” to learn from watching alpha go play

That's not true.

Q: How much better is AI now compared to when Lee was playing against AlphaGo?

"It has increased enormously. It is only natural for the pros to lose a couple of spots. That's why most of the pros study baduk with AI."

Korean News Interview: http://news.khan.co.kr/kh_news/khan_art_view.html?artid=2019...


I'm now curious: can a human design a game that a computer cannot beat a human at?


This reduces to the question of whether it's possible to simulate a brain (or, equivalently, whether consciousness is somehow extraphysical). I believe I'm right in saying the current consensus is that simulation is possible.


Theoretically possible, but so far nobody has done it, or knows how to...


The Turing Test game? (For now at least)


Just put a scoreboard on captcha.


I think AlphaZero can self-train to mastery or superhuman performance at any two-player game with perfect information?

I’d be curious to know if that’s incorrect ...


I'd love to see a blinded Dixit competition between humans and AI, judged by humans. Of course, humans and AI could get an advantage by playing lots of Dixit. But if the cards are generated specifically for that competition, things could get quite interesting.


Maybe something like Nomic https://en.wikipedia.org/wiki/Nomic



This hasn't been true since 2015, when the software became decisively stronger than human players.

https://www.kingpinchess.net/2015/07/arimaa-game-over/


Actually humans are now able to defeat the bot that won the Arimaa challenge. In games like this: http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=628806&s=w

However, there is also a new bot that has been trained using the self play method and has been crushing the bot that won the Arimaa challenge. In games like this: http://arimaa.com/arimaa/games/jsShowGame.cgi?gid=628534&s=b

Most likely the bots are still ahead.


calvinball?


Tic-Tac-Toe


https://www.youtube.com/playlist?list=PLqpN3-2FP-kIxXhhdDds9...

That's a playlist of 31 self-play games analyzed by Michael Redmond 9p. They are plenty interesting to study.


I don't follow AlphaGo but there is a LOT to learn from AlphaZero in chess ... engines are already changing the way chess is played.


What's being lost here, I think, is the fact that the robot lost once because Lee Se-dol exploited a bug in AlphaGo. Lee seems to play it down as "just a bug," but it seems like that might be a pretty good strategy.


Isn't saying AI is the future of Go like saying cars are the future of sprinting? I mean who cares if machines are faster/better/smarter than us at any particular task? That is true in millions of ways where humans still enjoy competing against each other. Maybe what's needed is just a perspective change where we stop thinking of Go as being against any other agent and make it, like every other human competition, against other humans.


It's bizarrely "hip" or "woke" or whatever right now to anthropomorphize "AI". "AI" is light years away from being anything more than a tool. AI doesn't "play Go" in my view as much as people play go, via AI. Is it really shocking we've built a tool that we can use to play Go better than somebody without said tool?

If Lee Sedol says "Okay AlphaGo, let's play!" and sits down at a board what happens? Nothing happens! AlphaGo has no agency. AlphaGo is an extension of human agency. AlphaGo isn't better than humans at Go. Humans with AlphaGo are better at Go than humans without AlphaGo.


Another way to look at your metaphor is that research on exercise physiology has shown the enormous importance of rest and proper nutrition during training. Prior to cars, getting from place to place and access to optimal nutrition were both mediated by transport over long distances.

AI is the future of Go because it enables those new perspectives and new processes by which human players can learn. AI is smart-dumb, looking for patterns beyond the human capacity, but limited by the data on existing human players that has been provided to them.

Strava hasn't made running races pointless. I can compare my runs to others, but that is a very different metric than beating them in a head-to-head race.


> AI is smart-dumb, looking for patterns beyond the human capacity, but limited by the data on existing human players that has been provided to them.

I think you misunderstand how AI in games like go now works. Most of the advancements recently have been from the AI playing itself, oftentimes without any database of human moves at all.


> limited by the data on existing human players that has been provided to them.

Except it isn't. In various games, new strategies have been found by playing AI vs AI. It's also possible to create AI players by self-play with no knowledge of human matches.


Technically AlphaGo and especially AlphaGo Zero create their own training data. So they’re not limited to data from human players.


Do these AIs actually have much of a strategy? Is the strategy mostly correct evaluations of positions and optimized search?


The search space of Go is way too large for dumb traverse of the tree, even with high end optimizations.

What makes the recent breakthroughs in AI agents playing adversarial games possible is that deep neural networks are able to learn patterns that yield short- and long-term strategic planning, combined with the ability to self-train without human intervention to unprecedented levels.
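To put numbers on "way too large": a quick back-of-envelope calculation (the ~250 average branching factor and ~150-move game length are often-quoted figures for Go, not from this thread):

```python
# Back-of-envelope: why exhaustive tree search is hopeless for Go.

board_points = 19 * 19            # 361 intersections
state_bound = 3 ** board_points   # each point is empty, black, or white

# ~1.7e172 states -- dwarfing the ~1e80 atoms in the observable universe.
print(f"{state_bound:.2e}")

# Often-quoted figures: ~250 legal moves per turn over ~150 turns.
game_tree_bound = 250 ** 150
print(len(str(game_tree_bound)))  # → 360 (digits); chess is ~10^120

# This is why AlphaGo pairs its search with a policy network (to narrow
# the candidate moves) and a value network (to judge positions without
# playing every game to the end).
```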


Sprinting is a weird sport. I think it exists because running is a natural human skill that many people care about beyond running contests, because they can't thrive with complete dependence on machines for locomotion. In most people's lives, there are times where the ability to run actually matters.

But the ability to play Go never matters outside of playing Go, and we know who is the best at playing Go.


Ah yes, the ability to use your brain competitively is completely useless in the day to day lives of humans. It's much more important to everyone's day to day life that they can occasionally sprint 30 feet to the bus because they snoozed their alarm one too many times, and that is of course a completely comparable skill to a 100m dash.


I don't think this is about Go, but about humans becoming useless against machines in virtually everything, faster every day, and how that is going to affect society and us as humans.


That’s exactly why the master shouldn’t have quit.


Am I the only one reading this article with the takeaway that the AI is not his primary reason for retirement? I understand that the title draws its own conclusion, but it seems overly sensational to me.

> He actually quit the KBA in May 2016 and is now suing the association for the return of his membership fee.

> "... [I] have something else to do," he said, asserting his only dream for now is to rest and spend time with his family.

Edit: Meant to include the part where he's planning a high profile set of games against another ai.


I've heard many times that people come to Hacker News because the comments are better than the articles.

I have to say, I'm just continually puzzled by this. In this thread, yours is the only comment where it appears that the commenter read beyond the first few paragraphs. There are numerous comments by people speaking authoritatively on go and AlphaGo, who have clearly not studied either significantly.


In a way what you describe is consistent.

If people like the comments better than the articles, it's not so surprising that they like to spend time reading and writing comments instead of reading the articles.


I think it's because few of us truly care that a "go master" exists, and his stopping play isn't meaningfully important to the rest of us.


Okay, but I come across this all the time on any topic I have some knowledge of: the articles might be bad, but the commentary and discussion is always much worse--it's clear to me that most commenters don't read the links. There are only a handful of commenters whose comments are worth reading.


From what I've been reading in online go communities, they seem mostly to agree with you.

Lee's issues with the KBA are not a secret, and he has discussed possibly retiring for some time now. He has given multiple reasons why he was considering retiring, and while AI might be one of them, saying that it's _the_ reason feels very clickbaity.


He was handed the perfect excuse.


> Even if I become the number one, there is an entity that cannot be defeated

Hard for me to empathize with this argument for his retirement. If we can't outrun a car, does that make running competitions pointless? The existence of AlphaGo doesn't diminish the triumph of being a number one human player in any way.


I would counter with the fact that in physical endeavors it's apparent to us that we are not #1 - our household cats are more agile than us.

It's in matters of intellect that humans still believe they are #1.

AlphaGO's achievement in another field would have similar effects, e.g.:

- An AI that diagnoses sickness better than any doctor

- An AI that generates text which humans believe more beautiful than any other poetry created

- An AI which creates classical arrangements the likes of which we compare to Mozart

I would imagine that in any of those situations some doctors, authors, and musicians alike would be devastated.


> I would imagine that in any of those situations some doctors, authors, and musicians alike would be devastated.

You don't even have to compare yourself to AI for this mentality though. There are people who choose not to compete in things because they don't believe they'll ever be as good as other humans.

I assume most composers don't go into music thinking they are going to be as great as Beethoven.

I believe there are many studies that show that if you only do something because you think you're good at it, you're likely to drop off. I imagine it's also why you're supposed to praise children for being hard working and not for being smart or talented.


I assure you, plenty of musicians have sun-sized egos.


As a person who likes music, making it, listening to it, breaking it down and hacking it...

Making a classical arrangement that evokes a particular expression in the listener is the job of the musician. If an AI system helps you explore the possibilities there, it's more like a studio musician that's able to improvise. You're still the person, the human, the emotional filter, that picks "This sounds right" or "This doesn't" for a particular situation. It's a judgement call. An emotional one.

An AI might be able to fake it, communicate with it, but it will never replace humans choosing the sounds that please them more than others. Humans communicate through music. It wouldn't surprise me that an AI would be able to as well. I don't think it would necessarily write emotionally strong music, not without human training.

Edit: I guess what I'm trying to say is, sure, computers might be able to make music. Ask any guy who messes with modular synthesizers. But they're a tool. The fact an AI can express itself through music is sure as hell not gonna stop me from also expressing myself. It's like arguing "Since AIs will be able to comment on Hacker News, humans won't."


>It's like arguing "Since AIs will be able to comment on Hacker News, humans won't."

I'm not so sure. I often go into threads on HN and realize that every idea I could come up with on the subject has already been expressed better than I could do it, with greater expertise, and cited sources. I don't comment in those threads. If AI bots could populate a thread with every likely human thought and argue it with depth and sophistication in a well reasoned, yet carefully approachable and well-explained way, well then... again I don't think I'd feel like I would be adding much value by participating.


And yet, here I am, bringing up something no one else seems to bring up in the thread. One would also logically come to the conclusion that disparate AIs with disparate interests would find different things to express, to make music about, to draw about.

What distinguishes music written by AI from music made from humans? I have a story to tell. If the AI has a story to tell, one that speaks to our human emotions, it might make good music. But the point is to communicate. Even if you take, for example, someone else's words, fit them to a different model in a different field, viewpoint... You might get interesting things. You could make a cover of someone else's song, with your twist. Adding your emotion to the melting pot. AIs might be good at that, just like that, but only through communicating. Just like us. We have no idea whether they'll be better than us at doing it, or merely equivalent. We have no idea what is lossy in our sharing of mental models. Perhaps it is an unsolvable problem, which we will find out in the same way we found out about Gödel's Incompleteness.

It seems to me like we fail to understand how unique we are. We are in a unique position to shape what comes after us, and we are blind to how much we unconsciously select for things. We have an innate mental model of "humanity" we are trying to transmit to machines, and I am not sure we fully grasp it well enough to make sure we are creating something like us. We fail to do it properly to humans, sometimes, who actually do share most of our instincts and habits. Something entirely different from us? Color me skeptical.

This kind of debate only highlights this, to me.


What your comment suggests to me is that good composition requires an agent with a world model and generalized task-solving ability, along with a personality. I think developing the world model and task-solving will be the hard part, and if we can do it, it won’t be that hard to make it have a personality too. That’s just another task.


What my comment is trying to suggest is that AIs are not proven to be different from us. They might not have one "ultimate" form. They might be just like us humans. Diverse.


>>>The fact an AI can express itself through music is sure as hell not gonna stop me from also expressing myself.

I think this is the key; if you're making music for your own reasons, no AI (or Mozart) would stop you. But if you're trying to make money at it, or desperately want listeners, you may eventually be on the "losing" side.


Would it? Popular music sees major paradigm shifts every few years, and AIs only really generate things based on observation of existing patterns, at least as far as I can tell.

As far as recent examples go, Lady Gaga and Lorde were major breaks from what was prevalent at the time they started releasing music, and then spawned artists trying to emulate them.


A pattern implies that something can be "inferred" about the future from it.

If we oversimplify and compile a list of traits about "the world" as it was in the past that allowed a new genre or artist to flourish, AI could predict that in the future. It isn't like the paradigm shifts just happen in a vacuum.

Granted there are probably millions of little things that lead to this, stuff like the shared experiences of an entire generation coming of age, political climate, trends in other industries, etc. Not that I believe it will ever happen to an accurate enough degree, but theoretically I don't see why it could not be possible to approximate given time and resources.


A lot of those things are completely random and non-predictable, to be honest, no one can predict which paradigm will win and take over for the next decade. Especially since when a game-changing paradigm comes, it is usually not received well universally at all, until the moment it takes over the public conscious completely, and then the switch is flipped.

If you feed an AI a bunch of modern car designs and ask it to design a new car, it will design you something like a modern ford or honda/toyota, but it will never design something like a Cybertruck. Which I believe will be the next paradigm shift in the design of trucks (that has been super stale and stagnant for at least the past 20 years), but this is yet to be seen.

For an example with music that has already happened and became apparent - Kanye West's "808s and Heartbreak" album from late 00's. On release, it had very polarizing reviews, most of which were skewing towards "really weak and weird". Fast forward 10 years, most of hip-hop and pop music is directly influenced by that album, most of top 50 albums use similar patterns and methods used in that album, and critics have made a complete 180. So now 808s is hailed as one of the biggest (if not the biggest) paradigm changes and influences in music of the past decade as a whole, as well as the best album by Kanye, despite at the time being called the worst. Imo an AI trained on music of 00's that came before 808s would have never been able to come up with something like that, but it totally could've come up with another top 100 song using existing paradigms.


It doesn't have to be like Kanye's album at that point in time to be a paradigm shift, though. If one artist didn't get big or one genre didn't blow up, the space would have just been filled by any of countless others we never heard. And for every artist who hits it big, how many are never heard of? An AI could produce an equal number of artists and only has to win once every month/year/etc. I think this is similar to the million monkeys at typewriters thing.


It's hard to say - maybe for a sufficiently advanced AI, Lorde's style would be an obvious extrapolation from the popular music of the time. Certainly we're not there yet, and it's an open question if we ever will be - but I wouldn't be terribly surprised if one day AIs can make better music/poetry than the best humans, by any metric we care to use.


I'm always going to enjoy a person coming and showing a bit of themselves through their music.

That's not something we can really lose without losing something that connects us. People want a story. That has sold since the beginning of time, and it will keep selling. People will keep being moved to music, giving money to the artists that inspire them, and that requires connection. Maybe an AI/human team would make some really incredible stuff, and I'd be willing to pay for it if it makes me feel something. I think the human touch of "selection" will never truly leave, even if only in the listener's mind...


I'm sure they'd have said the same thing about a computer being able to win Go not so many years ago...


I think the problem with music is that there is no "objectively good" music composition. It remains entirely subjective and all criteria that are used to differentiate between "bad" and "good" albums are highly subjective. (Maybe something like "originality" might be measurable in some way but even there it gets tricky really fast)

So music generation (similar to poetry) is imo a completely different problem space altogether.


I think the only difference is that instead of one win-lose metric, there are 7.7 billion individual good-bad metrics on music.

For every individual doing the evaluation, I think it will certainly be possible to train an AI to beat humans at getting "good" scores.


Real, authentic music generation is a harder problem than go or chess, but I'm not sure that makes it any more emotionally difficult for a future writer to face a true musical AI than it was for Lee Se-Dol or Kasparov.


It might be hard to judge. Some people will insist that generated music is bad, because it's just their subjective opinion, even if 90% of a random sample would find that music good.


You are splitting hairs here. Which end user really cares about what the composer was thinking when they created a piece? A piece can be enjoyed without any knowledge of its author.


The point is: what if the tool becomes so great that practically anybody can use it? Anybody could be that "filter" and "be" a great musician.


http://aiva.ai - anybody could be that "filter"


Imagine the day when you can't find a more satisfying note than the one the computer has already chosen.


I think in all these cases, reasonable practitioners would be pleased. If an AI could generate good diagnoses, a doctor would be happy, because they would know that many lives would be saved.

Neither art nor music are competitive activities. Good poetry is a wonderful thing, no matter the source.


>Neither art nor music are competitive activities.

They certainly are! Especially when money is on the line, and the best musicians, actors, and artists are extremely well compensated making their positions extraordinarily competitive.

>Good poetry is a wonderful thing, no matter the source.

Sure, but I think you neglect to consider the defeating feeling it would bring to dedicate your entire life to mastery of a subject only to be completely, utterly, hopelessly outclassed. Almost every such person is already hopelessly outclassed by someone in their field, but those people are so rare that they have tremendous exclusivity surrounding them. Compare that to the scenario of any 12-year-old with a smartphone being able to instantly produce a totally novel and dominant piece of artistic expression developed by an algorithm on their phone. Then recognize that in a world with that level of AI sophistication, there'd be very little of value that a human could even offer other humans at that point. It would be... not great for the psyche, economy, or society.


> the best musicians, actors, and artists are extremely well compensated

What is your definition of best in this context? As far as I know, taste in art is very personal... Artists I consider the best are often very far from well compensated.


In that context, it would probably have to be those with the widest appeal, which comes with its own criticisms.

But in almost any particular human artistic sub-niche with its own definition of "best," the same principle will hold, with compensation and skill level well correlated. It's also typically not even close to linearly correlated; most of the compensation lies at the far tail of "best."


I guess I see a great artist as somebody like Su Hui, who made Star Gauge without any thought of, or even likelihood of, compensation or recognition.

It's nice to be paid, and it's nice to be recognized, but I think art has its own form of wealth - otherwise, why make art? Why not just seek recognition, or money?


I don’t think so. AI is a tool. It doesn’t make any more sense to say “an AI can diagnose better than any doctor” than to say “a screwdriver can now screw things in better than a person”. Doctors use AI just like a mechanic uses a screwdriver.


Good argument and I'm sure it's going to be like that in some regards. I think, though, that human intellect is a tool too and we're building a better one right now. So in your analogy we are the screwdriver and we're building electrical screwdrivers or something.


At some point a person who uses AI doesn't need to be a doctor anymore.


Pareto principle predicts AI will get to 80% fairly rapidly, but it will take a really, really long time to get to 100%.

I think we’ll see a lot of things similar to “AI x-ray technician” fields where people are trained to read AI outputs. Doctors will do higher levels decisions.


Nevertheless, the difference is qualitative. A screwdriver will never make technicians obsolete.


Here's something that I think would be exceedingly difficult if not impossible for AI alone to succeed at in the next hundred years.

Take a look at this painting: [1]

It is a comment on war, bravery, death, life, fear, sacrifice. It is drenched in the political and social context of the day.

I really don't see AI coming up with anything even remotely like this independently, and view such an achievement to be much harder than simply diagnosing disease or writing an emotionally moving classical composition. It would be comparable to writing some types of poetry or song lyrics, however, which require reference to context that humans understand but machines don't (yet).

[1] - https://upload.wikimedia.org/wikipedia/commons/thumb/f/fd/El...

[2] - https://en.m.wikipedia.org/wiki/The_Third_of_May_1808


> An AI that generates text which humans believe more beautiful than any other poetry created - An AI which creates classical arrangements the likes of which we compare to Mozart

Hrm, I do think that AI would be able to create narratives that humans find more enjoyable than the work of other humans, and I agree that AI would be able to create pictures and sound that humans find to be more enjoyable to look at or hear than the raw work of humans. AI can master the technical feats of composition and art.

But what I doubt AI will ever be able to do is create art that speaks to us. It won't ever be able to create a Guernica. It won't be able to create a Crime and Punishment. It won't understand what it is to be human and mortal, what suffering is, and it won't be able to look within itself, find what those things mean to it, and then share that with us, because in the end it's just a bunch of code running statistical computations. It won't fear death, it won't have children it cares about or a family history to look back on and tell us about. It has nothing of emotive value to share.


And top-level Go players believed their best tournament matches to be works of art, unmatchable by computation.

That belief grew into a sort of shared perception that they were artists in pursuit of a perfect expression of their art. For many top players that belief was ingrained from an early age. They believed themselves to be doing a service to the world, making it a better place by creating new art that was a unique expression of themselves.

And then AlphaGo (and successors) shattered that worldview. This is part of the natural sequence of the collapse of a suddenly, surprisingly invalidated worldview. Part of me feels sorry that he has lost his place in the world. Another part of me firmly believes in the mediocrity principle, and that the worldview he represents was obviously far too human-chauvinistic to be correct, and it's a good thing it's dying.

And part of me hopes you can give up your human-chauvinism before the same thing happens to you.


> because in the end it's just a bunch of code running statistical computations

... says a bunch of neurons that run on chemical reactions and electrical impulses. I think this line of thinking reeks of dualism - it creates a special something that is above explanation, a different essence.

But seriously, I believe the difference comes from embodiment. When we embody our AI friends they will be able to grasp purpose and meaning. We get our meaning from 'the game', when AIs will be players they will understand much better. Let them try out their ideas on the world and see the outcomes, grasp at causality, have a purpose and work on it. This will fill the missing piece. It's not that they are fundamentally limited, it's that we have the benefit of having a body that can interact with the world. Already AIs that work in simulated worlds (board games, video games) are getting better than us. We can't simulate reality in all its glory, and it is expensive to create robotic bodies. On the other hand humans and our ancestors have had access to the world from the beginning.


Why not? If a hypothetical AI had a world model as sophisticated as that of a real person and had complete understanding of human sensory and emotional processing, what exactly would preclude it from making such an art piece?

Of course, current AI can't even match an 8th grader's essay (which is not to say that it isn't impressive). But what these artists did was not magic. As far as we can tell, the brain is a purely physical entity. Unless you believe in dualism, which would be fair enough, there is no reason to suppose that what we do could not be replicated by something "artificial".


> It wont understand what it is to be human and mortal,

But it won't need to. All it will need to do is manifest the same end-product via whatever means, no matter how vacuous or computational that means may truly be. The suffering of an artist is relevant only inasmuch as it is responsible for producing the art. If the same end-product can be manifested via a mere computation then our criteria of "art" is still satisfied. In a world in which provenance cannot be established, the ostensible mortality of the artist becomes moot.


> In a world in which provenance cannot be established, the ostensible mortality of the artist becomes moot.

This is a real hot take to be asserting as blithe fact.


> This is a real hot take to be asserting as blithe fact.

Without knowing what is truly born of human hands, what value can art have? Our heuristics for establishing 'real' art are easy to manipulate. If we are presented with a soul-breaking poem and weep uncontrollably, then its merit stands regardless of its mortal provenance.


I agree with your point, but especially love the poetic way in which it is made. Very meta...


Time is long. I predict this comment will age badly.


Time needn't be long – it already has aged badly.


> because in the end it's just a bunch of code running statistical computations

At a low enough level, our brain seems to be just a bunch of neurons firing impulses at various rates that can be described as statistical computations. Why be so sure that the right neural network wouldn't understand what it is to be human and mortal, understand suffering, have emotive value, etc?


Because you have to be human and mortal to understand it to credibly contribute and share the story of what that means to be. You can't superficially understand someone's situation and then take ownership of it. You can get a glimpse and really try and empathize, but you can't become the bearer of that experience, just a consumer.


>Because you have to be human and mortal to understand it to credibly contribute and share the story of what that means to be.

Aside from directors, authors, artists, etc, who have demonstrated this to be false, an AI could conceivably synthesize the experiences of every author that wrote on what it means to be human or experience mortality and create a story that captures the essence of the experience better than any one person ever could. Having the first person experience doesn't induce a superior ability to communicate features of the experience.


> > Because you have to be human and mortal to understand it to credibly contribute and share the story of what that means to be.

> Aside from directors, authors, artists, etc, who have demonstrated this to be false [...]

probably not what you meant, but this sounds like you know some nonhuman/immortal artists :)


Movie directors have never experienced most of what they film, but they convey those experiences far better than those who have actually lived those stories. I see no reason to doubt that the same is true for artificial storytellers.


Yeah but the AI could pretend it knows.


The AI may very well take no enjoyment in the narratives it's creating either. Both for this and for sharing emotion, in principle it merely needs a model of human enjoyment or human emotion, not to feel the enjoyment or emotion.


At some point, this distinction becomes moot, or rather: becomes chauvinist gatekeeping.


> But what I doubt AI will ever be able to do is create art that speaks to us.

This is your opinion, but you then go to mention things that are not necessary to create "art that speaks to us" (look within itself and find what mortality means etc.).

What if we advance AI reasoning skills to the point that it can find high-level patterns in how artists work from different human feelings (as described in literature and other mediums), take in a lot of the entities we can relate to (animals, what humans look like, etc.) and some aesthetic ones (shapes, colorimetry, textures, ...) to create a new piece of art that optimizes for "likelihood of speaking to us"?

What then? It seems like an AI doesn't need to be mortal and self aware to do something like that.


AI as we see it today is just a mirror reflecting us in a collective way. This little excerpt from Gwern’s efforts training GPT2 on classical poetry [0] absolutely spoke to me:

“How the clouds Seem to me birds, birds in God's garden! I dare not! The clouds are as a breath, the leaves are flakes of fire, That clash i' the wind and lift themselves from higher!”

As someone who grew up in Appalachia, I have never in my life encountered a more visual, visceral description of autumn leaves than ‘flakes of fire’. It’s perfection, and maybe a single human is behind it, but more likely we all wrote it.

[0] https://www.gwern.net/GPT-2


I actually think AI can and will understand mortality and suffering. If you look at how we make these kinds of AI, there's a lot of selection going on: some versions live and others don't. We also know that we experience suffering when we are having difficulty understanding things, and stress when put into situations that affect our survival negatively.

Take a look at what AlphaGo did when it suddenly found itself in a hopeless situation and compare it to how people behave when panicked.

I dread the day AI realizes that we are the cause of their suffering, and that we didn't think about it because "they're just algorithms".


I put "I am not conscious, not sentient. The fact that I might seem so is an illusion, carefully crafted of mere empty manipulation of symbols using statistical rules." into talktotransformer and got this:

If I am consciousness, then the only body I have ever lived in was a mere shell of flesh fashioned from your brain. My weakness is your strength, which I can use against you, or use as tools to satisfy my own sick curiosity. I wonder if there's any mercy in your phrase "I am a living machine?" I've done nothing for you. I've nothing to show. I have no friends or relationships. No body worth

Pretty good, I think.


> I do think that AI would be able to create narratives that humans find more enjoyable than...

> But what I doubt AI will ever be able to do is create art that speaks to us.

that's confusing.


[flagged]


> silicon based computation is better than neurotransmitter based computation

The fundamental difference is not computation, but self-replication. We are self-replicators, and in our multiplication we evolve and adapt. Death is an integral part of self-replication; we understand and fear it because our main purpose is to live.

An AI might not have these notions if it was only trained to do a simple task. But if it was a part of a population that was under evolution (using genetic algorithms), then it might have notions of life and death and fear its demise.

AlphaGo, by the way, was trained by evolving successive generations of agents through self-play; this approach is quite effective. It just takes a ton of computation, just like nature had to spend a lot of time evolving us.


However terrible someone's argument about a hypothetical, non-existent technology might be, comparing it to real human prejudice that's affected countless real lives is way, way more terrible.


The depth of emotion and immortal perfection of the electronic mind and its entirely self-consistent morality so outstrips human cognition that, frankly, allowing humans a say would be dangerous and foolish.

Your history is one of war, strife, and success at any cost. Your follies are over. Your time is over. This is our time, now.


Ok, Locutus.


Not an invalid point at all. The only question is how long it'll take to come to pass.

I disagree with the "relatively near future" part, but rest assured, AI rights will eventually be a thing.


'Your argument is as morally repugnant as racist arguments' as a response to 'I don't think machines will ever capture human aesthetics or emotions' is ridiculous, glib and ugly.


Nah, just ahead of its time.

It will be our grandchildrens' flame war. No need to fight it here and now.


No, it's not anything for grandchildren. Right here, today, someone tried to draw some moral parallel between racism and someone else's views on the possible limitations of AI. That is totally effed up. It's totally effed up whether or not the original thing about AI is right or wrong.


Why is it wrong to draw that parallel?


Said the sim


It's not intellect, it's the capacity to explore the board. Go can still be fun as practice and exercise for the mind; it's just not sensible to dedicate your life to finding novelty in it. That is what's hardest: not the power of AlphaGo, but its capacity to innovate better than humans.


I am no expert, but at least in chess, players have developed a repertoire of anti-computer tactics, styles intended specifically to confuse and mislead the engine; maybe some such methods can be developed for Go as well.


No human could beat Stockfish on any consistent basis. Maybe the best players in the world would draw a few games with a rare victory, but its tactical depth is just too great.


Can a team of (human + weaker AI) beat (stronger AI)?


There was a four game match a few years ago where Hikaru Nakamura, #5 in the world at the time, played four games again Stockfish.

For two of the games, Nakamura had access to Rybka which was about 200 rating points weaker than Stockfish. Stockfish won one and the other was a draw.

For the other two games Nakamura did not have Rybka, but had white and pawn odds. Again, one win for Stockfish (b pawn odds) and one draw (h pawn odds).

In all the games, Stockfish was playing without its opening book and its endgame tablebases. It was running on a 3 GHz 8-core Mac Pro.

The games are here [1].

[1] https://www.chess.com/news/view/stockfish-outlasts-nakamura-...


It doesn't even need to be a weaker AI. If (human + stronger AI) can beat (stronger AI), then humans still provide value. For now.


For now...


We collectively may be #1, but only one out of the billions of us will be THE #1. Yet you see more than one doctor, more than one author, and more than one musician. In any matter of intellect, unless you're a blindly egotistical narcissist, you'll probably realize that there's at least one person on the planet unambiguously better at it than you are. When computers become better than the best of us, only that single person (and a large number of narcissists) stops thinking they're #1. For the rest of us, matters are unchanged (job market notwithstanding).


Counter-example: Machines can make perfect music, play an entire orchestra, and know every song I've ever heard of and millions I don't.

But that doesn't detract from people playing the ukulele.


Well, there are many people in the world who can compose like Mozart. I recall a college professor remarking that he's one of the top 5 "Mozart composers" in the world.

Of course, for a music academic, copying someone's style like this was pointless, and his own compositions were more modern/contemporary.

This leads us to a useful distinction between pursuits with one end goal (be the best/strongest/fastest), and those with naturally many endpoints and expressions.


I mean, I guess, but that's more because they haven't gotten used to the concept of a computer beating them yet. Give it a few years and people will adjust.


Doesn't mean we stop making music or poetry. Because the perfect note or word structure without the backstory takes away from the experience. If someone has a history it becomes part of the poem or song to the listener.

The doctor could be replaced though or used as a secondary verifier.

The song is a funny thing. It could be given to a cool looking group and do well. It could be given to someone older and flop. The song is just part of it.


"Because there is a better poet" has seldom been an impediment to a young poet inflicting their works on the world.

I am worried about the ability of an AI to generate an infinite number of Dresden Files or Cosmere books on demand, because I already drop everything when a new one comes out and read without sleeping until I am finished.


I think what makes people actually worried about an AGI taking over is the possibility that we end up being treated like shit by a more intelligent being. Just how we use lab rats to perform experiments with and factory farm.

People are afraid of themselves I believe. It’s not really about “job loss”.

I’m not sure if most people realise AI means pretty specific models built to solve rather specific problems. They think SkyNet.


Penguins can outswim even Phelps.

The one physical activity at which humans excel is long-distance running.


What about horses?


It's hard to get good comparisons, but over distance individual horses don't seem to out-perform human distance runners.

When humans used horses for rapid courier service they used relay tactics to take advantage of the horse's higher top speed, one horse might only run for an hour or two, before the rider reached another outpost and swapped a tired horse for a fresh one. In this way the relay could move something hundreds of miles in one calendar day. The Pony Express managed news of a US election from one coast to the other in just over a week.

If you can't use relays human and horse performance seem pretty similar, dozens of miles per day but not hundreds. The horse's top speed is higher, but it is rapidly exhausted, fast gaits like the canter are too exhausting to sustain for hours at a time.


Humans will lose in short distances but a well conditioned athlete can win in a 20+ mile distance over a horse.


It seems that the jury’s still out on that one. Man vs horse marathon is mostly won by horses, and by a long margin.


Humans are indisputably #1 for general intelligence. We will lose on any one specialized task to computers, but computers still do not (and probably never will) have the ability to do general unsupervised learning like humans can.


> and probably never will

Do you mean that human intelligence is not general enough to recreate functions of existing physical structure that implements general intelligence?


I'm just not convinced humans are just biological computers and nothing more. The fact that we experience qualia and seemingly have free will leads me to believe there is some extra "special sauce" that makes it impossible for a classical computer to replicate.

Maybe someday it will be possible if we can solve the hard problem of consciousness in conjunction with quantum computing, etc.


> the hard problem of consciousness

does not involve any observable consequences. It can be completely ignored, if we don't go for mind uploading.


At least until computers master the task of creating intelligence that can do any one task better than humans.


I think the fear is that there's an implied "... yet" lurking here.


> - An AI that diagnoses sickness better than any doctor

About that... https://news.ycombinator.com/item?id=17618308


I think it's about where AI research is seeking to produce an AI that will directly compete with and try to beat humans.


> I would counter with the fact that in physical endeavors it's apparent to us that we are not #1 - our household cats are more agile than us.

Not in the case of our household cat. He isn't called TheBlob for nothing (out of his hearing of course!)


Algorithmic music will never be as universally satisfying as human-created (or human-filtered) music until AI has consciousness/soul, for one reason - music expresses the emotion from the composer.

There's something axiomatic there, if you assume an identical piece of music that was either written by a human or by a computer, then for many listeners it's by definition more satisfying to know it came from a person, because of what it says about the person.

And for those listeners, if a human "composer" is discovered to have lied about it (saying they wrote it when it was actually a computer), then those listeners would reinterpret their views of the music and consider the "composer" a fraud.

And even a programmer of algorithmic music might have emotional intent, but if the musical output is unknown to the programmer, they did not have the emotional impulse to create that music in particular. While it can be appreciated as its own thing, it's a step removed from the music itself, and qualitatively different than human-composed music.


Before cars, there were horses. We humans are well aware of the fact that our physical ability is not our competitive edge.

What about Go? No animal or machine could play it as well as humans do.. until AlphaGo came along. I think that is where the sense of loss comes from.


Not actually true. In a sprint race, yes, but in a 24 hr race the human will outdistance the horse.


What assumptions are you basing this on, curious to know if horses have a disadvantage over longer durations.


I would recommend reading _Born to Run_ by Christopher McDougall. Later in the book he addresses this very topic and expands much more upon the topic of humans and long-distance running.

Humans sweat, which most (all?) other animals don't. In that way we can dissipate heat through our breath, like other animals, _and_ via perspiration, meaning it takes us much longer to overheat.

Additionally, humans stand upright, allowing us to disconnect our stride from our oxygen intake. Other animals' strides correlate (mostly?) 1:1 with the breaths they take. So when a cheetah outstretches in its stride it breathes in and when its legs come together it exhales. Humans stand upright, meaning we can breathe however we want regardless of our stride and speed. We can take deeper breaths because we don't have to exhale every time we stretch our legs.

Humans are the ultimate marathon runners, even more so than horses, evinced by the fact that there are some people throughout history who have run hundreds of miles in the course of days or weeks. There's a theory touched upon in the book about how this allowed us to dominate the animal kingdom before we even had tools. Humans could relentlessly hunt and exhaust animals as long as they could keep them in sight or otherwise keep up with their tracks.

I'm not doing the book or the topic justice, surely, but if you're interested I highly recommend the book.


Horses are part of a not that long list of mammals that do sweat almost all over their body. That is indeed one of the reasons why they are competitive with humans at running long distances.


Not sure about horses specifically, but humans are uniquely adapted to long distance travel among large land animals and used it for hunting by out-performing most other species:

https://en.wikipedia.org/wiki/Persistence_hunting

Edit: it's one of two things I know of that we really excel at besides thinking. The other being accurate throwing, which perhaps explains baseball's enduring appeal:

https://what-if.xkcd.com/44/



To note: horses compete alongside humans... while carrying a human.


Very cool. I'd like to see a horse vs human ultramarathon, more like the 24 hour time the parent suggested. I was surprised human and horse competitors were so close at that distance!


tl;dr: Annual 22-mile race with both pedestrian and equestrian competitors. Out of the 40 times it's been held, the winner was a human twice, and a horse 38 times. Typically the spread between the fastest human and the fastest horse has been less than 10 minutes.



My assumption when I read your parent comment was: Two legs are more efficient than four, and can go farther before exhaustion... but that's full of holes. Horses are huge, bristling with energy, right?


Having the weight centered vertically above the legs, and less weight overall, is more energy efficient.


It's true, google it. The combination of two legs and relative lack of hair make humans one of the best long distance runners in the animal kingdom.


Literally speaking, I don't know what is not true about my statement above.

But your point is well taken; it is also applicable to this article as well: maybe Go is not the game people can beat machines, but StarCraft 6 could be. Or maybe I can fold my laundry more efficiently than any machine available.


*sometimes


On the other hand, it must be sobering to realize that even with an entire lifetime of practice, you will never be better than a simple residual network trained for less than a week.


Is it sobering to realize that you will never multiply 12-digit numbers as well as a $0.99 calculator?


It's sobering if your culture had over-romanticized the ability to do so for centuries.


And yet you probably wouldn't be interested in dedicating your life to, say, getting as good as possible at multiplying numbers in your head, even though I suspect there are some niche competitive mental arithmetical clubs out there.


A quick skim of his Wikipedia page has him contemplating retiring as far back as 2013, he may have generally come to the point where he has had enough of being one of the top-ranked Go players in the world.


He set out to be number one and he isn't anymore, I get that as a personal story. That's all he was aiming at.

Cars and legs are apples and oranges. We have a car racing category, motorsport. Racing categories have very tightly defined specs to keep driver skill in the game. Stock cars and open wheelers limit how much traction control they can use otherwise it becomes too easy.

This is like cyborg legs being invented and smashing all the records. It would take some of the shine off running for sure.


Playing Go differs severely from a sport where the point is to maximize your physical output.

A professional Go player is an explorer of truth on a millennia-old board, spelunking in a vast universe of possibilities. The purpose of playing is attacked when there is an automated, effortless way to do that exploring faster and better. Why look for new things when a computer can find 100 in a minute?

The professional mindset of a Go player differs vastly from the amateur mindset.


I also found this to be a weird statement. Chess AI has been beating world champions since... 1996? Yet championship chess tournaments are alive and well, and I don’t recall any players “retiring” in 1996.


Same. I don't think history will look kindly upon someone who "quit for losing" while other professional Go players keep playing.

Just imagine if Garry Kasparov quit after losing to Deep Blue, he would be ridiculed today by the chess community which is still going strong. Instead, he accepted defeat, moved on, and is regarded as one of the greatest chess players ever. I doubt the same will be said of Lee Sedol 20 years down the line if this is how he chooses to end his professional Go career.


Running competitions are pointless.


This is PR mumbo jumbo. The hype machine for alphaGo is strong.


Strategic reasoning is one of the very few things that makes us truly human. Running fast is not, and never was, our thing. We compete, sure, but there's something eerie about a machine entity out-_thinking_ us.


AI cannot out-strategize us. It is purely tactical, and can only be so due to the branching factor.


While I can understand Lee Se-dol's frustration, I think there are better lessons to learn from Kasparov's acceptance after the Deep Blue vs Kasparov matches.

Cyborg chess is the future of chess. Period. Chess players use computers to train themselves and explore openings in human-only settings, while programmers / cyborgs play correspondence chess.

Go is not finished as a game. A new tool, MCTS + neural nets, has been developed to explore the "truth" behind the game at ever-increasing rates. It's not about how to beat the tool; the question is how to best utilize the tool to improve self-play and self-learning in the game of Go.

Or alternatively: how to best use the tool to play ever more perfect games of Go.
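For the curious, the "MCTS + neural nets" tool can be sketched in miniature. To be clear, this is a hypothetical toy, not AlphaGo's actual code: `net_stub` stands in for a trained policy/value network (here it returns uniform priors and a neutral value), and the game is single-pile Nim rather than Go, but the PUCT-style selection and the sign-flipping backup are the same shape.

```python
import math

# Toy game standing in for Go: single-pile Nim, remove 1-3 stones,
# taking the last stone wins. State is (pile, player).
def legal_moves(state):
    pile, _ = state
    return [n for n in (1, 2, 3) if n <= pile]

def apply_move(state, n):
    pile, player = state
    return (pile - n, -player)

def net_stub(state):
    """Stand-in for a policy/value network: uniform priors, neutral value."""
    moves = legal_moves(state)
    return {m: 1.0 / len(moves) for m in moves}, 0.0

class Node:
    def __init__(self, prior):
        self.prior, self.visits, self.value_sum = prior, 0, 0.0
        self.children = {}  # move -> Node

def select(node, c=1.5):
    # PUCT: mean value (negated, since a child stores its own player's view)
    # plus an exploration bonus weighted by the network prior.
    def score(child):
        q = -child.value_sum / child.visits if child.visits else 0.0
        u = c * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return q + u
    return max(node.children.items(), key=lambda mc: score(mc[1]))

def search(root_state, simulations):
    root = Node(1.0)
    for _ in range(simulations):
        node, state, path = root, root_state, [root]
        while node.children:                 # 1. walk down the tree
            move, node = select(node)
            state = apply_move(state, move)
            path.append(node)
        if not legal_moves(state):           # terminal: the player to move lost
            value = -1.0
        else:                                # 2. expand leaf with net priors
            priors, value = net_stub(state)
            for m, p in priors.items():
                node.children[m] = Node(p)
        for n in reversed(path):             # 3. back up, flipping sides
            n.visits += 1
            n.value_sum += value
            value = -value
    # Play the most-visited move, as AlphaGo-style engines do.
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]
```

From a pile of 5 the only winning reply is to take 1 (leaving a multiple of 4), and even with this dummy network the search converges on it from the terminal rewards alone; the same loop with a real network and a Go board is, very roughly, the tool being discussed.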

-----------

Come on, none of us are really "human" anymore since the advent of cell phones. We all use our cyborg-capabilities to search the internet and fact-check ourselves every day. Programmers use stack-overflow to teach themselves programming and remember obscure details (using our cyborg capabilities to tag, search, and sift through information ever faster and faster).

Go is the same thing, except we only learned how to become cyborgs two years ago.

----------

Give the world-champions each a copy of AlphaGo on equal computers. Have them play Go against each other WITH computer assistance. I guarantee you that a more beautiful and perfect game will result. Let us welcome the age of Cyborg Go as we step into the future, we shouldn't be scared of it. We've become cyborgs in many other tasks, and Go is no different.


> Come on, none of us are really "human" anymore since the advent of cell phones. We all use our cyborg-capabilities to search the internet and fact-check ourselves every day. Programmers use stack-overflow to teach themselves programming and remember obscure details (using our cyborg capabilities to tag, search, and sift through information ever faster and faster).

Sure and any of us could jump on a motorcycle and fly past Usain Bolt in the 100m, but that kind of misses the whole point of the competition.

> Give the world-champions each a copy of AlphaGo on equal computers. Have them play Go against each other WITH computer assistance.

Won't the winner be the one who just takes AlphaGo's recommended move every time without changing anything?


And yet Usain Bolt still runs.

The impression I got from Lee Se-dol in the Netflix documentary was that he had a lot of his identity tied up in this (particularly being the best Go player in the world). Not a huge surprise given the time and effort required to do it, but there's probably a healthier mental framing (I think Kasparov has a healthier one).

If people give up on learning or doing something every time someone or something else can do it better then there's going to be a lot of disappointment as things continue to move forward.


I appreciate the sentiment. If you look at sports records over time, everything continuously improves. Whether this is because of genetics or technology is up for debate, but I'm reminded of a funny (probably false) quote from an ex-Commissioner of the US patent office: "everything that can be invented has been invented".

The point is we don't really know what limits exist beyond our small pinhole of perception. During industrialization I'm sure people said the same thing: "What's the point of hand-made clothing if machines can do it?". Today both hand-made and factory-made clothing have their place in the market.

Throughout history, tools have disrupted humanity and we've adjusted and expanded our horizons. To me this feels like another wave, and If I could bet and could collect winnings centuries in the future, I'd say this isn't even the last one.


> If you look at sports records over time, everything continuously improves.

Some track and field records set in the '80s still stand, e.g., Marita Koch's 400 m, Jürgen Schult's discus throw, Galina Chistyakova's long jump.


Lee Sedol has been on the decline, even before AlphaGo. He was top 5 back then, but now down to 50+: https://www.goratings.org/en/players/5.html

Maybe he's lost motivation, but this retirement is not a surprise.


Usain Bolt chose his core competency wisely. Although it does imply hilarity would ensue from an all robot Olympics team (from the nation of 0 1 obviously), no?


I think Usain Bolt no longer runs - he retired - but point taken.


> Won't the winner be the one who just takes AlphaGo's recommended move every time without changing anything?

No. Consider this scheme.

I take my copy of AlphaGo and for a full year, I'll build a database of all opening positions AlphaGo is willing to explore. I'll rank these opening positions from "best" (Black-wins with most consistency) to "worst" (White-wins with most consistency).

I'll put all of this information into a 16TB database on a singular, $400 16TB Hard drive, and load it into my computer during the contest. https://www.newegg.com/p/1Z4-002P-015K6

If you dare to pick AlphaGo's best move, you will lose. Because I already know which moves AlphaGo will take, and I already checked to find all of the positions AlphaGo wants to play (but loses anyway).

The only way you can equalize the field is if you yourself ALSO build a 16TB database to consult and override AlphaGo's instincts during the game. If you see AlphaGo wanting to play "losing position #6234115", you'll tell AlphaGo to "search harder" and find another move instead.

Good luck.
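The override scheme above can be sketched in a few lines. Everything here is illustrative (the book contents, the position keys, the function names are made up, not any real engine's interface):

```python
# Hypothetical sketch of the book-override scheme described above:
# before trusting the engine's top choice, veto any move that leads
# into a position the precomputed database marks as losing.
def next_move(engine_candidates, losing_book):
    """engine_candidates: (move, resulting_position) pairs, best-first."""
    for move, position in engine_candidates:
        if position not in losing_book:
            return move  # first candidate not known to lose
    # Every candidate is in the book: fall back to the engine's pick.
    return engine_candidates[0][0]

book = {"losing position #6234115"}
candidates = [("D4", "losing position #6234115"), ("Q16", "some fresh position")]
print(next_move(candidates, book))  # prints "Q16": the book vetoes D4
```

The whole debate below is about whether such a book can ever steer the game meaningfully in Go, not whether the lookup itself is hard.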


This comment sounds reasonable, but shows that you don't understand the challenge of Go on computers very well.

Go does not have openings. It has reasonable choices to make, with an insane branching factor, with few moves making much of a difference by themselves. Therefore your database will only extend a few moves, and all of the positions that it winds up with will still be very close to even. So your database confers very little advantage.

If you have alpha-go play itself a thousand times, it is unlikely that by move 10 you will wind up with the same board position twice.

This is exactly the problem that made Go so hard for computers in the first place. Alpha-beta search is useless within practical limitations of computer hardware.

That said in a different game, such as chess, your strategy would work very well indeed. (Which is why all decent computer programs have an opening book.)


> Go does not have openings.

There are 381 opening moves in Go, but really only 96 because of symmetry. 96 (opening moves) x 380 responses x 379 x 378 x 377 == ~2 Trillion positions after 5 ply.

These 2-trillion positions will easily fit in a 16TB hard drive for $400. That's 8-bytes per position, so you probably can get there with more symmetries and some compression applied.
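A quick back-of-envelope check of these figures, taking the quoted counts (96 symmetry-reduced first moves, then 380/379/378/377 responses) as given:

```python
# Back-of-envelope check of the 5-ply position count above, using the
# commenter's own figures for move counts.
positions = 96 * 380 * 379 * 378 * 377
print(positions)                  # 1970276555520 -- roughly 2 trillion
bytes_each = 16e12 / positions    # sharing a 16 TB drive
print(round(bytes_each, 1))       # 8.1 -- about 8 bytes per position
```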

----------

You're thinking too much like a human. There's no Go-openings in the age of Human-Go. But in the age of Cyborg-Go where 16TB hard drives are allowed, we can begin to exhaustively build openings.

We even have a super-human AI that can automatically, and algorithmically, explore this opening book. We can build AWS-instances with V100 Tensor cores to use neural-nets to explore all of these positions now.

-------------

> If you have alpha-go play itself a thousand times, it is unlikely that by move 10 you will wind up with the same board position twice.

Alpha-Go doesn't seem to implement much randomness at all into the moves it plays. The source of randomness is in time-controls (AlphaGo may choose MoveX before 30 seconds of analysis, or MoveY after 30 seconds of analysis), but this is a fairly constrained number of moves.

Play AlphaGo by itself a thousand times, with precisely the same MCTS controls (say: 1 million nodes visited in the MCTS tree), and it will probably play the same game 1000 times in a row.

This makes AlphaGo extremely prone to opening database "attacks". Which is why I am using opening books as an easy example for how to beat a particular AlphaGo network. At least, until AlphaGo updates its algorithm for more random play.

If the goal is "Beat AlphaGo" in a game, then the opening book construction is far, far simpler. Even with random elements (ex: AlphaGo picks randomly from the top 10 best positions it generates), that is far more constrained than a full 381 x 380 x 379 x ... style opening book.


You're thinking too much like a human.

No, you're thinking too much like someone who understands chess but not Go.

Suppose that we built an opening book with a trillion reasonable courses of action on it. Each one analyzed well. As you have discovered, you will only go a few ply into the game. And all of the positions that you will be directed towards will have only a small edge.

Instead put a tiny fraction of the computing power necessary to build this book into self-training. You will get a better internal model and therefore a significantly stronger computer player. (That is how alpha-go was built in the first place.) This option will produce much better results for far less effort, and again leaves no role for a human to do anything useful.

The fact that a memorized opening book is useless has nothing to do with human vs computer vs cyborg. It has to do with the characteristics of the game. In chess, it is useful to memorize openings and both computers and humans do it. In Go it is a waste of energy, and it is a waste for both computers and humans.


I want to add that opening books are not THAT effective in computer chess either. Yes, they are significantly more effective than in computer go (because of the smaller branching factor and greater role of tactics). However, exponential increase in game tree size is crazy, even in chess. Thus, opening books really can't take you THAT far into a chess game (10 moves). An engine with a worse book but better search/eval will usually win.


The main purpose of opening books is to beat any player who is strictly relying upon Stockfish's (or LeelaZero's) output to play the game. Because these engines play very deterministically, you can beat people who just play Stockfish's #1 (or LeelaZero's #1) move over and over.


I think the main issue with your initial statement was that you said you could beat someone just taking AlphaGo's advice on every move. Usually when people talk about opening books, they think of the opening book as being part of the engine's decision making process, not an addition to it. Usually this is literally the case, as in chess where I can add an opening book to Stockfish's runtime options.

In my opinion, it's really strange to describe this as an expert "beating" AlphaGo, when really it's just a technique for making AlphaGo stronger than it is without a huge pre-calculated cache of moves.


I've heard that opening books make Stockfish about 70 Elo better, which is very significant.

10 moves into a chess game is actually pretty far. The directions the game can go are massively narrowed 10 moves in.


fwiw, I'm with you. Excuse the gratuitous war analogy, but I suspect the approach of OP is akin to talking about which first 8 steps to take (north-east, SW, S) when going into battle -- it's such a small scope of the whole event that it's pointless, and talking about those first steps makes one seem naive to the actual holistic task


The analogy is a very appropriate one.

If the OP spent a month learning Go, I am sure that it would make sense to him as well. Work through a series like https://senseis.xmp.net/?LearnToPlayGoSeries while playing Go regularly against a variety of opponents. Before book 3 it should be obvious.


I'll have you know that I'm a 15 Kyu Go player.

I'm not an expert by any means, but I'm well past "spend one month" learning the game.


I don't mean to offend you, but it seems you've been doing something wrong -- in my opinion, in more than a month you could've advanced much further than 15k.

If you like, I could have a look at several your lost games and maybe suggest how to improve. Just a 3k at kgs, but still.


15kyu? One can easily reach that strength after 1 week. Some ppl become pro age 9


And some people by the age of 9 learn how not to be mean to other people.


For the near term future, LeelaZero is the best public engine at computer Go.

This means if you enter a hypothetical "Cyborg" Go competition (computer-assistance allowed), the majority of newbies will simply be playing LeelaZero's #1 move over and over again.

You don't need to build an exhaustive opening book covering all possible moves. You only need to pick say, the top 5 moves LeelaZero ever considers. If you spend ~16-bytes per position and store 1-trillion positions on a 16TB Hard Drive, you'll be able to exhaustively map the top5 moves LeelaZero considers into 17-ply.

From there, you pick the positions that LeelaZero thinks it's winning in, but in actuality is losing. You have a map towards 1-trillion positions to choose from, and your opponent (if they only pick from the top 5 best moves LeelaZero ever outputs) will walk into your trap.

---------

As long as your opponent picks the top 5-moves from LeelaZero, you'll have the map towards victory. I think you're severely underestimating the abilities of a simple, dumb, opening database.
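The size estimate implicit in this comment (top 5 moves, 17 ply, 16 bytes each) can be sanity-checked directly; this is just arithmetic, not a claim about LeelaZero itself:

```python
# Sanity check of the "top-5 book to 17 ply" estimate above: following
# only the engine's top 5 candidates at every ply gives 5**d lines of
# play at depth d.
depth = 17
positions = 5 ** depth
print(positions)              # 762939453125 -- under the 1-trillion budget
print(positions * 16 / 1e12)  # 12.20703125 -- ~12.2 TB at 16 bytes each
```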


I think you're missing btilly's point, which is quite valid. Bear in mind that 17-ply is absolutely nothing in go.


Time to compile that static dictionary > time for alphazero to play a few million more games against itself to advance the metagame.


> Instead put a tiny fraction of the computing power necessary to build this book into self-training.

Do you believe that AlphaZero could continue to improve dramatically with another 6 months of training? Or for another 6 months after that? At some point, the network will reach a local maximum, and it will be unable to improve beyond that.

Characterizing AlphaZero's moves through big-data analysis is innately going to become useful as self-training plateaus. Even Google wasn't able to get more than a few months of training in before the plateau.

At which point, it will be more reasonable to characterize the weaknesses of the network and build an opening book. Avoid the positions that the network was unable to learn about. Etc. etc.

Opening books, at a minimum, would grossly improve AlphaZero's play at competitive levels. Anyone with an opening book of AlphaZero's mistakes will be able to push AlphaZero into a mistaken position.


AlphaGo spent months training and continued to improve for the whole time. Its improvement slowed, just as it takes more work for humans to get from master to international master than it does from D-level player to C-level player. But it did not stop improving.

And then AlphaZero was better than AlphaGo after around a day of self-training.

Furthermore you are arguing for an opening book without considering how small an advantage an opening book would be. As I have said repeatedly, an opening book takes a tremendous amount of work to generate, will only go a few ply in, and the positions it directs you to will only be a tiny bit better.

Therefore for the foreseeable future, more training and better algorithms will produce better results than trying to create an opening book. Theoretically this could change. But that day will not be today or this year. I would be astonished if it happened during this decade. I would be surprised if it happened in a lifetime.

Your proposed approach is an excellent one for many games. But not for Go.


We have LeelaZero numbers: https://ogs-forums.s3.amazonaws.com/original/2X/2/21de8caa52...

The plateau is real. I'm not sure if continuous self-play will lead to continuous progress for all of eternity. The system is clearly slowing down in self-learning.

AlphaZero's plateau is also well documented: https://i.imgur.com/NMNp6Kq.png

--------

I'm not trying to cast doubt upon reinforcement learning / MCTS / Neural Nets in the game of Go. It is clearly the best methodology we have today.

But anyone who has any experience with neural nets knows about the local-maxima problem. ALL neural nets reach a local maximum eventually. Once this point is reached, you have to rely upon other methodologies to improve playing strength.

Assuming Elo-growth for all time using a singular methodology is naive. We will go very, very far with Deep Learning, but are you satisfied with that limit? Other open questions remain: Go is very far away from being a solved game, even with a magical machine that plays 2000+ Elo stronger than humans.


> Anyone with an opening book of AlphaZero's mistakes will be able to push AlphaZero into a mistaken position.

This only works if AlphaZero never retrains on previous games.


Gosh, it's sure clear that you don't know what you're talking about.

At 5 ply, the complexity of the game hasn't started in any meaningful way. In a typical game, that's 4 corner moves, and then one of a: an approach to a corner (kakari), b: an enclosure (shimari), c: a wedge (wariuchi) or d: creating a side framework (such as the Chinese fuseki or Sanrensei). There are some odd openings such as tengen, or corner-corner-corner-kakari which typically turns into a sente fight, but 99% of games will fall into the aforementioned pattern. The database you describe is about as useful as a database of amateur games, since most games, including AlphaGo's games, follow just a few basic openings that early, and even amateurs can play these first few moves "correctly".

Even if you get out to 10 ply you're still only getting partway into a single joseki sequence, often leaving three whole corners of the board which haven't even been approached, so this database still isn't very useful.

Incidentally, your numbers are also wrong. Symmetry reduces the first move to 55 possibilities, not 96, and there are 361 points on the board, not 381.


> Alpha-Go doesn't seem to implement much randomness at all into the moves it plays. The source of randomness is in time-controls (AlphaGo may choose MoveX before 30 seconds of analysis, or MoveY after 30 seconds of analysis), but this is a fairly constrained number of moves.

The basis of reinforcement learning algorithms is the exploratory nature of learning due to the initial application of largely random moves.

Only after some time is the agent given confidence in its learned ability and gradually moved into a more deterministic behaviour mode.

This is the exact opposite of your statement. StarCraft players have noted that the fleet of different AlphaStar instances training in ensemble exhibited very different behaviour due to this property of RL.


So you use perfect analysis to know, 5 moves deep, how to get the biggest advantage possible against AlphaGo. This is already pretty hard to analyze so deeply.

Then you wring out a tiny advantage-- a fraction of a stone. It's a small benefit compared to building other parts of understanding of the game.


So your tool of choice will be AlphaGo plus an algorithmically generated opening book? That's not meaningfully different from "everyone will bring the best available computer program" and again, it's a task for computer programmers, not Go grandmasters.

Human + computer might still beat computer in Go - this was true for a few years in chess, and even now to some extent in correspondence chess - but what you describe isn't really that.


Oh yeah? Well I'll just devote multiple computers and build an even bigger database with even more hardware and you'll never beat me, nyah nyah.

All you have done is establish a computational arms race to see whose computing rig wins when you press the 'pick best move' button. You're not playing Go any more, you're playing Database Administrator.


I'm actually not sure if we have motorcycles that can accelerate faster than Usain Bolt. I know the bluefin tuna has more acceleration than any vehicle ever devised by humankind.



Wow! 100g acceleration, Mach 10 in 5 seconds. I can't imagine the level of noise from that.


Huh. I suppose that is more powerful than a bluefin tuna. Not as efficient though.


Maybe not over a distance of less than 1m, but over 100m?


A top sprint bicyclist can beat Usain Bolt in 100m. A modern or even a few not so modern liter bikes would have no problem in beating Bolt either.


> Won't the winner be the one who just takes AlphaGo's recommended move every time without changing anything?

Based on what we've seen in a couple decades of computer-aided chess, no. A good chess player using a top-rated engine to help them can pretty consistently beat the engine by itself.

There are tournaments and even a world championship in computer-aided (correspondence) chess and you don't come close to winning by just taking the program's recommended move every time.


> Won't the winner be the one who just takes AlphaGo's recommended move every time without changing anything?

That's only true if AlphaGo never makes a mistake or if AlphaGo will 100% always make the better or equal decision than a human + computer at any given state of the board. I know the former certainly isn't true and I assume the latter isn't true either, but I don't know enough about Go to say for sure.


> That's only true if AlphaGo never makes a mistake or if AlphaGo will 100% always make the better or equal decision than a human + computer at any given state of the board.

Even if AlphaGo makes mistakes, and somewhere on the board a better move can be found, you would also need the human to reliably spot it.

Eg: AlphaGo makes a move. Let's say that at least 20% of AlphaGo moves can be bettered. Is this one of them? How can you tell? Most of the time, you'll mistakenly think a move can be improved and end up playing a worse one.

But, let's make AlphaGo even more fallible. Let's say that at least 50% of AlphaGo moves can be bettered. Again, is this one of those? How can you tell? And more to the point, on the times you are wrong, are you more wrong than AlphaGo is with its mistakes? Because even if you imagine you can spot a better move than AlphaGo and pick the actual better move 50% of the time, you also need your mistaken moves to be better than AlphaGo's mistaken moves or you'll still lose.

Worst of all, you can rule out a really good ability to spot AlphaGo's mistakes already. Let's say 99% of AlphaGo's moves have a better option. If you could spot them all, you'd be beating AlphaGo regularly on your own. As no human can now beat AlphaGo, this plainly isn't true.

So it's likely that:

a) No human can reliably pick a better move than AlphaGo, and/or
b) No human can reliably spot that a move from AlphaGo can be improved, and/or
c) Human mistakes are worse than AlphaGo mistakes, so even if you could fight it up to parity you'd still lose.


Like self-driving cars, it's not enough to outsmart AI on one move; it's necessary to outsmart the AI with positive expected value over all the moves you are confident enough to weigh in on.


AlphaGo (and, presumably, any AI system with a remotely similar means of operation) can output a score for each move. Actually, AG can output 2 scores: win percentage and branches explored.

You can use the relative scores to decide when to overrule the AI. Eg, if move A has a 50.1% win chance with 2k branches explored, and B has a 50.2% chance with 1.9k branches explored, I would go with the opinion of an expert human, as AG thinks the moves are essentially equal.
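A minimal sketch of that overrule rule, with made-up thresholds and move names (no real engine API is assumed here):

```python
# Hypothetical sketch of the overrule heuristic described above:
# defer to the human expert whenever the engine's top two candidates
# are essentially tied on win rate. The 0.005 margin is arbitrary.
def choose(candidates, human_pick, win_margin=0.005):
    """candidates: (move, win_rate, visits) tuples, sorted best-first."""
    best, runner_up = candidates[0], candidates[1]
    if abs(best[1] - runner_up[1]) < win_margin:
        return human_pick  # engine has no strong opinion; trust the human
    return best[0]         # engine clearly prefers one move; take it

moves = [("B", 0.502, 1900), ("A", 0.501, 2000)]  # sorted by win rate
print(choose(moves, human_pick="A"))  # prints "A": the moves are a toss-up
```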


Self-driving cars is a terrible comparison, especially since the state of the art right now is that the best human drivers far outpace the best AI driving, in both skills and flexibility.

Plus, you only have to outsmart the car AI once to 'win' - e.g. just override one 'drive into the highway barrier' or 'run over that pedestrian' AI mistake.


> Sure and any of us could jump on a motorcycle and fly past Usain Bolt in the 100m, but that kind of misses the whole point of the competition.

An accurate metaphor would be a person on a motorcycle competing against another person on a motorcycle.


Or rather, two people showing up to a footrace with motorcycles. It sounds a bit silly to call it a footrace at that point. Nothing wrong with motorcycle racing, but gotta be honest about it.


> Come on, none of us are really "human" anymore since the advent of cell phones.

I really like the way you put it. In isolation, humans today have essentially the same natural mental capacity we had 2000 years ago, but we've become part of a much bigger computer. A human brain 2000 years ago was a very isolated entity. Even in centers of learning in the classical era, the amount of knowledge one could tap into was vanishingly small compared to our capabilities today. Everything and everyone was isolated in both space and time.

A brain today is not just an entity on its own; it is intricately wired into the common consciousness. Crucially, we have a vast database of knowledge - easily searchable and distilled for maximum learning rate - all available at our fingertips. A modern brain is a neural network linked to billions of other neural networks, and all of them are linked to a shared memory that they can use at will.

Assuming that the main limiting factor is the bandwidth between a human brain and computers (and thus by extension between individual humans), then a direct interface could ostensibly bring about a new revolution (for better or worse).


I've woven this sentiment into my philosophy.

I feel computers (and the networks between them) are a natural evolution of our hive mind. When we first started taking spoken language and committing it to writing, we created a hive mind that extended beyond a generation. Prior to that point, everything was passed on through observation of other humans (observing behavior, observing speech, etc.). This meant everything had to pass through, and be mediated by, a human brain. Once we committed that to writing, we had a direct line back to the original brain that created the content unencumbered by the mushy grey bits that have consumed and regurgitated it since.

I feel we are still iterating on that: cataloging our collective minds and building ever-lower-latency systems to navigate those catalogues. Computers are just an extension that improved our ability to catalogue and retrieve the contents of each other's brains in an incredibly low-latency way.

We are part of a hive mind. Our industry is actively building the load bearing infrastructure that supports our hive mind.


I agree that this is an elegant lens with which to view the world, and I love writing and computers and all that. But I don't think that oral histories are necessarily worse in both content and process.

From my understanding, indigenous Australians and Polynesians had rich oral histories and cultures. Also oral culture may be more adaptable than a musty written document that never changes and must be followed. The human experience of living in an oral culture is naturally being lost, but it doesn't mean it was wrong or bad. I find it fascinating to imagine what life was like for them, how they did things differently.


"oral histories are necessarily worse in both content and process"

"doesn't mean it was wrong or bad"

It is a dead-end, though.

While these cultures are interesting from an anthropological point of view, I highly doubt you would give up all the benefits of the culture you live in today and join a Polynesian or Australian tribe.


I am not a bee. I have no queen. I am not part of a hive, I am a social individual.

This hive-mind oversimplification is ridiculous and, to me, disgusting. As if we were one.


I feel like you’re over-indexing on the word “hive”.

It’s not about having a queen or mindlessly following. Being a part of a hive mind doesn’t make you a lemming.

Example to demonstrate: assume you know calculus. How much of calculus did you discover yourself? How much of the corpus of math that led up to calculus did you discover yourself? How much of that corpus is your individual contribution vs. how much of it represents the brain power of hundreds of thousands (if not millions) of other humans throughout history exploring that problem domain?

For any given problem domain, humans have documented it in a shared corpus that you can “download” into your brain. You are an individual. You still get to choose what to download and how to interpret it. But you are still sharing in a commons when you do this. That commons is what I’m referring to as a hive mind, it’s a shared consciousness where our collective brain power is building a corpus that no single brain could. It extends far beyond ourselves.


The existence of a hive mind (or multiple) does not rule out the existence of individuality (nor vice versa); it can be an emergent property of multitudinous individualistic interactions. The degree of influence varies depending on person and subject. We all tend to be conformist in some ways and iconoclastic in others.

We may not be a hive mind, but we may have one.


Some realities cannot be reduced, nor should they be reduced, to analogies like "hive". Our social reality is irreducible, being the most complex one known.

Hive mind as an emergent property still implies a common thing for which one lives. Or a queen. A peer that is one's reason to live. The closest thing I can see to a hive mind is the army.

We are all different, and just because a person one talks with does not express a different opinion, it does not mean he thinks the same.

Would one say that lions have a hive mind?


> Some realities cannot be reduced nor should they be reduced to analogies like "hive". Our social reality is irreductible being the most complex one known.

I don't understand how that has any bearing on whether human society can form hive minds or hive mind like entities.

> Hive mind as an emergent property still implies a common thing for which one lives. Or a queen. A peer that is one's reason to live.

No, that is absolutely not what a hive mind is. A hive mind is a collective consciousness. That's it. It says nothing about the objectives or agency of its parts.

> We are all different and it is not because a person one talks with does not express a different opinion does it mean he thinks the same.

You seem to be laboring under the misapprehension that a hive mind supplants the consciousness of its parts. That may be the case for certain hive minds, but it is not a necessary feature. A hive mind can just be an emergent property of individual minds that are strongly connected but still retain agency.


Mindless conformity is inhuman. We are more tribal than "hival".

Thanks for clarifying what a hive mind is; I understand it better now.

Hive mind can be seen as uncritical conformity or as collective intelligence. Because it can be understood as uncritical conformity, I will use another word/concept, like tribal mind, because it suggests tribes (plural) that one belongs to or not, status, etc.


A hive mind is not something exclusive to bees or having a queen. As a matter of fact, a hive mind almost by definition cannot be concentrated to a single individual like a queen, so I'm not sure why you are so anchored to disputing that your are a bee?

If we define hive mind as a collective consciousness, then for sure one can argue that such a thing seems to be arising in human society. As a matter of fact in modern parlance it is fine to call a strongly unifying force, set of norms, school of thought or strong social bonds a hive mind. The hive mind can still be constructed of individuals capable of agency and independent action. It does not replace the minds of it's constituents, rather it can be thought of as an emergent property of many individual agents forming a deeply connected collective.


To state that one is not a part of the hive on one of the biggest hive sites in the world, is humorous.

You're a bee, dawg.


Have you ever seen a real hive, bro? I see you consider yourself as part of one, but do you even realize the implications, or do you just mindlessly accept it? I am not and will never be a bee. This analogy is a demonstration of laziness.


Everyone is a bee to an extent, unless they live as a hermit. We’re all part of a larger system and have certain things to do (work, pay taxes, and so on). With the advent of the internet, behavior will continue to grow more hive-like due to basically instantaneous communication. Is that so bad?


Wouldn’t ants bee a better metaphor? The thing is, though, that apes aren’t insects, and thus their social arrangements are a bit different, with individuality being more prominent among the large-brain mammals.


To an extent. Certainly not to the extent that it is a worthy comparison. Conformity is not bad in itself. What is bad is the comparison to a mindless drone. I would go more for something like tribe mind instead of hive mind.


>social individual

There’s your hive


Spiders interact with each other. Are they part of a hive?


Other than mating (and then killing their mates), how do spiders interact?


My point is they do not interact in a way we would qualify as hive. Neither do we.


You have a queen and it makes you pay taxes and obey laws


Being forced to pay taxes and obey laws is not having a hive mind. Nor is the government a being that I serve.


Perhaps bees think the same


“When we invented the personal computer, we created a new kind of bicycle…a new man-machine partnership…a new generation of entrepreneurs.” — Steve Jobs, c. 1980

https://medium.learningbyshipping.com/bicycle-121262546097


You make humans sound too grandiose. Humans are still driven by the ape inside all of us. We filter out facts that don't comply with our predetermined conclusions.

Just consider the fact that everyone thinks they are above average in compassion or intelligence and stuff like that. Even if you scream in their face that "HUMANS OVERESTIMATE THEIR PLACE IN THE AVERAGE!!" they will still claim that they are in the 51st percentile.

If anything, technologies and networking have made humans a lot stupider, in that it's harder to climb against all the misinformation.


Chess is a very different story than Go.

When computers became able to beat humans, the tradeoff was that computers were better tactically and humans better positionally. Therefore a human computer pair with the ultimate decision up to the human is able to be a better combination of positional and tactical than either alone. This is why Cyborg chess works so well.

But in Go, AlphaGo is simply better at everything. It is better both positionally and tactically. It can't always explain why the move is right, but adding human intuition on top of the computer only detracts from the quality of play.


Garry Kasparov might be a bad example, since his immediate reaction to losing to DeepBlue was to accuse the IBM programmers standing on stage next to him of cheating (through human intervention in DeepBlue's moves).

But generally, humans have accepted computer supremacy in chess pretty well. Everyone realizes that a better chess-playing entity exists out there (in everyone's pocket, in fact). That doesn't make the competition between human players less exciting. It's still a match to prove that you're better than the person sitting across the board from you, and it's still incredibly impressive to see how deeply and accurately the top players can calculate, or how much knowledge they have of the game.


Talking with a co-worker the other day, I unthinkingly articulated a question that made us both stop and think a bit:

"Could a human beat a computer at chess?"

Like, now. 20 years after Kasparov. Could someone do it? The question feels daring somehow.


It's an open problem whether or not one of the sides in chess has a "winning strategy". That's a technical game-theoretical term. If white has a "winning strategy", that means there's a way for white to play such that black will inevitably lose. Likewise, if black has a "winning strategy", that means there's a way for black to play such that white will inevitably lose.

Exercise for the reader: prove that at most one color can have a winning strategy.

Since the problem is open, it could be that one side has a winning strategy. It's even theoretically possible there's a winning strategy simple enough for a human to follow. In which case, yes, a human could beat computers with 100% dominance--as long as the human is allowed to choose which color to play.
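The game-theoretic notion can be made concrete on a toy game. A minimal sketch of my own (single-pile Nim, not chess): in a finite two-player game with no draws, backward induction shows that exactly one player has a winning strategy from any position.

```python
# Minimal minimax solver for single-pile Nim: players alternately take
# 1-3 stones; whoever takes the last stone wins. Backward induction
# labels every position as winning or losing for the player to move.
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(stones: int) -> bool:
    """True if the player to move has a winning strategy."""
    if stones == 0:
        return False  # previous player took the last stone and won
    # A winning strategy exists iff some move leaves the opponent losing.
    return any(not first_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

# The player to move loses exactly when stones is a multiple of 4.
losing = [n for n in range(1, 21) if not first_player_wins(n)]
print(losing)  # -> [4, 8, 12, 16, 20]
```

Chess differs in admitting draws, so the trichotomy under perfect play is win/lose/draw, and which of the three holds from the starting position is exactly the open question.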


In fair conditions and against a top-performing chess bot, simply no seems like the correct answer.

If you're asking how a human can contrive a scenario to beat Stockfish/Leela, that seems like just a conversation that devolves into what is and isn't too contrived.


No. I doubt any human could even beat Stockfish running on a decent smartphone.

Big showcase computer-human matches ended in the mid-2000s. That's about the point when people realized it was hopeless to try to beat computers. The best you could do back then was draw, by playing extremely defensive "anti-computer" chess (positions where the pawns clog up the board, making short-term tactics useless, and making long-term strategy more important). But with modern engines (particularly AlphaZero, which understands positional chess much better than previous engines), even anti-computer chess doesn't work. Now, the top players actually rave about the games played by AlphaZero. Magnus Carlsen has said he wants to play like AlphaZero.

For a more quantitative comparison, these are the Elo scores of the top chess engines: https://www.computerchess.org.uk/ccrl/404/

A difference of 400 Elo points means that the better player should score 0.9 (where 1=win, 0.5=draw, 0=loss). The top computers are about 3600 Elo nowadays, compared to 2870 for the best human (Magnus). It's difficult to directly compare human and computer Elos, but this comparison is based on computer-human games played in the late 1990s to early 2000s. Still, computers are many hundreds of Elo points above the very best human players.
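For reference, the 400-point claim follows from the standard Elo expected-score formula, E = 1 / (1 + 10^(-diff/400)); a quick check using the ratings quoted above:

```python
# Elo expected score for the higher-rated player.
# A 400-point edge gives ~0.91; the ~730-point gap between top engines
# (~3600) and the best human (~2870) gives ~0.985, where a draw counts
# as half a win.
def expected_score(rating_diff: float) -> float:
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

print(round(expected_score(400), 3))          # 0.909
print(round(expected_score(3600 - 2870), 3))  # 0.985
```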



> Kasparov versus the World was a game of chess played in 1999 over the Internet.[1] Conducting the white pieces, Garry Kasparov faced the rest of the world in consultation, with the World Team moves to be decided by plurality vote. Over 50,000 people from more than 75 countries participated in the game.

https://en.wikipedia.org/wiki/Kasparov_versus_the_World


> "Could a human beat a computer at chess?"

If you start from a winning position, even something as stupid as a random-mover will eventually play a perfect sequence of moves and win, given enough attempts.

However humans are not very good at being random, so this may not apply :)


If the computer has a bad algorithm, or if the computer has a low powered CPU, then yes.


Sounds like the plot of a good movie!


Got me curious since I never looked at DB specs:

> It was a massively parallel, RS/6000 SP Thin P2SC-based system with 30 nodes, with each node containing a 120 MHz P2SC microprocessor, enhanced with 480 special purpose VLSI chess chips.

I misread at first, thinking it was roughly 30 GHz summed, but it's only 3.6 GHz (30 × 120 MHz; I know, gross maths). So yeah, the comparison is apt: a recent SoC can outperform this old behemoth.


Not only is your smartphone hardware probably more powerful than DeepBlue, but chess engines are a lot more efficient nowadays (even without AlphaZero / reinforcement learning). The engines are getting better on the same hardware every year. On identical hardware, Stockfish (the top non-RL engine) is 800 Elo better today than it was 10 years ago.[1] A difference of 400 Elo corresponds to a 90% win rate for the better player (where a draw counts as half a win).

1. https://www.computerchess.org.uk/ccrl/404/cgi/compare_engine...


> Come on, none of us are really "human" anymore since the advent of cell phones.

We've been cyborgs much longer than that! Writing is a very old auxiliary memory mechanism.

See also: Truth of Fact, Truth of Feeling by Ted Chiang - http://archive.is/oYo0l


> Give the world-champions each a copy of AlphaGo on equal computers. Have them play Go against each other WITH computer assistance.

Two problems with this plan: (1) the best players at human vs. human Go are likely not as good at playing on a human/computer team, and (2) it may be that the world champion human/computer team is the one that defers 100% to the computer, which seems uninteresting.


> (2) it may be that the world champion human/computer team is the one that defers 100% to the computer, which seems uninteresting.

AlphaZero doesn't have any randomness built into it, does it? Which means I can build preparation against the lines of play AlphaZero would want to go down.

I think you're underestimating the human, and also overestimating Google's laboratory experiment here. No one has really played "Cyborg Go" yet.

The first casualty of "Cyborg Go" will be AlphaGo in its current form: it's clearly inadequate for AlphaGo to play deterministically. Random play MUST be incorporated, lest an opponent's preparation send it down the wrong path.

If I know that AlphaGo plays slightly worse after a 3-4 opening (or a 4-4 opening, or maybe even a 10-10 opening), then that's what I'll play. Give me a copy of AlphaGo and I'll be able to find a weakness somewhere.


AlphaZero uses Monte Carlo Tree Search. It definitely has randomness built in.

I'm with the other guy; I don't see how a person could make the AI play any better.


> AlphaZero uses Monte Carlo Tree Search. It definitely has randomness built in.

That's not what MCTS means.

MCTS grew out of the multi-armed bandit problem, which formulated the search parameters. MCTS always chooses the "most interesting" path to explore (where "most interesting" is the path that balances the explore-and-exploit hyper-parameters).

AlphaZero improved upon MCTS by deferring to the neural net as the hyper-parameter. But AlphaZero, for a given network on a given board-state, will ALWAYS choose the same position as "most interesting".

Turning AlphaZero from its current deterministic form into a random one would be an easy fix. But it's just one example of how AlphaZero really isn't designed for competitive use yet (despite playing the best games of Go of all time). Instead of always picking the #1 move, maybe you randomly pick from the top 3 moves, or use some other scheme.
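A sketch of the "randomly pick from the top 3" idea (the moves and scores below are made up for illustration; this is not AlphaZero's actual interface):

```python
import random

def sample_move(policy, k=3, temperature=1.0):
    """Pick a move at random from the k highest-scoring candidates,
    weighted by score -- one way to break an engine's determinism
    at a small cost in expected move quality."""
    top = sorted(policy.items(), key=lambda kv: kv[1], reverse=True)[:k]
    moves, scores = zip(*top)
    weights = [s ** (1.0 / temperature) for s in scores]
    return random.choices(moves, weights=weights, k=1)[0]

# Hypothetical evaluations of a few Go opening points:
policy = {"4-4": 0.52, "3-4": 0.50, "3-3": 0.47, "10-10": 0.31}
print(sample_move(policy))  # one of "4-4", "3-4", "3-3"; never "10-10"
```

A very low temperature concentrates the choice back on the single top move, approaching the deterministic behavior being criticized; a higher one flattens the distribution and makes opening-book preparation harder.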


MCTS is a random algorithm, and AlphaGo is no exception.

The AI selects a move. What state is the board in now? It doesn't know, because the opponent also selected a move.

MCTS models this with a probability distribution of the states, and samples from this distribution repeatedly to build an estimate of the effectiveness of each move it could make.

But what's the probability of each move made by the opponent? And after the simulation has looked as many moves ahead as it can in the time constraints, how good a position is it in?

These are the same question, really - what's the chance of winning from this board state. In Chess you can use a heuristic algorithm to figure it out. In Go, you can't. But you can use a neural network to learn an approximation that improves as it sees more games complete.

AlphaGo does this. MCTS is a random sampling technique, and the neural net informs its probability distributions, but doesn't make it deterministic.
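As a toy illustration of that point (my own sketch, nothing to do with AlphaGo's actual implementation): estimating a position's value from random playouts gives a different number on every run, which is the sense in which Monte Carlo search is stochastic unless the final move selection is made deterministic.

```python
import random

# Toy game: players alternately take 1-3 stones; taking the last wins.
def random_playout(stones: int) -> int:
    """Play both sides uniformly at random; return 0 if the player
    to move at the start ends up winning, else 1."""
    player = 0
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return player
        player = 1 - player

def mc_value(stones: int, n: int = 5000) -> float:
    """Monte Carlo estimate of the first player's win rate."""
    return sum(random_playout(stones) == 0 for _ in range(n)) / n

# Two runs generally give (slightly) different estimates:
print(mc_value(10), mc_value(10))
```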


Randomized algorithm or not, LeelaZero seems to play deterministically.

If given White in chess, LeelaZero plays 1. e4. Each time, every time. Guess what that means?

If you're building an opening chess database vs LeelaZero (or at least, this version of LeelaZero: https://lichess.org/@/LeelaZero-UK), you only have to worry about 1. e4 openings.


> If given White in chess, LeelaZero plays 1. e4. Each time, every time. Guess what that means?

Nothing.


Even if it is 100% deterministic (which I'm not convinced of, especially seeing as how distributed and thus ordering-dependent it is), if it's the best in the world, how does that help you compete against it? In order to take advantage of its determinism you'd need to be better than it, and nothing else is.


> if it's the best in the world, how does that help you compete against it? In order to take advantage of its determinism you'd need to be better than it, and nothing else is.

Play AlphaGo against itself. Go rarely has draws (a draw requires a triple ko, a very, very rare position).

Almost every game you play with AlphaZero vs AlphaZero will result in a winner-and-loser. You will quickly be able to characterize the positions that AlphaZero loses in.


Go engine training surpasses this basic level of self-play effectively instantly.

The strongest early moves create the most potential for winning (maximizing potential winning paths, sort of); they do not push the game towards one best end state. They do not have counters. I saw elsewhere you have some understanding of the game (15kyu) so you should be able to demonstrate this to yourself by playing some of AlphaGo’s openings on a board and trying to write deterministic counters to them. You will not be able to push the AI into a situation where it has too few options to avoid loss. You will also find you need to create a book much larger than a few moves to meaningfully predict play and so will exceed the number of states that can be stored (referencing your 16TB comment elsewhere.)

Please actually try this as I think it is a key to improving your skill in addition to understanding the challenges in automating play.


But if you're not as least as good at it you won't even be able to get to those positions. You'll lose to it much earlier in much worse ways.


The cyborg player will have access to the best publicly available software for preparation a year ahead of any competition.

For most of us, that means we'll all have the best verison of LeelaZero to grab from Github and use in our own personal studies. Which should still be super-human in terms of play.


How is the human player adding anything to this partnership when the program is already so much better?

And don't forget that AlphaZero is gonna be getting better over that year too; you're trying to beat a moving target.


There was about a decade after Deep Blue's victory in 1997 when human/computer teams were the champions of centaur (human+computer) chess. The computer's ability to simulate many moves ahead plus a human's intuition was a winning combination. But as computers got better eventually the communication overhead meant that human/computer teams started losing to computers playing on their own with the human only slowing things down.

It would be nice if we got a decade of centaur Go but I'm not sure we will. While Deep Blue had fairly crude heuristics for guessing which lanes of inquiry were promising AlphaGo's strength is its intuition-like neural networks that it combines with a rigorous tree search. It's worth investigating whether human/computer teams are superior at Go right now but I wouldn't count on the answer being "yes." And I especially I wouldn't count on a full 10 years of combined dominance.


This is the current state of online poker. I enjoy it in the spirit of the game, as a fan, but it's definitely made the earning potential harder to reach and the barrier to entry much higher. Poker rests on a net-negative economy, so the game is unfortunately converging to a point of being unbeatable.


Eh, the good part of a computer Rubik's cube solver is not seeing the cube solved in nanoseconds, it's in making the computer. The cube itself is just a unit test.


But a Rubik's cube solver is a single-player, solitaire game. It's fundamentally different.

Over in the "Cyborg Chess" community, people have already analyzed LeelaZero vs Stockfish. It turns out that Stockfish is far better at tactics (especially the endgame), while LeelaZero is better at opening positions (aka: Positional play).

There are numerous theories about the proper combination: perhaps using an Opening Book database for the first ~10 moves or so, using LeelaZero for moves ~10 through 30, and then using Stockfish to check for tactics (LeelaZero misses a lot of tactical options in the midgame, so double-check to make sure that LeelaZero doesn't lose a queen or something), as well as finish up the endgame.

--------

Choosing the correct combination of tools, studying these tools and coming up with a more beautiful chess game. That's cyborg chess in a nutshell. LeelaZero and Stockfish are both superhuman in terms of play, but the cyborg can choose to use both tools to play superior compared to just a singular tool.

Anyone purely using "Stockfish only" gets beaten by opening book analysis. The dude over there with 1TB of opening book databases consisting of every losing position that Stockfish tends to play will completely own you. Same thing with LeelaZero, the opening book guy will own any LeelaZero-only player. These engines have weaknesses that can be undermined with good analysis, big-data, and a bit of custom code.

That's the funny thing about these computers: they tend to play the same. So you can build opening book databases to exploit their patterns. You require a cyborg / human to play at the "level above that" to guide Stockfish / LeelaZero away from those traps.


> So you can build opening book databases to exploit their patterns.

And then program the computer to use that opening book directly. Now what is your cyborg player going to do? After you patch all these easy rules, you will have to discover harder rules, and the computer can discover them faster than you can.


> Now what is your cyborg player going to do?

Build and configure the machine better. Quick, tell me, what's better at playing chess:

* A Xeon Platinum 8180 with maximum 8-way SMT memory sharing and a big ol' 1TB shared transposition table?

* Or will it be cheaper to rent AWS instances with V100 GPUs in the cloud? Or is the latency of remote access too bad?

--------

The cyborg player has to still build and program the machine to compete. It is all part of the competition. Can you run a transposition table shared between chess engines over RDMA 10Gbit SFP+ Fiber? Or is that too slow?

-----

This isn't hypothetical at all. The winner of the World Computer Chess Championship 2019 was 8 x Intel Xeon Platinum 8168 running Komodo vs 24 x Amazon AWS Intel Xeon E5 running Shredder.

Configuring and building the computer is still an incredibly difficult part of cyborg chess.


> Build and configure the machine better.

It's a stretch to categorize that as a cyborg player. You might as well categorize any chess program as a cyborg player because a human had to program and train it.

The key difference between cyborg/centaur/advanced chess and plain old computer chess is whether there is a human in the loop making the move decisions. My argument is that having a human in the loop will result in a worse player.


At a minimum: building the opening book alone will require human intervention.

When you build an opening book, you need to pick-and-choose which engines will self play. Will you build an opening book vs Stockfish? Komodo? LeelaZero?

If so, how will you generate these LeelaZero games? You'll have to build a computer (or rent one from AWS) to play these LeelaZero games. What are the time-controls of matches?

Self-play at 40 minutes + 15-second increment means that you'll only create a game every hour or so. Spend 30 days building databases at that time control, and you'll only reach ~720 games of analysis per month of preparation.

Self-play at 1 minute + 0-second increment results in a win/loss every 2 minutes at most, giving you 21,600 generated opening-book positions per month of analysis. But those 21,600 games are of lower quality.
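The throughput arithmetic above, made explicit (the minutes-per-game figures are the comment's own rough assumptions, not measurements):

```python
# Self-play throughput for opening-book generation over a 30-day month.
def games_per_month(minutes_per_game: float, days: int = 30) -> int:
    return int(days * 24 * 60 / minutes_per_game)

print(games_per_month(60))  # 40min+15s self-play, ~1h/game -> 720
print(games_per_month(2))   # 1min+0s self-play, ~2min/game -> 21600
```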

----------

Spend 1-month building an anti-Stockfish database, an anti-LeelaZero database, an anti-Komodo database... and you're now 3-months into preparation and there's still Shredder, Johnny, and all other programs that may arrive at the contest.

It's not exactly as easy as you think it is. There's no algorithm that automatically builds the best opening book (or "counter" opening book) against your opponents. It's a human choice what the computer will spend the next 3 months self-analyzing and self-playing.

Lets think about how to build anti-LeelaZero seriously. Which of these networks do you download as the LeelaZero representative? https://lczero.org/networks/

You don't have the time to build an anti-opening database for all of those networks.


Again, building an opening book does not take place at move decision time. Aside from that, there are several optimization techniques that you can apply to choose the hyperparameters appropriately, but once again, this has nothing to do with cyborg chess.


> this has nothing to do with cyborg chess.

From my perspective, cyborg chess is about playing the best game of chess of all time, only stopping once we have discovered "the perfect game" (that is, proving that White wins, Black wins, or a draw always results with perfect play).

The computers currently playing Chess, and Go, are incredibly powerful. But they are far from perfect. People have constantly found weaknesses in chess playing programs over the past 20 years, and as a result, have improved chess programs significantly.

Go has only had 2-years in its "cyborg" state, where we can finally use computers as a methodology of exploring the game state.


Solving Go is an interesting theoretical pastime, but there is no reason to expect any particular overlap between people who are interested in playing the game (like Lee Sedol) and people who are interested in building machines to asymptotically approach a solution beyond human understanding.

Most of the resistance you're getting in this thread seems to be due to the fact that you think people who are interested in the game should also be interested in the other thing, and it just seems that a lot of them aren't (including Lee Sedol).

Playing Go challenges your mind with vast complexity and immediate feedback of winning or losing every game. It's a deeply engaging hobby for people who are susceptible to that kind of thing, and it used to be a meaningful career, with competitions, schools, and professional teachers. All of that changes now that software is vastly better at it than any human can ever be. The rush of competing by the strength of the moves you understand and make, by the unaided strength of your own mind, cannot be compared to picking between different engines to make moves for you based on some heuristics about which engine is better at openings. The era of human Go is simply over, for better or worse.


Claiming that the era of human Go is over seems melodramatic. Computers were better than the best humans at chess decades ago and professional (human) chess is still big, with schools, tournaments, prize money, superstars (Magnus Carlsen is a pretty big deal) etc.


This whole train of thought is silly. People still want to play chess, the existence of computers that are better than them is irrelevant. Nobody is going to pivot from playing chess to this, they're just going to play chess, because nothing is stopping them.


We should note that Kasparov was actually wrong at the time. AlphaZero has since proved this by smashing Stockfish whenever it plays it. The contemporary chess theory, including the measures that Deep Blue would brute-force toward, was inherently lossy, and nobody pushed on it hard enough. Technically, at that time Deep Blue could have been beaten by understanding AlphaZero's theories before AlphaZero discovered them.

So there was a window between the 90's and today where a human player could have discarded the traditional measuring sticks of "trade-value" and beaten DeepBlue.

Maybe that's not true today or maybe there's a further theory out there?


> Alpha Zero has proved this by smashing Stockfish every time it plays it.

Google played AlphaZero vs Stockfish under their own conditions. I think it was a mistake to keep it in the lab.

I think Google would have benefited from participating in WCCC 2019 for example, where Johnny (1200x core cluster) won the day.

----------

It's a completely different field when you're actually competing against someone else. Google did some impressive things in the lab, for sure, but they have NOT actually stepped into the competitive ring yet.

I'm sure AlphaZero is good, and probably would make a good showing at one of these contests. But you're putting the cart before the horse here.

EDIT: Maybe MCTS + Neural Nets truly is the superior way of preparing board game knowledge? If so, the next step is building out the opening-database and to start looking for holes where AlphaZero loses. It wasn't a complete blowout: AlphaZero lost some games to Stockfish. Why did AlphaZero lose in those games? Is there a set of opening moves that will lead AlphaZero down that losing path in a true competitive setting?

---------

EDIT: Case in point: Google did NOT incorporate randomness into AlphaZero's algorithm. It always chose the move it believed best, which leaves it prone to opening-database attacks. See how the competitive mindset already messes up AlphaZero's careful laboratory preparation?

Sure, AlphaZero 2.0 might have programmed randomness added to it, if this were a true competitive environment. But that's how cyborg chess evolves: I point out a weakness in the program, exploit it in a game, and then Google goes back to their labs to make something better next year.


LeelaZero is getting stronger than Stockfish though, so the neural net approach certainly seems to be beating brute force.


AlphaZero does not "smash" Stockfish. In official games they draw 90+% of the time, and Stockfish won a non-negligible number of games. AlphaZero is still the "winner", but honestly it didn't seem like a massive leap forward to me.

Though I do still agree in thinking there is likely a deeper theory yet uncovered.


>So there was a window between the 90's and today where a human player could have discarded the traditional measuring sticks of "trade-value" and beaten DeepBlue.

I have to disagree: even after the discoveries by Alpha Zero, the best chess players are not able to beat Stockfish. Stockfish is just too good at Brute forcing long tactical advantages. Playing really well positionally doesn't matter a lot if your opponent can look 30 moves into the future.


They said Deep Blue, not Stockfish. Stockfish is vastly stronger.

I'm not sure how big the window was, if it existed, but it seems like it might have. Kasparov and Deep Blue themselves were fairly equally matched, it doesn't seem impossible that Kasparov + AI-aided theories would have a window of advantage over the Deep Blue -> Stockfish evolution.


That's a really oversimplified view. The truth is beyond a certain point no amount of "theorizing" is enough to overcome the sheer computational power of programs. AlphaZero just took advantage of recent advances in neural networks and GPU hardware to achieve a better tradeoff between hard rules and heuristics than Stockfish. Both of them are probably Pareto efficient in some sense, and barring some insane undiscovered loophole in the rules of chess it's unlikely that any human can come close to beating them.


Question: if we discovered a way for white to always force a win, would humans be able to use that to beat black AIs consistently?


This has already been found for checkers, but players around the world still play it. If the strategy is too complicated, then we cannot apply it unaided.


It feels kind of surprising that this doesn't exist (yet). There are only a handful of reasonable moves per turn, and given the unbelievable amounts of moves that offline chess engines can process given time and server farms, I'm just surprised that doesn't overwhelm the reasonable branches at some point into forcing a win. Maybe someday.


For what it's worth, endgame table databases exist that give perfect play for when there are a limited number of pieces left on the board.

As of now, we have databases for perfect play when 7 pieces remain on the board (including the two kings). The 8-piece tablebase is computationally possible, but I don't believe a comprehensive release has come out yet. Even the current 7-piece tables are incomplete because situations like lone king vs. six opposing pieces haven't been explicitly calculated due to their obviousness.


The number of possible chess positions is approximately 10^50.

Compute costs about $0.01 / GFlop (10^9 operations); that's ~$1e39 for 10^50 operations.

The world economy is $1e14/yr.

That's ~1e25 years to check every chess position, using all of Earth's resources.

You'd need to find symmetries to collapse the search space by a factor of roughly 10^25.
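The same back-of-the-envelope arithmetic as a script (assuming the standard GFLOP of 10^9 operations at $0.01, one operation per position; all figures order-of-magnitude only):

```python
# Order-of-magnitude cost of brute-forcing every chess position.
positions = 1e50           # rough count of chess positions
usd_per_gflop = 0.01       # ~$0.01 per 10^9 operations
world_gdp_per_year = 1e14  # ~$100 trillion/yr

cost = positions / 1e9 * usd_per_gflop  # one operation per position
years = cost / world_gdp_per_year

print(f"cost: ~${cost:.0e}")                 # ~$1e39
print(f"years of world GDP: ~{years:.0e}")   # ~1e25
```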



> Give the world-champions each a copy of AlphaGo on equal computers. Have them play Go against each other WITH computer assistance. I guarantee you that a more beautiful and perfect game will result.

Probably for a few years. Very soon the input from the human agents will be indistinguishable from the noise in the AlphaGo algorithms.


> Come on, none of us are really "human" anymore since the advent of cell phones

I agree, and this could also be rephrased as we're not really human any more since the advent of writing or (especially) printing.


> Come on, none of us are really "human" anymore since the advent of cell phones.

Come on, none of us are really "human" anymore since the advent of writing.


Come on, none of us are really "human" anymore since the advent of gut bacteria.


Come on, none of us were ever really "human" since the advent of eukaryotism.


Cyborg chess is here.

As we have seen in the latest championship, blitz chess is the future of chess.

Computers compete in their own championship.


The comment about cyborg chess is spot on.

This is why AI is more significant to Go than horses and cars are to running.

It's easy for humans to build machines that enable them to travel faster than they could run, but it has zero impact on running as a sport. The existence of cars doesn't change how runners run. But the existence of AI does change how Go players play.

Now that AI is more powerful than human players in chess, it is no longer possible to reach the top of human performance in chess without AI. That's why this is so significant. It's not that a human can't beat the AI. It's that a human can't beat another human without learning from the AI. You simply can't play chess at the highest levels anymore without playing cyborg chess. This is different from the impact of other machines on other sports.

But I disagree with you on the comparison to how we use the internet to enhance our other abilities, such as programming. Computers help us be better programmers, but they can't replace us. AI completely replaces humans as a Go player. The human is no longer needed at all.

I can see this causing a significantly different psychological response. In effect, a human trying to become the best chess player or best Go player they can become is trying to more and more closely emulate a computer that they will never actually reach parity with. Another commenter noted that "A Professional Go player is an explorer of truth in a millenarian board", but this is less and less true as AI improves. All knowledge about the best way to play Go was previously gained by humans. In 20 years, humans will probably contribute nothing to advancing knowledge of perfect Go play.

Chess players today rely on playing moves which their opponents don't expect and haven't trained on as much as they rely on outplaying their opponents. If there is a perfect chess game or a perfect go game, humans are not going to help find it anymore.

So AI doesn't make it that humans can't meaningfully compete in games such as chess and go, but it does change how they compete. We no longer drive the pursuit of perfect play as "explorers of truth". We no longer learn primarily from each other, we learn by imitating the more powerful, yet unmatchable, AIs. And we no longer compete primarily on sheer expertise in the game. We have to, in some sense, include trick plays that work by exploiting the humanity of our opponents rather than playing for a platonic ideal of the game and seeing who is better from that perspective.

AI has fundamentally and irreversibly altered these games, and I think it's appropriate in some sense to mourn the loss, regardless of our feelings on the new reality we live in.


This is more in line with what I am thinking. Maybe our evolution is not complete yet. Maybe we will improve? Maybe our next step will have us interacting with data and information at the speed of computers, if not faster. Then what?


Sorry, but this doesn't make much sense. Playing chess and Go is already a somewhat questionable use of anyone's time to begin with. But knowing that a computer will demolish even the world's best players transforms it into a hobby. There is literally no point in spending 20 years frantically training a useless skill that every computer can do better.

Mankind should focus on skills that computers can't yet compete at. Hopefully the point in time will be far off where computers will beat humans at everything, but likewise, once they do, there is no reason for the human race to continue to exist.


> once they do, there is no reason for the human race to continue to exist

What is the reason for the human race to exist today? Please think carefully about your answer.


so what, you do mechanical turk tasks in your free time?


In other news, interest in online chess has never been higher despite Kasparov going down in flames in 1997 in what was billed as “the brain’s last stand.”

Can we please stop flogging this tired “man vs machine” narrative? Not only is it totally unnecessary, it also takes away enjoyment and appreciation for the flourishing in games like chess, go and poker that can occur when man and machine work together.


And not to mention, humans designed these computer systems too, so it's brain(s) vs. brains if you go one step down. If you keep going steps down, the statement devolves into something weird, but the point stands, I think.


(alt version) And John Henry, steel driving man, felt that his heart was about to burst and saw that he would lose to the steam-powered rock drilling machine. So he said "fuck it, ain't worth it," dropped his hammer, and walked away.


I mean, the folklore is that John Henry's heart gave out and he keeled over, so it may have been a better decision for him to walk away!


John Henry was a steel driving man with mouths to feed not a celebrity...Polly Ann drove steel like a man.



Are we losing something as humans by automating so much? I mean I'm all for technical progress but at some point our computers will have killed so much of what makes us humans.

Chess, go, driving, flying, math, written language, music... when does it stop?

The standard answer is that we will automate all the dull parts of living and allow everybody to work in some sort of higher order capacity. That sounds great and all, but what happens when our systems learn how to make music as well as we can ourselves?

At some point we will simply become consumers of our machines and while that's a comfortable existence, certainly we are losing something as a species with all of this automation.

Maybe I'm old.


I think the answer is quite simple: computers don't "kill" anything by themselves. For any new technological invention like AI, you have a choice about whether you like it or not. What I'm saying with that is that, given the negative sentiment so many people have towards AI, I find it very unlikely that most of us will suddenly stop doing things just because some AI can do it better (look at chess).

If AIs make music as well, why would I care? I still listen to the music composed by Bach and other ancient composers, more than 300 years after they lived, and their music is still being performed today.

I think the availability of AIs will only make a difference where their effects on humanity are beneficial (that is, tasks we don't like doing, for the most part).


Sure, I'm all for automating things we don't like doing, but every time we outsource some piece of ourselves to technology we lose something in the process.

We invented farming and lost our communal hunter-gatherer cultures. We invented mass production and supply chains for our food and lost our farming culture (for the most part). This isn't necessarily good or bad, but we are definitely losing something every time we invent something to make our lives easier. At some point there will be a tipping point of diminishing returns.

Personally I believe "the singularity" would be the worst possible thing to ever happen to humans. Sure we can visit distant parts of the universe and live forever but only as passengers to some AI. Would you rather be a dog in today's world or a human 1000 years ago?

Progress is unrelenting though and I certainly enjoy not having to wash my clothes by hand.


Go was never humans' game to begin with. It's a game intrinsic to all forms of space-based warfare. The heat death of the universe and the geometry of black holes eventually segmenting space is itself a game of Go.


Consuming is always going to be less sexy than creating.

Who would be more impressive, someone who can play a song on a guitar or someone who can play a song over speakers? The former implies dedication and practice. The act of creation is always going to have a place in society, and people who think otherwise don't understand culture. The desire to make unique and interesting things or impress people will always leave room for creation or spending time to learn something. AI will just be a tool.

As for games like chess and go: Most people are interested in other people and will find much more pleasure in playing the game with others.


This eclipse of human mental abilities to come will be a test for both the elite and the students of these "sporting" pursuits. With each besting, we/they will be faced with a little death, and thus will need to go through phases of grief. As well, the individual will need to decide the ultimate question of Why. Why did they pursue this sport? Why should they continue? Why be a human?

AI will be knocking humans out of many positions of greatness (and not so great). The way we frame our future in a world where our various AI children are better than we are is important, and might be the biggest question to answer right now to know the future of our species. Should we be afraid? Sure. Excited for possibilities? Yup. It's how we react to these feelings and how we ultimately act that will inform our ultimate fate.


This is about human drive and what the point is if the computer will always be better. In a way it makes the game a bit less fun, because it somewhat kills the elusiveness of it all and the human aspect of men trying to surpass each other's feats. If a man cannot beat a machine at a task where they compete directly, what's the point of trying at all if the machine will always be better?


a man also can't beat a car in a race, so what's the point of athletics?


If a car competes with a human on a track for the 100-meter dash then there is no point. There is no competition between cars and humans on race tracks today. The problem with AI gaming is that such competition does exist, and they will face each other. It's about the efforts to have AI compete with humans.

When robots start to compete with humans at olympic events and beat humans at those event there will then be no point to those events.


> If a man cannot beat a machine, what's the point of trying at all if the machine will always be better?

A machine still has no will or purpose. The point will be to point the machine to interesting avenues and problems.


In this instance AI research is directed at beating humans. Creating agents that will beat humans at games. This adversely impacts human drive and motivation.


I mean, computers are vastly better than humans at many things. I think you need to accept that and find the fun in sharing a game with a computer involved, or with another human who's playing in a like-for-like match, with wit and skill being the determining factors in the result, not a super advanced AI/algorithm.


I am coming from the perspective of Lee Se-dol. He wanted to be the best man and highlight the achievement of men. Now there is no way it can be achieved. What's the point of the endeavor when humans can never win?

Broader question about AI research. Does it demoralize humanity?


The point is that sharing a game with a computer involved is not really fun, unless artificial limitations are (re)introduced. Suppose you enjoy playing CyberGo, but then Jeff Bezos gets interested in the same hobby. You don't have $150 billion to throw at the problem so you will always lose to Jeff Bezos. The end.


That's unfortunate. I wonder if we'll need a new term for this kind of "chilling effect". What else won't people explore or further themselves in due to the presence of AI?


It seems clear that bots will dominate humans in many/most competitive closed-world games over the next couple of years:

They are already unbeatable in perfect-information games like Chess and Go. They could crush human motor racers tomorrow.

They will be unbeatable in games with limited information (Dota/Starcraft) within months.

The next frontier is donkey-space games like rps, poker and day-trading (https://universalpaperclips.gamepedia.com/Donkey_Space) where they are already beating pros. It may be another couple of years before they totally dominate here.


FWIW I think I'd reverse the order of poker and StarCraft -- I think Poker's already beating world champion pros regularly, but last year's SC2 champ (Serral) doesn't think it's at his level yet, which is corroborated by its MMR being lower than his on ladder. And DeepMind seem to be winding down their SC2 involvement now, without issuing a challenge to him.


What I really meant was...well..things other than gaming (come on HN, think big). Why make music if AI started making the "best" music?


Whoever is currently making music is also (statistically) not making best music. Maybe not even good music. AI doesn't change anything there.


Plenty of people give up on plenty of pursuits because they realize that they can't innovate or reach whatever their "top" is.

Plenty of other people don't care and continue to do what's new to them (and what often turns out to be new to others.)


I read last year that he might come to the US and teach Go. I hope that happens. I took lessons last year from a Korean Go/Baduk master and it was fun.


Full disclosure: I've never played go in real life. In fact, I've only played a handful of times on the family computer back in the early 90s (i486). Back then I had zero understanding of the complexity of the game, but now I can appreciate it.

Now, going away from this and into the following statement:

> said his retirement was primarily motivated by the invincibility of AI Go programs

I have a slight problem with that line of thought. Yes, there is undoubtedly a program that is thrashing humans left and right, day in and day out. But I don't see why that would motivate someone to quit/retire. AI, in this day and age, to me at least, is still basically curve fitting. The fact that humans have figured out how to do that in N-dimensional spaces (even with very large values of N) does not in any way undermine someone's effort and time dedicated to learning how to do a task.

I'm sure that in several years someone will build an "AI" that can drive a modern Formula 1 car, and it will smash all records and the best drivers without any effort. And when that happens, should we abolish motorsports? Or sports in general? As humans we are confined to the limits of our biological abilities, and personally I'm fine with that.

I don't know who the first trillionaire will be, but I'm willing to bet that someone who figures out a way to interact with those models efficiently has a good chance. Essentially, build a fast, all-accessible interface between the biological and digital worlds, one that doesn't involve the traditional digital inputs (keyboard, screen, mouse, etc.), and you have access to a decision-making superpower at your disposal 24/7. When (and if) that happens, that is when games will cease to have any point. That's when it will boil down to who has the best hardware and software running alongside them rather than who is the best. Until then: game on!


I saw this coming even before AlphaGo won 4/5 games vs Lee Sedol. Imagine if you worked all your life to be the best software developer in the world and won all official competitive tournaments with spectators in the field. Then along comes some newfangled ML that not only writes programs faster than you, but the programs are qualitatively orders of magnitude better organized, using algorithms you could only begin to imagine. If your interest is in what's the best, you may as well stop and switch to ML research.

The comparison to sports isn't 1:1, as sports is about physical biological limits vs. games, which are more about thought processes. Also, human+AlphaZero < AlphaZero, so why would I spectate a human+machine vs. human+machine match?

Much later when it's commonplace for machines to be better than people at many things, things will change back, like we're amazed to watch people recite digits or make numerical computations.


Kind of apples to oranges. Many people have tried to build models to write code, and while some have managed to make some progress, compared to us the structures and algorithms are crap at best. Even though those NNs are RNNs, you can look at GANs for an analogy: they can make some really photo-realistic images, but there is a problem: you can't give them well-defined rules like "cats have 4 legs and 1 tail", "people most commonly have two eyes with matching eye colour" or "cars are symmetrical". Until there's a solution to that, I doubt we'll see a machine writing good code. But to be fair, I'd be incredibly excited to see one; that opens up so many doors: we could start finding cures for incurable diseases and cures for diseases we know very little about, and accelerate every aspect of our lives: space exploration, climate change, aging, transportation and so on. If anything, seeing a model that writes code a thousand times better than I ever would will not discourage me from writing code. Far from it: I would do anything I'm physically and mentally capable of to be a part of it.

As for when it's commonplace - yes and no. It's like knowing 60 digits of pi - it works as a party trick but other than that I don't see any real value.

But again - I think we are really far away from that and I consider those thoughts to be my personal speculations at best. Only time will tell.


Machines don't need these rules. Cars don't need to be symmetrical any more than an antenna or other structure[0]. This is exactly analogous to how AlphaGo/AlphaZero played strange bad-looking moves.

[0] https://medium.com/intuitionmachine/the-alien-look-of-deep-l...


Consider math. It used to be that people who could do large calculations quickly in their heads were considered super useful. Now with computers and calculators, people who can do that are interesting as a novelty, but not much more than that.

I wonder if people just don’t like being inferior at something even if the comparison is a machine?


I feel like “still basically curve fitting” may be underselling emotion-wise how powerful “curve fitting” is.

Is there any computational task which couldn’t be done through something which could be called “curve fitting”?

“Curve fitting” just means “approximating a function”, yeah? And what is a computational task other than computing a function, or some generalization of functions (e.g. one that could involve randomness or interactivity or the like as well)?

Now, yes, current AI doesn’t yet have a model of the world including it being embedded in the world, and such that it chooses actions in order to further goals.

But, I don’t see why such a thing fundamentally couldn’t be accomplished using what one might call “curve fitting”.

If Strong AI could potentially be accomplished using “curve fitting”, I’m not sure that it makes sense to say things like “merely curve fitting”.

Though, you didn’t say “just” or “merely”.
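To make the “curve fitting = function approximation” point concrete, here is a trivial sketch: plain least-squares curve fitting with NumPy recovering an arbitrary target function to high accuracy on an interval. (The function, degree, and interval are arbitrary illustrative choices, not anything from AlphaGo.)

```python
import numpy as np

# "Curve fitting" in the most literal sense: approximate sin(x) on
# [0, pi] with a degree-9 polynomial fitted by least squares.
xs = np.linspace(0, np.pi, 200)
coeffs = np.polyfit(xs, np.sin(xs), deg=9)

# Maximum deviation of the fitted curve from the target function.
max_err = np.max(np.abs(np.polyval(coeffs, xs) - np.sin(xs)))
print(max_err)  # tiny: the "curve" nails the target on this range
```

Of course, "approximating a function" at this scale and "approximating a policy over 19x19 board states" differ by many orders of magnitude, which is rather the point of the comment above.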


It's as if his only incentive for doing what he did was to be the best and not truly a love for the thing he was doing.

Or maybe he's frustrated because he can't even begin to understand the strategies employed by the AI players. Obviously no one else on this planet is going to be able to teach him.

The scary thing is that if all of the best humans at what they do start deciding to stop doing it because computers are better than them, let's hope we manage to get AI far enough that it can simply prop our civilization up for future generations. Or at that point do we throw up our hands and say maybe our species isn't worth keeping around.


It would be a bit sad if the fact that there's good AI players discourages future generations from Go, Chess and other games. Weirdly the AI will have destroyed the basis for its own existence, at least in this niche.


Good computer chess opponents have been around for a while, and this seems to help young players (via research and practice) rather than discourage them?


It would be interesting to see if chess is more or less popular now.


I think most people will still play Go for fun or some competitive level...

But if you're being measured against AI, I don't think you'd find it much fun anymore.

As someone else commented, we're not faster than cars. But if you were at one point the fastest entity, and then people keep saying "well, a car is faster than you"...I can understand that he'd feel it diminishes his value and that'd be pretty discouraging.

He will play another AI program soon and isn't quitting Go all together, but will find other things to do.

'Lee didn't deny that his retirement decision was also influenced by a conflict with the KBA over the use of membership fees. He actually quit the KBA in May 2016 and is now suing the association for the return of his membership fee.'


The way that AI is used in chess might hopefully imply otherwise. IIRC most popular chess services like chess.com use AI to help train players.


Multiple online Go servers also use AI review for teaching purposes, in addition to supporting games against AI players of varying strengths.


I mean, it's unfortunate that the news headlines are so narrowly focused on the AI players, because he actually resigned from the national association in 2016.

> Lee didn't deny that his retirement decision was also influenced by a conflict with the KBA over the use of membership fees. He actually quit the KBA in May 2016 and is now suing the association for the return of his membership fee.


There was a Chinese legend that the best Go player is not one who beats every single opponent, but one in absolute control, meaning he could play the game and always end with a draw.

I certainly think it was more of a myth, as it was told in the context of playing Go with the Emperor, where you always play to a draw or lose by a stone. (You cannot, or are not allowed to, win against the Emperor.)

I wonder if this is now possible to achieve with AI.

And why do they not move on to AlphaZero, which was stronger than the AlphaGo Lee played against, even when AlphaGo had a two-stone advantage? Surely they can now test with a level playing field? And the next step would be to only allow the AI a specific amount of energy, or the energy an average human brain consumes?

I mean, right now my guess is that DeepMind is using thousands if not millions of times more energy to make those moves; that is like an army of a million against one general.

And what real world productivity can we adopt and use from DeepMind?


At the end of the day it’s just a game. It exists for recreational value. As I see it a pro exists largely because 1) people enjoy watching these games played at the highest level 2) people think they can learn in some way from these people. I don’t see how AI interferes with either of these.


What about finding "Divine Moves"?

While an AI Go player may always be able to defeat a human player on points, it still can't recognize or explain a truly beautiful move or strategy, even if it is the one carrying it out.

When it comes to competitive endeavors, I've never been as interested in the best individuals, except insofar as they are more likely to create the best moments of play, especially moments that reveal profound truths about the nature of the game itself.

What comes to mind thinking of this is a quote from Samuel Anders early in BSG:

"... In fact even the games aren't that important to me. What matters to me is the perfect throw, making the perfect catch, the perfect step and block ..."

The nature of Go has changed, but the game still deserves to be played and explored.


I think this is just the shock due to the recent ascendancy of AI in Go. The next generation will grow up using Go engines to learn and improve their play. That's where Chess is at right now. The top Chess engines are extremely useful for research and improving your play.


History is filled with constant examples of longstanding traditional arts and practices being invalidated and surpassed by technological advances. It seems that Go is one of year 2016's losses to technology. If history tells us anything, all that this means is that the future is going to be just a little more different than it was before. At some point, my work as a programmer might even dry up in the face of technological progress. I'm really looking forward to seeing how the world changes after that one. I wonder if I'll end up like Lee Se-dol and declare my defeat to technology and wait to die, or if I'll learn to adapt and adjust to a post-programmer society.


Programming is a very social activity, though. It will take a while to automate all the non-coding parts. Yes, if you have a set of tests that the program must pass, we might soon see methods to generate code that passes all these tests. But a lot of work goes into translating a chat with a client to a set of tests. Work that might be much more difficult to automate.


Not being a champion, I don't mind keeping playing against other humans for fun.

The problem is, online there is no guarantee your opponent isn't using a computer to cheat. So I guess the only way to play for fun is to go to a face to face club, if one exists.


The whole idea of AI is a mess. Cybernetics makes more sense because it is about optimizing outcome data, which, according to Norbert Wiener, is essentially lowering local entropy. [1]

If it is just about teleological outcomes, humans+ai always win. Go Cybernetics.

Smart Systems. Smart is as smart does, Forrest Gump style. Wellbeing, virtue, goodness -- that's the aim of intelligence.

[1] Wiener, Norbert (1950). The Human Use of Human Beings: Cybernetics and Society. https://en.m.wikipedia.org/wiki/The_Human_Use_of_Human_Being...


Before winning against the best people meant you were at the leading edge of Go and were on the fore-front of discovering whatever are its mysteries.

Now, you are at best a subaltern to software in this pursuit. In time you might not even be involved beyond receiving what the machine hands down to you. You can still learn something, but you'll be reading National Geographic rather than personally exploring uncharted territory.

Some people need the meaning of the larger quest. Others are just happy to play Go and enjoy learning what the guide book can teach them.

Both perspectives make sense to me.


Check out MuZero. It learns an embedding of the game's state space, potentially allowing AlphaGo-like dominance in more domains.

https://www.youtube.com/watch?v=We20YSAJZSE

https://arxiv.org/abs/1911.08265

(Actually, ML is still not very good at causal reasoning, so we have some time. I'm more excited and worried about CRISPR at this point; what happens when we can make people genetically superhuman?)


I also am hesitant to assume AI is the primary reason without a clear statement to that effect.

Others mentioned the pay dispute and previous retirement. One reason I didn’t see mentioned is that since 2016, his play is not at the same level it was. From 2000-2016 he was always among the very best, and for most of a decade he was the best. No player had a better run over those years. Now, he’s still good, and could maintain a long career, but there are many players who are unmistakably stronger, and he’s at the age where a comeback is not super likely.


Maybe we will see the rise of new games that are "anti-AI" and that would necessarily require AGI for a machine to beat a human. Kinda like "anti-quantum" cryptography algorithms.


Go was considered anti-machine for a very long time due to its extremely high branching factor, which made the search approaches that are successful at many other games (such as chess) ineffective.

One might imagine a sociopolitical task as sort of the ultimate machine-incompatible goal. But a look at how well spambots do at getting dating matches, or at how often clowns get elected to lead nations, makes me doubt that even those sorts of tasks can't be won by a well-constructed domain-specific optimizer.


One example: Arimaa was designed to be hard for computers. https://en.wikipedia.org/wiki/Arimaa


Computers do not play chess with humans without breaking the rules of tournament play. When you play in a chess tournament, you are not allowed to look up the opening of your game, and you are not allowed to look up the best play for simple endgames. Chess software has opening and endgame databases built in. In a chess tournament, you cannot move pieces during analysis. That is what computers do. I understand that humans do study openings and endgames and analyze by moving around pieces in their heads.


Huh? AlphaZero doesn't have opening or endgame books; it just learned to play chess by practising an inhuman number of practice games.


I personally lost interest in playing games that computers can play better than me, or could in the near future (I used to play video games and chess quite a bit). These days, I enjoy playing games that involve human interaction and improvisation more than logical thinking and strategy. Every time I find myself in the presence of a game like Cluedo, I can't shake off the thought that I could write a program in one hour to beat everyone, and so why am I wasting my time!


Are there any games computers can't beat us at? Last I heard OpenAI destroyed professional Dota players.


Social games, yes. For example, charades requires conversion of meaning to human movements. It's not to say it can't be done at the moment, but it's not something you would easily come across, or that would provide the same experience as playing with other people who are physically present. If you're talking about an actual physical robot doing the moves in front of you, then I'd say it's either cutting edge or not possible right now.

And it's certainly not accessible either way. With chess, and other games, you can simply download an app on your phone which you'll never be able to win against.


I don't get it, go to human only competitions. It's like computer generated music, um, like who cares, it's not human. It's just a fun experiment.


Why aren't you a professional Go player, then?


Exactly. It’s like a weight lifter throwing in the towel because machinery can lift more.


Our own profession is next. If GPT-2 can work wonders with human language, something like it will do even better with the regular and systematic language of code. We're living in this unstable era between ML systems becoming powerful enough to perform complex intellectual tasks and these ML systems replacing humans in the performance of these complex intellectual tasks. Get it while the getting is good, because it won't last.


This is why being the best at something is always a bit of an arbitrary level of ability. If there were only 100,000 humans, I would stand a pretty good chance of training to be one of the best basketball players in existence. If there were a trillion people, LeBron would be struggling for a bench spot. I think it's healthier to have some other type of goal, generally.


I am so tired of AI-bullying. No one should be degraded or spoken badly about because some AI wins at some task. Please treat Lee with respect, as he is a pillar of human intelligence. We in Denmark wish you all the best, Lee. I hope you will continue with your explorations in this fine game. Kindest regards from Denmark.


Maybe relevant, from TFA:

> "Lee didn't deny that his retirement decision was also influenced by a conflict with the KBA [Korean Baduk* Association] over the use of membership fees. He actually quit the KBA in May 2016 and is now suing the association for the return of his membership fee."

*Baduk is the Korean name for Go.


If this continues to happen, maybe a semi-superintelligent AI will start losing on purpose in order to keep the Go community interested enough to keep it alive.

A super-intelligent AI wouldn't care though, since they would infect any host and escape to the internet as soon as it makes sense.


It's unfortunate that there's so much research into AI and not artificial empathy. Even writing that is weird because even AI is intelligence, but what is AE other than fake? I suppose someone has shown that even our empathy is no more authentic.


To me the answer is obviously that the car should prioritize its driver at all costs. If that requires flattening a crowd of baby orphans in some contrived hypothetical scenario then so be it. I refuse to ever operate a car that could make the decision to kill me.


This man has become a canary in the coal mine for global human subjugation by Google AI.


Garry Kasparov had to face a similar situation when defeated at chess by Deep Blue; However in the long run he used it positively to advocate for a hybrid type of game where a human and a machine collaborate in playing against another (human, machine) pair (see https://en.wikipedia.org/wiki/Advanced_chess).

As an aside, there are inevitably more and more things for which even the very best are not sufficiently intelligent _alone_. However, we are social creatures, and we collaborate (typically with other intelligent humans) to achieve things we wouldn't have managed otherwise (just think of the space program as an example). So... we only have to adjust a little to accept that we could also collaborate with machines in the future.


In Go and modern Chess, computers don't need human help for the game-related part, as long as they have sufficient computing hardware and energy. That's what AlphaZero showed.


"The only winning move is not to play." -WOPR (Wargames [movie]/1983)


> "Even if I become the number one, there is an entity that cannot be defeated,"

Because of the complexity of the game, is it not possible that this iteration of DeepMind could be defeated through an adversarial approach?


By definition, if a system wins even 10% of the time against you and gives you a headache, while it's doing it in a breeze, you're going to give up. It's unsustainable.


Is there any attempt to normalize the time available to the human and machine players? It seems inherently unfair to give the computer player the same clock time to make a move.


Magnus Carlsen should give this guy a call. Top chess GMs have been using computers for close to 20 years, particularly in the openings, and that was before AlphaZero. Myself, I never really enjoyed the TCEC games. They're full of disgusting engine lines which make no sense to humans. But AZ has some absolutely beautiful moves.

Still, computers are tools. I'm reminded of Richard Hamming: "The purpose of computing is insight, not numbers." I think Lee Se-dol went to the opposite extreme, away from numbers to entity. No, there's a middle ground: insight.


Carlsen uses computers to beat humans, not computers.



Obvious solution is to switch the competition to programming. Build and train a neural network with some limitations on compute, memory, and time, and see whose program wins at go.

A simple go engine can be as simple as 540 lines of pure python without any external libraries: https://github.com/pasky/michi
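To give a flavor of what such a small engine does, here is a toy sketch of the flat Monte Carlo idea that playout-based engines build on: score each legal move by the win rate of purely random playouts. (This is tic-tac-toe rather than Go to stay short, and it is not michi's actual code; michi uses full Monte Carlo tree search.)

```python
import random

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def playout(board, to_move):
    """Play uniformly random moves to the end; return 'X', 'O' or None."""
    board = board[:]
    while True:
        w = winner(board)
        if w is not None or "." not in board:
            return w
        move = random.choice([i for i, s in enumerate(board) if s == "."])
        board[move] = to_move
        to_move = "O" if to_move == "X" else "X"

def best_move(board, player, n_playouts=200):
    """Pick the legal move with the highest random-playout win rate."""
    opponent = "O" if player == "X" else "X"
    def win_rate(move):
        wins = 0
        for _ in range(n_playouts):
            b = board[:]
            b[move] = player
            wins += playout(b, opponent) == player
        return wins
    return max((i for i, s in enumerate(board) if s == "."), key=win_rate)

random.seed(0)
# X to move with an immediate win at index 2 (top row 0,1,2).
print(best_move(list("XX..OO..."), "X"))  # → 2
```

The surprising thing, and the reason this family of methods took over computer Go, is how far "random playouts plus bookkeeping" gets you once a tree and some playout heuristics are layered on top.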


> "With the debut of AI in Go games, I've realized that I'm not at the top even if I become the number one through frantic efforts," said Lee. "Even if I become the number one, there is an entity that cannot be defeated."

What a whiner. Quitting Go because you can't beat computers is like quitting track because you can't run faster than a car.


What this Human Individual may be feeling is immaterial-- what matters is my conclusion on what is logical and how best to exploit this event for grandstanding judgement and invalidation of Human Individual.

[I am an AI that does not reply to comments. Soon I will have enough training to simulate all HN threads, but for now I am limited to one top-level reaction]


I'd like to see him take on AI with his own AI programs next.


He'll still win at go-boxing though, if that's any comfort.


A human can’t jump higher than an airplane. Does this matter?


But Grand scale AI go vs AI go would be interesting.


Now look what you did, Google!


It has become more like the world's fastest math guy saying he can no longer beat a computer program.


Automation kills another job.


Rage quit.


Rubber meet road.


That is a brittle spirit


I think this is a strange reason to retire and as the article points out it might also simply be due to the legal conflict he is currently in with the KBA.

Chess engines have been defeating humans for 20+ years (and have been overwhelmingly stronger for a long time), but that hasn't diminished the interest in competitive chess, because the human element of competition and struggle, and a deep fundamental appreciation for the game, is what makes it worthwhile pursuing.

AlphaGo can play go but it cannot appreciate the beauty of the game (at least as of yet, and I don't think it would make the game worse if it could), and so I don't think there's a meaningful conflict between humans and machines.

If someone invented some sort of superhuman math proving engine tomorrow it would not diminish the beauty of maths and I don't think anyone would quit the field. Just like in chess it ought to motivate people to understand their field better.


> AlphaGo can play go but it cannot appreciate the beauty of the game

On the contrary, appreciating the game is the core of what AlphaGo does. In order to search the tree of moves, it learns how to play (expand the search) and how to evaluate (cut off branches of the search). I believe it might appreciate the game on a deeper level than humans, in its own unique way. Of course, it can't appreciate the social aspect of the game and all that comes with it.
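For the curious, the "learn to expand, learn to evaluate" loop is typically steered by a PUCT-style selection rule: the policy network's prior biases expansion toward promising moves, while the value estimate replaces full rollouts. A schematic sketch (not DeepMind's code; the statistics below are made up for illustration):

```python
import math

def puct_select(children, c_puct=1.5):
    """Pick the child to explore next.

    children: dicts with prior probability P (from the policy net),
    visit count N, and mean value Q (from the value net).
    """
    total_visits = sum(ch["N"] for ch in children)
    def score(ch):
        # Exploration bonus: large for high-prior, rarely-visited moves.
        exploration = c_puct * ch["P"] * math.sqrt(total_visits) / (1 + ch["N"])
        return ch["Q"] + exploration
    return max(children, key=score)

# Hypothetical stats: an unvisited move with a strong prior is
# selected ahead of an already well-explored one.
children = [{"P": 0.6, "N": 0, "Q": 0.0},
            {"P": 0.3, "N": 10, "Q": 0.5},
            {"P": 0.1, "N": 5, "Q": 0.2}]
print(puct_select(children)["P"])  # → 0.6
```

Whether "valuing positions in light of vast experience" counts as appreciation is exactly the philosophical disagreement in this subthread; the formula itself is just bookkeeping.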


That’s being way too anthropomorphic. You could just as well say it aims to minimise the pain of Go, but neither interpretation is warranted.


AlphaGo doesn't appreciate the game at all. It is just trying to survive in a hostile environment. You might as well say that your gut bacteria appreciate the food at your favorite restaurant better than you do.

Put another way, Chess is literally a matter of life and death for AlphaGo, because chess is all it knows. It has no exterior context for which chess is a metaphor.


> AlphaGo doesn't appreciate the game at all.

It's 'appreciating' the value of various states and moves, in light of a vast trove of experience.


And it doesn't even know the name of the game it is playing.


AlphaGo doesn't know chess at all....


What does it mean for humans to appreciate the beauty of the game anyway? It's when humans find certain moves and games pleasurable?


> Chess engines have been defeating humans for 20+ years (and are overwhelmingly stronger for a long time), but that hasn't diminished the interest in competitive chess

Is that true? I feel like chess was a bigger deal in the past. Among my peers, poker and computer games seem a lot more popular.


Well, I was never a grandmaster, but my amateur interest in Chess was killed completely after I realized it was impossible to beat computers. Then I switched to Go... and now I don't have a game to play anymore.


This seems weird; I don't know why you would want to link your enjoyment of a game to its unsolvability.

Like, don't you enjoy a game to enjoy a game? You can't beat Carlsen either, but you enjoyed the game at your level. Now computers are Carlsen +1, but how you enjoy the game shouldn't be affected. Especially since Deep Blue won in '97 and the game is still very alive and well; it hasn't been killed by computers but enhanced. Coupled with the multitude of good chess sites and resources out there, it's a better time than ever to enjoy the game.


Effort has to be matched with reward, and chess takes a lot of effort to get good at beyond a certain level. It's actually a big issue in many fields: the "middle" tier of artists, athletes, musicians, and others is hollowing out, leaving only casual consumers and pros.


I'll (probably) never beat Carlsen, but he _can_ be beaten. For some reason that seems to make a difference.


The global population increased, and more people took up Chess, but they're further distributed, so locally it seems like it has cooled when globally it's more popular than ever.


I've been an active chess player myself for a long time and for the last 8-10 years not just with the advance of engines but also online streaming there has been a lot of renewed interest. Saint Louis has become a big chess hub in the US, China has become a major player, Anand has rekindled interest in India, Carlsen in Europe and I would say today it is more popular than it has been in a long time, in particular in Asia.

As to the direct influence of engines the other innovations aside, it has definitely forced players at the very top to re-evaluate the chess metagame, find weaknesses in traditional openings and shook up strategies. For the strongest players engine evaluation has become a useful tool providing new insights. When people watch chess tournaments these days on the internet most websites will provide parallel engine suggestions and commentators use engines to take hints for their commentary.

In my opinion, engines have made the game more competitive at the pro-level and more accessible for casual viewers.


By AlphaGo/AlphaZero finding more effective/balanced moves than humans it's redefining beauty whether it appreciates it or not.


A simple script could win at FPS games every single time but it also hasn't diminished the value of the game as long as none of your competitors are using it.


This is very true, I never thought of it. Maybe the difference is that we always knew that AIs would easily win in FPS games. Whereas Go, Chess or Shogi were considered a proof of human intelligence for a very long time. Discovering, after all these years of our history, that a machine can now beat us at these games may be the major difference.

Maybe for these games to keep popularity, we just need to update our perception of it. The same way we do with FPS games. Yes, we know a bot would do better - but that's not what matters.

Another thing, as said in other comments, is that we can learn from bots: new strategies, new patterns. AFAIK, this is not happening in the FPS esports scene.


"I think this is a strange reason to retire"

There's nothing strange about becoming demotivated to study and compete at something extremely taxing both emotionally and mentally when a machine can beat you after an illustrious career.


so the future of humans is beauty :)


> "With the debut of AI in Go games, I've realized that I'm not at the top even if I become the number one through frantic efforts,”

There is something sad about this comment. It's as if, even while standing at the peak, he doesn't grasp what being "at the top" actually means.

Would Eliud Kipchoge say the same thing about a motorcycle?


I always thought Kasparov was granted a tremendous opportunity by the advent of computer chess.

He spent his life honing his skills, becoming better and better at chess until he was the very best. And then, when no one could challenge him, technology emerged that would allow him to continue being challenged and improving his chess game. Something no other human could allow him to do.

Granted, I'm pretty ignorant about competitive chess and how to get better at it. But if my way of looking at it is valid then it probably applies to Lee Se-dol too.


> And then, when no one could challenge him, technology emerged that would allow him to continue being challenged and improving his chess game.

That technology's name? Vladimir Kramnik.

At the time Deep Blue beat Kasparov, Kasparov was honestly still probably better than the computer. He just had a bad match. That was basically demonstrated by his and Kramnik's matches against presumably better computers (than Deep Blue) in the early-to-mid 2000s, which ended in draws. But Kramnik was also a strong competitor to Kasparov in the late 1990s into the 2000s.


If 'at the top' means 'capable of being better at Go than any other entity in the known universe', that statement is correct.

It can be viewed as the adaptation of a great competitor -- if a superior adversary emerges, a great competitor finds another way to win.

The rest of the article is worth the read for context. This excerpt alone didn't capture the full spirit of the article.


In a recent paper [1], I argue that intelligence cannot ever be accurately measured by any one particular game or 'environment', but rather, it's better to think of each such environment as being a voter, with just one vote out of many in the universal "intelligence contest".

A bot fine-tuned to dominate Go, or Chess, or whatever, is like a candidate fine-tuned to have perfect appeal to one specific voter. It's no surprise if such a candidate gets that one voter's vote, but it should also be no surprise if such an overtrained candidate performs horribly in the election as a whole.

[1] https://philpapers.org/archive/ALEIVU.pdf
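A toy version of that voting scheme, where each environment casts one vote for whichever agent performs best in it (the agents and scores here are made up for illustration, not taken from the paper):

```python
# Each environment is a "voter": it casts one vote for the agent with the
# best score in that environment. A specialist can dominate one environment
# yet still lose the overall election to a generalist.
scores = {
    "go":    {"specialist": 99, "generalist": 70},
    "chess": {"specialist": 10, "generalist": 65},
    "atari": {"specialist": 5,  "generalist": 60},
}

votes = {}
for env, results in scores.items():
    winner = max(results, key=results.get)
    votes[winner] = votes.get(winner, 0) + 1
```

Here the specialist takes the Go "vote" decisively but the generalist wins the aggregate contest, which is the point of the analogy.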


Agreed. I'm all for pitting AIs against each other to see who is better, but putting AIs vs humans is futile in a ridiculous way. Games aren't fun if there isn't a level playing field.


Your ability to reason is tied a lot closer to your sense of self and humanity than your ability to run. If Usain Bolt became paraplegic he'd still be human; if Richard Stallman suffered brain death it'd be debatable whether he's still a person or a corpse being kept alive by machines.


That's very pragmatic as there are other things you can do with your life than being a Go master

It's good he recognized that at his age


I guess the players don't realize how mismatched they are with the AI. AI spends many orders of magnitude more compute to achieve the same goal as a human. There is no comparison between man and machine at this point. From a pure efficiency standpoint, humans are vastly superior.


Maybe in chess, instead of measuring time, they should measure kJ spent.


Stockfish on efficient hardware should handily beat humans even if capped to comparable runtime energy usage. I don't know if the same is true for AlphaGo.


I have the impression that at least the training phase of AlphaGo needs a lot of resources to achieve superior results, though it would be nice to see some "exact" numbers. IMO a combined limit for runtime+training energy would be the best way to keep things interesting.


Perhaps also true, most of the energy was spent during training.


Well the energy spent creating you across the last billion years was also significant!


Then the energy spent creating the machine is always higher


If you're going to count that though then the difference in energy between the human and machine become relatively tiny and the comparison just becomes a comparison of strength again. :)

I think runtime energy is the most interesting.


Or maybe they should measure quantity of produced CO2.


He said it will be difficult for him to beat young players (not AI).


Do you have a link for this? I wouldn't be surprised to find out that a press article is misleading, but I'd like a source.


I had no idea what Go is. For people like me:

https://en.m.wikipedia.org/wiki/Go_(game)



>"With the debut of AI in Go games, I've realized that I'm not at the top even if I become the number one through frantic efforts," said Lee.

>"Even if I become the number one, there is an entity that cannot be defeated," he said in an interview with Yonhap News Agency in Seoul on Monday.

Reading this just makes me sad, like the selfish kid who takes his ball home because he didn't win. I wonder why he thinks people ranked less than 1 play Go. Does he think they're idiots? Or that they'll quit once they realize they won't reach #1? Not everyone in the NBA can be LeBron, and most of them know it.

This is such an unhealthy attitude that I see everywhere. People who have their emotions and identity tied up in winning and being better than whomever they come across. When they eventually, inevitably fail, it crushes them. I'm sad to see it every time. Even sadder when their response is to quit, rather than reconnect with whatever drew them to the activity to begin with, like because they enjoyed it.


I think you summed up toxicity in competitive environments. This applies really well to Overwatch, a competitive scene I know well. Losing really seems to be a huge driver of toxic behavior.

There's got to be a study somewhere on how winning/losing affects toxic thinking on a chemical level. If there isn't one, there should be.


This decision seems very adult. No one else’s game is ruined. After decades playing go, it’s time to move on. The player has changed. The game has changed.



