
Your pocket calculator is superhuman at arithmetic. AlphaGo is superhuman at Go. It's not a big deal.


Superhuman has a very specific definition in the field of game-playing algorithms: it is when the algorithm can always beat all humans. AlphaGo winning 5-0 against the number 5 ranked human (Lee) would give only a small indication that it is superhuman. Regarding streaks, evenly matched humans can go 5-0 an expected (0.5)^5, or 3.125 percent, of the time, so a sweep is not particularly rare. If it loses even one game then it is not yet superhuman.
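
As a quick sanity check (a back-of-the-envelope sketch in Python, treating each game as an independent coin flip and ignoring draws):

    # Chance that a specific player sweeps a 5-game series
    # between two evenly matched opponents:
    p_specific_sweep = 0.5 ** 5       # 0.03125, about 3.1%
    # Chance that either side sweeps:
    p_any_sweep = 2 * 0.5 ** 5        # 0.0625, about 6.3%
    print(p_specific_sweep, p_any_sweep)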

If top humans get beaten 5-0 with significant handicaps, then it is likely AlphaGo is superhuman. However, it is expensive to run AlphaGo, so it is unlikely that we will know its true strength for a while (until more challenges are played) or until hardware catches up.

Update: typos and clarifications


The term ("superhuman") is used very differently in other communities though, that don't have such a clear metric of "beating humans" as in traditional game-playing. Which means it's about time we clarify what is meant by it, especially as various parties that have commercial interests start throwing it around carelessly (there was an example on HN a while ago).


The 2x4 in my floor has superhuman strength by that definition. Kind of a pointless term if it just means anything humans can't do.


That's literally what it means though.


The way in which it is superhuman matters. The calculator uses simple mechanical algorithms. AlphaGo uses a completely novel approach to deep learning that can likely be applied to many other systems/problems.


As long as those systems/problems include a grid based problem space where the goal is to successfully place stones restricted by a limited set of rules.

OK, flippancy aside. There are two problems that make techniques like this single-domain: network design and network training.

The design uses multiple networks for different goals: board evaluation (which board positions look good) and policy (which moves to focus on). Those two goals, evaluation and policy, are very specific to Go, just as classification layers are specific to vision and LSTMs are specific to sequence learning.
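
To make the two-goal design concrete, here is a rough, hypothetical PyTorch sketch of what such a setup can look like, collapsed into a single network with a policy head and a value head for brevity (AlphaGo itself used separate networks, and all layer sizes and names below are made up for illustration):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TwoHeadedNet(nn.Module):
        """Toy policy/value network over a 19x19 board; sizes are arbitrary."""
        def __init__(self, channels=32):
            super().__init__()
            self.conv1 = nn.Conv2d(1, channels, kernel_size=3, padding=1)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            # Policy head: a score for each of the 361 points on the board.
            self.policy_head = nn.Linear(channels * 19 * 19, 19 * 19)
            # Value head: a single scalar estimate of how good the position is.
            self.value_head = nn.Linear(channels * 19 * 19, 1)

        def forward(self, board):                 # board: (batch, 1, 19, 19)
            x = F.relu(self.conv1(board))
            x = F.relu(self.conv2(x))
            x = x.flatten(start_dim=1)
            policy = F.log_softmax(self.policy_head(x), dim=1)  # which moves to focus on
            value = torch.tanh(self.value_head(x))              # how good the board looks
            return policy, value

    net = TwoHeadedNet()
    policy, value = net(torch.zeros(1, 1, 19, 19))

The point is just that both outputs are shaped around Go: a move distribution over board points and a win estimate for a position. Move to another domain and both heads (and the training behind them) have to be redesigned.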

Network training is obviously hugely resource-intensive -- and each significantly complex problem would demand a similar level of resources.

The variety of problems DNNs have been able to do well on is amazing. However, the problems of network design and efficient training are significant barriers to generalization.

When network design can be addressed algorithmically, I think we may have an AGI. However, that is a significant problem that automatically adds another layer of computational complexity, so it is not on the immediate horizon and may be 50+ years down the road.


My pocket calculator is faster than a human and has better memory. I don't know that this means the same thing as "superhuman in arithmetic". I can concede that it means superhuman in speed and memory, but, arithmetic? I don't think so. What it really does is move bits around registers. We are the ones interpreting those as numbers and the results of arithmetic operations.

AlphaGo is rather different in that it actually has a representation of the game of Go and it knows how to play. I don't doubt at all that it's intelligent, in the restricted domain it operates in. But I do doubt that it's possible for an intelligence built by humans to be "superhuman" and I don't see how your one-liner addresses that.


Your calculator does have a representation of arithmetic too. It's those bits it moves around in registers, which are very much isomorphic to the relevant arithmetic.
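
For example (a toy Python illustration, not a claim about how any particular calculator is wired), addition can be carried out purely as bit manipulation, and every bit-level step corresponds exactly to a step of the arithmetic it represents:

    def add(a, b):
        """Add two non-negative integers using only bitwise operations."""
        while b != 0:
            carry = a & b      # positions where both operands have a 1 bit
            a = a ^ b          # sum of the bits, ignoring carries
            b = carry << 1     # carries shift one position to the left
        return a

    assert add(19, 23) == 42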

Why would an intelligence built by humans not be able to be superhuman? The generally accepted definition seems to be "having better than human performance" in which case it seems we've done it many times (like with calculators).


>> The generally accepted definition seems to be "having better than human performance"

I don't think there's a generally accepted definition and I don't agree that performance on its own is a good measure. Humans are certainly not as good at mechanical tasks as machines are -duh. But how can you call "superhuman" something that doesn't even know what it's doing, even as it's doing it faster and more accurately than us?

Take arithmetic again. We know that cats can't do arithmetic, because they don't understand numbers, so it's safe to say humans have super-feline arithmetic ability. But then, how is a pocket calculator superhuman, if it doesn't know what numbers are for any more than a cat does? There's something missing from the definition and therefore from the measurement of the task.

I don't claim to have this missing something, mind you.

>> Why would an intelligence built by humans not be able to be superhuman?

Ah. Apologies, I got carried away a bit there. I meant to discuss how I doubt we can create superhuman intelligence using machine learning specifically. My thinking goes like this: we train machine learning algorithms using examples; to train an algorithm to exhibit superhuman intelligence, we'd need examples of superhuman intelligence; we can't produce such examples because our intelligence is merely human; therefore we can't train a superhuman intelligence.

I also doubt that we can create a superhuman intelligence in any other way, at least intentionally, or that we would be able to recognise one if we created it by chance, but I'm not prepared to argue this. Again, sorry about that.


>> Your calculator does have a representation of arithmetic too. It's those bits it moves around in registers, which are very much isomorphic to the relevant arithmetic.

Hm. Strictly speaking I believe my pocket calculator has an FPGA, a general-purpose architecture that in my calculator happens to be programmed for arithmetic, specifically. So I think it's accurate for me to say that, although the calculator has a program and that program certainly is a representation of arithmetic, I have to provide the interpretation of the program and reify the representation as arithmetic.

In other words, the program is a representation of arithmetic to me, not to the calculator. The calculator might as well be programmed to randomly beep, and it wouldn't have any way to know the difference.

(But that'd be a cruel thing to do to the poor calculator).


Yup, Google now has two teams publishing the very best ML papers alongside FAIR, Toronto, Montreal and some other places. The teams are DeepMind and Google Brain.


Your idea is called generative adversarial networks (GANs), and it works quite well (http://arxiv.org/abs/1406.2661), although it's not useful in every case.
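
For anyone curious, here is a minimal, hypothetical sketch of the adversarial setup in PyTorch; the data, shapes and layer sizes are made up, and the actual paper trains image generators rather than this toy 2-D example:

    import torch
    import torch.nn as nn

    # Generator maps noise to 2-D points; discriminator scores "realness".
    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
    D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
    bce = nn.BCEWithLogitsLoss()
    opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

    real = torch.randn(64, 2) + 3.0          # stand-in "real" data
    noise = torch.randn(64, 8)

    # Discriminator step: push real samples toward 1, generated toward 0.
    fake = G(noise).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator step: try to make the discriminator label its samples as real.
    g_loss = bce(D(G(noise)), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

The two losses pull against each other, which is the "adversarial" part.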


Academia is extremely prototype-oriented, so it might even be worse than industry in this regard, FYI.


I have some experience in academia and I mostly see the extremes: either blatant wholesale copy/paste as much as possible, or an extreme "not invented here" attitude, refusal to use existing solutions, a sentiment that everyone else's code is useless crap, and a lot of reinvented wheels.


Biological analogies are too often misleading and confusing when talking about deep learning[1]. We currently have very little knowledge of how the brain works, and most analogies are only wild assumptions. The ones contained in this article are blunt and based on nothing but the author's feelings. Please read with care.

[1]: http://spectrum.ieee.org/robotics/artificial-intelligence/ma...


The author's "feelings" are useful when they were one of the authors of AlexNet[1]. A 10% improvement on ImageNet[2] makes one think he might know a little about the subject.

[1] http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf

[2] http://www.image-net.org/challenges/LSVRC/2012/results.html (Look for SuperVision)


Well, I'm talking specifically about the biology analogies, and no, being good at ML doesn't mean you know anything about the brain.


If I'm not mistaken, Prezi is Hungarian and very widely used.


It's a type declaration saying that the & operator takes two Pictures as arguments and returns a Picture.

So, if "a" and "b" are pictures "a & b" is typechecking.

To be more precise, if you account for currying, it says that the & operator takes a Picture and returns a Picture -> Picture function. This function can then be applied to a Picture to yield a Picture. It's as I explained above but with an intermediate step.
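
If it helps, here is a rough Python analogue of that curried reading; Picture and the combining rule are placeholders invented for illustration:

    from typing import Callable

    class Picture:
        def __init__(self, name: str):
            self.name = name

    # Analogue of an operator with type Picture -> Picture -> Picture,
    # read as Picture -> (Picture -> Picture).
    def combine(a: Picture) -> Callable[[Picture], Picture]:
        def with_b(b: Picture) -> Picture:
            return Picture(f"({a.name} & {b.name})")   # placeholder combining rule
        return with_b

    a, b = Picture("a"), Picture("b")
    step = combine(a)      # a Picture -> Picture function: the intermediate step
    result = step(b)       # a Picture, the analogue of "a & b"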


You mean like the people who wrote HN, the site you are reading, right?


Well, I come from a CS master's at the University of Nantes, and I don't feel _at all_ like I have a lower level than if I had gone to one of those schools. It's clearly the other way around.


I would hope so, since Epitech/Epita is bac+3 whereas a CS master's is bac+5. Although I doubt many universities would reach those schools' level even after a master's.


SUPINFO / EPITECH / EPITA are bac+5; you make no sense.


Oh right, I've never met anyone doing a master's in CS, though, so I don't know the level. I was talking about bachelor's degrees.


The CS master's at Nantes is a really high-level diploma compared to some other universities. I will get mine from the same university this year (hopefully).


Please don't use caps meaninglessly. This blog is awfully hard to read.

