Hacker News

Sure, this observation is not definitive proof; it's more along evidential lines.

There is also a class of deciders that can each decide an infinitely large class of problems which cannot be decided computationally, yet are not complete. I'd say humans most likely fall into this class, since they far exceed anything we can do with the best algorithms we've dreamed up, yet also cannot solve everything.

Of course, you can always say there is a yet undiscovered algorithm to fill the gap between state of the art and human performance, but at some point that hypothesis is less well supported by the evidence than the uncomputable mind hypothesis, and I'd say we are well past that point.




Humans exceed our current computers in the same way that a large asteroid impact exceeds our current nuclear weaponry. It's a matter of scale, not of kind.

If we track the capability of computing machines from binary adders through old-school chess AI, expert systems, and robotics, to the neural networks playing Go, chess, Starcraft, and DOTA, we see an ever-expanding sphere of ability and influence - often achieving what humans had predicted was impossible for computers.

> Of course, you can always say there is a yet undiscovered algorithm to fill the gap between state of the art and human performance

This implies a kind of asymptotic approach toward human intelligence, as if it's something we keep trying to approach but will never reach.

But beyond "filling the gap" I believe computers have the potential to shoot past us in ability. And if they do, I think it's going to be a pretty shocking experience, because I think the rate of increase in intelligence at that point is going to be bigger than we expect - there's no reason to believe the exact level of human intelligence is a special border, objectively speaking.

To claim that the evidence so far supports the uncomputable mind hypothesis seems to me like claiming, after we had managed to build tiny motors that kept getting bigger and better each year, that we would never be able to beat an elephant in strength because no motor up to that point had yet been able to do so. Yes, our metaphorical motors are still tiny; yes, it takes other pieces of engineering beyond motors to actually push down a tree; and indeed it's possible we will never build a big enough machine. But to take each larger motor as somehow being evidence AGAINST us ever getting to that point is a strange viewpoint.


I don't see why you believe this. From my perspective, all the recent gains in AI abilities are purely due to faster hardware. But, due to the combinatorics of search spaces, exponential speed increases only get you linear AI improvements. So, we'll rapidly hit peak AI in the near future, falling far short of even emulating human abilities.
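A rough sketch of the combinatorial point (with assumed, illustrative numbers, not a claim about any actual engine): in a game tree with branching factor b, searching to depth d visits on the order of b**d positions, so a k-fold hardware speedup only buys about log base b of k extra plies of brute-force search.

```python
import math

def extra_depth(branching_factor: float, speedup: float) -> float:
    """Additional search depth bought by a given raw speedup,
    assuming cost grows like branching_factor ** depth."""
    return math.log(speedup, branching_factor)

# With a chess-like branching factor of ~35, a 1000x speedup
# buys only about 2 extra plies of exhaustive search.
print(round(extra_depth(35, 1000), 2))
```

That is, exponential growth in compute translates into roughly linear growth in search depth, which is one way to read the "linear improvements" claim above.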

As it is, compare the amount of processing, space, and energy computers have to expend to perform tasks equivalent to what humans do, and there is no comparison: computers are orders of magnitude less efficient. The only reason there is an appearance of parity is that we are not looking at the back end of what it takes computers to accomplish these tasks.

It is as if we flipped a coin to create the works of Shakespeare, waiting the many eons necessary for just the right sequence to occur. We then say a coin is just as capable as Shakespeare, but only because we've ignored how much effort it took to flip the coin versus how much effort it took Shakespeare.
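To put a hedged order-of-magnitude number on the coin-flip analogy (the encoding choice of 8 bits per character is my own assumption): reproducing a specific n-bit string by fair coin flips takes on the order of 2**n flips in expectation, while a writer produces it directly.

```python
def expected_flips(text: str, bits_per_char: int = 8) -> int:
    """Order-of-magnitude expected number of fair coin flips needed
    to hit one specific bit-encoding of `text` by chance."""
    n = len(text) * bits_per_char
    return 2 ** n

# A 19-character phrase is 152 bits, so on the order of 2**152
# (roughly 10**45) flips - for a single line, not the complete works.
phrase = "To be, or not to be"
print(expected_flips(phrase))
```

The exact constant depends on the encoding and on overlaps in the target string, but the exponential blow-up is the point of the analogy.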

So, I'm not arguing our motors will not become bigger. I'm arguing that as our motors become bigger, the returns diminish exponentially, far too rapidly to get us to true human parity.



