
Imagine a clocked-down Turing computer, whose operating speed is so slow that we can see its operations. Now imagine the programs which cannot, in principle, be run on this computer. You can't, because there aren't any.

Now imagine we are that computer.

Humans are capable of abstract thought and reasoning, which is sufficient to understand even the most complex software programs, insofar as they are understandable at all. The comparison to the worm is bogus.




The clocked-down Turing computer you are proposing would have to have an infinite amount of tape to be truly "Turing complete" and capable of running any program we can imagine in principle.

To actually match the worm analogy, imagine a clocked-down Turing machine with a tape length of maybe 100k cells. Now you can definitely imagine programs that can't run on it.
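
To make that concrete, here's a rough sketch in Python (the transition-table format and names are made up for illustration, not any standard formalism) of a machine that simply runs out of tape:

    # Hypothetical bounded-tape machine: the rules are arbitrary; the point
    # is the bounds check. A program whose head ever needs a cell past
    # 100k cells simply cannot run here, no matter how many steps we allow.
    TAPE_CELLS = 100_000

    def run(delta, state="start", head=0, max_steps=10**9):
        # delta: (state, symbol) -> (next_state, symbol_to_write, move)
        tape = [0] * TAPE_CELLS
        for _ in range(max_steps):
            if state == "halt":
                return tape
            state, tape[head], move = delta[(state, tape[head])]
            head += 1 if move == "R" else -1
            if not 0 <= head < TAPE_CELLS:
                raise MemoryError("needs more tape than this machine has")
        raise TimeoutError("out of steps before halting")

    # A one-rule machine that just marches right hits the wall quickly:
    run({("start", 0): ("start", 1, "R")})  # raises MemoryError

No amount of extra runtime rescues that last program; the failure is spatial, not temporal.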

The worm's limitation isn't that it's slow at thinking; it's that it has only a handful of neurons compared to a person, and there are problems/concepts that are simply too large to fit in its brain, which it would never be able to imagine even if it could live and think until the end of time.


Human beings have effectively infinite tape in the form of pencils and paper, to say nothing of digital storage.


Those things aren't infinite and you don't have infinite time either. If you're trying to use something like Turing completeness to argue a point, you need to respect that it's only relevant for theoretical, formal systems. You can't pull it out as a "gotcha" when it's a question of real-world capabilities.

The basic argument here is specifically about the practical limitations. An actual worm, with a finite number of neurons and a finite lifetime, even if it were an exceptionally clever one that lived much longer than average and spent every waking minute of its life pondering the mysteries of humans, would never be capable of comprehending more than the tiniest fraction of human thought. Even if you had worm scholars that developed some form of writing and could pass information along to subsequent generations, it's still not going to be possible, because of the actual, physical limitations that have to be dealt with. A theoretical intelligence standing in a similar ratio to human intelligence would be similarly incomprehensible to a really clever human who spent their whole life studying the output of scholars who had spent all their lives trying to understand it.


An AI isn’t infinite either. Infinities don’t enter into this. You brought it up, not me.

I’m aware of Bostrom’s argument, and it is bogus. The gap between a worm and a human is not on the same continuum as the gap between a human and a so-called superintelligence. There is a categorical difference between the worm and the other two, which occupy the same class. Hierarchical abstract thought allows the human to reason (using cognitive artifacts like pencils or computers) about processes of any complexity, including those which vastly outstrip its own.

Bostrom’s a philosopher, not an engineer or computer scientist, and it shows when he makes basic errors like this.


> Infinities don't enter into this. You brought it up, not me.

No, you did when you brought in Turing machines:

> Imagine a clocked-down Turing computer, whose operating speed is so slow that we can see its operations. Now imagine the programs which cannot, in principle, be run on this computer. You can't, because there aren't any.

If the Turing machine doesn't have infinite capacity, I 100% can imagine programs that "cannot, in principle, be run on this computer".

Furthermore, if the Turing machine has many orders of magnitude less capacity than a computer and can only run for a tiny fraction of the number of cycles before it grinds to a halt, I can imagine many, many, many programs that won't be able to run on it.

> An AI isn't infinite either.

It doesn't have to be. Just being many orders of magnitude "larger" than us is enough that, for all practical purposes, it is incomprehensible to us and might as well be an entirely separate category.


An AI would be running on a computer with the same space and time bounds as the human, because the AI is just software running on the human’s computer.

There are still no infinities.


If there are no infinities, then I am not sure why you brought up an infinite machine as an argument. The point is to imagine a being with orders of magnitude more processing power than our brains; it'd be able to fathom things we literally could not, no matter how slowly we stepped through its processing. It'd be a physical limitation.


I didn’t bring up infinite machines, the other guy did.

Human beings have access to the same amount of storage and processing power as AIs, because we get to use computers too.


> Imagine a clocked-down Turing computer

You brought up infinite machines, hence my and the other person's comment. Turing computers or machines have infinite tape. Maybe you meant to just say "computer," not "Turing computer."

Humans can use computers, but only for human-readable tasks. There is no guarantee that we could comprehend a superintelligence running off a supercomputer; we already don't understand neural network internals even at their current stage.


I meant a general-purpose computer in the sense of a universal Turing machine, but without the infinite storage requirement. Technically that would be a linear bounded automaton, but that glosses over the universal construction of the Turing machine.
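
The distinction is easy to make concrete. A hypothetical sketch (with a made-up transition-table format, purely for illustration) of the one restriction that turns a Turing machine into a linear bounded automaton:

    # Hypothetical linear bounded automaton: the tape is exactly the
    # input's cells, and the head may never leave that region.
    def lba_run(delta, input_symbols, state="start", max_steps=10**6):
        tape = list(input_symbols)   # no working space beyond the input
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                return tape
            # delta: (state, symbol) -> (next_state, symbol_to_write, move)
            state, tape[head], move = delta[(state, tape[head])]
            head += 1 if move == "R" else -1
            if not 0 <= head < len(tape):    # the defining restriction
                raise RuntimeError("an LBA's head may not leave the input")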

Maybe the norms in HN are different, but in my field a Turing machine is primarily a general purpose computer, and “Turing complete” describes models which can represent any Turing machine within its constraints. An “infinite tape Turing machine” is explicitly specified when needed.

Personally I’m a bit outside of the mainstream in that I never use infinities, except when representing an unterminated series. I reject the idea that “infinity” even makes sense as a number.



