Hypercomputation: Computing more than the Turing machine (2002) (arxiv.org)
143 points by ColinWright on Sept 10, 2017 | 70 comments



Maybe I'm biased, but I generally view posts about hypercomputation with skepticism. From The Myth of Hypercomputation[1]:

> Under the banner of "hypercomputation" various claims are being made for the feasibility of modes of computation that go beyond what is permitted by Turing computability. In this article it will be shown that such claims fly in the face of the inability of all currently accepted physical theories to deal with infinite precision real numbers. When the claims are viewed critically, it is seen that they amount to little more than the obvious comment that if non-computable inputs are permitted, then non-computable outputs are attainable.

The original idea in Turing's thesis was that the Turing machine was something that could actually be implemented, and indeed you can find many examples of people's side projects where such machines have been built and actually compute things.

On the other hand, hypercomputation has no physical basis. So what's the point of using it to model anything real? It's next to useless.

[1]: http://www1.maths.leeds.ac.uk/~pmt6sbc/docs/davis.myth.pdf


Your critique of hypercomputation applies just as well to Turing-complete machines. A Turing machine can never be implemented; it is a purely theoretical model. At best we can implement a weaker class of machine called a linear bounded Turing machine. For these machines the halting problem is solvable (it can just take a while). You could even implement a linear bounded Turing machine as an FSM; it would just take an ungodly number of states.


> You could even implement a linear bounded Turing machine as an FSM; it would just take an ungodly number of states.

To be clear, these things have formal definitions, and this statement is not correct.

1. A linear bounded automaton[0] is a Turing machine whose head is confined to the portion of the tape that holds the input: it may overwrite those symbols, but it cannot move beyond them. The definition of the automaton itself is still required to be finite, yet it must operate on inputs of unbounded length.

2. A finite state machine[1] is a model of computation where the sequence of input symbols is observed once and the machine is at all times in one of a finite number of states. It is equivalent to a TM that can only move right (and consequently cannot read anything it writes to the tape).

They are different, formally[2]. There are languages that an LBA can accept that an FSM cannot accept (famously, the strings of balanced parentheses cannot be recognized by an FSM).
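
A quick illustration of that last point (my own sketch, not from the comment above): the natural recognizer for balanced parentheses keeps a counter with no fixed upper bound, which is exactly what a machine with only finitely many states cannot do; the pumping lemma[2] turns that intuition into a proof.

    # Toy sketch: recognizing balanced parentheses needs an unbounded
    # counter. An FSM has finitely many states, so it cannot track
    # arbitrarily deep nesting.
    def balanced(s):
        depth = 0
        for c in s:
            if c == '(':
                depth += 1            # depth has no fixed upper bound
            elif c == ')':
                depth -= 1
                if depth < 0:         # a ')' with nothing open
                    return False
        return depth == 0

    assert balanced("(()())") and not balanced("(()")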

[0]: https://en.wikipedia.org/wiki/Linear_bounded_automaton

[1]: https://en.wikipedia.org/wiki/Finite-state_machine

[2]: https://en.wikipedia.org/wiki/Pumping_lemma_for_regular_lang...


This is correct from the unbounded theoretical-tape perspective; I should have expanded my point, and you are quite right. In the context of a physical (finite) realization, a tape that can be written is some form of RAM. Beyond O(1) complexity, all of these machines can express each other given different amounts of RAM. Even in the theoretical model, an FSM can accept balanced parentheses up to a bounded depth. My FSM point is basically wrong, though.


Turing machines are a useful simplification: they eliminate the (literal) edge case that we hit the end of the tape. We can either think of this as having "infinite tape", or we can imagine a "tape factory" on either end which extends the tape faster than the TM can read it.
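
As a toy illustration of the "tape factory" view (my own sketch, nothing from the comment): in practice an unbounded tape is usually implemented as storage that manufactures blank cells only when the head first touches them.

    # Toy sketch: an "unbounded" tape as a dict that grows on demand.
    # The tape is never actually infinite; it is merely always extendable
    # faster than the head can reach its edge.
    from collections import defaultdict

    tape = defaultdict(lambda: '_')   # '_' is the blank symbol
    head = 0
    tape[head] = '1'                  # write at the starting cell
    head -= 1                         # step left past the original "edge"
    assert tape[head] == '_'          # a freshly manufactured blank cell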

Mathematically, this sort of simplification is used a lot: when our model requires some sort of bound, we can work under the assumption of it being 'sufficiently large' that we can ignore edge-cases. Another example is the set of "real" numbers, which we can represent as decimals and assume a sufficiently large number of decimal places to avoid rounding. In fact, similar to the "tape factories" of a TM, we can think of each real number as having a "decimal-place factory" which produces new digits more quickly than we can read them (for example, during Cantor diagonalisation).

The infinities in hypercomputation don't seem to be providing such a simplification. Their 'sufficiently large' assumption seems to be the number of steps which can be executed in a unit of time, which avoids the edge-case of non-halting programs. I'm not sure that's a useful simplification.


"It can just take a while" is a pretty egregious understatement. As the busy beaver question shows, even an extremely simple turing machine with just a handful of states can execute for a completely unfathomable number of steps.

https://en.wikipedia.org/wiki/Busy_beaver


That is gaming the definition of "states" a bit. Your example has an official finite number of "states" combined with an unbounded tape of extra states, while only counting the FSM controlling it. That tape is part of the state; the machine can edit and read it in its decision procedure.

Noting this, quite a few more states than a handful are possible. For example, my laptop, with bit-packing, could track a visited/not-visited flag for 64,000,000,000 states without even using disk, i.e. it could track roughly a 36-bit FSM.

I agree, though, that linear bounded Turing machines having a solvable halting problem is not actually that useful for real computers, considering that by the time you had iterated over the states of even a few registers, the sun would have exploded.


I think you're confusing "states" with cells on the tape. They are completely independent. All Turing machines have an unbounded tape, but a well-defined, finite number of states that act as an automaton controlling the head on the tape.


This is unintuitive to me. Source?

(I can understand why we might not be able to build a Turing machine because we don't have infinite tape, etc, but if a program never halts but merely needs finite space, how do you prove that, in general?)


There are only finitely many states. Detect loops.


Okay. "Loop detected. It terminates conditionally on the value of `a`".

I would think that deciding if a loop halts is equivalent to the halting problem (just wrap your program in a loop). Merely detecting loops seems insufficient.


I think the answer is you can solve the halting problem if you have finite memory. You just need to consider every possible memory value. Eventually no new memory values can be introduced, so the next state must be one of the already inspected memory values.

Of course all this takes at least exponential time and so is outside our normal computability assumptions.


Wouldn't the halting problem take at most exponential time?

We know that a program with n bits of memory has at most 2^n states, so we can maintain an n-bit counter that is incremented once on each program step. If this counter reaches (2^n)-1 and the program has not halted, then it never will. (Depending on how you define things, there is an off-by-one error here; just run the program a few steps before you start counting.)
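
As a worked example (mine, not the parent's): with n = 3 bits of memory there are at most 2^3 = 8 distinct memory states, so a deterministic program that runs for more than 8 steps must have revisited some state and will therefore run forever.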


The general halting problem takes a potentially unbounded input.

That's why it's impossible to solve.

If it were finite (bounded), then you could do an enumeration of the possible programs of that bounded size, and you could make a program that simply looks up that list. (Sure, maybe the making of the list would be a bit problematic, because it'd take ... multiple eons, lesser gods would die, rise and die again during that time, but it's still finite.)


That's the point, yes. A real machine has finite memory, while a proper Turing machine can always advance the tape to get more memory. You can solve the former in ridiculous but non-infinite time, but not the latter.


By "detect loops" the parent meant "detect repetition of exact state". We have a fixed state machine. If we find ourselves in the same state, at the same location in the same tape, we must not halt because we will proceed in a loop indefinitely. If we have a finite tape, there are only finitely many states the entire system can occupy, so we must be able to exhaust them (or halt first, or loop first) in finite time.


> On the other hand, hypercomputation has no physical basis. So what's the point of using it to model anything real? It's next to useless.

Plenty of mathematics does not "model anything real", but you may still want to understand it. The paper starts with an analogy to non-Euclidean geometry, which once "had no physical basis", until we learned more about the universe and found that it did.

Mathematics isn't a pitch deck.


> On the other hand, hypercomputation has no physical basis

The author repeatedly makes the point that we don't know, for sure, that this is the case. And that "we haven't built it yet", sans additional explanation, is pretty shoddy evidence for the non-existence of something.

And that, if such a thing did exist, its existence may well be "very fundamental to the structure of the universe and should not be ignored".

I agree with both you and the author.

I don't think we'll ever build a device capable of general-purpose "harnessable" hypercomputation. And I don't think any of the ideas described in this paper are in any meaningful sense physically realizable.

But I also allow the possibility that I am wrong, especially on the first count. And if I am wrong, even just on the first count, then some of the theory surveyed in this paper might turn out to be useful.


> The author repeatedly makes the point that we don't know, for sure, that this is the case. And that "we haven't built it yet", sans additional explanation, is pretty shoddy evidence for the non-existence of something.

The Bekenstein bound (https://en.wikipedia.org/wiki/Bekenstein_bound) and the Holographic Principle (though the latter is not yet a concrete part of physics) provide strong reasons why analog computing is not a viable path to the realization of "hypercomputation". Besides, a few loopholes notwithstanding (https://pdfs.semanticscholar.org/a4b2/4409635c442b45f6fa4271...), the physical existence of legitimate hypercomputation stands in almost direct opposition to the Church-Turing thesis.


> And that "we haven't built it yet", sans additional explanation, is pretty shoddy evidence for the non-existence of something.

It's not claimed sans explanation. Hypercomputation would break a lot of physics. Others have mentioned the Bekenstein bound, but consider just basic concepts like signal-to-noise ratios. You'd need infinitely precise detectors for hypercomputation, detectors capable of distinguishing literally any signal, no matter how small, among any amount of noise, no matter how overwhelming.

These properties simply stretch credulity, to say the least.


There was a linked page here on HN a couple of months ago [1][2] with (I thought) an interesting overview of this topic.

[1] https://plato.stanford.edu/entries/computation-physicalsyste...

[2] https://news.ycombinator.com/item?id=14773808


Isn't Heisenberg's uncertainty principle sufficiently compelling?

If you can't measure anything with arbitrary accuracy then that seems more than enough reason to assume you can't go beyond finite (digital) accuracy.


The uncertainty principle in the strictest form is only technically present in wave-like systems, the most prominent example being, of course, quantum mechanical particles and their wave functions.


That sounds like the real world we live in, doesn't it?


I think so, but the observation is not without loopholes.

For example, classical field equations apparently can behave in non-computable ways, so long as their initial conditions are sufficiently weird. I'm not sure if QM as we understand it can handle such fields: but then "as we understand it" is a bit too limiting.

So why haven't we observed these weird non-computable fields? My guess is that it's because they don't exist. But it might be that we just haven't recognized them; perhaps some of those puzzling things in physics (like QM itself) are caused by naughty non-computable stuff going on where we haven't yet recognized it.


Could you expand on what non-computability means for a field?

Is it sort of like the n-body problem of classic chaotic behavior?


Re chaotic behaviour: I don't think there is any simple similarity. Chaotic systems are computable in that you could in principle keep throwing more bytes and CPU cycles at them to get the error down to any bound you like.

Regarding the first question, I am only repeating pop-science here.

I've read (in something by Roger Penrose, presumably The Emperor's New Mind, but maybe in The Road to Reality) that Maxwell's equations are uncomputable only if the initial conditions are functions that don't have Fourier transforms.

So really it is the field dynamics that are non-computable. I might have been talking nonsense when I said the fields themselves were non-computable. But then again, the kinds of functions that don't have Fourier transforms are also very rough, jaggedy things that probably can't be approximated by any algorithm.

And speaking of rough, jaggedy functions, that's what you get if you evolve a chaotic system to infinity. So maybe there is some connection after all, but it is well beyond my understanding.


Chaos Theory is nowadays studied as part of Dynamical Systems Theory.

So that kind of computability might be mathematical, algebraic computability (as in, does this monstrosity have a closed form?), which can always be approximated (of course with fast-accumulating errors).


It's just another case of abstract Math that nobody could find a use for. It does not look very useful from the start because there are too few consequences of any feature decision, but it's not completely unlike most of pure Math.

If anything is a problem, it is the attractive name, which pushes non-technical journalists into paying attention to it.


Way back, years ago, A.K. Dewdney published one of his Computer Recreations columns in Scientific American where he discussed models of computation stronger than Turing machines/lambda calculus/recursive functions/etc. His contribution was a model that solved an NP-hard (?) problem in linear time: it passed the input to two sub-processors, which either solved the problem or passed it to two sub-sub-processors each. Each sub-processor was half the size of its parent and required time proportional to its size due to wire delay (?).

Anyway, the bottom line was that it did an unbounded amount of work in a bounded amount of time.
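
If I'm reconstructing the arithmetic correctly (this is my own back-of-the-envelope, not Dewdney's exposition), level k of the recursion has 2^k processors, each half the size of its parent and hence taking half the time, so:

    total elapsed time = T + T/2 + T/4 + ...           = 2T          (bounded)
    total work         = 1*T + 2*(T/2) + 4*(T/4) + ... = T + T + ... (unbounded)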

I stopped paying attention to Dewdney after that.


Zeno's paradox?


Yup.


Also sounds a bit like Thomson's lamp.

EDIT: Whoops, that's pretty much the same thing as Zeno's paradox.


> required time proportional to its size due to wire delay (?).

Isn't conventional wire delay RC quadratic?


I cannot remember the details, but the total running time of a component together with its subcomponents was twice the time of the component itself.


In a nutshell: you can get a lot of work done if you have infinite time or infinite memory.

The implementation of either is left as an exercise for the reader.


I conjecture that hypercomputation is impossible for the following simple reason. If you could solve the Halting Problem [1], then you could build a "living" contradiction, a machine that halts if and only if it does not halt. (See the proof in [2].) It wouldn't be a mere description of a paradox; it would be an actual one. If contradictions are possible (something we implicitly believe to be impossible), then Reality is far stranger than we ever suspected or observed.
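
For anyone who wants the shape of that argument without digging up Sipser, here is a rough sketch (the `halts` oracle stands in for the hypothetical hypercomputer; nothing here is from the paper):

    # Toy sketch of the standard diagonalization. `halts` stands in for
    # the assumed hypercomputer / halting oracle.
    def halts(program, argument):
        raise NotImplementedError("the assumed oracle; no TM can provide it")

    def paradox(program):
        if halts(program, program):
            while True:               # oracle says it halts, so loop forever
                pass
        else:
            return                    # oracle says it loops, so halt at once

    # Now feed paradox to itself: paradox(paradox).
    # If halts(paradox, paradox) is True, paradox(paradox) loops forever: contradiction.
    # If it is False, paradox(paradox) halts immediately: contradiction.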

[1] https://en.wikipedia.org/wiki/Halting_problem

[2] https://www.amazon.com/Introduction-Theory-Computation-Micha...


That's the accepted theory for sure. It's what every CS student learns.

But never say never. "Reality/logic/math" can be a lot stranger than you think.

There used to be a time when we thought that infinity (aka countable infinity) was the largest number. Turns out there are bigger infinities.

And of course quantum mechanics. The duality of light (both a wave and a particle).


> But never say never. "Reality/logic/math" can be a lot stranger than you think.

I understand it's a Black Swan kind of problem, but I don't think we live in that reality. Please show me a contradiction. The ramifications would be stupendous, to say the least.

> There used to be a time when we thought that infinity (aka countable infinity) was the largest number. Turns out there are bigger infinities.

Yes, and we know how to build Turing machines that use countable infinity. Please show me a machine that actually uses bigger infinities. Please hypercompute the digits of a real number that cannot be computed by a Turing machine.

> And of course quantum mechanics. The duality of light (both a wave and a particle).

Universal quantum computers are faster than classical computers, but they are no more powerful (they are not known to be able to hypercompute).


Why are you getting defensive? I agree with you more or less. All I said was that reality can be stranger than we initially believed possible, and I showed you examples in other fields (math, physics) where "reality" got turned upside down.

Computer science is a fairly new field. It's within the realm of possibility that reality can be turned upside down here too. I'm not saying it is or will be, but it can. Okay?


I'm not getting defensive, my friend. I was merely challenging you to think through the logical implications of hypercomputation. What many people miss is this: hypercomputation leads to living paradoxes. They might then have to bend over backwards to explain why these paradoxes are not possible, but that leaves the question of why some things are hypercomputable, and some are not.


I wasn't even talking about hypercomputation specifically. I was talking more in generalities about computer science and things we hold to be "reality/absolutes".


A hypercomputer that is capable of solving Turing-halting problems but not hyper-halting problems is both non-contradictory and extremely useful.


Interesting, thanks for bringing this to my attention! In turn, this raises the question of hyper-hyper-computers more powerful than hypercomputers:

https://cs.stackexchange.com/a/4871

It reminds me of what Whitehead and Russell tried to achieve with Principia Mathematica, to try to eradicate paradoxes altogether using hierarchies of types, until Gödel came along.


Wait a minute: this hypercomputer is still impossible by my conjecture. For if it were possible, then you could use it as an oracle to build that self-contradicting Turing machine that halts if and only if it does not halt.


"TM with oracle" can be more powerful than a Turing machine. As long as this halting oracle can only apply to ordinary Turing machines (without oracles) I don't see a contradiction.


That seems to me an unnecessary, artificial restriction. Then it's incomplete: it doesn't solve the Halting Problem. That's the point: either the system is incomplete or inconsistent.


It solves the halting problem for Turing machines. Revisit Dylan16807's comment upthread.

I use "solves" somewhat advisedly - we're postulating something that may very well not exist; it's just that it's not obviously inconsistent for it to exist in the way that it would be if it had an oracle for its own class of machine.


We're talking past each other. Please revisit Sipser's proof for why the halting problem is TM-uncomputable, and you'll see what I mean.


The whole point of a hypercomputer is not being a Turing Machine. So yes, it can calculate things that are TM-uncomputable.

You can't use it to build a self-contradicting Turing Machine, because a hypercomputer cannot be emulated by a Turing Machine.

There is no "the" halting problem. Each class of machines has its own halting problem. The halting problem for Turing machines is very hard. The halting problem for non-cyclic finite state automata is very easy. A hypercomputer that can solve all halting problems is self-contradictory. A hypercomputer that can solve Turing-or-weaker halting problems is not self-contradictory.
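
If I have the standard terminology right, this is the Turing jump: relative to any class of machines there is a strictly harder halting problem, so you get an endless hierarchy of halting problems rather than one contradictory all-powerful oracle. Roughly:

    0'   = the halting problem for plain Turing machines
    0''  = the halting problem for machines equipped with a 0' oracle
    0''' = the halting problem for machines equipped with a 0'' oracle, and so on,
    with each degree strictly harder than the one before it.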


> Please revisit Sipser's proof

In this kind of context, you should provide a link. This kind of thing can spill over into "You're wrong because you haven't done the work to see that I'm right", and minimizing the work you're pushing off shows good faith as well as increasing clarity that we're talking about the same thing.

I think it's more likely we're talking about different things than either of us failing to understand the relevant proofs. I'll take a look, though, when you've provided the link (and ideally some explanation of what you see as important).


I've often thought that results in computer science about the performance characteristics of classes of algorithms are also, in a sense, making statements about physics. In other words, in a universe where bits of information can be sent instantaneously through wormholes, it may not be correct to say that there's no better comparison-based sort bound than O(n log n). I'm not an expert at physics or computational complexity, so maybe my intuition is way off here. But I wonder if, when I hear about innovations in "hypercomputation", I should understand the researchers to be making claims which are essentially as likely to pan out as a claim of having sent bits through wormholes.


Results about computational complexity are always relative to some specific model of computation. For example, a standard Turing machine cannot sort in O(n log n) time. Because it has to walk along the tape one position at a time, it takes longer. When the big-O of an algorithm is given without mention of the machine model, it typically refers to a machine with random-access memory, since that's approximately what real computers have.

So yes, physical possibility is definitely a big factor here, even if it's not always stated explicitly. If it were impossible to build practical random-access memory, then that wouldn't be the standard model used for computational complexity.

Consider the big question of whether P=NP. The question itself is unrelated to physics, since P is defined as polynomial-time algorithms on deterministic Turing machines, and NP as polynomial-time on nondeterministic Turing machines. But the question gets a lot of attention in part because it's considered to be impossible to actually build a nondeterministic Turing machine. If that were untrue, then we could solve problems in NP in polynomial time even if P≠NP.

In theory, you could build something like a nondeterministic Turing machine (without infinite memory, but close enough, much like we consider real-world computers to be close enough to normal Turing machines) if you could, say, send information back in time, or if you could split the universe into arbitrary many parallel universes, then destroy all but one. Obviously, nobody has the slightest idea of how to do this, but at least there are some vague, crazy possibilities.

Hypercomputation is even worse. To accomplish it in anything like physics as we know it would require infinite time, space, and energy. In theory you can get infinite time by putting your computer into just the right orbit around a rotating black hole. Infinite space and energy are somewhat harder to come by. (You know you're in trouble when you need to do something harder than putting something into a precise orbit around a rotating black hole.)

There's always the possibility of new physics. Nothing says it's impossible for there to exist some fundamental particle which acts like a Turing oracle when you poke it just right. But the chances seem pretty poor.

In short, yes, barring any massive physics breakthrough, treat hypercomputation as a theoretical math construct, nothing more.


You don't have to put the computer in the right orbit; you have to put yourself into the right orbit. Your time goes slower, so it gives the computer outside more time to work.

Please don't try it at home.


Thanks for taking my musings seriously and for giving such a thoughtful reply.


How is the hypercomputation mentioned in the report related to a Turing machine with an oracle?


The paper is all about Turing machines with oracles:

"The paradigm hypermachine is the O-machine....."


On a related note, anyone know anything about this company?

http://memcpu.com/

They claim to be able to solve NP-complete problems with polynomial resources using non-Turing architectures. I'd normally call them crackpots, but they have serious pedigree and have been published in Science and Nature. I'm still very skeptical, but don't have the chops to evaluate their claims.


From Scott Aaronson's "Memrefuting" post (https://www.scottaaronson.com/blog/?p=2212):

> all proposals along these lines simply “smuggle the exponentiality” somewhere that isn’t being explicitly considered, exactly like all proposals for perpetual-motion machines smuggle the entropy increase somewhere that isn’t being explicitly considered. The problem isn’t a practical one; it’s one of principle.


> They claim to be able to solve NP-complete problems with polynomial resources using non-Turing architectures.

Not sure why you'd sell that technology, instead of using it to effectively and invisibly conquer the world.



Can't a Turing machine emulate a digital circuit in polynomial time? Then either they've proven that P=NP, or they're full of it.


Supposedly it's analog, but the Aaronson article linked by others gets into why that doesn't help either.


I see. The page mentions "digital" a few times, but I guess they're either only referring to part of the machine, or are using the term unconventionally.


One of their publications:

"Polynomial-time solution of prime factorization and NP-complete problems with digital memcomputing machines

"We introduce a class of digital machines, we name Digital Memcomputing Machines, (DMMs) able to solve a wide range of problems including Non-deterministic Polynomial (NP) ones with polynomial resources (in time, space, and energy)."

http://aip.scitation.org/doi/abs/10.1063/1.4975761?journalCo...

The actual paper is behind a paywall. I am not Scott Aaronson. I probably could not reasonably evaluate their claims even if I could read them. But Aaronson has somewhere posted a list of the weird things that would happen if you could solve an NP-complete problem in polynomial time, and that list provides a good reason to believe you can't.

A part of another abstract of one of their other papers (http://ieeexplore.ieee.org/document/7924376/?reload=true) is,

"Our method can be extended efficiently to any level of precision, since we prove that producing n-bit precision in the output requires extending the circuit by at most n bits. This type of numerical inversion can be implemented by DMM units in hardware; it is scalable, and thus of great benefit to any real-time computing application."

Which makes me suspect there are hard limits on the size of the problems their "hardware" can handle.

But it's fun to chase the graph thingys around on their home page.


From the first paper that you linked (available freely on Sci-Hub) it looks like they were unable to prove that their method actually works. They ran simulations on small problems but are not sure if the method will work for larger ones.


would love a layperson-friendly summary of this. sounds interesting but I am not a CS PhD or mathematician.


(Some of what I say here isn't technically accurate; i.e., it's "close to the right idea but technically wrong". Parent asked for a layperson's explanation.)

There's a famous problem in CS called the halting problem. The halting problem asks for a program that tells you whether an arbitrary program (Turing machine) ever halts (a.k.a. stops executing). A function that could compute a solution to the halting problem is called a halting function.

Turing machines can't compute a solution to the halting problem.

Therefore, if you add an oracle for the halting problem to an otherwise normal Turing machine, then the resulting model of computation is stronger than Turing machines alone. An oracle is basically a magical box that answers your questions in O(1) time, never mind how.

Merely assuming an oracle for the halting function is only one of many ways to build something that does things Turing machines can't do.

For example, allowing infinities in various places where the classical theory of computation assumes finite sets also sometimes increases the computational power of machines. E.g., we normally assume that a Turing machine's tape is uninitialized, or that only a finite subset of the tape is initialized. But by initializing a Turing machine with a carefully selected infinite tape, you can compute the halting function. Some other "now make the finite thing infinite and then code up the halting function" constructions can be found in this paper.
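
A toy illustration of that last construction (my own sketch, not the paper's): imagine the tape pre-loaded so that cell i records whether the i-th Turing machine halts; "deciding" halting is then just a lookup, and all the non-computability has been smuggled into producing the tape in the first place.

    # Toy sketch: halting as a lookup into a magically pre-initialized
    # infinite tape. Producing `halting_bit` is the step no Turing machine
    # can perform; given it, the decision procedure is trivial.
    def halting_bit(i):
        # 1 if the i-th machine halts on empty input, else 0.
        raise NotImplementedError("this is where the non-computability lives")

    def halts(machine_index):
        return halting_bit(machine_index) == 1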

If you fix one of these various constructions, you can start to do all the classical CS theory stuff in the context of a slightly different machine. It's interesting to see which results transfer, which break down, and what surprising new/nice results show up here but not in classical CS theory.

It's hard to say how useful any of this is, because all of these constructions basically assume we already have at hand something that we have no idea how to construct in reality. I.e., the whole paper begins with "assume false" as far as we pragmatic engineering folk are concerned. But that's perhaps basically the same thing Euclidean geometers said during the emergence of non-Euclidean geometry, so perhaps this work will end up being as revolutionary as Turing machines themselves and we just can't see exactly how yet.


thanks for this. very interesting!


I do not have a PhD in CS myself, but for anyone reading this: you should still make a habit of reading papers you do not understand. You'll likely pick up bits and pieces and learn even if you don't understand exactly what you're reading, assuming you have a minimum foundation.

One of the most stressful parts of grad school in a new field (to me) was poring over dozens of papers that I did not understand. I wish I had known then that you don't HAVE to understand every part of everything you read, especially when new to an academic field with few or no textbooks. Just be sure to watch how certain you are in your conclusions until you're comfortable with the material.



