One fascinating thing about the function BB(n) is that it grows faster than any computable function.

The proof of this is simple: if you had a computable function f(n) with f(n) >= BB(n) for all n, you could use it to solve the halting problem, which is impossible.

I used to implicitly assume that we can write concrete, closed-form functions that grow as fast as we want them to. The above shows that this is clearly false.




But of course BB can get dwarfed by other uncomputable functions, like Rayo's:

https://googology.wikia.org/wiki/Rayo%27s_number


That's a lot of words and symbols for some not-very-impressive lower bounds. I present hnuser123456's function:

f(n)=n (n Knuth up arrows) n
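
For concreteness, here's a rough Python sketch of what I mean (my own naive recursion, assuming the standard right-associative definition of the up arrows; it obviously can't be evaluated for any interesting n):

  # a (k Knuth up arrows) b, defined recursively -- illustrative only
  def knuth_arrow(a, k, b):
      if k == 1:
          return a ** b      # one arrow is just exponentiation
      if b == 0:
          return 1           # base case of the recursion
      return knuth_arrow(a, k - 1, knuth_arrow(a, k, b - 1))

  def f(n):
      return knuth_arrow(n, n, n)   # n (n Knuth up arrows) n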

Coming back to Rayo, my number is f(10^100) rather than Rayo(10^100), and given the series of lower bounds presented in that article, this should easily dwarf it, unless Rayo's function ends up accumulating Knuth up arrows faster than linearly. I'm not smart enough to evaluate that, though. If it does, I wonder if this would beat it:

f(n)=n (n Knuth up arrows) n (n Knuth up arrows) n

Or n copies of n Knuth up arrows...


Knuth up arrows are computable functions, right? So they can't possibly grow faster than BB(n), let alone even faster-growing functions.


Must be. Since Rayo(10000) > 2↑↑↑65536, it seemed easy to spam a lot more up arrows there (10k of them), but the tide probably changes after n grows by another 95 orders of magnitude. I wonder where Rayo's function intersects mine.


Rayo's function can describe the idea of "f(n)=n (n Knuth up arrows) n" with relatively few symbols - just like your post did using relatively few characters.


n (n Knuth up arrows) n (n Knuth up arrows) n = n (n+1 Knuth up arrows) 3, so that's not really impressive either. Your function is comparable in size to f_omega(n) on the fast-growing hierarchy. [1]
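
(To see why: the up arrows are right-associative, so unfolding one level of the standard recursion gives

  n (n+1 arrows) 3 = n (n arrows) (n (n+1 arrows) 2)
                   = n (n arrows) (n (n arrows) (n (n+1 arrows) 1))
                   = n (n arrows) (n (n arrows) n)

which is exactly n (n Knuth up arrows) n (n Knuth up arrows) n.)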

At any rate, it's known that BB(18) is larger than Graham's number [2]. In fact, the same person has a bunch of long-lasting Turing machines for values around 18 [3], giving the lower bounds:

BB(15) > f_omega(2046) > 2046 (2046 up arrows) 2046

BB(16) > f_omega(f_omega(5))

BB(17) > f_(omega+1)(10)

BB(18) > f_(omega+1)(7e57) > Graham's number

BB(19) > f_(omega+1)^3(9) > f_(omega+2)(3)

BB(20) > f_(omega+1)^4(7e57) > f_(omega+2)(4)

BB(21) > f_(omega+2)(f_(omega+1)^2(4))

BB(22) > f_(omega+2)^3(f_(omega+1)^2(4))

And for larger values again [2]:

BB(38) > f_(omega2)(167)

BB(64) > f_(omega^2)(4098)

BB(85) > f_(epsilon_0)(1907)

Of course, we expect the Busy Beaver function to grow a lot faster than these bounds.

[1] https://en.wikipedia.org/wiki/Fast-growing_hierarchy

[2] https://googology.wikia.org/wiki/Busy_beaver_function#Graham...

[3] https://googology.wikia.org/wiki/User_blog:Wythagoras/The_ni...


Thank you. I've read all your links and will continue to improve my googology.


How would you use it to solve the halting problem?


  # Assuming f is computable and f(n) >= BB(n) for all n.
  # step() and is_in_halting_state() are assumed helpers that advance a
  # Turing machine by one step and check whether it has halted.
  def does_it_halt(m: TuringMachine) -> bool:
      n_steps = f(count_states(m))
      for _ in range(n_steps):
          if is_in_halting_state(m):
              return True
          step(m)
      return is_in_halting_state(m)
The key point is that every machine with n states either halts within BB(n) steps or never halts at all, so you just need to run any such machine for BB(n) steps (or any larger number, like f(n)) and then check whether it has already halted.


The tricky bit is proving BB(n) doesn't solve the halting problem. Otherwise, feed in a program of size n and run it for BB(n) + 1 steps; if it's still running, it's stuck in a loop.


I'm not sure what you mean. Having a computable version of BB(n) for all n does solve the halting problem, which is why it's impossible to compute with any existing computer.

If you run a program with n states for BB(n) + 1 steps it will either already have halted, or it will never halt. This is by definition.


Not quite: each BB(n) is just a number, so a computer could have a precomputed lookup table of them. Just as you can store 7 in a program's source code, you could store BB(7).

The issue is that the halting problem is about a given program and an arbitrary input. If you want to store BB(n) for arbitrary n in a lookup table, you would need an infinitely large program. Even if that's acceptable, reading an infinitely large program plus some input into itself takes infinite time.


You are missing the point. Simply knowing BB(n) is enough to solve the halting problem for a TM with n states. It doesn't matter whether it is precomputed and stored in a lookup table, or computed on the fly, or revealed to you by God. If you know BB(n) by any means, you can solve the halting problem for a TM with n states. Therefore you cannot know BB(n) beyond a certain n.
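
As a concrete sketch (reusing the hypothetical step/is_in_halting_state helpers from the snippet upthread, and taking the 5-state champion machine's step count as the value of BB(5)):

  BB_5 = 47176870  # step count of the 5-state champion, taken here as BB(5)

  def halts_5_states(m):
      for _ in range(BB_5):
          if is_in_halting_state(m):
              return True
          step(m)
      return False  # survived BB(5) steps without halting, so it never halts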


Very much no, any pattern of code is acceptable in computer science. You don't need to have a specific class of computer compute the programs running on it. CS even has the idea of an Oracle machine: "The problem does not have to be computable; the oracle is not assumed to be a Turing machine or computer program." https://en.wikipedia.org/wiki/Oracle_machine

It's a very critical idea in CS. There is no specific BB(1), BB(2), ..., BB(n) that's suddenly impossible to discover; however, writing a program that runs on touring machines and can find arbitrary BB(n)'s isn't possible. Another way of saying it: it takes increasing complexity to compute BB(n) as n grows, and a given program can only be of finite complexity.


This entire thread is about how to use a BB oracle to solve halting (and why that means BB is uncomputable).


Sure, but the poster clearly doesn’t understand if they are saying "Therefore you cannot know BB(n) beyond a certain n."

I am not really conveying this clearly, but people seem to misunderstand that Touring machines are just one class of computers. Just as quantum computers are another class of machines, so too are human minds or x86 processors. You can very much program a touring machine with the output of a different class of computer hardware, but that doesn’t solve the problem.

>>> You need an Oracle at runtime, not compile time. <<< There, that’s what I was missing.


> poster clearly doesn’t understand if they are saying

I have a Ph.D. in CS, so you might want to consider the possibility that you are the one who is misunderstanding.

> Touring machines are just one class of computers

So first of all, it's Turing machines, not Touring machines. Named after Alan Turing. And yes, it's true that they are just one class of computer, but the reason Turing machines are a thing is that they are universal. A TM can compute any computable function, where "computable" means computable by any physically realizable model of computation that any human has been able to come up with, including quantum computers. (Quantum computers are just faster for certain problems.) So the details of the TM don't actually matter. There are lots of different computational models that are equivalent to TMs. It's just that the TM model was the first to be proven universal, so that's what everyone talks about. But all physically realizable models of computation that humans have been able to come up with to date are either equivalent to TMs, or strictly less powerful.

So: TMs can't solve the halting problem, and so no other physically realizable model of computation can solve the halting problem either, because all known physically realizable models of computation are equivalent to TMs and therefore subject to the same fundamental limitations as TMs. If you knew BB(n) you could solve the halting problem. Therefore you cannot know BB(n). (It's a little bit more complicated than that because we can't rule out the possibility that a more powerful model of computation exists and is physically realizable. But that would be the greatest revolution in the history of human intellectual endeavor, so the odds of discovering such a model are pretty low.)


I don't disagree with what you just said; however, "the odds of discovering such a model are pretty low" is irrelevant when dealing with abstract computers. Further, suggesting that real-world physics is unlikely to allow it isn't a proof, even if I happen to agree.

In the end, even if we had a physical machine that produced BB(n), that would not mean it was computable using a Touring machine. They're unrelated concepts.

PS: Looks like auto corrupt strikes again, I really stopped caring at this point.


For an n-state program, run it for BB(n) steps. If it hasn't halted by then, by definition it will never halt.



