Who Can Name the Biggest Number? (scottaaronson.com)
92 points by pizza on Jan 20, 2013 | 40 comments



Great read, and some really clever explanations of Turing's work.

While this wasn't mentioned in the post, I thought this seemed relevant: http://en.wikipedia.org/wiki/Graham%27s_number

This bit always blew my mind:

"Graham's number is unimaginably larger than other well-known large numbers such as a googol, googolplex, and even larger than Skewes' number and Moser's number. Indeed, the observable universe is far too small to contain an ordinary digital representation of Graham's number, assuming that each digit occupies at least one Planck volume. Even power towers of the form are useless for this purpose, although it can be easily described by recursive formulas using Knuth's up-arrow notation or the equivalent, as was done by Graham. The last ten digits of Graham's number are ...2464195387."

I'm pretty sure that's the biggest number. Or maybe Graham's Number + 1 ;)


Graham's number is typically cited as the largest number that has ever been seriously used in a mathematical proof, as the Wikipedia article says, but Busy Beaver passes it very quickly.

One of the nasty little characteristics of BB that Aaronson describes, but that can stand to be reiterated since I've noticed many people don't quite follow it, is that the output of any mathematical algorithm you can encode into a Turing machine with some number of states X is bounded above by BB(X), and more likely (which in this case is serving as mathematician's-understatement-speak for "certainly") isn't even close to that bound. And for all that the TM "programming language" isn't that efficient, almost anything you can describe in a reasonable English paragraph fits in, say, well under 100 states. For all of Graham's Number's stonking size, in algorithmic terms it still isn't that complicated.

Generally speaking, human attempts to create Big Numbers by fancy recursively-applied algorithms are still quite simple to put into a TM. This is why it's an interesting point that we don't even understand what exactly a BB(5) solution is doing; a mere five states and the human mind is already lost, lost, lost. I think of Busy Beaver as showing in its own way just how thoroughly we have not mastered math, and never will master more than a tiny little island that happens to be relatively easy to follow, because BB shows us that we don't have to swim out into the great sea of "all math" very far at all for there to be monsters.
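To make the "a handful of states is already a lot" point concrete, here's a minimal Turing machine simulator (my own sketch) running the known 2-state, 2-symbol busy beaver champion:

```python
from collections import defaultdict

# Transition table for the 2-state, 2-symbol busy beaver champion:
# (state, read symbol) -> (symbol to write, head move, next state)
RULES = {
    ('A', 0): (1, +1, 'B'),
    ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'),
    ('B', 1): (1, +1, 'H'),   # 'H' is the halt state
}

def run(rules, start='A', halt='H'):
    """Run a TM on a blank two-way tape; return (steps taken, ones written)."""
    tape = defaultdict(int)
    pos, state, steps = 0, start, 0
    while state != halt:
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write
        pos += move
        steps += 1
    return steps, sum(tape.values())

print(run(RULES))   # (6, 4): halts after 6 steps with 4 ones on the tape
```

The whole champion is four table rows; the five-state machines the comment above says nobody understands are only a few rows more.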


Whether or not BB(n) is even defined is an interesting philosophical question.

If n is a modestly large number, it can be proven that in no consistent axiom system below a certain size can any explicit number written out in base 10 EVER be proven to be an upper bound for BB(n). If we make n something like 100 million, that encompasses all possible axiom systems that are human comprehensible.

A "finite" number with no provable upper limit - how much sense does it make to say that is well-defined? It is enough to make you take up constructivism! (Or quit math for something more sensible. Like politics.)


While Graham's Number is very, very large, there is another number that dwarfs it:

TREE(3): http://en.wikipedia.org/wiki/Kruskal%27s_tree_theorem

From the article:

The TREE sequence begins TREE(1) = 1, TREE(2) = 3, then suddenly TREE(3) explodes to a value so enormously large that many other "large" combinatorial constants, such as Friedman's n(4), are extremely small by comparison. A lower bound for n(4), and hence an extremely weak lower bound for TREE(3), is A(A(...A(1)...)), where the number of A's is A(187196), and A() is a version of Ackermann's function: A(x) = 2↑^(x-1)x in Knuth's up-arrow notation. Graham's number, for example, is approximately A^64(4) which is much smaller than the lower bound A^(A(187196))(1).
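For concreteness, a sketch in Python of the quoted A(), reusing the usual up-arrow recursion. This is my illustration, not Friedman's exact definition; in particular I'm taking zero arrows to mean multiplication, which makes A(1) = 2:

```python
def up(a, n, b):
    """a with n up-arrows applied to b; n = 0 is taken as multiplication."""
    if n == 0:
        return a * b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

def A(x):
    """The Ackermann variant from the quote: A(x) = 2 ^(x-1 arrows) x."""
    return up(2, x - 1, x)

print(A(1), A(2), A(3))   # 2 4 16; A(4) = 2^^^4 is already far out of reach
```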


This is the best one I've ever seen. The description seems fairly benign, but then one needs a ridiculous number like A(187196) to describe the applications of Ackermann's function to get a weak lower bound.

And the first two steps are rather small, which adds to the surprise.


Graham's number is big, sure. But in a similar duel[1] between two MIT professors, they settled on this number: “The smallest number bigger than any number that can be named by an expression in the language of first order set-theory with less than a googol (10^100) symbols.” Try and beat that! :)

[1] http://tech.mit.edu/V126/N64/64largenumber.html


Per my comment above, Busy Beaver will pass that pretty quickly too, as again, a Busy Beaver contestant will rather early on simulate that entire problem (and BB has ready access to numbers like a mere googol). As large as that number is, I'd guess it's probably under BB(30), and certainly under BB(100) as I'd bet even a mere human could write a 100-state TM to simply simulate that answer.


Why? Perhaps I misunderstand something, but why can't I define a Turing machine, and BB itself, in first-order set theory? My understanding is that this gives a lower bound of

BB(googol - n),

with n the number of symbols I need to define BB.


To inject a bit of humour into a maths discussion: Graham's Number on QI with Graham Norton. http://www.youtube.com/watch?feature=player_detailpage&v...


Here's another mind-blowing result: there are about 10^80 subatomic particles in the known universe. If every one of these particles were a computer with a clock cycle of the Planck time (10^-43 seconds) and was running continuously for the life of the universe (10^17 seconds) the total number of computable states would be only about 10^140, which is the number of states that can be represented in about 20 bytes of memory. The total number of states that will actually be computed by physical computers will, of course, be much, much smaller than this. So make every cycle count :-)


2 log(10^140) / 8 ~= 58.134, so it's more like 58 bytes. Irrelevant for the argument? Definitely, but I had to double-check the moment I thought '140 decimal digits in about 160 bits? That cannot be right.'
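For anyone re-checking the arithmetic, a quick Python version of the calculation:

```python
import math

ops = 10**80 * 10**43 * 10**17   # particles * cycles/second * seconds = 10^140
bits = math.log2(ops)            # bits needed to index that many states
print(bits, bits / 8)            # ~465.1 bits, i.e. ~58.1 bytes
```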

Also, the number of states that the universe has time to compute may be a lot smaller than the number of possible states. For example, if each particle, at each step, made a two-way choice, a single particle could run through those 10^140 potential states in no time flat. Because of that, you cannot say "64 bytes ought to be enough for anybody".


You're right about the result being 60 bytes, not 20. (Getting old. Can't convert logarithm bases in my head any more :-(

But you're wrong about this:

> if each particle, at each step, made a two-way choice, that single particle could run through that 10^140 potential states in no time flat

No, each particle can compute 10^43 states per second, or 10^60 states in 10^17 seconds (10 billion years). I think that's where I got the 20 byte number. It's 20 bytes per particle, 60 for a universe of 10^80 particles computing in parallel.

So if you find you need more than 60 bytes to do anything, that just shows that you haven't properly optimized your representation ;-)


by "2 log" you mean log_2


Apologies. I thought that would be evident from context, but I should have explained it. Also, it would have been better if I had left out the space.

To make matters even worse, I just realized that the custom of writing the logarithm in base 2 of x as ²log x (superscript digit two, 'log', 'x') may not be used world-wide and may even be specific to the Netherlands (http://nl.wikipedia.org/wiki/Logaritme#Natuurlijke_of_neperi...). That may have made this even more incomprehensible.


> the total number of computable states would be only about 10^140

Related scale from the article:

> in Go, which has a 19-by-19 board and over 10^150 possible positions

Brute force really works only in small cases so reducing the problem space is incredibly important, and the human mind is frighteningly effective at this. Once you get to see the patterns and feel the "flow" of a Go game, watching high level player games is stupefying.



It's intriguing following the submission history of this item, and similarly interesting comparing the comments here with the comments three years ago. I idly wonder how much they reflect the change in the HN readership in that time.

We all know about sentiment analysis, I wonder if a similar sort of analysis could be done between this thread and one or both of the others. Find some sort of change, then look for similar changes in other items, discussed twice with a long gap.

http://news.ycombinator.com/item?id=47408 1971 days ago

http://news.ycombinator.com/item?id=492615 1426 days ago

http://news.ycombinator.com/item?id=951095 1157 days ago

http://news.ycombinator.com/item?id=1539538 912 days ago, 68 comments

http://news.ycombinator.com/item?id=2024576 761 days ago, 31 comments

http://news.ycombinator.com/item?id=2531994 620 days ago

http://news.ycombinator.com/item?id=3262788 425 days ago

http://news.ycombinator.com/item?id=3615532 334 days ago


Relevant: http://djm.cc/bignum-results.txt

This is a precise version of the challenge, to write a large number in 512 bytes of C code.


More interesting than it sounds: the challenge uses a hypothetical version of C with arbitrarily big memory and integer limits.


I actually got fussed at for my answer to a variant of this question this past semester. I have a professor who talked about how, when he was in college, his instructor posed the challenge: what is the biggest number you can make with three nines?

His answer was 9^(9^9). Of course, I thought this question warranted further discussion and I mentioned that Knuth's up arrow notation allows an even bigger number and demonstrated it on the board. The professor seemed annoyed and I asked him about it later; I believe he mistook my interest in contributing to the discussion as trying to one-up him (which was NOT my intention at all).

I've been much quieter in that class since :(


9! (9! up arrows) 9!

Of course, you can also have 9!! and 9!!! and 9!!!! or even 9! factorial signs after the 9, etc.

The original problem statement of "write this in 15 seconds using standard notation" is less easy to game. But I think the right answer is to just have 9 (up arrow with 9 under it) 9 and then make the 9s bigger using factorials until your 15 seconds run out.

How pointless.


Keep in mind, though, that you would want to write it (9!)! or ((9!)!)!. 9!! is actually the double factorial, which is defined as 9!! = 1 * 3 * 5 * 7 * 9. So 9!! < 9!.
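A quick Python check of the gap between the two readings (double factorial versus plain factorial), my own sketch:

```python
import math

def double_factorial(n):
    """n!! = n * (n-2) * (n-4) * ... down to 1 (or 2)."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

print(double_factorial(9))   # 9 * 7 * 5 * 3 * 1 = 945
print(math.factorial(9))     # 362880, so 9!! < 9! indeed
```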


I've never heard of that, but sure, do (9!)!. Since the problem as stated by the OP (not the article itself) specifies that only 9s are limited, it doesn't make a difference.


When talking about big numbers, I would definitely consider up arrow notation as standard notation.


At the very least there should be !s after those 9s.


Or, (9!^9!)!^(9!)

I find it amusing that anyone could think they've "won" a game such as this; there is just so much potential for incredibly large numbers, even if you omit the possibility of using things like Knuth's up arrow notation.


I think it all hinges on what's meant by "with three nines".

Are you allowed non-digit symbols? In that case, there's obviously no upper bound. Just e.g. keep adding !s after a 9 and the number gets bigger and bigger.

Are you only allowed the 9s, with the only possibly difference in meaning being the positioning? Then 9^9^9 (where the ^ is represented purely by positioning on paper) looks to be the best you can do. I don't believe things like up-arrow notation can be represented with just positioning.
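For scale on that "best you can do": 9^(9^9) is far too large to print, but its digit count is a one-liner (my sketch; the number of digits of N is floor(log10(N)) + 1):

```python
import math

# log10(9^(9^9)) = 9^9 * log10(9), so:
exponent = 9 ** 9                          # 387420489
digits = int(exponent * math.log10(9)) + 1
print(digits)                              # roughly 3.7 * 10^8 digits
```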


I love that this essay was so easy to follow in the beginning, before ascending farther than I could have imagined.

I assume that there are so few comments because everyone is still reading!


Nice read. I was expecting it to mention Knuth's up-arrow notation, though[1]

[1] http://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation


Define: the oh-holy-crap number, d, as follows:

a = the largest number for which there is a concise definition in the English language version of wikipedia

b = the 2nd largest number defined on wikipedia

c = the 3rd largest number defined on wikipedia

Then, let d = c -> b -> a (using Conway chained arrow notation)
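The chained-arrow reduction rules are simple enough to transcribe directly. A naive Python evaluator, my own sketch (only toy chains ever terminate in practice; the chain defining d above is utterly beyond it):

```python
def chain(*xs):
    """Conway chained arrow, by the standard reduction rules."""
    xs = list(xs)
    if len(xs) == 1:
        return xs[0]
    if len(xs) == 2:          # a -> b = a^b
        return xs[0] ** xs[1]
    *head, q, p = xs
    if p == 1:                # X -> q -> 1 = X -> q
        return chain(*head, q)
    if q == 1:                # X -> 1 -> p = X
        return chain(*head)
    return chain(*head, chain(*head, q - 1, p), p - 1)

print(chain(2, 2, 2))   # 2 -> 2 -> 2 = 2^^2 = 4
print(chain(2, 3, 2))   # 2 -> 3 -> 2 = 2^^3 = 16
print(chain(3, 3, 2))   # 3 -> 3 -> 2 = 3^^3 = 3^27
```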


This works until you put the definition of d into the English language version of Wikipedia.


I always had a fascination with large numbers and expressing them in mathematical notation. It's fun to use the next "step" to outdo someone you talk to, e.g. tetration to beat exponentiation, Knuth up arrows, the Ackermann function, etc.

I even went so far as to put an offer on our startup's team page to give a discount to anyone who can give interesting answers to a question similar to the one posed in the article!


Oh wow. This is the essay that solidified my decision to go into/stay into computer science. I had forgotten about it, great to read it again.


I really enjoyed reading this. It covers so many different topics and is relatively simple to understand.


As usual, xkcd has something related to the topic: http://xkcd.com/1162/


Or the third panel of http://xkcd.com/207/


Who can name the least integer not nameable in fewer than nineteen syllables?


Did you also read Rucker's 'Infinity and the Mind'? Contains a fascinating discussion of this problem.


I just remembered a philosophy prof. mentioning it. The phrase "the least integer not nameable in fewer than nineteen syllables" is a direct quote from Berry, I think.


I always thought that a googolplex was a googol to the power of a googol.



