
That's a much less profound statement, though, because you're talking about an implementation detail of "modern computers".

First of all, the set of values in the range [0,1] that can be represented by a computer is not at all the same as the set of rationals. If you're talking about fixed-point or floating-point data types, then there are only a finite number of bit patterns (corresponding to dyadic rationals), so obviously the probability of hitting one of them precisely is zero.
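To make the dyadic-rational point concrete, here's a small Python check (the specific integers shown are just what IEEE-754 doubles happen to store): every float is exactly some integer over a power of two, so the literal 0.1 isn't actually one tenth.

```python
from fractions import Fraction

# Every IEEE-754 double is a dyadic rational: its exact value is an
# integer divided by a power of two.
num, den = (0.1).as_integer_ratio()
print(num, den)               # 3602879701896397 36028797018963968
assert den & (den - 1) == 0   # denominator is a power of two (2**55)

# The exact rational the hardware stores for the literal 0.1:
print(Fraction(0.1))          # 3602879701896397/36028797018963968
```

So "the floats in [0,1]" is a finite set of dyadic rationals, and a uniformly random real lands on one of them with probability zero.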

You may think I'm just being pedantic here, but I'm really not. The theorem you described is only interesting because there are an infinite number of rationals between zero and one, and yet the set still has measure zero.

But if you're willing to say that any rational number is theoretically "representable" in a computer, then you're already assuming that we can go beyond the natively available data types and come up with our own representations -- e.g., a tuple of numerator and denominator. And in that case, we can also come up with representations of irrational numbers; for example, you could have a datatype that can finitely specify any algebraic number, such as sqrt(2) or the golden ratio. So the fact that a typical CPU happens to only be able to natively deal with rationals is irrelevant.
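As a sketch of what "finitely specify an algebraic number" could mean (the representation and function names here are made up for illustration, not any standard library API): store an integer polynomial plus an isolating interval containing exactly one of its roots. That finite data pins down an irrational exactly, and you can refine it to any precision by bisection.

```python
from fractions import Fraction

def poly_eval(coeffs, x):
    """Evaluate a polynomial given as [c0, c1, ...] at x (Horner's rule)."""
    acc = Fraction(0)
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

def refine(coeffs, lo, hi, steps):
    """Bisect the isolating interval; the root stays inside it,
    assuming the polynomial changes sign exactly once on [lo, hi]."""
    lo, hi = Fraction(lo), Fraction(hi)
    for _ in range(steps):
        mid = (lo + hi) / 2
        if poly_eval(coeffs, lo) * poly_eval(coeffs, mid) <= 0:
            hi = mid   # sign change in [lo, mid]: root is there
        else:
            lo = mid
    return lo, hi

# sqrt(2), represented exactly as: the unique root of x^2 - 2 in (1, 2).
coeffs, a, b = [-2, 0, 1], 1, 2
lo, hi = refine(coeffs, a, b, 50)
print(float(lo))  # approximately 1.41421356...
```

The representation itself is exact and finite; only the printed approximation is lossy. The same scheme covers the golden ratio as the root of x^2 - x - 1 in (1, 2).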

The beauty of the diagonalization argument described in this paper is that it applies to all possible discrete representations, no matter how weird or complex, without having to make any assumptions.
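The diagonal construction itself is mechanical enough to sketch in code. Assume any representation scheme at all, abstracted as a function giving the k-th decimal digit of the n-th representable real (the toy enumeration below is invented for illustration); then a number that differs from the n-th real in its n-th digit cannot appear anywhere in the enumeration.

```python
def diagonal_digits(digit_of, n_digits):
    """digit_of(n, k): the k-th decimal digit of the n-th enumerated real.
    Returns the first n_digits digits of a real absent from the enumeration.
    Output digits stay in {1, 2} to dodge 0.0999... == 0.1000... ambiguity."""
    out = []
    for n in range(n_digits):
        d = digit_of(n, n)
        out.append(1 if d != 1 else 2)  # force a mismatch at position n
    return out

# Toy enumeration standing in for "all discrete representations":
def digit_of(n, k):
    return (n + k) % 10

diag = diagonal_digits(digit_of, 10)
for m in range(10):
    assert diag[m] != digit_of(m, m)  # differs from real m at digit m
```

The punchline is that `digit_of` is a black box: the argument never inspects how the representation works, which is exactly the generality the paper's version of the argument exploits.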



