I went into this hoping for a mathematical thought experiment, but this is merely a historical thought experiment in the sense of "Wouldn't it be nice if mathematicians had accepted CH early on?". It seems the big selling point of accepting CH is that mathematicians would be less hesitant to use nonstandard analysis.
For an actual thought experiment that rejects the continuum hypothesis, I rather enjoy the explanation found at:
That sort of argument makes me nervous. One of my favorite mathematical quotes is a related one about the Axiom of Choice, referenced and explained at https://math.stackexchange.com/a/787648: "The Axiom of Choice is obviously true, the well-ordering principle obviously false, and who can tell about Zorn's lemma?" That sounds like the "obviously false" branch of a similar debate about the continuum hypothesis.
I've generally found the opposite. Polemically, a true mathematician can write theorems where every single proof is riddled with errors (actual errors, not just "typos") but all results, building upon each other, are still true; a true physicist can tell you what the result of a calculation will be even if they are unable to actually do the calculation.
Maybe you would call that "a mathematician's/physicist's intuition", rather than "human"?
I think intuition is not something you’re born with, it’s something you build through experience.
The best physicists I know don’t sit down and calculate that often. They rather play with “cartoon pictures” to figure out what problems are interesting and what their solution might look like, and only throw math at the most promising of these problems.
I don't think this is a slam dunk. For this argument to work, the dart probability must be 100% for any function. This is supposed to be clear "intuitively", and then, by constructing a counterexample using the CH, it's concluded that the CH is false.
But the space of functions from R to countable subsets of R is so vast (and so far removed from the physical world) that I don't think it's possible to have any "intuition" of what's possible in that space. And indeed, we see that there's a construction of a function f that doesn't conform to the "intuition". If there's an "intuitive" line of reasoning and a formal one, and they disagree, shouldn't we just conclude that our intuition is flawed?
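For anyone who hasn't read the article, here is my rough reconstruction of the dart argument in question (it is Freiling's axiom of symmetry, so the sketch below is my paraphrase, not the author's exact wording):

    % Sketch of Freiling's dart argument (my paraphrase).
    % Let f map each real to a countable set of reals, and throw two
    % darts at [0,1] independently, landing at x and y.
    % Each f(x) is countable, hence Lebesgue-null, so
    \Pr[\, y \in f(x) \,] = 0 \quad\text{and, by symmetry,}\quad \Pr[\, x \in f(y) \,] = 0.
    % "Intuition": some pair (in fact almost every pair) satisfies
    y \notin f(x) \quad\text{and}\quad x \notin f(y).
    % But under CH, well-order the reals with order type \omega_1 and set
    f(x) = \{\, y : y \preceq x \,\},
    % which is countable; then every pair has x \preceq y or y \preceq x,
    % i.e. x \in f(y) or y \in f(x), contradicting the intuition.
    % Freiling's conclusion: the intuition is right, so CH is false.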
> shouldn't we just conclude that our intuition is flawed?
Alternatively, we might conclude that our intuition is right and instead our definition of real numbers isn't exactly what we want for some cases/questions.
The first flaw I see is that the author is imprecise by commingling probabilities (0%, 100%) with absolutes (possible, impossible, none, never, etc).
> After all, probability-zero events do happen. Not a problem! Just pick two new real numbers! And if this fails, pick again!
Probability-zero events happen all the time. The probability of getting any specific value selected uniformly at random from the unit interval (say, 0.232829) is zero.
Probability-zero events should not be conflated with properties that exist nowhere.
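To spell out the arithmetic behind "probability zero but still possible" (my gloss, using the standard uniform distribution):

    % For X uniform on [0,1], probability is length (Lebesgue measure):
    \Pr[\, a \le X \le b \,] = b - a \quad (0 \le a \le b \le 1),
    % so for any single point, e.g. x = 0.232829,
    \Pr[X = x] = \lim_{\varepsilon \to 0^+} \Pr[\, x \le X \le x + \varepsilon \,] = \lim_{\varepsilon \to 0^+} \varepsilon = 0,
    % yet X always realizes some value, so probability zero is not impossibility.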
> We can now state that for any such mapping, none of the three reals is in the countable set assigned to the others. And this entails that we can prove that |𝒫(ω)| > |ω₂|! In other words, we can prove that there are at least TWO cardinalities in between the reals and the naturals!
That's... not how cardinalities work. Just because you have two sets with different elements does not mean they have different cardinalities. For instance, consider the set of integers {..., -1, 0, 1, 2, ...} vs the set of half-integers {..., -1/2, 1/2, 3/2, 5/2, ...}. These clearly have different elements, but you can easily construct a bijection between the two (just add 1/2 to each element in your set of half-integers), so you can demonstrate that they have the same cardinality.
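If it helps, here is a throwaway sketch of that bijection; the function names are mine, purely for illustration:

    from fractions import Fraction

    # The shift n -> n - 1/2 is a bijection between the integers and the
    # half-integers: two sets with no elements in common but the same
    # cardinality.
    def to_half_integer(n: int) -> Fraction:
        return Fraction(n) - Fraction(1, 2)

    def to_integer(h: Fraction) -> int:
        return int(h + Fraction(1, 2))

    for n in range(-3, 4):
        assert to_integer(to_half_integer(n)) == n  # round-trips exactly
    print([str(to_half_integer(n)) for n in range(-3, 4)])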
> We define f(x) to be {y | y ≤ x}
Um, no. This demonstrates the existence of one such mapping. It does not demonstrate that the set of such mappings covers any substantial portion of the entire space of possible mappings.
Also, this entire argument seems to be founded on https://en.wikipedia.org/wiki/Freiling%27s_axiom_of_symmetry. It is not clear that Freiling himself accepts this axiom -- "Freiling's argument is not widely accepted because of the following two problems with it (which Freiling was well aware of and discussed in his paper)."
> Probability-zero events happen all the time. The probability of getting any specific value selected uniformly at random from the unit interval (say, 0.232829) is zero.
I would strongly challenge that claim. First, you did not choose that number uniformly at random; you chose it from at best a countably infinite subset, or more realistically, from a finite subset. And secondly, I do not think you can describe a situation where a number is actually chosen uniformly at random from the unit interval.
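For what it's worth, you can see the "finite subset" point directly in software; a quick Python sketch (true of CPython's generator, which emits 53-bit dyadic rationals):

    import random

    # CPython's random.random() returns floats of the form k / 2**53 with
    # k an integer in [0, 2**53): a finite set of dyadic rationals, not
    # anything like a uniform draw from the continuum.
    x = random.random()
    assert (x * 2**53).is_integer()  # exact: scaling by a power of two is lossless
    print(f"{x} = {int(x * 2**53)} / 2**53")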
It is, and if you choose the number uniformly at random, it's not just "effectively" zero, it is precisely zero.
GP's point, as I understand it, is that it is not actually possible to choose a number from [0, 1] uniformly at random in "real life".
I think you could argue that, e.g. in the dartboard thought experiment, the probability of choosing individual points doesn't really matter: only probabilities of measurable subsets with positive measure matter.
I guess, but the set is not even countably infinite. "Selecting at random" is something that happens in the real, non-infinite world, not in the mathematically rigorous world where infinities can exist. So, no, probability-zero events do not happen in either.
Not necessarily - you might just come up with a number you know is in the set, say pi/4. I know it's in the set because it satisfies the conditions that define it. Still, the odds of that particular number being picked are zero.
It wouldn't "select" as if from an infinite deck of cards, but rather generate a number we know is in the infinite set. It can very well take an infinite amount of time to come up with the digits, though...
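That "generate the digits forever" idea is easy to make concrete; a toy sketch, in my own framing:

    import random
    from itertools import islice

    def uniform_bits():
        """Lazily emit binary digits of a 'uniform draw from [0, 1]'.

        The real number is never fully materialized: at any moment only
        finitely many digits exist, and emitting all of them would take
        forever.
        """
        while True:
            yield random.randint(0, 1)

    print(list(islice(uniform_bits(), 20)))  # first 20 bits of the "sample"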
Mostly I just find these arguments to be evidence that 'measure theory is not very interesting'; that is, it's concerned with proving things about mathematical objects that you won't find in reality, and that I therefore don't care about.
I wonder sometimes if there is a concrete version of the statement: 'there is an infinite number of interesting theorems', which would suggest that perhaps doing 'all the math' is not a good idea and we should only do the math which we find important.
(of course, others would disagree that measure theory is unimportant, anyway. Shrug.)
You need measure theory for probability, economics, QFT and physics, etc. And who is doing "all the math"? The vast majority of researchers who "do math" are largely in PDEs and other fields that simply use the technology of math for "things that you find in reality", like engineering problems or machine learning and so forth. And most mathematicians would agree that it is some of the most uninteresting and ugly kind of math.
Whereas the relative minority of people who study really abstract things, like say K-theory or large cardinals in set theory, are largely doing it out of interest in its intrinsic beauty. And this is especially true for, idk, some esoteric subfield of tropical geometry or modal logic or something, whose relevance to "things you find in reality" is completely orthogonal to the motivations of those people who choose to spend their lives uncovering the truths within them.
Math research isn't about blindly marching from proof to proof by mechanical deduction with no conception of the larger picture like a uniform bubble spreading outwards, it is done by small communities of scholars who hack away at a specific nexus of interesting problems and structures for their own sake.
Sometimes, as with spin bundles or Lie algebras or non-abelian geometry, yes, you can apply it to "real" problems, but that's not how the theory was developed. And as a theoretical physicist, I will tell you that you will find no greater blindness to the underlying structure, or ugliness in the use of the technology, than in those people who exclusively wield the technology against "real" problems instead of appreciating it for its own sake.
Well it is the theory that underpins those at the moment, but that doesn't say much about the counterfactual where it isn't.
But I think I can say with confidence that none of those fields care about the fact that hitting a rational number out of the reals has probability 0. If they do, something's wrong.
Edit: oh wow your reply got a lot longer after I responded
Measure theory is not about events of probability zero, it is about how to ignore them and prevent them from messing up results.
Let's say we play a game: we uniformly pick a random real between 0 and 1, and you win if it is rational; I win if it is not.
You can obviously see that it is unfair, but how do you prove it? You need a concept of integration that can easily ignore a dense set of discontinuities; Riemann integration is not going to give you any good results (in this case, at the very best, it would tell you that you win between 0% and 100% of the time, which is not very useful).
Measure theory and Lebesgue integration are a way to discard this noise.
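Concretely, the game above is the classic Dirichlet-function computation; my summary:

    % Indicator of the rationals on [0,1] (the Dirichlet function):
    \mathbf{1}_{\mathbb{Q}}(x) = 1 \text{ if } x \in \mathbb{Q}, \text{ else } 0.
    % Riemann: every subinterval contains rationals and irrationals, so all
    % lower sums are 0 and all upper sums are 1: not Riemann integrable.
    % Lebesgue: \mathbb{Q} is countable, hence a null set, so
    \int_0^1 \mathbf{1}_{\mathbb{Q}} \, d\lambda = \lambda(\mathbb{Q} \cap [0,1]) = 0,
    % i.e. you win with probability 0 and I win with probability 1.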
Actually, in measure theory you generally use L¹, L², etc. spaces, where functions are defined modulo null sets; that is, the function that is 1 on the rationals and 0 on the irrationals is considered to be the same function as the constant 0.
In measure theory on the reals, the value of a function at a specific point is generally considered irrelevant.
The difference is that I'm interested in a version of physics and economics that is not aware of the distinction between 'real' and 'rational'. Hence none of that should make a difference.
If you work with mostly continuous and/or well-behaved functions, you do not need most of these.
I suspect that in both physics and economics you might end up using stochastic methods that rely on concepts and techniques similar to those of measure theory.
One thing that is used a lot in physics is the Dirac delta[0]: in very informal terms, the derivative of the function f(x) = 0 for negative x, otherwise f(x) = 1.
Physicists are very good at working with concepts and abstractions before any formal mathy justification can be found, but the only way to work with a Dirac delta that makes sense formally is to define it in terms of measures.
That is not true; it is not the 'only way' to formally deal with them. A better way to think of them is as the vector-space dual of functions (/forms) under the pairing given by integration. No measures required. The measure-theoretic explanation is very much "fitting delta functions into our existing machinery" rather than any sort of inherent requirement.
Actually an even better way to think of delta functions is just as a geometric object, a point (or line/plane/etc). Which is somewhat related to the measure theoretic version, but much more simple to think about.
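In symbols, the dual-space picture is just this (as I would write it):

    % A distribution is a linear functional on smooth test functions \varphi.
    % An ordinary function f acts through the integration pairing
    \langle f, \varphi \rangle = \int_{\mathbb{R}} f(x)\,\varphi(x)\,dx,
    % and the delta "function" is simply the evaluation functional
    \langle \delta, \varphi \rangle = \varphi(0),
    % which requires neither a measure nor an actual function f.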
Even if accuracy is finite, the fact that, for example, circles aren't polygons is definitely relevant to physics. You might be able to get all of the relevant physics you need without the continuum by working with several disjoint sets of numbers (the rationals, the rationals-multiplied-by-pi, the rationals-multiplied-by-e, etc) but I'm not even sure of that.
The main problem with a statement like that is that "interesting" is extremely subjective. Personally, I often find math and CS to be more interesting when it's further from reality. To each his own, I suppose.
Sorry, I tried my best. I wanted to mention the thought experiment part, since that is the most interesting bit. (But I'm not sure why it was misleading?)
It’s a thought experiment for how mathematicians could have assumed the continuum hypothesis, and how dangerously close they came to making that mistake. It’s not an argument in favor of CH.