What are the 'real numbers', really? (vanderbilt.edu)
107 points by dhammack on Dec 26, 2013 | 96 comments



I do have an issue with this line: "Ultimately, infinitesimals were discredited and discarded by mathematicians (though they continued to be mentioned in some physics books many decades later)".

Infinitesimals have been made rigorous with modern mathematics.


Indeed, many mathematicians think in terms of non-standard analysis, and then translate their proofs into standard arguments, even if the non-standard ones can be made just as rigorous as the standard ones.

Terry Tao has a wonderful series of posts about hard and soft analysis, ultrafilters, and non-standard analysis. He writes

    I feel that one of the reasons that non-standard analysis is
    not embraced more widely is because the transfer principle,
    and the ultrafilter that powers it, is often regarded as some
    sort of “black box” which mysteriously bestows some
    certificate of rigour on non-standard arguments used to prove
    standard theorems, while conveying no information whatsoever
    on what the quantitative bounds for such theorems should
    be. Without a proper understanding of this black box, a
    mathematician may then feel uncomfortable with any
    non-standard argument, no matter how impressive and powerful
    the result.
and

    The main drawbacks to use of non-standard notation (apart
    from the fact that it tends to scare away some of your
    audience) is that a certain amount of notational setup is
    required at the beginning, and that the bounds one obtains at
    the end are rather ineffective (though, of course, one can
    always, after painful effort, translate a non-standard
    argument back into a messy but quantitative standard argument
    if one desires)
(from http://terrytao.wordpress.com/2007/06/25/ultrafilters-nonsta...)


I agree. It would be truer to say that infinitesimals are studiously ignored by modern mainstream mathematicians because they feel that Dedekind and co. have put the calculus on a firm footing way back when.

Anybody with a bit of curiosity or a dash of non-conformity will be suspicious of this narrative.

If anything, infinitesimals in their various guises carry a certain explanatory heft, and are quite beguiling little creatures if you take the time to get to know them. I'd be happy to elaborate or leave a few links here if anybody is interested.


I loathed limit-based calculus in High School and College. Later I read Elementary Calculus: An Infinitesimal Approach http://www.math.wisc.edu/~keisler/calc.html and it all came clear in a fraction of the pages. It's infuriating that most math curricula won't drop those old, bloated, overly formal calculus tomes to improve the clarity and effectiveness of the instruction method.


Added in edit to emphasise a point:

    If all you want to do is differentiate and integrate,
    then non-standard analysis is probably, for most people,
    a faster way to be able to do just that.
Now read on ...

Non-standard analysis has been put on a firm, formal footing. Theorems have been proven showing that (largely) it's equivalent to the regular form of analysis. Some things are easier to prove in standard analysis, some things are easier to prove in non-standard analysis, etc, etc.

However, this is only really of use if all you want to do is calculus. If you want to go beyond calculus, almost everything (in this and related areas) is about sequences, limits, limiting processes, functions, and transformations. There, non-standard analysis tends not to help, and unless you've done calculus the standard way, you have to learn all this stuff in an unfamiliar and difficult-to-visualize, abstract area.

One of the main reasons for continuing to learn calculus in the epsilon-delta limiting-process manner is exactly that it's not only formally sound, it also gives you tools for moving beyond the rather limited world of differential calculus.

Speculating wildly from limited experience, it might also be the case that starting people with the non-standard approach in calculus is actually just as confusing. You may find that you really only got the insights you did because you had already struggled with the standard approach, and then were given something that made it all fall into place. Perhaps some people think the non-standard approach is easier, but in fact it's only because they already have the foundations from the other. Just a thought.


> If you want to go beyond calculus, almost everything (in this and related areas) is about sequences, limits, limiting processes, functions, and transformations. There, non-standard analysis tends not to help ...

Why do you say this? I ask because I've found internal set theory, Edward Nelson's axiomatic version of nonstandard analysis, to be a lovely tool for doing typical sorts of things in analysis.

You have to learn to wield the "standard" predicate [0], which is too dark an art for some mathematicians, I suppose. But, in my opinion, nonstandard characterizations of notions like convergence and continuity are delightfully simple and direct.

It also turns out that when you have nonstandard numbers at hand, infinity is an over-powerful abstraction for some purposes. Nelson came up with a new formalism for probability theory [1], for example, that makes finite spaces powerful enough to capture what's interesting for most purposes. Similarly, finite but unlimited sequences often are "long enough" to incorporate all the interesting behavior of infinite sequences.

0. Alain Robert's Nonstandard Analysis is a good starting point.

1. See his short book Radically Elementary Probability Theory. I love this book, and didn't much like probability theory before reading it.


I disagree. The vast majority of students take math classes for the practical applications - science and engineering - not to continue theoretical pure math study. Therefore the focus should be on effective teaching of applied math. I am sure that if a student wishes to explore their studies in pure mathematics they will be clever enough to learn whatever they need in specialized classes.


Actually, you are agreeing with me. You are saying that doing calculus was, for you, much easier using the infinitesimal approach. I'm not disagreeing with you. In fact, you'll find that advanced mathematicians think in that way, although they can drop back to epsilon-delta work if they need to (which they often do).

So we are in agreement. My point is that if you teach calculus that way you have immediately hamstrung anyone who might go on and do anything other than engineering or physics. In fact, there are deep theoretical arguments in physics where you need to use the standard approach, and the non-standard approaches are much more difficult.

My point is that if all you want is calculus then it's very likely that the non-standard approach is fine. I'm also arguing that this is limited thinking. Clearly you were never going to go further in these sorts of subjects - does that mean that everyone else should also be taught in a similarly limited way?

I also observe that limiting arguments are essential in anything other than the most direct and practical versions of engineering, so again, the point isn't in the calculus, the point is learning about limits.

Many people don't need any math at all beyond arithmetic, and I know a lot of people who proudly announce that they can't even do that. And to some extent it's true - most people don't need any math at all. Why were you bothering to take calculus? I'm sure you've never needed it.

But let me add that if all you want to do is arithmetic, why bother? Just use a calculator. If all you want to be able to do is differentiate, why bother? Feed it to Wolfram Alpha. If all you want to do is program, why bother? Hire someone to do it.

But yes, if all you want to do is high-school calculus, there are easier ways to learn the processes to jump through the hoops, pass the exam, and get the piece of paper. For most people that's all they care about. We probably agree on that.


For me it was completely the other way around: I was "taught" calculus using the infinitesimal approach but without any rigour. Statements like "As dx gets really really small, (x+dx)/x becomes 1" drove me crazy! Why was it sometimes OK to replace dx with 0?! The idea of an "infinitely" small number always seemed vague and suspect to me. So while I could do the calculations, I never trusted the results.

This meant that maths stopped having the same appeal to me as computer programming.

It was only years later when I revisited the epsilon delta arguments that it finally made sense. It was a revelation to me that you could explain all of calculus without ever talking about "infinite".

I wish it had been taught to me rigorously the first time around: I would have been much better off.


Conversely, I find infinitesimals vague and woo-woo, especially the way physicists and statisticians often use them. Once the epsilon-delta style "clicked" for me, it felt like second nature.

How can you tell whether it's standard analysis that's confusing per se or you just had poor math teachers?


That's funny. I despised my college calculus courses because they were so informal. Much like the author of this post said, my calculus education focused entirely on boring rote computation and not at all on proofs, the latter of which is the only part I really consider to be mathematics.


I'm interested! I think Dedekind cuts are reasonably understandable, but infinitesimals are on the surface of much of our calculus syntax, so I'd be glad to understand where they become so tricky formally.


I wrote a short paper on the topic once upon a time[1] which you may find interesting. It's part history of math, part philosophy of math.

It's not a great paper and most of the insights in it come from others, but here is some of the arithmetic of nilpotent[2] infinitesimals as shown in the appendix.

Imagine an entity which is not equal to zero but that, when raised to the power of 2 or higher, is equal to zero! Sounds odd, doesn't it? But it works! (ϵ is an infinitesimal)

ϵ != 0 but ϵ^n = 0 for n > 1

ok? so we get:

(ϵ + 1)^n = 1 + nϵ thus: (ϵ + 1)^−1 = 1 − ϵ

e^ϵ = 1+ϵ

(ϵ + 1)(ϵ−1) = −1, or alternately (1 + ϵ)(1 − ϵ) = −1

and finally (for calculus): ϵf′(x) = f(x + ϵ)−f(x)
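For what it's worth, this nilpotent arithmetic can be sketched in a few lines of Python as two-component "dual numbers" (a value plus an ϵ-coefficient, with ϵ² = 0); the class and function names here are my own, not from the paper:

```python
# Sketch of nilpotent-infinitesimal ("dual number") arithmetic: eps != 0
# but eps * eps == 0. Names (Dual, derivative) are my own invention.
class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b  # represents a + b*eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__


eps = Dual(0.0, 1.0)


def derivative(f, x):
    # eps * f'(x) = f(x + eps) - f(x): the eps-coefficient of f(x + eps)
    # is exactly f'(x).
    return f(Dual(x) + eps).b


# f(x) = x^3 has derivative 3x^2, so f'(2) = 12
print(derivative(lambda x: x * x * x, 2.0))  # 12.0
```

Note how (1 + ϵ)² comes out as 1 + 2ϵ, exactly the (ϵ + 1)^n = 1 + nϵ rule above.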

1: http://leto.electropoiesis.org/propaganda/The_Analyst_Revisi...

2: https://en.wikipedia.org/wiki/Nilpotent

edit: clarity, line breaks!


How do these differ from the [dual numbers](https://en.wikipedia.org/wiki/Dual_number)?


You appear to have an error. You write:

    (ϵ + 1)(ϵ−1) = −1, or alternately (1 + ϵ)(1 − ϵ) = −1
That alternative should surely be:

    (1 + ϵ)(1 − ϵ) = 1
Not least, in a commutative system (1+x)(1-x) = 1-x^2. Thus

    (1 + ϵ)(1 − ϵ) = 1 - ϵ^2 = 1


Thanks, well caught :)


To be fair to the author, he does discuss Robinson's approach to infinitesimals (but not Skolem's) in the very next paragraph.


The article actually does discuss Robinson's non-standard analysis, but it's a shame it does not deal with smooth infinitesimal analysis and intuitionism.


The problem with "points on a number line" as a definition for real numbers is that it's not clear how you can tell if you have all of them. You can populate a number line as densely as you care to using just rational numbers, but that's not all of them; you're missing out on numbers like the square root of two. You can toss in the non-integral powers of rational numbers, but you still won't have all of them; you're missing out on numbers like pi (or tau, if you prefer). Even after you toss in every solution to every differential equation you can name, and every number you can generate using well-defined finite or infinite series, there's probably some horrible diagonalization proof that says you still don't have all of them.


If you assume that there is no number bigger than zero but smaller than every positive rational (basically the Archimedean property) then you can prove that "you've got them all." You use Dedekind cuts.

Suppose there's a location on the line that's somehow missing - call it x. Let A be all the numbers less than x, let B be all the numbers greater than x, and that gives you your Dedekind cut. That Dedekind cut is, in a very real sense, x, and that means x is a real. QED.

That needs tidying up and formalising, but it does work.
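A cut can even be written down concretely as a predicate on the rationals. A toy sketch (function name mine), with sqrt(2) as the classic example:

```python
from fractions import Fraction

# A Dedekind cut, represented as a predicate picking out its "lower set"
# of rationals. sqrt(2) is the cut of all rationals q with q < 0 or q*q < 2.
# (Name sqrt2_lower is mine, for illustration only.)
def sqrt2_lower(q: Fraction) -> bool:
    return q < 0 or q * q < 2

# The cut splits the rationals cleanly around the "missing" point sqrt(2):
print(sqrt2_lower(Fraction(14, 10)))  # True  (1.4^2 = 1.96 < 2)
print(sqrt2_lower(Fraction(15, 10)))  # False (1.5^2 = 2.25 > 2)
```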


If you're using the Dedekind cut definition why use the line at all? Just say a real is any set of rationals bounded above, with arithmetic defined the obvious way; defining equality is slightly fiddly but it's fiddly with a number line too. What does the line visualization gain you?


Because it was asked how we knew we "got them all", referring to points on the line. The reals are a way of modelling the line, the line is a way of visualising the reals. Each is complementary to the other.

And besides, the rationals are totally ordered, and their completion is totally ordered, so it makes sense to think of them as arranged in a line. The problem is that the reals are very, very strange in some ways, and people do get seduced into thinking they understand them, whereas usually it's just a case that they've got used to them.


The set of numbers that can be uniquely defined in the English language in a finite number of letters is a countable set, because the set of finite sequences of English letters is countable.
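This counting argument is easy to make concrete: list the strings shortest-first, then lexicographically, and every finite string eventually appears at some finite index (a toy sketch; the helper name is mine):

```python
from itertools import count, product

# Enumerate all finite strings over a finite alphabet: length 1 first,
# then length 2, and so on. Every finite string appears at some finite
# position, which is exactly what "countable" means here.
def all_strings(alphabet):
    for n in count(1):
        for letters in product(alphabet, repeat=n):
            yield "".join(letters)

gen = all_strings("ab")
print([next(gen) for _ in range(6)])  # ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```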


That's what I would think. But since the set of reals is clearly uncountable, it seems to me that the precise membership of the real numbers cannot be unambiguously defined. There must be uncountably many reals that are not the solution of any equation that can be made using a finite number of characters. But if a "real" number cannot be specified, in what sense does the number exist?


If you want to take a constructivist viewpoint, it doesn't exist. You can define constructible analysis, where you only work with numbers that you can approximate arbitrarily well using a Turing machine (this is a subset of all numbers that you can define, since you can do tricks with the halting problem). But constructible numbers still don't have decidable equality, since the halting problem reduces to constructible equality: is the number whose i-th binary place is 1 if and only if the Turing machine M halts on the i-th step equal to 0? You can approximate it arbitrarily well by running M for more and more steps, but proving that it's 0 would require proving that M never halts. (You can, however, get decidable ordering if you know a priori two numbers are unequal, simply by approximating them close enough that you can distinguish them.)

Personally, I'm not a constructivist; I think that these undefinable real numbers exist just as well as the ones that we can define. But that's a philosophical argument and I was never any good at those.


I have an issue with this (albeit parenthesised) line: "It turns out that, in some sense, the real numbers would still look like a line under infinite magnification, but the rational numbers would be dots separated by spaces."

In-between any two rational numbers there's an infinite number of other rational numbers. So, in any reasonable sense and at any level of "magnification", if you can "see" two dots representing two rational numbers then they are connected by a line of other little dots (just like the reals). Perhaps you could argue though that at "infinite magnification" there are no rational numbers to be seen, it's just empty space, whereas the reals of course still make a nice line.


Well, consider the ruler function[1], which is continuous on the irrationals and discontinuous on the rationals. The real numbers really are denser than the rationals; that's why something like the ruler function is possible (notably, a conceptual reverse, continuous on the rationals and discontinuous on the irrationals, cannot exist -- the rationals are too far apart). I'm pretty sure this is precisely the phenomenon the quote you extract is referring to: if you were standing, infinitely magnified, at a point on the ruler function, then the function would be continuous ("look like a line") if your point was irrational, but if your point was rational, there would be a measurable gulf separating you from the rest of the function.

[1] http://en.wikipedia.org/wiki/Thomae%27s_function
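Concretely, the ruler function assigns f(p/q) = 1/q to a rational in lowest terms (and 0 to every irrational). A sketch on rational inputs only, since floats can't represent irrationals exactly (the function name is mine):

```python
from fractions import Fraction

# Thomae's ("ruler") function restricted to rational inputs:
# f(p/q) = 1/q with p/q in lowest terms. Fraction reduces automatically,
# so the denominator is already the lowest-terms q.
def thomae(x: Fraction) -> Fraction:
    return Fraction(1, x.denominator)

print(thomae(Fraction(3, 6)))   # 1/2 (3/6 reduces to 1/2 first)
print(thomae(Fraction(22, 7)))  # 1/7
```

The values shrink as the denominators grow, which is why the function can squeeze down to continuity at the irrationals while jumping at every rational.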


Infinity is a pretty strange concept. :) I'm not sure arguing over it in this format is meaningful, but for the fun of it:

Consider that the integral of the ruler function from 0 to 1 is 0 (as is stated in your reference 1). In layman's terms you could express this as "there are infinitely more irrational than rational numbers between 0 and 1". At the same time, "for every two rational numbers there are infinitely many rational numbers in-between them". What sort of "picture" is this compatible with?

I still think that the only picture that really makes any sense is a solid line at any finite magnification, yet empty space at infinite magnification.


I don't understand the point you're trying to make?

The Cantor set shares the property that "for every two [points in the set] there are infinitely many [points in the set] in between", but no one would describe it as looking like a line. It's rather sparse.


What I take issue with is an "image" of two rational numbers as two separate dots, with empty space in-between. That's a very deceiving image IMHO, since I cannot think of a sane way to produce it.

The Cantor set is very different. It's even easy to give an example of two points in the set that can (sanely) be depicted with empty space in-between: 1/3 and 2/3. If I'm not mistaken that example also disproves your stated conjecture... ;)


First of all, let me point out that 1/3 and 2/3 are both rational numbers, so if you can imagine them with empty space between, you've imagined two rational numbers with empty space between.

> It's even easy to give an example of two points in the [Cantor] set that can (sanely) be depicted with empty space in-between: 1/3 and 2/3. If I'm not mistaken that example also disproves your stated conjecture... [that between any two points in the set, there is a third one] ;)

Fair enough. Consider, then, the intersection of the Cantor set with the irrational numbers (you can think of this as the "open Cantor set"). It is, obviously, a subset of the Cantor set, and really does have the property described.

Since I'm feeling embarrassed about that last time, a proof follows:

-----

The Cantor set consists of all real numbers in the interval [0,1] which have a "decimal" expansion in trinary which does not contain the digit 1. That is to say, they can be expressed in terms of powers of (1/3) such that the coefficient of each power of 1/3 is either 0 or 2. (1/3 would usually be represented in trinary as 0.1, but is in the Cantor set because of its representation as 0.02222222...)

Let a,b be two irrational numbers in the Cantor set, a less than b. There is some decimal place at which they diverge, and since a is smaller, it has a 0 at that point, while b has a 2. Since a is irrational, it also has a 0 at some later point in its expansion (if every digit after that were 2, then a's expansion would be repeating and a would be rational). The number constructed by substituting a 2 for a 0 at that index is greater than a, less than b, and in the Cantor set.

Graphical representation of the proof:

    a = 0.......0......
    b = 0.......2......
then

    a = 0.......0....0.....
    c = 0.......0....2.....
    b = 0.......2..........
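The substitution step above can be sketched on truncated ternary digit lists (a toy only, since genuine Cantor-set expansions are infinite; the function name is mine):

```python
# Given truncated ternary digit lists (digits 0 or 2 only) for two
# Cantor-set numbers a < b, build c with a < c < b by turning one of a's
# later 0s into a 2, past the first place where a and b differ.
def between(a_digits, b_digits):
    # First index where the expansions diverge: a has 0 there, b has 2.
    diverge = next(i for i, (x, y) in enumerate(zip(a_digits, b_digits)) if x != y)
    c = list(a_digits)
    # A later 0 of a (guaranteed for irrational a, per the proof above).
    later_zero = next(i for i in range(diverge + 1, len(c)) if c[i] == 0)
    c[later_zero] = 2
    return c

a = [0, 2, 0, 0, 2, 0]  # 0.020020... in ternary
b = [0, 2, 2, 0, 0, 0]  # 0.022000... in ternary
print(between(a, b))    # [0, 2, 0, 2, 2, 0]
```

The result agrees with a up to the changed digit (so it exceeds a), agrees with b before the divergence point where it has a 0 against b's 2 (so it is below b), and still uses only digits 0 and 2.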


I don't think that works. The rational numbers are a dense subset of the real numbers. Informally, this means every real number is either a rational number or is arbitrarily close to a rational number. So at any magnification, if there were a hole filled by a real number, there would also be a rational number arbitrarily close to that real number.


Well, I'm not a big fan of this "infinite magnification" idea in the first place, but "arbitrarily close" is typically one of those things that infinity can beat.

(Compare, e.g., with the Fourier series of a function. It is a sum of terms which comes "arbitrarily close" to the function, but "at the limit", as the number of terms approaches infinity, the function and its Fourier series are one and the same.)


Well, "in some sense" gives a lot of wiggle room! I agree the magnifying glass analogy is bad, but here's one "sense" in which the rationals "have more holes" than the irrationals.

There exists a function of the reals which is continuous at every irrational point but discontinuous at every rational point. However, there is no function of the reals which is discontinuous on the irrationals but continuous on the rationals. In this sense, the irrationals are "more continuous" than the rationals.

That's about the best I can do, though, which I admit is a stretch.


"Since (a,0)+(c,0)=(a+c,0) and (a,0)×(c,0)=(ac,0), the points along the horizontal axis have an arithmetic just like "ordinary" numbers"

Holy hell that is clear, concise and compelling. If only my professors had explained it like this more often in my freshman calc class, which was so much more abstract and proof-based than anything I had encountered before. The only thing I remember from that time is hellishly long study groups late into the night with my classmates.


What are "real numbers"? A horribly misnamed fiction. Nearly all of them cannot be represented with a finite amount of information. I strenuously object to naming an uncountable set "real" when only a countable subset (measure 0 of the full set) can be worked with in any way at all.

We need to stop venerating the "real" numbers and start focusing on sets that are actually usable.


This is a similar argument to sqrt(2) being "not a number", back in the BC's, because it was not rational. And yet, you can construct it in a straightforward manner by making a right angled triangle with catheti of length 1, giving a hypotenuse of length sqrt(2). I suppose this would have made you equally uncomfortable back then.

One can definitely "work with" numbers that aren't easy to write. a + (-a) = 0, and this is valid for every real number a, not just "the ones which I can describe with a finite amount of information", or the ones I've written down at some point in my life.


The 'problem' with the reals is that there are numbers that cannot be constructed.

Every number that we can construct can be constructed in a finite number of symbols. For example, "sqrt(2)" is an unambiguous description. Without use of the sqrt function, we can also call it the positive number x such that x*x=2. However, every description is a finite string constructed from a finite alphabet. We can easily show that the set of all such descriptions is countably infinite. However, we can also show that the set of all real numbers is uncountably infinite. Therefore, there is an uncountable infinity of real numbers that cannot be constructed.


Indeed, and the constructable numbers are studied as a subset of the reals, as are the algebraics, and the computables. You can make a choice as to the domain of discourse. If you like, feel free to restrict it to the computables (or the constructables).

Then apply the diagonal argument. Take the computable numbers between 0 and 1, including 0, not including 1. These are countable, so we can write them in a list, taking a mapping k from the natural numbers: { 1, 2, 3, 4, ... } to the set of computable numbers in [0,1).

Now let's construct a new number. In the first decimal place we put 1 if the first decimal place of k(1) is 0, and 0 otherwise. In the second place we put 1 if the second decimal place of k(2) is 0, and 0 otherwise. And so on.

This results in a number that's not on the list, and is between 0 and 1. So it must, by our assumption, not be computable.
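The diagonal construction described above is mechanical enough to sketch on a finite toy list of decimal expansions (the function name is mine):

```python
# Toy diagonalization: given a (finite, for demonstration) list of digit
# expansions, build a number that differs from the k-th entry in its k-th
# decimal place -- 1 where the diagonal digit is 0, and 0 otherwise.
def diagonalize(digit_rows):
    return [1 if row[k] == 0 else 0 for k, row in enumerate(digit_rows)]

rows = [
    [0, 5, 3, 9],
    [2, 0, 0, 1],
    [7, 7, 7, 7],
    [1, 4, 1, 5],
]
print(diagonalize(rows))  # [1, 1, 0, 0] -- differs from every row on the diagonal
```

Applied to an enumeration of the computables in [0,1), this is exactly the number that "falls off the list".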

Things become tricky.

So there's a choice to be made, and most mainstream mathematicians have decided to talk about, use, study, and otherwise accept the existence of the real numbers because it's convenient.

Feel free to choose otherwise.


I can't find a flaw in your argument, but it seems like it leads to a contradiction.

Let a constructable number be one which can be unambiguously described in a finite string. Because we are working from a finite alphabet, we can trivially see that there is a bijection between the constructables and the integers (if we have n symbols, then each string can be read as an integer in base n, so there are no more constructables than integers. We can also show that all integers are constructable, so there are no fewer constructables than integers). Now, take the set of all constructables, and use the diagonal argument to construct a new number. We can see that this number is not constructable; however, it would appear that I have just unambiguously described it, meaning that it must be constructable.

The only potential hole I see is that the ordering of the constructables when I apply the diagonal argument is ambiguous, but we can unambiguously order them by the lexical ordering of their 'canonical' description, and we can unambiguously define the canonical description as the smallest one when translated into a base-n integer.

I suspect that doing the above will run into problems with computable numbers (as it likely involves the halting problem); however, it appears to be an unambiguous description of a real number that is not constructable. Obviously there is some flaw in this reasoning.


You're using too many imprecise terms. First, what does it mean for something to be able to be "unambiguously" described? As opposed to ambiguously described? You have to define it.

Second, what does it mean for something to be described (unambiguously or otherwise) with a "finite string?" What is a "string" here?

You're playing too loose with these ideas and it's biting you. You have to start by defining them precisely. For example, I don't see at all how the new number not on your list is "described unambiguously." It's presumably not enough to say "there is some number not on my list, we will call it x" since we know there is more than just one such number. How is that unambiguous?

In any case, that's why you have to define these things precisely.


How do you know if a number is constructible? It is described by a computer program. Although the set of computer programs can be enumerated, determining whether a program will ever print out any digits cannot be decided in general. So yes, Halting Problem. :)

So you cannot list all constructables in a constructive manner because the list itself is not constructible.

A related concept

https://en.wikipedia.org/wiki/Chaitin%27s_constant


I reckon the flaw is that your bijection between N and your set of "constructable" numbers is not itself "constructable". In fact, your argument can probably be turned into a proof that there is no such "constructable" bijection.


The Reals are venerated because they're actually usable. Other number systems tend to be a gigantic pain in the ass to get any work done with.

The Reals are constructed specifically to be the smallest set that has some nice algebraic properties, like Least Upper Bounds. Sets that model the real world, like the constructables, countables, computables, etc. tend to be subsets of the Reals, and therefore don't have those properties. That absence makes life difficult.

The Real Number system, like almost everything in mathematics, is an approximation of reality that makes a trade-off between faithfulness and tractability. As it turns out, gaining more of the former loses you quite a bit of the latter. It's generally not worth it.


Do you have any idea of what set we should use to replace them with? The rational numbers can do a lot, but we have discovered that there are numbers worth talking about (and which can be described) that are not rational. Whatever replacement you propose must be usable where ever we would use real numbers, and must be at least as simple to use.


One possible replacement is the computable numbers [1]; this includes the algebraic numbers and some common transcendentals (e, pi), and you can even build up something akin to standard analysis (computable analysis [2]).

[1] http://en.wikipedia.org/wiki/Computable_number

[2] http://en.wikipedia.org/wiki/Computable_analysis


Unfortunately, there exist numbers which are definable but not computable.


Sure. Chaitin's Omega is a good example. The question is whether such numbers occur in the real world.


The trick here is defining "occur" and "real world" precisely. Are you saying the act of me writing down those symbols and expressing the idea does not count as "occurring in the real world?" ;)


This reminds me of the self-defeating property of an "uninteresting" number -- a reasonable definition might be "any number that does not have any property of human interest", but then of course there is a smallest such number, and so that has the interesting property of being the first uninteresting number, a contradiction!

You're right though that a more precise definition of "real-world numbers" is needed, but I confess that my attempts to think of one in the past few minutes have been essentially circular (coming down to "the ones we know how to compute")!


Well, we can and have made the idea of a computable number precise: http://en.wikipedia.org/wiki/Computable_number

It's not clear whether the universe is computable, however, in the sense that we only find computable numbers in nature. This is kind of an epistemological catch-22, though. How would we know whether this were the case or not?


Unfortunately, the set of computable numbers, while countable, is more difficult to work with than the much more conceptually reasonable set of reals. Luckily, we have pretty great theoretical tools for dealing with uncountability, so I don't think the fact that the vast, vast majority of real numbers are unidentifiable is really that big of a problem. That, and the continuum is a really useful concept, even if it very well might not have any basis in physical reality.


IMHO real numbers are anything but. I believe there isn't a single thing in the universe that is represented by a real number. Any physical law that involves pi should be considered statistical in nature. There are no perfect circles, only things that are really well approximated by them.


Amen to that. The real world is discrete. The real numbers in our equations are just approximations.


Imaginary numbers are also a horribly misnamed fiction. For decades, my dad was mesmerised by how, if you plugged a bigger-than-c value into the Lorentz transformations (he had long since forgotten the form of the transformation), you would become "imaginary". To set him straight, I asked him: if instead we named them "Green" numbers, would you become green if you went faster than the speed of light?


What do you mean by "represented with a finite amount of information"? Are you referring to their representation in a positional notation like decimal or binary? Or are you referring to the much subtler and more advanced fact that almost all reals are uncomputable? The former isn't really true, and the latter, while true, is subtle enough that it doesn't matter for the vast majority of mathematics (and to replace the reals with the computable numbers would make most of mathematics messy).


I don't think "represented with a finite amount of information" means computable. For example, consider BusyBeaver(n). We have shown that there exists an n such that BusyBeaver(n) is uncomputable. Yet "BusyBeaver(n)" still contains enough information to describe this number. However, because all descriptions are finite strings from a finite alphabet, we can show that only a countable infinity of descriptions exists, while there exists an uncountable infinity of real numbers. Therefore, most real numbers cannot be unambiguously described.


Again, it just comes down to what we mean by "information" and "description." We can certainly construct the real numbers using a finite amount of precise language, so it's reasonable to claim that we have described all real numbers. Heck, even the existence of the English phrase "all undescribable real numbers" evokes an interesting linguistic and philosophical debate, similar to http://en.wikipedia.org/wiki/Interesting_number_paradox.


We can construct the set of all real numbers with a finite amount of information. However, that set contains elements which we cannot precisely describe with a finite amount of information.

The phrase "all undescribable real numbers" does not introduce any problems, because we have still not described any specific undescribable number. We would run into a problem with a phrase such as "the smallest undescribable real number", as that would be a description of a specific undescribable real number. Fortunately, that particular phrase does not raise any problems, because we can simply conclude that there is no smallest undescribable real number, in the same way that there is no smallest real number in general.


I don't see why we need to be shackled to the bounds of countability.


"Points on the line" is fine for the first, second, ..., tenth cut at a definition. Sure, completeness is the biggie for the reals compared with the rationals, algebraics, etc.

Still, as in the OP, mentioning Dedekind cuts is okay since it is one way to establish completeness, but there is much more, e.g., as in

John C. Oxtoby, Measure and Category.

and even that doesn't fathom all that is special about the reals. E.g., for just a little more, there is the continuum hypothesis, that little thing!

The OP wants to say that by mentioning Dedekind and completeness he is getting at what the reals really are; no, instead he is just cutting one layer deeper of something that has likely some infinitely many layers available.

Yes, yes, yes, I know; I know; the reals are the only complete, Archimedean ordered field, okay, after we have defined completeness, Archimedean ordered, and field and explained why these are important.

So, back to "points on the line" -- it's actually pretty good for a first cut.


I have a Master's in Applied Math.

The comments about how "few students take [Real Analysis]" don't square with my experience and survey of an undergraduate mathematics education. Such a course is often called "Advanced Calculus", and is a required course for a Bachelor's-level education in Math. I also understand that in the European-style approach to teaching Math, students start off with a foundational approach to Calculus through Real Analysis, not the hand-wavy & computation-driven Calculus course.

The equivalence class approach attributed to Cantor is more generalizable in discussing sets. The theoretical foundation of Fourier Transforms lies in a similar completion of functions.


"I also understand in the European-style approach to teaching Math, students start off with a foundational approach to Calculus through Real Analysis, and not the hand-wavy & computation-driven Calculus course."

Yes. Where I graduated, all engineering majors learn the axiomatic definition of the real numbers including the "supremum (least upper bound) axiom" at the beginning of the first calculus class.


Along a similar vein you may also enjoy http://arxiv.org/pdf/1303.6576

The foundations of analysis by Larry Clifton. I always enjoy checking out the references in his papers as they are often hundreds of years old or more.


What other papers did he author?

This is a curious paper. It's a rigorous derivation of (positive) real numbers without the use of 0 or negative numbers anywhere. It isn't very useful, although the fact that this can easily be done is by itself interesting.

I have sometimes thought about the possibility of us encountering an advanced alien civilization and trying to match our math to theirs. Someone told me recently that if aliens were able to get into space, we can take it for granted that they knew negative numbers (in addition to more advanced concepts). I disagreed. Negative numbers are very convenient, but all the math that's needed for modern physics can, I think, be built up without them in a way that's more bulky and awkward, but not an order of magnitude more so. This paper is weak evidence for my position.


The only other ones I know of are on his website http://cliftonlabs.net/TechnicalArticles.html


This is a great article but unfortunately has one thing horribly wrong: Democracy far preceded the Age of Enlightenment. A form of democracy was already in place in ancient Greece at around 500 BC. Newton and the Age of Enlightenment were much later, at 1600+ AD. See Wikipedia: http://en.wikipedia.org/wiki/Democracy#History, http://en.wikipedia.org/wiki/Age_of_enlightenment, http://en.wikipedia.org/wiki/Isaac_Newton.

Other than that, a great article!


Hehe. I balked at that too...

It also states that you cannot order the field of complex numbers. Whereas I seem to recollect that there are ways to do so. For instance, z1 < z2 if x1 < x2 or x1 = x2 and y1 < y2.


By your definition of <, 0 < i and i^2 < 0. However, the OP requires that 0 < p AND 0 < q => 0 < p * q, which is not fulfilled by your < for p = i = q.
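The failure is easy to check mechanically. A small Python sketch of the proposed dictionary order (`lex_less` is just an illustrative name):

```python
def lex_less(z1, z2):
    """The proposed order: z1 < z2 iff Re z1 < Re z2,
    or Re z1 == Re z2 and Im z1 < Im z2."""
    return (z1.real, z1.imag) < (z2.real, z2.imag)

i = 1j
assert lex_less(0, i)      # i counts as "positive" under this order...
assert lex_less(i * i, 0)  # ...but i*i == -1 is "negative": the positives
# aren't closed under multiplication, so this order doesn't make the
# complex numbers an ordered field.
```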


Wow, vector multiplication suddenly makes sense. I had never seen it described with polar coordinates.

It's wonderful to have this little insight now. It's unfortunate that my math knowledge is so filled with holes.
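For anyone who wants to play with this: in polar form, magnitudes multiply and angles add, which you can check against ordinary complex multiplication using the standard library's `cmath`. A quick sketch (the sample values are arbitrary):

```python
import cmath

def polar_mul(r1, th1, r2, th2):
    """Multiply in polar form: magnitudes multiply, angles add."""
    return r1 * r2, th1 + th2

z1, z2 = 3 + 4j, 1 + 1j
r, th = polar_mul(*cmath.polar(z1), *cmath.polar(z2))
# Converting back from polar form agrees with ordinary complex multiplication.
assert cmath.isclose(cmath.rect(r, th), z1 * z2)
```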


" It seems that any proper theory of real numbers presupposes some kind of prior theory of algorithms; what they are, how to specify them, how to tell when two of them are the same.

Unfortunately there is no such theory."

http://njwildberger.wordpress.com/2012/12/02/difficulties-wi...


Guys like that in general have never seemed all that convincing to me.


I admit, some of his ideas are a bit.. well, I don't like when people talk about God seriously, and he sometimes mentions it, very rarely. But aside that, everything I can understand from what he says is true. It's a philosophical debate and if you are on the "real numbers" bandwagon (where most people are), you would lose integrity and your reputation might suffer even if you would speak to Wildberger about real numbers. It's a shame really how people don't see why it's bad to use abstractions which are so general that they can be fit for any kind circumstances. Even if you real this article about the real numbers, there is wishful thinking (where he says that the real numbers would look like a line even at infinity but the rationals wouldn't. well, I don't see why the rationals would stop especially given what he says later...), cherry picking / the whole axiom selection stuff for proving it... and yeah, the axiom idea is generally bad anyway. etc.


So I have a phd in math and I do tend to think less of other mathematicians who argue against infinite sets or uncountable sets and such, the argument's been over for a hundred years, you lost, deal with it. It's mathematical geocentrism.


a real number is "a point on the number line"

These posts are always stimulating.

My understanding of a line is that it is delimited by two points, but does not contain any points. To elaborate, no point could be "on" a line because a point has no extension, whereas a line does. This is the crux of the matter. Therefore a line is not "made up of" points. (By analogy, a plane could not be made up of lines.) This raises the question: what are lines made up of? Are they made up of anything? Is a point really where two (or more) lines would intersect if they could intersect? Is this what is meant by a Dedekind cut?


The "point on the number line" definition has always been non-rigourous. It is meant to imply the intuition that real numbers are what we typically think of as "numbers", notably that they extend to infinity, are ordered, and are dense (for any two distinct real numbers, there exists a real number between them). Of course from a rigorous perspective, this does not even suggest a difference between the reals and the rationals.

The line you are talking about in the rest of your post seems to be an 'unrelated' object that is used in geometry. I am not familiar with the formal definition of line that is used in geometry, but one way of defining a line is as the set of all points which satisfy "y=mx+b", for a given (m,b). A line segment would be the above definition with restrictions on the domain: x_0<x<x_f.


"My understanding of a line is that it is delimited by two points"

That is not how Euclid defined it and how it is still seen in geometry today. What you describe is called a (line) segment (http://en.wikipedia.org/wiki/Line_segment)

"but does not contain any points"

Lines extend indefinitely in two directions (if you go past Euclidean geometry, that 'indefinitely' changes meaning a bit)

One talks of a point being _on_ a line in geometry. 'contains' is something from set theory: "the set of all points on line l contains point P" is a perfectly valid expression (but "P is on l" is way shorter)


Excuse me. Of course. I was using line and line segment interchangeably there. Which I should have not been doing if I am aiming for clarity but I think my point (ahem) applies to line segments and lines that extend indefinitely in one or two directions. Presumably people will contend that even a line segment "contains" an infinite number of points. But if points have zero extension then even an infinity of them cannot sum to anything greater than zero. So I ask you again, does it make sense to think of lines (or line segments) as composed of points, I reckon it does not.


> But if points have zero extension then even an infinity of them cannot sum to anything greater than zero.

Can you make this rigorous? Because using the standard definitions, this statement is not true. It's true that a countable number of points must have total length zero (and you can even give a rigorous proof of this) but not necessarily true for a non-countable number of points. The study of "lengths of sets of points" is called measure theory.

I think it is unnecessary, however, to bring in the whole concept of length when defining lines. For example, we could simply define a line as a set of points obeying some special properties.
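To illustrate the measure-theory point: any countable set can be covered by intervals of arbitrarily small total length, by shrinking the intervals geometrically. A Python sketch (the listed rationals and the choice eps = 1/10 are arbitrary):

```python
from fractions import Fraction

def cover(points, eps):
    """Center an interval of length eps / 2**(n+1) on the n-th point.
    The total length is then < eps no matter how many points there are,
    which is why any countable set has measure (total length) zero."""
    return [(q - eps / 2**(n + 2), q + eps / 2**(n + 2))
            for n, q in enumerate(points)]

# Some rationals in [0, 1]; any countable list would do.
qs = [Fraction(p, q) for q in range(1, 5) for p in range(q + 1)]
intervals = cover(qs, Fraction(1, 10))
total = sum(b - a for a, b in intervals)
assert total < Fraction(1, 10)  # every listed point covered, total length < 0.1
```

The same trick fails for the full interval [0, 1], which is uncountable.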


"But if points have zero extension then even an infinity of them cannot sum to anything greater than zero."

Infinities are weird; anybody who wants to learn math has to accept that. 0.99999… does equal 1; there are as many even numbers as integers; etc. These things are 'true' not because they make sense initially, but because they make the most sense of all the other things we have thought of so far. Similarly, a set of Aleph-0 points can completely cover a line.
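The "as many even numbers as integers" claim just means there is a bijection, n <-> 2n. A quick Python sketch (function names are mine):

```python
def to_even(n):
    """Pair the integer n with the even integer 2n."""
    return 2 * n

def from_even(m):
    """Inverse direction: each even integer comes from exactly one n."""
    return m // 2

sample = range(-5, 6)
# The pairing is one-to-one, and hits every even number in the image range:
assert all(from_even(to_even(n)) == n for n in sample)
assert sorted(to_even(n) for n in sample) == list(range(-10, 12, 2))
```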


"a set of Aleph-0 points can completely cover a line"

Aleph_0 is the cardinality of the integers. I don't think that'll cover a line. For that, you need the cardinality of the reals, C, which may or may not be Aleph_1.


OOPS. Thanks


Infinities are weird.


Perhaps your intuition changes if you think about the line (in a plane) coinciding with the x-axis. Don't you think it is reasonable to define that line as the set of points (in the plane) having y=0? Also, should that line not be identical (isomorphic) to the set of real numbers - which is just a set of numbers? And should not all other lines in the plane be identical (isomorphic) to the first line?


Would it make a difference if you substituted "infinitesimal extension" for "zero extension"?


That's the thing though. As I understand it, or as it's said to be: points have zero extension. So, no amount of points, not even an infinity of them could ever have extension. But, it should make sense for a line to be composed of entities with infinitesimal extension as you say. I have seen the term linelet used before for these entities.

Charles Sanders Peirce said in 1903, ”Now if we are to accept the common idea of continuity […] we must either say that a continuous line contains no points or […] that the principle of excluded middle does not hold of these points. The principle of excluded middle applies only to an individual […] but places being mere possibilities without actual existence are not individuals.”


> That's the thing though. As I understand it, or as it's said to be: points have zero extension. So, no amount of points, not even an infinity of them could ever have extension.

This isn't true for an uncountably infinite set of points, assuming by 'extension' you mean what is usually called 'measure' in modern mathematics. Modern theory is perfectly fine with saying that a line of nonzero length contains an infinite number of points of zero length, and trying to draw on Euclidean definitions of 'point' and 'line' to reach conclusions about real analysis is going to be unhelpful.

I'm not sure what that Peirce quote is trying to say.


> My understanding of a line is that it is delimited by two points, but does not contain any points.

A line is (or can be viewed as) an infinite set of points.

> To elaborate, no point could be "on" a line because a point has no extension, whereas a line does.

That seems to be a consequence of an unusual definition of "on".


I suspect you are talking about formal formulations of abstract geometry. That's not what we're talking about here. Here we are talking about lines as sets of points in the plane that satisfy an equation of the form ax+by=c. The set of solutions (x,y) is said to be a line, even though its elements are points.

You can deal instead with Euclid's axiomatization of geometry, and there "line" is an abstract thing defined by two points. Different animal, although seldom explained clearly by teachers, who often themselves don't really understand what's going on. (Although some do, and don't get the chance to explore these things because of the pressure of the curriculum, and students who don't care, but need to pass.)

All too often people get confused about this and are told to shut up by their teacher, whereas in fact the student has had an insight, and demonstrated deeper understanding.


Your argument is more philosophical than mathematical. Lines are traditionally defined as the set of all points which satisfy some criteria. In this case, a line is precisely made up of points.


In linear algebra, lines are sets of points, but in modern geometry, points and lines remain undefined terms, implicitly defined by their incidence relations (which points are "on" which lines).

You may find the Fano plane (a three-dimensional finite projective space) interesting:

  * http://en.wikipedia.org/wiki/Fano_plane (brief description)
  * http://math.ucr.edu/home/baez/octonions/node4.html (connections with higher math)


In geometry it is quite common to define a line as an infinite set of points (namely, those which satisfy a linear equation). Likewise with other figures, like a circle.


This is kind of a nonsensical, circular definition: what is a line? A set of points that can be mapped onto the reals. The meatier answer is down below, with Dedekind cuts (although "a set that fulfills the arithmetic axioms and least upper bounds" is also sufficient).
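To make the Dedekind-cut idea concrete: a cut is determined by a membership test on the rationals. A Python sketch for the cut representing sqrt(2) (the function name is mine):

```python
from fractions import Fraction

def in_sqrt2_cut(q):
    """Membership test for the Dedekind cut representing sqrt(2):
    the set of all rationals q with q < 0 or q*q < 2. The cut has no
    largest element, and since no rational squares to exactly 2, its
    complement has no smallest element either."""
    return q < 0 or q * q < 2

assert in_sqrt2_cut(Fraction(7, 5))      # (7/5)**2 = 49/25 < 2
assert not in_sqrt2_cut(Fraction(3, 2))  # (3/2)**2 = 9/4 > 2
# Rationals inside the cut get arbitrarily close to sqrt(2):
assert in_sqrt2_cut(Fraction(141421356, 100000000))
```

Each real number is then identified with one such set of rationals, and arithmetic and order are defined on the sets.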


The contents of the linked page were the first lecture I had in my undergraduate calculus course. At the end of the lecture, we all looked around at each other wondering what we had just signed up for.


Can someone explain the setup of the 0=1 exercise? It's poorly worded. Is it saying find (Y, 0, +, 1, ×), or is it saying find what "1" has to be to make it a valid field?


Given the field (Y, 0, +, 1, *), you need to show that either 0 != 1, or else 0 is the only element in the set.

I don't remember the precise proof, but if memory serves it derives from the existence of opposites and inverses, and 0 and 1 being unique in the set, due to the commutative properties of abelian groups.
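If it helps, here is a sketch of the standard argument, using only distributivity and additive inverses:

```latex
\text{First, } x \cdot 0 = 0 \text{ for every } x\text{, since}
\quad x \cdot 0 = x \cdot (0 + 0) = x \cdot 0 + x \cdot 0,
\quad \text{and adding } -(x \cdot 0) \text{ to both sides gives } x \cdot 0 = 0.

\text{Now suppose } 0 = 1. \text{ Then for any } x\text{:}
\quad x = x \cdot 1 = x \cdot 0 = 0,
\quad \text{so every element equals } 0 \text{ and } Y = \{0\}.
```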



