It's the other way around: real numbers are imaginary. All numbers are imaginary. Numbers are very useful concepts, and can be used in various ways to describe parts of reality. These descriptions can stop making sense if inappropriate kinds of numbers are used (e.g. -5 makes perfect sense as a displacement, but not as a count, and 5i makes perfect sense as part of a description of simple harmonic motion, but not as a displacement).
The use of the word "imaginary" to describe parts of complex numbers is quite unfortunate [1], because it misleads people into thinking there's something fake or weird about them, leading to confusion and entire articles trying to convince people they're valid. They're a totally normal, expected, and widely used feature of number systems as we understand them today.
The name they've got comes from how surprising and foreign they seemed to the people who first encountered them. Irrational numbers were similarly surprising and foreign to the people who first encountered them. Today we're perfectly comfortable working with irrational numbers like π, but the original reaction lives on in other meanings of "rational" and "irrational": based on reasoning, and not based on reasoning, respectively.
"Imaginary" seems like a tougher hurdle, and while the article says "imaginary numbers" many times, mathematicians and physicists don't really. We talk about complex numbers, and describe complex numbers with 0 real part as "pure imaginary". "Imaginary" and "real" are just labels for the axes of the complex plane, and neither is literal in this context.
[1] As is the word "complex", which most commonly now means "complicated", but is used here with another meaning: made up of multiple parts.
Before I studied math, I always slightly resented imaginary numbers as being "math wankery" and just defined because mathematicians had a compulsion to generalize and define new nonsense because they could, and not because it made any sense to.
On the way to changing my mind, I learned that the Fundamental Theorem of Algebra only works over the complex numbers (and not the "real" numbers), and I learned the beauty and simplicity of rotations in the complex plane, but maybe most convincing to me was a history lesson about quaternions.
Quaternions are an extension of the complex numbers, but they're not typically taught in higher math education these days, which contradicted my resentment that mathematicians were just obsessed with getting more and more abstract and general for the sake of it. Of course, they were in vogue in the 19th century (Maxwell's equations were originally written down using them), but mathematicians soon realized they just weren't as useful or as "nice" philosophically as complex numbers, and just about anything you can accomplish with quaternions was better accomplished with vectors of complex numbers.
That story played a big part in persuading me that there really is something special about complex numbers -- that maybe they're the most "natural" or "real" numbers of them all.
Also, note the origin story of the complex numbers:
Contrary to legend, they weren't discovered out of a desire by mathematicians to have roots to all quadratics such as x^2 + 1 = 0. It's perfectly sensible for an equation like that to just lack a solution: this just means the standard parabola never drops below zero. Analogously, if we calculate a rocket's payload mass to Low Earth Orbit and the answer is negative, we don't feel a need to find some deep meaning behind negative mass: we just say the rocket can't get to orbit at all. Simple.
It's cubics for which complex numbers were introduced. Cubics (with real coefficients), unlike quadratics, always have real roots, since one arm goes to +∞ and the other to -∞, so it has to cross the x axis somewhere in between. But when the cubic formula was finally discovered, it had this strange property that you frequently had to take square roots of negative numbers, then add those weird square roots to "regular" numbers, and if you just shut up and calculated, the weird parts would always cancel out and you'd get a "regular" number that solved the original equation. That is, you had to pass through the complex numbers in order to find the real solutions.
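This is easy to see numerically. Below is a sketch in Python; the specific cubic x³ - 15x - 4 = 0 is just an illustrative choice (one of its real roots is 4). Cardano's formula forces you through complex intermediates even though the answer is a plain real number.

```python
import cmath

# Cardano's formula for the depressed cubic x^3 + p*x + q = 0,
# applied to the illustrative example x^3 - 15x - 4 = 0.
p, q = -15.0, -4.0

# The discriminant term (q/2)^2 + (p/3)^3 is negative here (-121),
# so the square root is imaginary: the "casus irreducibilis".
disc = (q / 2) ** 2 + (p / 3) ** 3            # -121.0
u = (-q / 2 + cmath.sqrt(disc)) ** (1 / 3)    # a cube root of 2 + 11i
v = -p / (3 * u)                              # the matching cube root of 2 - 11i
root = u + v                                  # the imaginary parts cancel

print(disc)   # -121.0
print(root)   # ≈ (4+0j)
```

The weird parts really do cancel: u ≈ 2 + i and v ≈ 2 − i, summing to the real root 4.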
While Maxwell's equations certainly can be written with quaternions, they were not originally written that way. Maxwell originally just "wrote them out", meaning component-by-component. He had 20 equations! That's why H is sometimes used for magnetic fields: the electric fields were E, F, and G.
Nowadays we usually write 4 vector equations, or 2 in the language of differential forms.
If you do any electronics or signal processing (digital or analog) at all[1], you stop believing that complex numbers are "math wankery" immediately and embrace them as the only thing that's, uh, real.
You mention the Fundamental Theorem of Algebra; I would also add analytic signals. Those are complex by nature, and yet they make so much more sense than real signals ("real" in both senses: non-imaginary and "real world"). In fact, it turns out, real/real-world signals are better represented as the sum of an analytic signal and its complex conjugate.
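A tiny illustration of that decomposition (a toy sketch, not real DSP code): the analytic signal of cos(ωt) is e^{iωt}, and the real signal comes back as the average of the analytic signal and its conjugate.

```python
import cmath
import math

# Toy example: the analytic signal of cos(w*t) is exp(i*w*t).
w = 2 * math.pi * 5.0   # an arbitrary 5 Hz tone

def analytic(t):
    return cmath.exp(1j * w * t)

t = 0.013
z = analytic(t)
recovered = (z + z.conjugate()) / 2   # purely real: equals cos(w*t)
print(recovered.real, math.cos(w * t))
```

The imaginary parts cancel exactly, leaving the original real-valued signal.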
It's so weird how real numbers, the numbers we consider to be the normal ones, are the special case in a universe that seems to favor complex numbers for the most fundamental things.
[1] Of course also physics in general, but physics can be theoretical, while engineering is almost always rooted in practicality.
The octonions aren’t even associative, making them less “natural” and harder to work with.
But quaternions are quite useful for working with rotations in three dimensions. To be very technical, the unit quaternions form a double (universal) cover of the rotation group SO(3).
I didn't even know what quaternions were until I started to get into 3D graphics and found that they're often used instead of Euler angles to avoid camera "gimbal lock".
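For anyone curious, rotating a vector with a quaternion is small enough to sketch by hand. This is a toy example (the helper names are illustrative): a rotation quaternion q acts on a vector via the sandwich product q·v·q*.

```python
import math

def quat_mul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(q, v):
    # Embed v as a pure quaternion (0, v), then sandwich-multiply.
    qc = (q[0], -q[1], -q[2], -q[3])
    _, x, y, z = quat_mul(quat_mul(q, (0.0, *v)), qc)
    return (x, y, z)

# Quarter turn about the z axis: q = cos(θ/2) + sin(θ/2)·k with θ = 90°.
theta = math.pi / 2
q = (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))
print(rotate(q, (1.0, 0.0, 0.0)))   # ≈ (0, 1, 0)
```

Note the half angle θ/2 in the quaternion: that factor of two is exactly the "double cover" of SO(3) mentioned above.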
One of my favorite places where imaginary numbers pop up is in solving recurrence relations[0]. For instance, the Fibonacci numbers are an example of a recurrence relation and you can obtain the closed form solution by solving the characteristic polynomial. However, you can also get a closed form solution involving complex numbers when the roots of the polynomial are complex, even though the sequence only contains real numbers! From that you can express the result completely with real numbers if you use Euler's formula.
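A quick sketch of this in Python; the particular recurrence is an illustrative choice. The recurrence a[n] = a[n-1] - a[n-2] has characteristic polynomial x² - x + 1 with complex roots e^{±iπ/3}, yet the closed form spits out plain integers.

```python
import cmath

# One root of x^2 - x + 1 = 0; the other is its conjugate.
r = (1 + cmath.sqrt(-3)) / 2   # = e^{i*pi/3}

def closed_form(n, a0=0, a1=1):
    # General solution c1*r^n + c2*conj(r)^n; with a0=0, a1=1 the
    # constants work out to ±1/(r - conj(r)).
    rb = r.conjugate()
    return ((r**n - rb**n) / (r - rb)).real

# Compare with running the recurrence directly: the imaginary parts cancel.
seq = [0, 1]
for n in range(2, 10):
    seq.append(seq[-1] - seq[-2])

print(seq)                                        # [0, 1, 1, 0, -1, -1, 0, 1, 1, 0]
print([round(closed_form(n)) for n in range(10)]) # same sequence
```

Since |r| = 1, Euler's formula turns the closed form into the purely real expression (2/√3)·sin(nπ/3).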
Unfortunately this article repeats the same misconceptions about imaginary/complex numbers and their role in physics as most popular articles do.
Imaginary numbers were invented too early, that is why the usual explanations following the historical development are so mysterious. The notion of Sqrt(-1) does not help to explain what they actually are.
In fact imaginary "numbers" are not numbers. They cannot be ordered by size.
Imaginary "numbers" are also not vectors in the geometrical sense. This is because you can multiply two geometrical vectors and get a scalar as result (scalar product). But if you multiply two imaginary "numbers" you get another imaginary "number", so this is different to a geometrical vector. In fact imaginary "numbers" are transformations or operators, because they multiply like operators/matrices and they act on geometrical vectors like rotation matrices. Group theory makes all that very clear: Imaginary "numbers" are just the elements of the group SO(2), the group of rotation operations in 2D (multiplied with a scaling factor).
You can add a non-zero number which squares to zero to your number system [edit: but this doesn't let you divide by zero]. This results in the "dual numbers" and is practically useful for "automatic differentiation", where we represent quantities in the form x + x'ε, and have the rule that ε² = 0.
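A toy sketch of that in Python (the class and function names are illustrative): overload arithmetic so the ε coefficient carries the derivative along for the ride, using the rule ε² = 0.

```python
class Dual:
    """A dual number val + der*eps, with eps^2 = 0."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + a'e)(b + b'e) = ab + (a'b + ab')e, since e^2 = 0:
        # the product rule falls out of the algebra automatically.
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

x = Dual(5.0, 1.0)   # seed derivative 1 to differentiate w.r.t. x
y = f(x)
print(y.val, y.der)  # 86.0 32.0
```

Evaluating f at x + ε gives f(x) + f′(x)·ε in one pass, with no symbolic differentiation and no finite-difference error.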
Quantities which square to zero are also implicit in spacetime. A "lightlike" vector which has equal displacements in space and time between two spacetime “events” (e.g. the displacement between two points along the path of a photon) has a squared length of 0, compared to "timelike" vectors with negative squared length and "spacelike" vectors with positive squared length. (conventions about signs vary from source to source)
In other contexts it makes sense to define 1/0 to be the quantity ∞. There are two relevant models here, with different practical applications. One model (the projectively extended real line) adds a single number ∞ which is also equal to –∞, making the number line "wrap around" into a circle. Another (the affinely extended real line) adds two separate numbers +∞ and –∞ at the two ends of the number line.
Adding some kind of infinitesimals to the reals can be useful, as is done with hyperreal numbers, dual numbers, and other kinds of hypercomplex numbers, but it doesn't allow you to divide by zero; there is no hyperreal or hypercomplex number x that solves x = 1/0.
As I understand it, if you add 1/0 = ∞ you can't treat that ∞ as a quantity in the usual algebraic ways. In particular you can't multiply ∞ by 0 and get back some well-defined quantity such as 1; if you allow that, you quickly find that you can prove that all finite numbers are equal. The standard tricky example of this is in https://www.math.toronto.edu/mathnet/falseProofs/first1eq2.h...:
If we then divide both of these last expressions by a² - ab we get 2 = 1. This is only invalid because a² - ab = 0. Adding a different ∞ₙ for each value of n/0, as lisper suggests, doesn't help.
So, if you want to extend your number system with 1/0 = ∞, you either need to throw out some of the standard laws that permit algebraic manipulations like that, or you end up with all finite quantities being equal.
I am not at all a mathematician, but I just can’t wrap my head around how 1/0 isn’t infinity. 1/1=1, 1/0.5=2, 1/0.25=4, etc. The closer we get to 0, the larger the number. I just can’t accept that if we actually get to 0 in our progression, it is anything besides infinity. Perhaps it becomes its own class of number, like what happened with imaginary numbers. That is, infinity times zero is 1n (where n is a bookkeeping marker like i in the square root of negative numbers), or like with limits, where you have a “+ C” in there.
From above. What if you approached 0 from below instead?
1/-1=-1, 1/-(1/2)=-2, 1/-(1/4)=-4.
So you just proved that 1/0 equals both positive and negative infinity, and therefore you also just proved that positive and negative infinity are equal. This is one of the reasons why giving 1/0 a definition is rarely seen as useful.
Haha, ok this blows my mind a little more now. This definitely helps me conceptualize the problem a bit better, even if I’m still not satisfied with it. I still feel like there is a better solution, like giving 0 a sign or something.
That is in fact what we do in IEEE 754 floating point! We have +0 and -0; 1/+0 = +∞ and 1/-0 = -∞. However, this means that IEEE 754 floating point violates substitutability, the most basic property of equality, because +0 == -0 but in some cases f(+0) ≠ f(-0).
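You can poke at this from Python, whose floats are IEEE 754 doubles. (Python raises on float division by zero, so the 1/±0 = ±∞ part isn't directly visible here, but the substitutability failure is.)

```python
import math

# +0.0 and -0.0 compare equal, yet some functions distinguish them.
pz, nz = 0.0, -0.0
print(pz == nz)                  # True
print(math.copysign(1.0, pz))    # 1.0
print(math.copysign(1.0, nz))    # -1.0
print(math.atan2(0.0, pz))       # 0.0
print(math.atan2(0.0, nz))       # 3.141592653589793 (pi)
```

So +0 == -0 holds, but f(+0) ≠ f(-0) for f = copysign or atan2: equality without substitutability.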
Okay, but then you'd show that 1/-1=-1, 1/-0.5=-2, 1/-0.25=-4, etc. The closer you get to 0, the more negative the number. Then you just wouldn't accept that if you actually got to 0 in your progression, it is anything besides negative infinity. And then you've shown that negative infinity is equal to infinity.
Sure, I definitely agree that 1/0 doesn't give you a finite quantity. What I'm saying is that it doesn't give you a number at all unless you're willing to sacrifice major parts of algebra. And I'm not much of a mathematician either, so I might have gotten something wrong.
The problem with any finite number of infinities is that you can't maintain the identity (a/b)*b = a. To make that work you need a different number to represent a/0 for every value of a, including all of your infinities, so you end up with an infinite hierarchy of infinities.
You already can’t maintain the identity (a/b)b = a when you have a zero element, except by declaring "division by zero is illegal". Adding ∞ is not fundamentally different, but just takes a slightly modified list of exceptions.
Well, yeah, but the whole point of inventing a new kind of number to represent a/0 analogous to inventing a new kind of number to represent sqrt(-1) is to eliminate these you-are-not-allowed-to-do-that kind of exception. With sqrt(-1) it turns out to be straightforward. With a/0 it isn't, which is why sqrt(-1) is a thing in math and a/0 isn't.
For me, the whole point of modifying the number system is modeling different kinds of structures. Both complex numbers and projective numbers are interesting in their own right and also useful for making models of both physical reality and other kinds of mathematical patterns. Each system has its own internal logic with its own quirks.
>to eliminate these you-are-not-allowed-to-do-that kind of exception.
I'm not a mathematician by any stretch, but I never understood the inability to divide by zero to be some arbitrary rule; rather, it's simply not possible (because you can't multiply any number by 0 to get a non-zero answer). Then again, I tapped out after Calculus, so maybe that's just ignorance on my part.
We can divide by negatives though? They are just as unreal as zeros, so the only difference seems to be that division by zero is undefined, while division by negatives is well-defined. I think the real difference is that dividing by zero is not useful, so it has never needed defining.
> I think the real difference is that dividing by zero is not useful, so it has never needed defining.
Incorrect. The result of dividing by zero is provably "not well defined" - for standard mathematical definition of what is and what isn't "well defined".
Specifically, limits: for 1/x, as x approaches zero from above and below, the two one-sided limits are not the same, therefore the limit is not well-defined.
As x tends to zero from x > 0, 1/x tends to a limit of +infinity. Calculate 1/x for values of x = 2, 1, 0.5, 0.1, 0.01 and so on as you approach zero: the results are larger and larger positive numbers, without bound.
As x tends to zero from x < 0, 1/x tends to a limit of -infinity. Calculate 1/x for values of x = -2, -1, -0.5, -0.1, -0.01 and so on as you approach zero: the results are larger and larger negative numbers, without bound.
Even allowing "infinity" as a valid number, this isn't converging to the same answer.
I suppose that you could "define" the answer in much the same way that we define i as "the imaginary square root of -1" and thus get complex numbers.
The question that would follow though, is: If we e.g. define "q" as the limit of 1/x as x -> 0, this is a number that is both positively and negatively infinite at the same time, what then? What can we do with it? It turns out that complex numbers are useful and look a bit like quarter-circle rotations. But this q, the "self-contradicting infinite number", doesn't really seem to be useful.
Yet another not a mathematician here. Division by zero never converges (as a limit of any generating process, numerical or symbolic) in contrast to division by anything else. There is no amount of zeroes that can add up to a whole number, by definition of zero. Not an infinite amount, just “no” amount.
If you want to handle infinities as numbers, you have to maintain a “progress” somehow, or you end up comparing different stages of processes and get something like 2=1. Much easier to not dig a pit to fall into.
I'm saying I understand the idea that we can't divide by zero to make perfect sense. There is nothing arbitrary about it, which the parent comment seems to say. We don't just have a special rule saying "You can't divide by zero because we say so". It's a mathematical proof.
The concept of "division" or "ratio" has different interpretations depending on context.
The Euclidean (from The Elements) concept of ratio can certainly admit a 1:0 ratio, which we can write as 1/0 or ∞ if we like, and which is equivalent to the ratio of x:0 for any non-zero number x. There’s no inherent reason why the ratio of "apples per person" should be allowed to be 0:1 but not 1:0. From first principles it’s just as problematic to divide 0 apples among x people as to divide x apples among 0 people.
When we e.g. write the values of trigonometric functions, these should ordinarily be interpreted to be among the extended reals, so that tan(π/2) = 1/0 = ∞ (i.e. the ratio 1:0) is entirely reasonable.
Is that +∞ or -∞ though? The problem with division by zero is not that it's infinite; it's that the limit could be either +∞ or -∞ depending on which side you approach from. Therefore, it is not well-defined.
For example, when you look at a world map under the Mercator projection and you want to know where the north and south poles should be on the map, the answer is +∞ and –∞, which are not the same.
No, but there are algebraic structures that allow for this, like the Riemann sphere. The proper way to talk about this is the concept of "zero divisors". For Z the only zero divisor is zero, 0/0=0.
The closest thing to what you describe is the dual numbers (which together with the imaginary and hyperbolic numbers make up the geometric numbers), which have zero divisors and are defined by k^2=0 (where k is not in R).
This is a very interesting number and it can help clean up a lot of problems when using complex numbers.
No, or at least not in a field. From the field axioms you can directly prove that 0x = (a-a)x = ax - ax = 0, and therefore 0 doesn't have a well-defined inverse. You can look at other algebraic structures, but those behave less like numbers.
Yeah. Great mind teaser. I can imagine an imaginary number (-/+) "inf" for divisions by a number approaching zero from the left/right, yet the algebra would not be possible to define properly because you can approach zero at different rates.
The undefined cases, where the left/right limits are not equal, could get an imaginary number "shrug", though it would be even less useful.
Or is anyone able to define a useful algebra for these? I'm really curious.
yes, check out Yaglom's "Complex Numbers in Geometry" for a fairly complete treatment of this topic. You can extend the dual numbers and double numbers with their own special infinities, similarly to how the complex numbers can be extended to include a single point at infinity.
It is! Calling them 'imaginary' seems like a misnomer, they're just a different _kind_ of number. It becomes much clearer when you plot it in 2D space.
The number line is just one dimension of the possibly infinite dimensions you can plot a number in.
Apart from the waffle the actual news [0] was that some people claim that they need complex quantum mechanics, quantum mechanics over C instead of just R. For some reason our universe might require complex structure.
Dubious. Any Computer Scientist can tell you that any Turing complete system can emulate any other such system. Perhaps the Quantum aspect of reality makes the difference, but I deeply suspect it’s the same.
Quote: "...or millions of times colder than the insides of your fridge."
I smiled. Eh, modern journalism: just a few orders of magnitude wrong here, but who's counting anymore, yes?
It's funny that the article uses quantum mechanics to argue that complex numbers are "real", since I would argue that quantum mechanics shows that irrational numbers aren't "real".
Quantum mechanics belies the idea of space being a continuum, and so you cannot actually physically manifest numbers like sqrt(2) and pi, because Euclidean right triangles and circles do not actually exist.
Space is quite continuous in quantum mechanics; it's only the interactions between particles which are quantised. (In fact the governing equations of quantum mechanics are almost all continuous. It's only when you try to solve them that quantisation appears, in a similar way that a guitar string is continuous but only certain shaped standing waves can exist on it.)
And all the way up to the octave which is an interval of seven notes.
What I find interesting is that we double-count the end notes in a diatonic scale, but we don't double-count them in a chromatic scale. No one says there are 13 notes per octave in a chromatic scale, it's 12.
... well, up to the thirteenth, at least... I suppose one could continue to name diatonic intervals above that for really convoluted chords.
Without the off-by-one error, one could do interval arithmetic in a way that intuitively makes sense... It's really weird such a bad convention caught on to begin with.
Edit: going down the rabbit hole... It looks like the off-by-one error for diatonic intervals goes back to Greek antiquity. The Romans named the fourth and fifth diatessaron and diapente, which have Greek roots, but their chromatic intervals start at zero (tonus, ditonus, tritonus, which gave the (semi-)tone and the tritone). The "semi" prefix means "subtract half a tone", not "divide by two".
https://en.wikipedia.org/wiki/Interval_(music)#Latin_nomencl...
Neither the Greeks nor the Romans had a concept of zero, so it starts to make sense as to why we started with a crooked convention. Inertia and habit made it stick around.
"That this subject [imaginary numbers] has hitherto been surrounded by mysterious obscurity, is to be attributed largely to an ill adapted notation. If, for example, +1, -1, and the square root of -1 had been called direct, inverse and lateral units, instead of positive, negative and imaginary (or even impossible), such an obscurity would have been out of the question." — Gauss
Yes, I hope the consensus eventually moves away completely from "imaginary". "Complex numbers" is already fine as a name, so that doesn't need to change, only the terminology for that i axis.
So what can we contribute, do you know any authors that have already found some new and better terminology and promoted it?
Pretty sure by saying this about a set of numbers that exist and that are distinct from the Real set, you're actually making a point opposite from the one you think you are making.
You should know the basics of the subject before stating that everyone is wrong and that the current standard is "ridiculous". You apparently confused "complex number" and "imaginary part". The latter made sense in the historical context, which was finding real solutions of quadratic equations (with real coefficients, of course).
And please keep in mind that complex numbers are not rotations, and they do not map well to them. For instance, which rotations would be represented by the complex numbers "3", "4", "1-2i" and "8i"? You can map {plane rotations} to { r, |r| = 1 } using z ↦ r*z, but that's a circle, not a 2D space.
To summarize, complex numbers can be thought of as plane vectors, with an obvious geometric way to add them, and a non-obvious way to multiply them. The set of "rotations" is too vague and loosely related to complex numbers; e.g. 3D rotations are often represented by quaternions, which are more "complex" than complex numbers.
> keep in mind that complex numbers are not rotations
Complex numbers are (isomorphic to) "amplitwists": similarity transformations between plane vectors. If you want a pure rotation, you need a unit-magnitude complex number.
The complex number 3 represents scaling by 3. The complex number 4 represents scaling by 4. The complex number 1 – 2i represents a scaling by √5 combined with a rotation clockwise by arctan(2). The complex number 8i represents a scaling by 8 combined with a quarter-turn anticlockwise rotation.
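This is easy to check numerically. A small sketch in Python, with an arbitrarily chosen z and v: the product's modulus is the product of the moduli, and its argument is the sum of the arguments.

```python
import cmath

z = 1 - 2j      # the "amplitwist": scale by sqrt(5), rotate by -arctan(2)
v = 3 + 1j      # a plane vector represented as a complex number

w = z * v
print(abs(w), abs(z) * abs(v))                       # moduli multiply
print(cmath.phase(w), cmath.phase(z) + cmath.phase(v))  # arguments add
```

(For arguments near ±π the sum can wrap around by 2π; the arbitrarily chosen values here stay well inside the principal branch.)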
> complex numbers can be thought of plane vectors
No, (physics-style displacement) vectors and complex numbers are distinct structures and should not be conflated.
Complex numbers are best thought of as ratios of planar vectors. A complex number z = v/u is a quantity which turns the vector u into the vector v, or written out, zu = (v / u)u = v(u \ u) = v. (Concatenation here represents the geometric product, a.k.a. Clifford product.)
Mixing up vectors with ratios of vectors is a recipe for confusion.
> non-obvious way to multiply them
Multiplication of complex numbers is perfectly “obvious” once you understand that complex numbers scale and rotate planar vectors and i is a unit-magnitude bivector.
> 3D rotations are often represented by quaternions, which are more "complex" than complex numbers.
Analogous to complex numbers, quaternions are the even sub-algebra of the geometric algebra of 3-dimensional Euclidean space. Used to represent rotations, they are objects R which you sandwich-multiply with a Euclidean vector u, as v = RuR*, to get another Euclidean vector v, where * here means the geometric algebra "reverse" operation. Those of unit magnitude are elements of the spin group Spin(3).
Thanks for this. It always bugged me that quaternions didn't fit well with the linear algebra of computer graphics. You absolutely need quaternions, but every time you do a rotation you have to convert to quaternions, do the rotation, then convert back.
With geometric algebra (Clifford algebra), it's all just one system with the quaternions being a special case within that system. All of it makes more sense and computationally it's only slightly more work than linear algebra.
Why don't we use Clifford algebra instead of linear algebra today? Because back in the 1800s there was something of a war between the people who liked Hamilton's formulation (quaternions, tensors, Clifford algebra) and those who liked vectors and matrices. Except in a few fields like relativity, the vector/matrix crowd won the war. And they probably shouldn't have.
Matrix algebra and geometric algebra are two different languages for many of the same topics/structures. If you like you can embed either one in the other (though embedding geometric algebra into matrix algebra is extremely cumbersome, sort of like rewriting your Python programs in assembly language).
Geometric algebra has a much richer geometrical structure, but in some contexts the more basic structure of linear transformations is all you need, and matrices work just fine.
In geometric algebra, you can extend a linear transformation of vectors by "outermorphism" to act on arbitrary multivectors.
I suspect that if people in the 1800s had been developing 3D computer graphics rather than electromagnetic theory, Hamilton's camp would have won the battle. But most people would probably say that vectors and matrices are a better fit for EM work.
Clifford’s geometric algebra/calculus is a much better fit than the Gibbs/Heaviside concept of "vector" analysis for doing electrodynamics problems. Especially if you try to handle quantum mechanics and special relativity.
Cross products, pseudovectors, etc. are tremendously confusing for students. Then throwing it all out the following school term in favor of cumbersome matrix representations makes things even worse.
I suspect if Clifford had lived past ~35 years old we would have avoided a lot of the confusion of the 20th century.
Of course it's a joke, but the problem with that argument isn't that imaginary numbers don't exist, but that the rule sqrt(a * b) = sqrt(a) * sqrt(b) doesn't hold for them. Or, rather, it does hold, but (as we should for real numbers, too!) we must regard sqrt(a) as being not a single number but a set of square roots of a (something we avoid doing in the real case by preferring non-negative numbers—a convenience not available for the complex numbers).
With this understanding, the rule sqrt(a * b) = sqrt(a) * sqrt(b) holds, and your instantiation of it (with a = b = −1) shows that 1 and −1 are both equally valid square roots of 1, which is certainly true.
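Python's cmath, which returns principal square roots, makes the failure of the single-valued rule concrete:

```python
import cmath

# With principal roots, sqrt(a*b) = sqrt(a)*sqrt(b) fails for a = b = -1:
print(cmath.sqrt(-1) * cmath.sqrt(-1))   # (-1+0j)
print(cmath.sqrt((-1) * (-1)))           # (1+0j)
```

Both -1 and 1 are square roots of 1; the "paradox" only arises if you insist sqrt picks out a single value and still commutes with multiplication.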
Excellent. Indeed, as you say already for the real numbers one could have chosen sqrt(9) = -3: after all, (-3)^2 = 9 just as much as 3^2 = 9. Taking into account this inherent ambiguity in the definition of the square root function becomes crucial when extending it to the negative numbers, as my example shows.
So, long story short: although not everyone agrees, personally I prefer to define the imaginary numbers by means of an entity i that obeys i^2 = -1, and avoid talking about "square roots of negative numbers" as the (otherwise fine) article does.
> we know that the subset of R that can be used directly has measure zero
The cardinality of the natural number set is the same as the cardinality of the rational number set. So, in some sense, you are saying that we know that the natural counting numbers are "real," which is a self-evident truth. In another sense, you are saying that fractions of an inch/cm/etc are "real" which is another self-evident truth.
The question is whether uncountable infinity somehow exists in nature (in the case of real numbers, it is a question about the infinitesimally small scale). To say that it's a "polite fiction to make some proofs work" is too strong a statement; we do not know the answer to that question. At the same time, our measurement instruments will always be discrete and bounded, so the question is seemingly beyond science itself.
I'm not making a statement about the real world other than "it's impossible to communicate infinite information." I suppose that's a bit of a leap of faith, but I'm comfortable with it. Everything else I mean comes from math, with no connection to the real world.
We don't know that - some people posit that but it's far from being provable.
All physically measured numbers have uncertainty, meaning the value obtained is not an actual number, but is a range, perhaps with some associated probability spread.
This is not measure zero.
We are also very capable of "directly" using various things that may be continua, such as energy, or time, or velocities, or many other physical quantities.
No, I mean something simpler. The subset of R that can be identified is countable, because language is countable. No matter what system you devise for describing numbers, it will be countable. And that means that it cannot describe approximately 100% of the numbers in R. They can't be uniquely described; they can't be used in computations. They're phantoms, at best. You know they're out there, but they will never be usable the way numbers you can actually describe are.
The only thing they give you is the ability to declare R to be Cauchy complete in some proofs. They're a polite fiction.
Yeah, I'm well aware of these kinds of arguments, à la Gregory Chaitin...
>The subset of R that can be identified is countable, because language is countable
Define "identified" in mathematically precise terms without using circular reasoning - and there is the flaw in this line of claims. You will find such definitions miss common uses of real numbers in the same way no finite set of axioms catch all true statements about integers. You will maybe get a nice consistent math subset of the reals with measure zero, but it will not cover all the cases you want it to. Thanks Godel
For example, using your "proof" of "because language is countable" would imply there can be no infinity, yet we use it all the time. Time, for example, may be a physical continuum, so a finite interval of time contains infinitely many time steps, and if your math cannot even represent physical reality then it's a pretty weak system.
>You know they're out there, but they will never be usable the way numbers you can actually describe are
Plenty of formal computational systems are capable of using the same set of numbers I can describe.
You're conflating being able to list every number in use one at a time, and operating on sets of numbers, like computation has done almost since day 1 using interval arithmetic.
>they can't be used in computations
Interval arithmetic, formal proof system - both capable of using all the numbers as sets that I can use - well beyond (Lebesgue) measure zero.
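A toy version of interval arithmetic fits in a few lines (the function names are illustrative, and real implementations also have to handle directed rounding, which this sketch ignores): you compute with whole ranges of reals at once, rather than with individually named numbers.

```python
def add(a, b):
    # [a0, a1] + [b0, b1] = [a0 + b0, a1 + b1]
    return (a[0] + b[0], a[1] + b[1])

def mul(a, b):
    # The product interval is bounded by the extreme endpoint products.
    products = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(products), max(products))

x = (1.0, 2.0)    # "some real between 1 and 2", describable or not
y = (-3.0, 0.5)
print(add(x, y))  # (-2.0, 2.5)
print(mul(x, y))  # (-6.0, 1.0)
```

Every undescribable real inside [1, 2] is covered by the computation without ever being named individually, which is the point of contention above.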
And to be pedantic, even countable numbers have infinite measure for certain measures :)
Interval arithmetic doesn't use those numbers. It works just fine in the computable reals, for instance. (We can of course uniquely refer to elements of R beyond the computable reals, but it suffices for the purpose of demonstration.) If there's no difference between the number being part of the set vs being absent, is it contributing anything?
Well, they contribute "this sequence has a limit" for all Cauchy sequences. That's it. Some proofs that depend on the existence of those limits work. But you never do anything else with those numbers. Their "existence" is about faith more than it is utility. That's a very different kind of "existence" than a number like 2, or ⅓, or π, or Chaitin's Ω, or anything else that can be communicated uniquely in finite space.
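To make the "limits of Cauchy sequences" point concrete, here's a rational Cauchy sequence (Newton's iteration for √2, a standard example, sketched with exact rational arithmetic): every term is finitely describable, but no term hits the limit, which is exactly what completeness is asked to supply.

```python
from fractions import Fraction

def newton_sqrt2(n):
    """First n terms of x -> (x + 2/x)/2 starting at 1.
    Every term is an exact rational; the sequence is Cauchy in Q."""
    x = Fraction(1)
    terms = [x]
    for _ in range(n - 1):
        x = (x + 2 / x) / 2
        terms.append(x)
    return terms

terms = newton_sqrt2(6)
errors = [abs(t * t - 2) for t in terms]

# The squared terms close in on 2, but no term's square ever equals 2:
print(all(e2 < e1 for e1, e2 in zip(errors, errors[1:])))  # True
print(any(t * t == 2 for t in terms))                      # False
```

The limit √2 is "used" here only in the sense that we assert the shrinking errors converge to something; every concrete computation stays inside the rationals.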
If this is your definition, it falls to the barber paradox.
If we're this sloppy, I can define, in finite space, "the simplest real number not definable in finite space"... but I just did it in finite space. So this notion needs to be made more precise... then the next one falls... and so on.
Go ahead, you seem to know enough math to know what constitutes a solid definition and a proof. Try to define the set you're talking about, making sure it's not circular nor excludes any numbers you think it should contain. It's not trivial, nor even perhaps possible.
This is what I mean by you're going to have a very hard time making your definition precise enough to be a true statement.
People have tried your path many, many times. From my poking at the literature, it never works. Chaitin has spent decades on these things, and the result always falls just outside precise enough to make a proof.
"The successor of the largest natural number that can be written in less than twenty English words" does not uniquely identify a number. The only thing that fact proves is that not every string of English words uniquely identifies a number. But it's a pretty weird proof of that, "watermelon" would suffice just as well.
The fact that not every finite string of symbols in every notation identifies a number uniquely doesn't break anything.
This is in fact just like the barber paradox. "I cut the hair of everyone in town who does not cut their own hair" is also not a paradox unless you add in assumptions not present in the original formulation. (Hint: where does the barber say they don't also cut the hair of people who do cut their own hair?) All you're doing is constructing straw men to be smarter than.
But you haven't actually addressed my main point: the only thing those numbers provide is making the limits of Cauchy sequences exist. And that isn't actually useful outside of writing proofs that assume they do. For any other purpose, those numbers might as well not exist.
>All you're doing is constructing straw men to be smarter than.
No - for example, the barber paradox (in the form of Russell's paradox) is what led to a revolution in set theory around 1910 and Whitehead and Russell's Principia Mathematica. It is a real problem, and led to the notions of classes and sets being distinct (and universes? I forget...).
The sloppy "definable number" game runs afoul of exactly these issues. If you look on google scholar for "definable reals" or "non-constructive reals" you can get a taste of the research still being done in this area. It's not as simple as I think you think it is.
The short of it is that there are many, many notions of definable, and they are not equivalent. Thus I asked you to provide one you think is suitable for your claims.
> the only thing those numbers provide is making the limits of Cauchy sequences exist.
This is far from true - depending on the (set-theoretic) model of the reals, you end up with all sorts of interesting pure math questions. If I recall, certain sequences of modules over the reals end up with different homology depending on crazy subtle questions of the model of reals.
There are also a lot of equivalent ways of defining the reals, most requiring proof to show they are equivalent.
There are a lot of interesting questions on the constructions of the reals (Conway called his surreal numbers his best discovery, and they are spectacular - they spit out the reals in a really interesting way from only two rules - no Cauchy sequences or Dedekind cuts or other common constructions in sight. They were discovered analyzing perfect-information two-player games...). Here's a paper with many cool constructions of the reals [1]. Claiming the only thing they provide is Cauchy sequence limits is missing a lot of other useful properties.
So you may not find the reals useful, but they certainly are incredibly deep and fascinating, and they are useful far beyond the concept of Cauchy sequences.
The Bekenstein bound would seem to imply that the numbers / probability spreads we deal with in the real world are not infinite precision, and therefore form a countable subset of R.
A nice example is Chaitin's constant [1], which I can use in proofs and books and define, and so on...
And it's explicitly and most definitely NOT computable :)
There are lots of numbers in lots of areas of mathematics, even symbolic mathematics, that are not computable in the sense you want them to be. Chaitin's constant is the tip of a very big iceberg.
You're using circular logic: claiming the only numbers I can use are the computable ones, then claiming all numbers I can use are computable. That's not true. It's a circular argument.
>all physically measurable quantities are rational
That's not even clear, unless you also assume you can measure a unit length exactly, which is not physically possible. Assuming you can measure something as a perfect rational value implies infinite precision, which is not possible.
All physically measurable quantities have uncertainty is what I think you mean, but that doesn't say anything about possible cardinalities.
I think you're saying that all physically measurable quantities are rational numbers with terminating decimal expansions. The set of rational numbers with terminating decimal expansions is a subset of the set of rational numbers, so my statement that all physically measurable quantities are rational is true. I also agree that there is some uncertainty in all measurements, I don't think this is at odds with what I said.
> I think you're saying that all physically measurable quantities are rational numbers with terminating decimal expansions.
No, this most certainly isn't true.
For example, take a meter defined (as SI does) as a fraction of the speed of light.
Now, for every length in the universe to be a rational fraction of this length is most certainly not true, because lengths, under relativity, form a continuum. Any speed is possible, so any length (via contraction) is possible.
There'd be no physical reason all physical processes would be constrained to a subset of rational grid points - things can move freely.
Heck, if all physical quantities are rational, are you claiming velocities only occur as rational numbers? That time is only rational numbers?
It's far more likely that they'd be irrational, and our man made units are the weird things.
Suppose two diagonal corners of a square with rational sides are somehow physically exact.... What is the physical diagonal length?
Oh yeah, irrational.
So no, physics isn't formed from rational numbers.
I'm making a statement about all possible experimentally measurable quantities, which as you pointed out have finite precision and so can be represented as having terminating decimal expansions. As far as I'm aware, this is not a controversial statement.
I'm not making any kind of statement about whether space-time is continuous or not. You are modelling spacetime as continuous and seem to be saying this is the only possible option. That's not true; there are a number of different models of how space-time behaves at the smallest length scales. Try reading about loop quantum gravity, for example; in that model spacetime is taken to be a discrete lattice at the smallest scales. In your model with continuous space-time, it's true that a square of side length 1 would have an irrational diagonal of length root 2. In reality we don't currently have experimental evidence to say either way, and it could be that your square of side length 1 has a rational diagonal which approximates root 2.
You can always express a calculation that uses imaginary numbers in a way that avoids them. It might make things vastly more complex to do so, but it's possible.
People used to do all sorts of math without negative numbers or zero.
One of the ways to avoid imaginary numbers is to use matrices instead. You wouldn't exactly solve sqrt(-1); you'd solve for the square root of the negative of the 2x2 identity matrix (-I).
You can't solve sqrt(-1) without a second dimension. So even with imaginary numbers you kind of have an implicit -1+0i when solving sqrt(-1).
Complex numbers are just a way of working with two numbers at once, just like 2-dimensional vectors and matrices. There's nothing special to it, other than some syntactic shorthands.
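The matrix version of this is easy to check concretely: taking J = [[0, -1], [1, 0]], the map a + bi → aI + bJ respects addition and multiplication, and J² = -I. A quick sketch with plain Python lists (no libraries assumed):

```python
def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
J = [[0, -1], [1, 0]]  # plays the role of i

def to_matrix(a, b):
    """Represent the complex number a + bi as the 2x2 matrix aI + bJ."""
    return [[a, -b], [b, a]]

# J squared is -I, i.e. a "square root of -1" built from real 2x2 matrices:
print(matmul(J, J))  # [[-1, 0], [0, -1]]

# Matrix multiplication agrees with complex multiplication:
# (1 + 2i)(3 + 4i) = -5 + 10i
print(matmul(to_matrix(1, 2), to_matrix(3, 4)))  # [[-5, -10], [10, -5]]
```

This is exactly the "two numbers at once" view: the pair (a, b) carried around as a matrix, with the multiplication rule of complex numbers falling out of ordinary matrix multiplication.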