Well, we can define mathematical objects for every gap (impossibility), but most of them will turn out to be inconsistent with our existing mathematical objects, and thus not very useful or interesting. I'd consider that mathematics is the study of consistency and what can be discovered using the simplest possible starting points (axioms).
The classic case would be if mathematicians wanted to assign a value to division by zero. It turns out that if you do allow that to take a value, then it becomes possible to "prove" that any number is equal to any other number. Quite simply, it makes maths less interesting to allow that, but instead having division by zero be undefined appears far more useful/interesting.
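For concreteness, here's the standard fallacy sketched out (my own summary of the well-known argument, assuming division by a zero quantity is allowed and the usual rules of algebra are kept):

    let a = b
    a^2 = a*b
    a^2 - b^2 = a*b - b^2
    (a + b)(a - b) = b(a - b)
    a + b = b              <- dividing both sides by (a - b), which is 0
    2b = b, hence 2 = 1

The only illegal step is the division by a - b = 0; permit it and any number can be "proved" equal to any other.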
> The classic case would be if mathematicians wanted to assign a value to division by zero. It turns out that if you do allow that to take a value, then it becomes possible to "prove" that any number is equal to any other number. Quite simply, it makes maths less interesting to allow that, but instead having division by zero be undefined appears far more useful/interesting.
There are multiple extensions to the real numbers that allow division by zero. One is the real projective line, which has only one infinity, so that 1 / 0 = -1 / 0 = infinity.
Another is the extended real number line, which has positive infinity and negative infinity, so 1 / 0 = +infinity and -1 / 0 = -infinity, and they are different from each other.
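Incidentally, IEEE-754 floating point more or less bakes in the extended-real-line convention, so you can poke at it directly. A quick illustration of mine in Python/numpy:

    import numpy as np

    with np.errstate(divide="ignore", invalid="ignore"):
        print(np.float64(1.0) / np.float64(0.0))    # inf
        print(np.float64(-1.0) / np.float64(0.0))   # -inf
        print(np.float64(0.0) / np.float64(0.0))    # nan -- 0/0 is still undefined
    print(np.inf == -np.inf)                        # False: the two infinities differ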
> There are multiple extensions to the real numbers that allow division by zero.
Well, the gotcha is that they redefine the operations so that none of addition, subtraction, multiplication or division remain total. The operations just break at values other than zero (for instance, ∞ - ∞ or 0 · ∞ is left undefined).
You might not have much use for the real projective line when tallying up prices in the grocery store, but projective geometry is definitely very useful. https://en.wikipedia.org/wiki/Projective_geometry
A similar trick (point at infinity or ideal point) is used in projective geometry to distinguish between directions (vectors) and places (points) by using coordinates only: https://en.wikipedia.org/wiki/Projective_geometry
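Concretely, the coordinates-only trick looks something like this (a small numpy sketch of my own, not from the article): points carry a final coordinate of 1, directions a final coordinate of 0, and translations move the former but not the latter.

    import numpy as np

    def translation(dx, dy):
        # translation as a 3x3 matrix acting on homogeneous coordinates
        return np.array([[1, 0, dx],
                         [0, 1, dy],
                         [0, 0, 1]], dtype=float)

    point     = np.array([2.0, 3.0, 1.0])   # the place (2, 3): last coordinate 1
    direction = np.array([2.0, 3.0, 0.0])   # the direction (2, 3): last coordinate 0

    T = translation(5, -1)
    print(T @ point)      # [7. 2. 1.]  -- the point moves
    print(T @ direction)  # [2. 3. 0.]  -- the direction is unchanged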
But if you actually want to do calculations with infinities and infinitesimals the surreal numbers might be better suited for that: https://en.wikipedia.org/wiki/Surreal_number
They could have been more precise, but they probably shouldn't have to be in the space of a comment. The Riemann sphere defines a value for the expression x/0 (for nonzero x), and it's often useful, but it fails to uphold the most important property division should have -- that it undoes multiplication. Division by 0 (with some assumptions about not being in a trivially small space and about how those operations behave with respect to addition) does lead to contradictions in that latter sense.
"but it fails to uphold the most important property division should have -- that it undoes multiplication"
I'm not sure I follow that being its most important property. I'm not sure division could even be defined as an operation that undoes multiplication.
Number theory, fields, and rings, I believe, make it clear that while subtraction and addition can be viewed as the same function, multiplication and division cannot.
Apologies if it's not clear why that is; it's been a while since I read up on how those are defined.
However, I recommend One, Two, Three: Absolutely Elementary Mathematics by David Berlinski, which gives, in my opinion, a pretty good layman's understanding of these nuances and of number theory.
Take a look into division rings as a concept. The usual definition for division in rings and fields is via multiplicative inverses for some subset of the nonzero elements. Not all algebraic spaces have division, but that doesn't change what it is, especially from the "number theory, fields, and rings" point of view.
Unless you're talking about some higher-order concept?
Edit: For a bit of completeness, what's happening with the Riemann Sphere is that the algebraic definition is being extended in a way that has some useful analytic, topological, and quality-of-life properties, but which is no longer wholly compatible with the underlying algebra. The algebraic issues are isolated to the extra point at infinity, so they're not terrible to work around, but the operation in question is a proper extension of the underlying algebraic definitions -- much as the gamma function in no way can be defined as multiplication of integers but is a useful extension of the factorials nonetheless.
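A quick numerical illustration of that last analogy (mine, in Python):

    import math

    for n in range(6):
        assert math.isclose(math.gamma(n + 1), math.factorial(n))  # agrees with n! on integers
    print(math.gamma(0.5), math.sqrt(math.pi))  # both ≈ 1.772..., a value n! itself never defines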
Division is multiplication by the multiplicative inverse. Subtraction is addition by the additive inverse. Both division and subtraction undo their corresponding operation. Multiplying by a (provided it’s not zero) is undone by dividing by a. Adding a is undone by subtracting a.
In a ring the elements form a group under addition and thus every element has an additive inverse. The additive identity element, let’s call it e, has the property that ea = e and ae = e for every a. For this reason we use 0 instead of e. In a nontrivial ring 0 can’t have a multiplicative inverse: if it did, say 0x = 1, then 1 = 0x = 0, and then every element a = a·1 = a·0 = 0, so every element would be equal to the multiplicative identity (which is unique).
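To make “division is multiplication by the inverse” concrete, here’s a small sketch of my own in the field Z/7Z (needs Python 3.8+ for the three-argument pow):

    p = 7
    for a in range(1, p):
        inv = pow(a, -1, p)              # multiplicative inverse mod 7
        assert (a * inv) % p == 1
    print(pow(3, -1, p))                 # 5, because 3 * 5 = 15 ≡ 1 (mod 7)
    print((6 * pow(3, -1, p)) % p)       # "6 divided by 3" = 6 * 3^(-1) = 2, and 3 * 2 ≡ 6
    try:
        pow(0, -1, p)                    # 0 has no multiplicative inverse ...
    except ValueError:
        print("0 has no inverse mod 7")  # ... so Python refuses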
Okay, so if you can have a ring without multiplicative inverses, and it's adding that operation that turns the ring into a field, then wouldn't it be fair to say that division is not really the opposite of multiplication the same way that subtraction absolutely is for addition?
The definition of division is multiplication by the multiplicative inverse. It may be the case that some elements don’t have such an inverse, but the definition is analogous to that of subtraction. The analogy is not perfect because every element has an additive inverse while not every element has a multiplicative inverse.
What you're saying is that the analogy between subtraction and division is good as far as it goes. So why should "as far as it goes" end at zero not having an inverse, rather than division by zero producing something other than the multiplicative inverse of zero? The two choices end up having different structure, and so they end up being applicable to different things, but there is nothing wrong with either choice.
The word division means something in mathematics. There is general agreement in what that word ought to mean. You can define a binary operation in such a way that it doesn’t look like what we normally think of as division and label your operation division. In the same way you can define the symbol duck to refer to what most people call a chair. You won’t get anyone else agreeing with your new definition though.
I think I understand better where you are coming from. In computer science I don’t know what they typically mean when they say “division”, so I’ll be more precise. In abstract algebra division means multiplying by the inverse. All of the notions of division mentioned in the Wikipedia page come from this idea. Computers can’t work within the realm of the entire real number system; they have notions of type, and they like to extend common operators like “/” to things it normally doesn’t apply to. A language will sometimes return a value of type int (or some other type) when the integer 5 is divided by 3, depending on how the language designer wanted things to work. This isn’t division in a mathematical sense though.
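For instance (my own quick illustration), Python’s operators already behave this way:

    print(5 / 3)              # 1.6666666666666667 -- float "division", an approximation
    print(5 // 3)             # 1 -- floor division; 3 has no multiplicative inverse in the integers
    print((5 // 3) * 3 == 5)  # False: this "/" does not undo multiplication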
I am not at all concerned with what is or isn't possible in a computer for the purposes of this discussion. My only point with the link is that dealing with inverses in particular situations (i.e. where multiplication has or lacks certain properties) frequently requires particular considerations, and division defined as multiplication by the inverse will have different properties as a result.
To be clear, do you disagree that it is commonplace in complex analysis to extend the complex plane by {infinity} and define 1/0 = infinity, 1/infinity = 0? I find it hard to imagine that you can't have encountered that given how much you seem to know about abstract algebra. Or do you just think that it is a bad idea, despite being commonplace? In either case, to say that mathematicians would not call that operation division as a result is contradictory to my experience, even if those two special cases don't fit the category of multiplication by the inverse.
Also to be clear, I know of no counterexamples in abstract algebra and it would make sense to me that in that context division would mean something very particular, in order to be able to talk about it with any generality. But as it happens, abstract algebra isn't all of math.
This is getting very far from where the original question came from. When talking to a layman one would say division is always multiplication by the inverse. There are nuances involved that a lay person simply can’t appreciate or understand. Had I known you knew about the extended complex numbers I would have answered differently. The extended complex numbers are not a ring, not a group, not an algebra, and so…is it really division then?
In math, oftentimes the answer we give depends on the knowledge of the person asking the question. For instance, we tell calculus 1 students that 1/x is not continuous. Of course, as a function from R-{0} to R with the standard induced topology it is a continuous function, but explaining this to calculus 1 students would be very difficult.
Giving the answer that is relevant to the situation is very sensible. Saying that "this is the only thing division can mean in mathematics" is evidently false, though (I take it that you agree with me on this now?), and false in a way that is very relevant to the original question, which did not specify a context of abstract algebra and seemed to me to be very interested in expanding the mathematical horizons of the questioner, not restricting them.
The extended complex plane is a great example in my opinion, because it shows that yes there are reasons to extend the numbers in various ways, that can give useful structure, but you may have to give up something else in order for that make sense. In my opinion that is a much more complete answer to the deeper question. (Similarly for the reals mod 1, which do have the property that x + 1 = x).
The one point compactification of the complex plane is not a number system in the normal sense of what that means. Calling the use of the notational convenience 1/infinity a true division operation defies the common usage of the term in mathematics. You may call it whatever you want to though.
The answer given to the person who asked the original question was the correct one. You can’t do it because doing so would break consistency and that is of paramount importance when doing new things in mathematics. There are agreed-upon usages of terms and symbols in mathematics. Why call something division in the true sense of the word when it breaks the conventional usage of what that term means? But, also, why invent a new symbol to denote what is analogous to division? So we abuse notation. This is done all the time. So on the one hand we’ll say to calculus 1 students 1/infinity is 0 but also say infinity is not a number. Things are done for convenience but when asked, “Is this really division?” the answer is no.
Of course you can redefine all terms you desire and say things like: A circle can be squared, I just mean something different when I say circle than when you say it. But why do that? All of this is my opinion. You disagree and that is ok.
I didn't realise this, but apparently it is also possible to do good algebra on this kind of structure by adding an element 0/0: https://en.wikipedia.org/wiki/Wheel_theory (which someone pointed to in one of the discussions -- I forget which one).
Mathematics is a vast subject and I can’t keep track of all developments. In 2010 there was a paper on meadows. I’ve never heard the term before. In that paper it is written:
As usual in field theory, the convention to consider p / q as an abbreviation for p · q⁻¹ was used in subsequent work on meadows (see e.g. [2,5]). This convention is no longer satisfactory if partial variants of meadows are considered too, as is demonstrated in [3].
So, as I’ve stated many times, I talked about convention and indicated you can use whatever terms you want. In the paper quoted above they acknowledge what the convention is. That is that division is multiplication by the inverse. They are arguing that it is worthwhile in this new algebraic object to change the usual notion a bit. If people agree to a new usage of the word division then definitions will change accordingly. None of this is pertinent to the spirit of the original question given the context under which it was asked. All of this is highly technical.
Definitions and notions change as new mathematics is created (discovered?). This happens all the time. All you have to do is convince other mathematicians to go along with it.
EDIT: Regarding what you wrote in your other comment: The analogy is not apt in my opinion. It’s hard to say zero can’t exist because the nonzero… The moment you say “nonzero” you’ve already implied that it does exist. I think a better way to look at the situation is:
I have an object that is a group under a binary operation f. There is another natural binary operation on that object that operates with f in a consistent way. That operation doesn’t form a group but if I add a symbol to my set and give these rules then both operations interact in a consistent, natural way. I get a group under the new symbol with the second operation while preserving the group under the first operation minus the new symbol.
With extended complex numbers you don’t quite preserve the structures or properties that one normally wants so I’d say it isn’t true division. It is division like.
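To see concretely what’s gained and what’s lost, here’s a toy sketch of mine (plain Python, purely illustrative) of division with a single point at infinity:

    INF = "inf"   # stand-in for the single point at infinity on the Riemann sphere

    def ext_div(a, b):
        """Division on C ∪ {∞}: 1/0 is defined, but 0/0 and ∞/∞ stay undefined."""
        if a == INF and b == INF:
            raise ArithmeticError("∞/∞ is undefined")
        if a == INF:
            return INF              # ∞ / b = ∞ for finite b
        if b == INF:
            return 0                # a / ∞ = 0 for finite a
        if b == 0:
            if a == 0:
                raise ArithmeticError("0/0 is undefined")
            return INF              # a / 0 = ∞ for a ≠ 0
        return a / b                # ordinary division otherwise

    print(ext_div(1, 0))     # inf
    print(ext_div(1, INF))   # 0

The gain is a total-looking 1/0; the loss is that a few combinations must stay undefined, which is exactly why the result is no longer a ring.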
I'm happy to agree to disagree about where the line between "division" and "division like" should be placed. As you say, it is a question of convention and not really a question of math. But I don't agree that a student with the curiosity to ask about extending the numbers in various ways would not find something "division like" with the properties they're interested in to be relevant to the question (even if it is missing some other properties that most mathematicians consider to be essential to the notion of division).
Honestly saying you can't have a number 1/0 because it breaks the ring axioms seems exactly analogous to saying 0 can't exist because it breaks the group axioms for multiplication on the non-zero reals. Is ring multiplication "not really multiplication" because it doesn't satisfy group axioms? That doesn't seem consistent with normal usage to me, but you could imagine a pedantic student coming out of their first group theory course and trying to make that argument.
That's a good example of where defining division by zero leads to interesting maths, but it ends up sacrificing some of the usual rules of arithmetic, so it comes down to a choice of which is more useful in the relevant circumstance.
This just goes to show that you really have to be careful when slinging out math facts. I've done some undergrad maths and the only line on that page that I understand is
"The extended complex numbers are useful in complex analysis because they allow for division by zero in some circumstances, in a way that makes expressions such as 1 / 0 = ∞ 1/0=\infty well-behaved."
It clearly does not satisfy a primitive understanding of 1/0.
1/0 is the limit of x/y as x approaches 1 and y approaches 0. It works fine if you choose to put a point there called ∞, with an appropriate notion of nearness.
An example where this does work quite nicely has to do with Bring radicals or "ultraradicals" [1]. One of the most important results from Galois theory is that the general quintic equation has no solution using standard radicals. But the introduction of "Bring radicals" allows quintic equations to be formally solved. As far as I'm aware though, Bring radicals only work for quintic equations in general and don't work for 6th order or higher polynomials, so your bang for the buck is somewhat limited.
This is assuming that Θ interacts with arithmetic operations the usual way (that is, ℝ ∪ {Θ} is a field), which the person you're replying to did not say.
The most common definition of division is as the inverse of multiplication:
if b ≠ 0 then the equation a/b = c is equivalent to a = b × c. Assuming that a/0 is a number c, then it must be that a = 0 × c = 0. However, the single number c would then have to be determined by the equation 0 = 0 × c, but every number satisfies this equation, so we cannot assign a numerical value to 0/0
Thanks, this definition does seem problematic. In any case, it is not the only possible definition, and in a/0 = c, c does not have to be defined as a real number. We can define it similarly to complex numbers, with new rules that do not collide with the existing reals.
There's a couple of mentions in other comments about the Riemann Sphere (https://en.wikipedia.org/wiki/Riemann_sphere) which does define division by zero, but sacrifices the numbers forming a field under addition and multiplication.
is a kind of sentence that is almost never true, and even if it were, it would be impossible to prove that someone hadn't jotted a valid definition on a napkin somewhere. In this case it is certainly not true (as others have mentioned: https://en.wikipedia.org/wiki/Riemann_sphere ). Now, specifying a definition for division by zero does require you to be careful about how the other operations extend to this new number, but there are perfectly consistent (and useful!) ways to do so.
Thanks - I was not aware that theorem provers often allow "division" by zero.
Looking at https://xenaproject.wordpress.com/2020/07/05/division-by-zer... I see that they don't use mathematical division, but define a slightly different operator with an additional condition for handling zero. This appears to be far more convenient for theorem provers.
The trade-off would be that "division" is no longer the inverse of multiplication.
Ah, thanks for the link. I suggested that the reason Isabelle/HOL does this is that it requires total functions and doesn't have a convenient way to do refinement types. But that's not an adequate explanation, because Lean does allow such refinements, and it still turns out to be inconvenient for division.
I will note that setting a - b = 0 for a <= b is pretty standard, and is usually called "truncated subtraction" (or "monus").
I believe you but that’s kind of mind blowing. How do they avoid the seemingly-obvious corollary that 0*0 = X, for all values of X? That is, just multiplying both sides of “x/0 = 0” by zero.
Functions in these logics are total, so if you want division to be a function (and you probably do), it has to assign something to division by 0.
It would be acceptable to assign an unspecified object from the domain, for which you have no non-trivial theorems, and so all your real theorems must have a precondition about the denominator being non-zero. But if you specify a candidate like 0, you can get some theorems which don't have the precondition. Consider a/b * c/d = (a*c)/(b*d): with x/0 = 0 this holds with no side conditions, since both sides are 0 whenever b or d is 0.
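Sketched in Lean 4 with Mathlib (a minimal example of my own; the tactic and import choices are from memory, so treat the details as approximate):

    import Mathlib

    -- The convention itself: division by zero returns zero.
    example (x : ℝ) : x / 0 = 0 := by simp

    -- A typical payoff: no `c ≠ 0` hypothesis is needed, because when c = 0
    -- both sides reduce to 0 + 0 = 0 under that convention.
    example (a b c : ℝ) : a / c + b / c = (a + b) / c := by ring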
I appreciate the explanation and I’m in no position to disagree, but ugh. Seems like it would work just as well to define x/0 as 6, or e, or -15. I’m sure that’s not the case. But as a long-time tech person who’s always considered underflow/overflow to be a hack to get around limitations of hardware, it offends me a bit to find conditionals in abstract math. Undefined seems cleaner, like null, since it implicitly says “don’t treat this as a normal value that you can operate on”.
I suspect the real math people know what they’re doing more than I do, though.
The theorem a/b * c/d = ac/bd doesn't hold if x/0 = 6, though.
The theorem prover HOL Light is a close cousin of Isabelle/HOL and doesn't adopt this, and just says that x/0 is some unspecified number. You can't prove much interesting about it. You can prove, say, that x/0 * 0 = 0, but you can't prove whether or not x/0 is, say, positive or not.
If you prefer null, there was a logic that allowed for undefined terms and partial functions that became the basis of the IMPS theorem prover. I found it most notable for the fact that it doesn't have reflexivity of equality: 1/0 = 1/0 is false in IMPS.
It's not making a multiplicative inverse of 0 exist though, it just defines a '/' operator that is slightly different from our usual one (i.e. a/b = a*b^(-1))
On the contrary, the extensions can be very useful and interesting. You do typically have to sacrifice something, like commutativity in the case of quaternions, but it will often be worth it.
Yep, an extension is only interesting if it is a true extension, i.e. retains the properties of the thing being extended. So complex numbers are interesting as an extension of the reals since the reals are isomorphic to a subring of the complex numbers. Likewise with the quaternions and the reals / complex numbers.
> we can define mathematical objects for every gap (impossibility), but most of them will turn out to be inconsistent with our existing mathematical objects
Is the short answer it's not parsimonious or useful?
"too complicated" is a weird way to say "provides a concise and consistent way to model superficially diverse phenomena and show how similar they really are" .
Matrices over the reals are OK, especially if you keep to SO(n), but you can get very weird maths as polynomial quotients. They do not look to me like they are very similar. The complex plane and extensions of all kinds are weird. It seems hacky rather than illuminating to me. But then I only really like the complex numbers as a field, since analytic functions are so nice.
Given any polynomial P (e.g. x^2 + 1) over a field F (e.g. the reals) we can form: `R = F[X]/P`
This is an algebraic "set" that supports addition, subtraction, multiplication and has 0 and 1, but not division in general. Its elements are built from elements of F together with a new symbol X that satisfies "P(X) = 0".
Examples:
R[X]/(x^2 + 1) = C
R[X]/x = R
C[X]/(x^2 + 1) = C + C.x
R[X]/1 = 0
# Properties
- If the polynomial P is invertible, i.e. has degree 0 and is not zero, then the resulting ring is zero: F[X]/P = 0. This is what happens in the example x = x - 1 (which corresponds to P = x - 1 - x = -1).
- If the polynomial P has degree 1 (i.e. P = aX + b), then the equation P = 0 is equivalent to X = -b/a, an element already present in F, hence the ring F[X]/P is equal to F.
- If the polynomial P is irreducible (i.e. not a product of two proper polynomials) then the quotient F[X]/P is a field. This happens in the case R[X]/(x^2 + 1), which results in the complex numbers.
- If the polynomial P is a product of two polynomials P1, P2 which have no common divisors, then F[X]/P = F[X]/P1 + F[X]/P2. This happens for C[X]/(x^2 + 1), since P = x^2 + 1 factors as (x + i)(x - i) over C. The equivalent result for the integers is known as the Chinese Remainder Theorem.
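If you want to play with these quotients concretely, here's a tiny illustration of my own using sympy, showing that arithmetic in R[X]/(x^2 + 1) reproduces complex multiplication:

    from sympy import symbols, div, expand

    X = symbols('X')

    def mul_mod(a, b):
        # multiply, then reduce modulo X^2 + 1, i.e. compute in R[X]/(X^2 + 1)
        _, r = div(expand(a * b), X**2 + 1, X)
        return r

    print(mul_mod(X, X))              # -1: the class of X behaves like i
    print(mul_mod(1 + 2*X, 3 + 4*X))  # 10*X - 5, matching (1+2i)(3+4i) = -5 + 10i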
You mean like $x^{-1} = 1/x$? That's called a rational function[1], but not a polynomial, so it's not an element of the polynomial ring[2]. Of course you can also consider the algebra of rational functions, but this is a field[3] (almost by definition: you make every polynomial invertible), which means that modding out anything other than 0 yields the zero ring[4].
Thanks for this comment! Quick note - for clarity and conformity with standard notation, it would be good to have parentheses around the denominators of those ring quotients (in those cases like x^2 - 1 where they contain multiple additive terms).
Both of these are reasonable. If you have an `x` such that `x + n = x`, then that implies `n = 0` (assuming x still has an additive inverse).
In other words, you just invented modular arithmetic, which is a very reasonable thing to invent.
1/0 is maybe a bit trickier and leads you to invent projective spaces.
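For instance (a quick sketch of my own):

    n = 12
    for x in range(n):
        assert (x + n) % n == x   # x + 12 = x for every x: 12 plays the role of 0
    print("in Z/12Z, adding 12 changes nothing, i.e. 12 ≡ 0")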
Negative numbers are sort of imaginary to begin with come to think of it. Actually I think I'm getting flashbacks now to my childhood when my older brother blew my mind with this concept.
You can do that, but there's a tradeoff of losing properties that otherwise hold.
For example, by adding the imaginary numbers, there is no longer an ordering compatible with addition and multiplication. (Compatibility with multiplication means that z > 0 and x > y imply x * z > y * z. Given that: if 0 < i, then 0 = 0 * i < i * i = -1, absurd; and if 0 > i, so that 0 < -i, then 0 = 0 * -i < -i * -i = -1, absurd.)
You can certainly add a number x such that x = x + 1 (e.g. what is commonly called an infinity or NaN), but that implies no longer having additive left inverses assuming you keep associativity of addition and 0 != 1 (since otherwise 0 = -x + x = -x + (x + 1) = (-x + x) + 1 = 0 + 1 = 1).
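IEEE-754 floats contain exactly such an element, which makes the trade-off easy to see (my own illustration):

    inf = float("inf")
    print(inf + 1 == inf)   # True: an x with x + 1 = x
    print(inf - inf)        # nan -- the additive inverse is gone, exactly as argued above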
We didn't invent 'i' to "solve sqrt(-1)". This is an extremely common misconception about maths and how it progressed, one that people are unfortunately led into believing by lazy teachers every day.
Square roots of negative numbers came up when solving cubic equations, even if the final solutions were all real. This meant the square root of a negative number was not something nonsensical the way you might claim for x^2 = -1, but actually...real in some sense.
Specifically I believe it involved a geometric construction for solving the cubics, which in some cases could not find a solution unless you allowed a square with "negative area".
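The classic concrete case (usually attributed to Bombelli; the numbers below are a worked illustration of mine, not something from the parent comment) is x^3 = 15x + 4, which has the perfectly real root x = 4, yet Cardano's formula routes through square roots of -121:

    import cmath

    p, q = -15.0, -4.0                    # x^3 + p*x + q = 0  <=>  x^3 = 15x + 4
    disc = (q / 2) ** 2 + (p / 3) ** 3    # -121.0: negative, so sqrt(disc) is imaginary
    u = (-q / 2 + cmath.sqrt(disc)) ** (1 / 3)   # principal cube root of 2 + 11i
    v = (-q / 2 - cmath.sqrt(disc)) ** (1 / 3)   # principal cube root of 2 - 11i
    print(u, v)        # approximately (2+1j) and (2-1j)
    print(u + v)       # approximately (4+0j): the real root, recovered via complex numbers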
every polynomial of degree n with algebraic coefficients has n solutions (counted with multiplicity)!
So e.g. x^121 + sqrt(7)x^9 + fifthroot(22)x^7 + (1+i)x^3 + 22/7 = 0 has 121 solutions, and they're all algebraic numbers: nothing weird like pi in there.
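A quick numerical spot check of a smaller case (mine, via numpy):

    import numpy as np

    # x^5 - 1 = 0: degree 5, so five roots (the 5th roots of unity), all algebraic.
    roots = np.roots([1, 0, 0, 0, 0, -1])
    print(len(roots))                  # 5
    print(np.allclose(roots**5, 1))    # True: each root really satisfies x^5 = 1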
Those are all just normal imaginary numbers. The question is why, when we can't answer a question, we don't just invent a symbol, say it's the answer to the question, and call it a day.
It's a stupid question, but it's not related to your response.
The question has 300+ upvotes. That’s a proxy for how “good” it is. A person is curious about an aspect of mathematics and posed a well stated question. It is not a stupid question. From their perspective mathematicians appear to do something and they wonder why it can’t be done in other situations. Such a question is the basis of understanding. It is by wondering such things that enables one to gain true understanding of a topic.
Most questions asked by beginners in an area are “stupid”, and few are as insightful as this one. I’ve taught mathematics at a community college for 20 years and I would be delighted to have been asked this. Usually questions are mundane, like “Why did you add x to both sides?”. Here the person is trying to understand what mathematicians do, what the basis of expanding a number system really involves. This is a fantastic question.
People’s curiosity ought not be labeled as stupid.
> People’s curiosity ought not be labeled as stupid.
Correct. That is why I feel more comfortable asking "stupid" questions to ChatGPT. I clarified a lot of concepts in economics by repeatedly asking questions about the concepts that pop up in its answers and by trying to push it to the limits of what can be defined, explained, etc. One cannot be sure of the truthfulness or soundness of the answers, but they may help.
> It is not a stupid question. From their perspective mathematicians appear to do something and they wonder why it can’t be done in other situations.
I mean, you've already gotten it wrong. This can be done in other situations. Where it isn't done, it isn't done because doing it is pointless, not because there's some bar to giving names to opaque labels.
If something doesn’t behave like 0 in a ring or other algebraic structure then using that label is confusing and simply not done. You are free to use any symbol you want, but mathematics is a human endeavor and as such communication is important. Using the symbol 0 signifies something to those with mathematical training. Zero can’t have a multiplicative inverse because anything you call 0 that has a multiplicative inverse behaves like something other than zero. So no one would use 0 to describe such an element. In a ring, or abelian group, the symbol 0 is reserved for the additive identity element.
Similarly, I could say snkwoo is what most people call a chair. A grammarian would say there is no word snkwoo even though I just defined it.
Your original comment was wrong and bad. Instead of just admitting it or moving on you’ve decided to double down and make another bad comment.
I'm having trouble following the argument from your premise "it is a stupid question to ask why I referred to a chair as a chair instead of a snkwoo" to your conclusion "it is not a stupid question to ask why, when we have no answer to a question, we don't just say that we do have one".
The answer (to both of those questions!) is, of course, that we could do that, but it wouldn't accomplish anything. Asking the question just means you have no idea what you're saying. Or in other words, it's a stupid question.
> Asking the question just means you have no idea what you're saying. Or in other words, it's a stupid question.
So, to be clear, you're saying that the only kind of question that isn't stupid is the one where the querent already has perfect knowledge of the discipline?
I'm saying that to avoid asking a stupid question, you need to know the meaning of your own question. Stringing words together at random isn't going to get you there.
Compare the famous anecdote from Charles Babbage:
On two occasions I have been asked, -- "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" [...] I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
Is it necessary to have perfect knowledge of the workings of the Difference Engine to avoid asking that question? Of course not. Any knowledge at all would do the trick. If you put gravel into a water mill instead of grain, will you still get flour out of it?
A child asked her mother, “Can I put my hand in the fire?”. The mother responded, “That’s a stupid question. Of course you can.”. The child put her hand in the fire and got severe burns on her hand. She learned then that instead asking “Can I…” she should have asked, “Is it advisable…”. Unfortunately for her she lived in a society in which people frequently say things like, “You have to file taxes on or before April 15.” when they mean, “You can file taxes after April 15 but you may incur fees and penalties if you do so.”. She later became a teacher and was very patient with her students.