I think numbers are wonderful and cool abstract thingies. We grow up being taught them (in my case Arabic numerals in base 10) and they become such part of our being that we start to think that we know what “5” is. Then I was introduced to Roman Numerals. Didn’t like those much. That one taught me that 4 is more related to 5 than it is 3. Then some of us learn a base 2 number system or base 16 number system and for a brief moment we question whether we actually knew what “5” was. And then we may learn about Peano and the successor(s…) of zero. Did we really know what “5” was then? What is five-ness really? Perhaps fiveness is a set within a set within a set…
Every interaction our human minds have with fiveness is with a representation or encoding of 5. With each new representation, I’ve experienced fiveness slightly differently.
But I like computers and math. For me, the best way to experience oneness with fiveness is through computation with Lambda Calculus and Church Numerals. Very fond of that encoding.
I lost one of my fingers when i was a child, so after getting used to it i don't experience sadness when i look at it. That statement also checks out for four, in this case.
This presumes the existence of a category like 'cow' or 'finger'. I'm pretty sure there's nothing intrinsic about 'cows' and 'fingers'; these are artifacts of human cognition.
In base reality, there are no such things as 'cows' and 'fingers'. Just systems with similar properties. And if you have to be very pedantic, there are not even discrete systems, since everything is linked by information. You cannot separate a river from the ocean - it's part of the same system.
So, given that there are no categories to put discrete things into - they just exist in our imagination - and there are probably not even discrete things - these also just exist in our imagination - the question of five-ness is not really answered.
More to the point, the idea that "you cannot separate a river from the ocean" seems to be demonstrably wrong - otherwise we wouldn't have the (very useful in practice) notions of 'river' and 'ocean' in the first place. If you look at a map, the differences stand out pretty clearly - e.g., a river has a starkly different topology from that of the ocean. So, no, these differences are not just figments of our imagination, as, in particular, everyday practice shows.
In general, the failure to perceive emergent phenomena as something different from the particular substrate, and consider it separately - for instance, the failure to see how nature is not just a bunch of atoms moving around, has a name - reductionism. It is a form of intellectual blindness (not to be confused with the ability to think abstractly).
But "world", "separate" and "parts" are all language concepts too. I feel the inconsistency in your (circular) argument is not getting through.
That divisions are arbitrary and "exist just in our imagination" doesn't line up with your admission that imagination is real, and using words to describe it. That line between "real" vs "arbitrary / imaginary" is not as clear cut as you (unconsciously, apparently) draw it.
> I feel the inconsistency in your (circular) argument is not getting through.
I guess it's not, since I'm not convinced that mine is a circular argument.
My argument - to put it simply - is that we are all part of the same system and that there are no divisions. Without divisions, no numbers. I don't need any distinction between 'real' and 'arbitrary' for this to hold; that's a dichotomy you assume on your part.
Yes, we may be all part of the same whole. But there definitely are divisions – you are manifesting them with your own words. QED.
Let me try differently. Your premise seems to be that "base reality" (your words) is a single teeming interconnectedness, indivisibly unique, from which it follows "there are no discrete systems", no categories, from which it follows that counting discrete things is an "artifact of human cognition".
Correct? Did I get your position right?
I simply pointed out that human cognition / imagination, including language and categories and logic and numbers, is as much a part of the base reality (that same teeming interconnectedness) as anything else. You manifest your words = they capture a pattern, patently recognizable from other patterns, transmittable (e.g. to me), with a potential to affect me and others and our future.
The world being interconnected doesn't mean it's undifferentiated. All work is still ahead of you in showing that categories and words are somehow "not intrinsic" (again, your words). Yes they may be a teeming that refers to other teeming, but you haven't shown that's anything special, worthy of singling out as extrinsic.
> Let me try differently. Your premise seems to be that "base reality" (your words) is a single teeming interconnectedness, indivisibly unique, from which it follows "there are no discrete systems", no categories, from which it follows that counting discrete things is an "artifact of human cognition".
Yes, I think you summarized it pretty well! Thanks for that.
> Yes, we may be all part of the same whole. But there definitely are divisions – you are manifesting them with your own words. QED.
Woah, not so fast, there! Where exactly do my words manifest divisions? The words you read on the screen, that manifest in your mind as meaning, are not discrete things themselves. No word stands for itself, else we would not need dictionaries. They are fuzzy things that often change their meaning depending on context, place and time, on the reader and what she ate in the morning. To claim that words are discrete things that exist in themselves seems untenable to me.
> The world being interconnected doesn't mean it's undifferentiated.
I agree. Just as there are patterns in stellar nebulae that swirl and dance but never quite separate, all the world manifests its decay in wonderful shapes and patterns.
But none of these patterns can be isolated from the others; for none of them can a definite line be drawn to say "your existence starts here and ends here". The boundary of every definition, of every category, can be shifted and shifted and shifted some more. They are - as already mentioned - artifacts of cognition that allow us to form a mental model of the world that surrounds us, replacing the unfathomable interconnectedness with a simple game of blocks and strings and forces that fits into the limited capabilities of the simulation we run in our heads.
> All work is still ahead of you in showing that categories and words are somehow "not intrinsic" (again, your words).
Categories and words, numbers and letters and the human mind are fleeting. How can they be intrinsic if they exist for only an eye blink?
In your world without numbers, do letters exist? Does the letter "n" exist? Is there any relation between "n" and "nn" and "nnn"? If so, how would you describe that relation?
Do letters have an existence outside of our heads? I doubt it.
With regards to the relationship between repeating sequences: we (humans) invented funny games to describe these patterns as we perceive them. Some of these games have very strict and elaborate rules, although most of them are either inconsistent, incomplete, undecidable, or trivial.
Even constructive mathematics offers no help here.
> I'm just pointing that without arbitrarily dividing the world into separate parts, numbers don't arise.
I agree. Though I think a stronger statement is also true: Without "arbitrarily" dividing the world into separate parts, non-trivial thought is not possible.
Assuming by "parts", you mean "categories", I think my statement is still true. And, once we have categories, we can use numbers to compress information, which in turn allows us to perform more complex computation/thought with our limited computational power.
As to how arbitrary our categories are, one could argue that some are hardwired into our DNA, as claimed by Chomsky(1).
> This presumes the existence of a category like 'cow' or 'finger'. I'm pretty sure there's nothing intrinsic about 'cows' and 'fingers'; these are artifacts of human cognition.
> In base reality, there are no such things as 'cows' and 'fingers'. Just systems with similar properties. And if you have to be very pedantic, there are not even discrete systems, since everything is linked by information. You cannot separate a river from the ocean - it's part of the same system.
When I cut my finger the cow does not share in my pain. When we kill the cow for its meat, we do not share in its pain. That the cow becomes part of us through its consumption does not seem to invalidate this point—discrete systems do exist in our experience.
Of course, if you zoom out far enough you might refer to the sum of those discrete parts as some singular, complex system, but it seems the human experience is fairly limited in exposing this subtlety (not to mention that it is often useful to discuss the parts themselves without considering their relationship to the entire universe).
I don't know about that. Reality is strongly suggestive. (Of course you can argue that "reality" also is a figment of our imagination or, for example, it's not a thing in the first place, but I personally wouldn't go that far.)
Does the base really have anything to do with 5? I can represent 5 in base 10 as 5, or I can represent it in base 2 as 101, but both encode the same information.
Using different bases, to me, is just about storing information: how many symbols do you want to create before you increase the length of your number and start re-using them?
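This is easy to make concrete in Python (just an illustration): the base-2 and base-10 spellings are different strings, but they decode to the same value.

```python
n = 5

# Two spellings of the same number:
print(bin(n))          # → '0b101'  (base-2 spelling)
print(format(n, 'd'))  # → '5'      (base-10 spelling)

# Decoding either spelling recovers the identical value.
assert int('101', 2) == int('5', 10) == 5
```

The information content is the same either way; only the symbol alphabet and string length trade off against each other.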
It is everything that has some relation to something that has that same relation to something that has that same relation to something that has that same relation to something that has that same relation to some unique thing that does NOT have that same relation to anything.
I would say that positional notation, Zermelo ordinals and Church numerals are just ways to encode numbers. I would never call them numbers in the strictest sense of the word. Just as your C code is not an algorithm.
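For the curious, the Zermelo encoding mentioned here is the "set within a set within a set" from the top comment made literal: 0 is the empty set, and each successor wraps the previous number in a singleton. A quick Python sketch (names `zero`, `succ`, `depth` are mine):

```python
# Zermelo ordinals: 0 = {}, n+1 = {n}.
zero = frozenset()
succ = lambda n: frozenset([n])

five = succ(succ(succ(succ(succ(zero)))))

# Decode by counting nesting depth.
def depth(s):
    return 0 if not s else 1 + depth(next(iter(s)))

print(depth(five))  # → 5
```

Which nicely illustrates the point: this is clearly *an* encoding of five, and just as clearly not five-ness itself.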
I would say I knew what 5 was even before I learned any notation or written language whatsoever.
Yes; the difference between token and denoted concept. They are not the same thing (as René Magritte probably wanted to say with his "This is not a pipe" painting). In computer contexts: each representation of 5, in whatever base or language would point to the same element in a lookup-table.
But for us humans, the tokens and concepts blend to some degree: maybe not so much with regards to numerals (although there seem to be exceptions ...). So even if 'Amour', 'Love' and 'Liebe' essentially point to the same concept, we might conceive of them differently, ever so slightly, depending on the language we chose to use.
Which gives an inkling of the vast difference between computers and humans, at the current stage.
I like that he keeps it open-minded, but I've thought a lot about this and to me the case for numbers being inevitable is decisively yes. Every adaptable system evolves to adapt to a changing environment by detecting "modes" (categories) and adapting to each mode. Then we start noticing categories come in instances. There's a tree, there are more trees. So now it's useful to count them... Then it's useful to have fractions. And measurements (like weight and length). Basic operations like + - emerge, and with that multiplication. And with that division... and so on.
I think the fundamental properties of math are one of those things that will keep emerging for every thinking system. Not because numbers are fundamental to the universe. Rather they're fundamental aspect of intelligently adapting to (i.e. thinking about and making predictions about) the universe.
> There's a tree, there are more trees. So now it's useful to count them...
It's not that useful to count them. My personal interactions with trees have likely never involved counting them; not even when I was involved in woodland management.
Trees control their numbers according to whether or not they can reasonably live in a particular density of tree coverage.
Only humans want to count trees, perhaps in general to claim (abstract) ownership of a particular large number.
Let's say, based on your nick, you are teaching two classes. Both consist of individuals.
If we were to count the classes, we would say that one class consists of 2 people, and the other of 20000.
But those are just numbers.
My point is that numbers are useful for us in understanding how to interact with the world. If you don't plan to interact with the trees, you probably don't need to count them. But if you are going to use them for firewood or building materials, knowing the difference between 2 trees and 20000 is useful.
We agree that the difference between 2 and 20000 is significant, but I can easily observe this difference without counting all the way up to 20000.
Identifying the difference between 19,999 and 20,000 is certainly not significant in this way. 19,999 is already "too many" students; or, "plenty" of firewood. I don't need to count them to know this.
I'd say the difference between 5 and 6 is already relatively unimportant in these contexts.
My point is not that you need the number to be exactly accurate; my claim is that for the number 2 you don't even consider the student body as a quantity, but for 20000 it very much becomes a quantity.
Also, if 20000 students need training, as a teacher you can still provide training, you just need to do so in other ways. Either, you can recruit more teachers to help you (and then it matters if the number is 10000 students or 20000 students), or you can switch to another medium (a book, a video, on-line training).
Treating a student body as a number will make it more manageable for you to provide/organize training for them than just hiring more teachers at random until each student gets enough attention.
Likewise, 2 trees may keep you warm for a week, 20000 may keep you and half your (small) village warm for a year, with enough trees left over for the forest to replenish.
> Every adaptable system evolves to adapt to a changing environment by detecting "modes" (categories) and adapting to each mode. Then we start noticing categories come in instances. There's a tree, there are more trees. So now it's useful to count them...
I really like this part of the explanation, although not all adaptive systems are intelligent. E.g if I add a grain of salt on a heap of salt and it suddenly collapses, that is an adaptive system but not as a result of intelligent modeling of how it should behave.
> Not because numbers are fundamental to the universe. Rather they're fundamental aspect of intelligently adapting to (i.e. thinking about and making predictions about) the universe.
Here the problem is assuming intelligent adaptivity and the agent that has it is something separate from the universe. Being able to have a model of the universe inherently requires internally resembling the structural functional organization of the universe, a mutual conformity if you will, and therefore if numbers, counting, modes, categories etc are fundamentally useful constructs, I think they also at least resemble a fundamental part of the universe.
Well, in a very primitive and non-accurate sense, it counts/measures the number of (photosynthesis-generating) photons hitting each leaf, and then grows in directions that maximize its photosynthesis.
Since a tree has a much lower number of branches and leaves than there are photons, there has to be some aggregation of information involved in this computational process.
Another perspective could be that we are creatures emergent out of the physical and chemical processes of the universe.
There is some similarity or equivalence to some of these processes with mathematics, which allows us to model a subset of physics/chemistry with mathematics.
Because physics/chemistry can create us using rules that follow math (or even some undiscovered rules), this same physics/chemistry gives us the faculties of reason that lets us think about and apply math in some universal way
The physical, chemical, and mathematical substrate to which we are born allow us to even discover that the rules of math can be applied to some subset of the laws (or chaos?) of the universe.
> Do we really know if adaptation is responding to discrete classifications of environmental pressure?
Yeah we do know.
Take any system of variables that interact, and change them smoothly, and inflection points will emerge. This is true literally any way you look at it. In space. In time.
Heck aggregate states of matter represent "categorical adaptation" of matter to smooth alteration of temperature.
Point being this process happens even before we even can call a system "thinking". It's absolutely inevitable.
We can keep our options open forever and say "I don't know", that's fine. But if we analyze what we do know... turns out we DO know.
You are just saying that interacting systems have phase transitions.
I think this is a far cry from "emergent numbers".
Edit:
Maybe self-organizing matter that lives on a phase transition boundary benefits from being aware of the boundary? That's the only analogy I could think of that could logically connect.
> You are just saying that interacting systems have phase transitions.
Right.
> I think this is a far cry from "emergent numbers".
Because it was just a clarification remark on everything else I said before that, which did connect it to numbers.
> Maybe self organizing matter that lives on a phase transition boundary benefits from being aware of the boundary? Thats the only analogy I could think of that could logically connect.
That's also what I initially said, but think about it a little more broadly. Whether you "live on a boundary", or you live in a system that experiences such boundaries, or your own system internally has such boundaries, or the INTERACTION between you and your environment creates these boundaries, it doesn't matter. You benefit from being aware of them.
And here's the thing. You will experience such boundaries, because it takes infinitely more "resistance" to change, in order to survive a changing environment without changing yourself, than it takes energy to adapt to a changing environment so you resist less, and it resists you less (you become more compatible).
When leaves drop in winter it doesn't matter there's a specific day and hour and second we switch from summer to winter, but trees do benefit from recognizing the overall "shift" over time and adapting to it through its own shift. I'm deliberately using "non-thinking" adaptations to show that categories are precursors to how thinking works, rather than thinking inventing the idea of categories for no fundamental reason.
Our recognition of objects and entities is the same phenomenon. We recognize a boundary (inside and outside the entity/object) where there's a shift of overall behavior in that local timespace, compared to its surroundings. We benefit tremendously from recognizing that a specific region of a field of grass looks and acts more like a hungry lion than like grass. Technically, objects/entities are not perfectly defined things. A lion is in constant exchange and interaction with its environment; it's not a closed system. And anyway I don't feel like repeating the rest of this again.
TLDR; There be phases/categories in N dimensions/parameters. There be instances of them (repetition of patterns). There be adaptation by recognizing the phases/categories and their repetition and counting and measuring them, in order to optimize our predictions quality.
When I used to teach a class called “Mathematics for Liberal Arts Majors,” the first class began with having students “draw six.¹” I refused to give more explicit instructions. It was fascinating to see the results that came out of this, along with the ideas about numbers that it sparked.
1. I have to confess that the choice of six was not arbitrary.
Mathematically, 6 is the most interesting small number. For example, one might represent it as

  XXXXXX

or

  XXX
  XXX

or

  XX
  XX
  XX

or

  X
  XX
  XXX

or the sides of a cube, hexagons or six-spoked wheels. Occasionally someone would draw a rectangular prism with sides of length 1, 2 and 3. I often had students who might also draw 六 or 陸 or ٦ or ৬ (I had a diverse student body).
He starts his article, in the very first sentence, by referencing another article he wrote himself, in which he praises his achievement of having invented interstellar travel for a Hollywood movie shoot - one for which he was the science advisor, and which showcased a language he invented and named after himself.
Stephen Wolfram is so self-centred, I sometimes wonder if it's actually a stage character invented by a really talented comedian.
If Mathematics is a language to describe reality, maybe numbers are not its whole alphabet. Inherent complexity, things that are not discrete, and emergent properties may not be described adequately with numbers, and maybe a different alphabet or even language is needed.
Numbers may be (or not, it may depend on our biology) a good initial concept, but maybe something else may be developed, something more "correct" to deal with the tasks of describing reality, something like metric vs imperial units.
The idea of integrating time into your vision of reality, the way it is used in Ted Chiang's "Story of Your Life" (the movie Arrival was based on it), could be a good approach.
It's not about numbers, but it's built on concepts which wouldn't exist without numbers as a foundational abstraction.
There may be entire categories of representational concepts which we abstract poorly, or don't abstract at all, or possibly don't perceive at all - because from our point of view the relationships are so complex and remote they're effectively invisible.
I've been thinking about this since I read about the Pirahã people. The concept of number enables engineering and business (accounting) (and perhaps timekeeping). Someone unfamiliar with numbers would find it very difficult to function in modern society and a group of such people wouldn't build modern civilization (unless they discovered numbers, which someone did at some point, of course). Now, here's an example of my crazy dream: imagine the possibility of currently inaccessible ways of perceiving or thinking, that, if they were to become available, would enable a four year old child to gain an understanding of, for example, elementary particle physics from zero to phd level in mere minutes. Of course, that way of thinking might be so different that said understanding might make no use of (our current) mathematics or the concept of elementary particle at all.
It's an aside, but it's worth remembering that mathematics was not humanity's first attempt at explaining reality. For a very long time we tried to explain it using will. There were a multitude of beings, all with their own different motivations, and they could be angry, or happy or jealous or any other emotion, and it was these feelings of theirs that guided their actions and explained how everything came to happen.
> If Mathematics is a language to describe reality, maybe numbers are not its whole alphabet.
Who would claim they are? Mathematics initially co-evolved with our ideas of numbers, but many other important objects have been thought about for a very long time now.
An analog computer with huge resources would better use geometry than numbers to do calculations and obtain exact results. For example, calculus was first discovered thousands of years ago using geometry:
There's the related concept of 'sortals', the preconditions for countability. From the Stanford encyclopedia of philosophy[1]:
The three main ideas are that a sortal:

- tells us what the essence of a thing is
- tells us how to count things of that kind, which requires knowing which things are different and which are the same
- tells us when something continues to exist, and when it goes out of existence
Does "easier" equate to "universal"? I think it does, just thinking about your statement out loud.
Since to humans, explaining numbers with hydrogen would be harder, I think. The common ground between humans is larger, so we can rely on something less fundamental and abstract.
This was about finding some kind of common understanding to anchor the rest of the message to. Hydrogen, being the most abundant element in the universe, is likely to be known and studied by any civilization advanced enough to detect Voyager or its signals.
Not OP but I'm guessing OP is referring to the Voyager Golden Record attached to the first Voyager probe, specifically the playback instructions using hydrogen atom to derive time units for playing the record:
https://en.m.wikipedia.org/wiki/Voyager_Golden_Record#Playba...
Many of the ideas for the record came from Carl Sagan and a committee he led working with NASA.
I can understand how natural numbers can be "constructed" (for lack of a better word) as a byproduct of counting. What I could never understand on a deeper level are negative numbers; I can't see how a number, i.e. a count, can be lower than zero.
Maybe related: while I can also partially understand multiplication (syntactic sugar for addition), I could never understand multiplication by zero, meaning how come when you multiply a number (no matter how big) by zero, you get zero as a result.
Maybe there's some Wittgenstein-like material somewhere that will better explain this, in which case I'll be very happy for some references.
I'm not sure if it is what you want, but Intuitionism [1] is one area that challenges modern fashions in Mathematical thinking. It suffered greatly under the formalist approach led by David Hilbert and still has little mainstream support despite Gödel and his incompleteness proofs. Veritasium's latest video on Gödel's incompleteness [2] gives a pretty fair account of how we settled on the current fashionable foundations of Math (including nods to Cantor and Hilbert). For a more formal history there is a book "The Philosophy of Set Theory" [3] that sketches out how we got to where we are.
I have always been unsatisfied with the current foundations of math and their obtuse basis in Set Theory. Although I must admit, Category Theory has alleviated that quite a lot, especially with the relaxing of equivalence compared to equality.
I am somewhat familiar with the standard foundation of maths in set theory, and slightly less familiar with the program to ground maths in Category theory (although I do know a fair bit of category theory).
Maybe I am biased but I do not find the category theory foundations any less obtuse than the standard formulations. Like its pretty cool that you can do this stuff with category theory, but I think there is a reason the set theory was done first.
I didn't mean to imply that Category Theory addressed the obtuseness of Set Theory. Rather, I was alluding to work in Category Theory that helps to redefine our ideas of equality and equivalence. A discussion of that is available in Quanta Magazine [1].
Positive operations and numbers move you to the right.
Negative operations and numbers move you to the left.
So a negative number is simply a count of a change in direction.
This works very literally. Ten miles west followed by five miles east is five miles west. There's nothing mysterious or weird about the "absence of westness" when you've turned around and started off in the opposite direction.
It generalises neatly to complex numbers where i is a rotation in the complex plane - instead of being pointlessly-weird-for-the-sake-of-it: "the square root of -1 which we've spent years telling you can't exist, and now we're telling you it can."
Technically this is a form of basis vector. Beyond that things get complicated.
But the idea is still valid - you define your direction markers (even if they're functions instead of constants) and then you can work out where you are in the space you're exploring, and what "movement", "counting" and "position" mean in that space.
Multiplying by zero is always a null movement of no distance.
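Python's built-in complex numbers make this whole picture concrete (a small illustrative sketch; treating "west" as the positive direction is my arbitrary choice):

```python
# Ten miles west, then five miles east: the eastward leg is a count
# in the opposite direction, i.e. a negative number.
pos = 10          # ten miles west
pos += -1 * 5     # five miles east (a change of direction, not "anti-distance")
print(pos)        # → 5 (five miles west)

# Multiplying by i is a quarter-turn rotation in the complex plane.
z = 1 + 0j
print(z * 1j)       # → 1j       (rotated 90°)
print(z * 1j * 1j)  # → (-1+0j)  (two quarter turns = facing backwards, i.e. i² = -1)

# And multiplying by zero is the null movement of no distance at all.
print(12345 * 0)  # → 0
```

Seen this way, i² = -1 stops being "the impossible square root" and becomes "two quarter turns make a half turn".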
It might be better to conceptualize negative numbers as numbers in a different direction or along a different axis.
If I cover lunch for you, you would owe me $5.
In a way, you now have -$5. You won't have $0 until you pay me back the $5 you owe. We can also say you have $5 of debt. That makes the number positive while still representing the amount. But when reconciling your books, it'll be subtracted from your total.
The thesis, broadly, revolves around contemplation about whether a number is an object, but the larger question is whether it's possible to have any small part of mathematics, even a single integer, which doesn't imply the whole (or at least, a significant structure).
It’s fun to posit these ideas by going in the intuitive direction, then going in the reverse direction and seeing what happens.
I recently found this to be helpful when investigating fixed- and floating-point numbers. If "1011" means 2^3 + 2^1 + 2^0, which is 11, then "1011.101" means the same thing but with an additional 2^-1 + 2^-3, aka 5/8. Negative powers seem weird to begin with, but they kind of are just there to discover, due to the symmetry.
This kind of arithmetic is far less fundamental than what you and the article are talking about — it is just representation really, in computers — but I think it is a good example of how if you can tread a path in one direction then turning around and coming back to where you started then carrying on in the other direction is a useful tool for teaching and learning.
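The symmetry in the comment above is easy to demonstrate: one positional rule handles both sides of the point. A small sketch (the parser `parse_binary` is mine, for illustration):

```python
# Positional notation, continued past the point: digit i left of the
# point is worth 2**i, digit i right of the point is worth 2**-i.
def parse_binary(s):
    int_part, _, frac_part = s.partition('.')
    value = int(int_part, 2)                 # 1011 → 8 + 2 + 1 = 11
    for i, bit in enumerate(frac_part, start=1):
        value += int(bit) * 2 ** -i          # .101 → 1/2 + 1/8 = 5/8
    return value

print(parse_binary('1011.101'))  # → 11.625
```

Nothing new is invented for the fractional digits; the exponent just keeps counting down past zero.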
We can informally derive other types of number by extending operations on our existing set of natural numbers, seeking a "closure" for that operation, and seeing if we get consistent results.
So extend the concept of differences between natural numbers, by subtracting a large number from a smaller number and defining the result as belonging to a new class of number. Similarly, get fractions by defining non-integer ratios between integers, real numbers by defining non-fractional limits of infinite sums of fractions, imaginary numbers by defining non-real roots of polynomials with real coefficients, and so on.
Each time, we have an operation that works for some subset of our numbers, so we look at what does not work, and see if we can make it work anyway by defining the result of such an operation as a new type of number. And then we repeat with a different operation.
But there are things we cannot consistently "extend", like division by zero, or 0^0, so we leave just those out.
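The first step of this construction can be sketched very directly: represent an integer as a pair (a, b) of naturals standing for the difference a - b, so that 3 - 5 gets a home even though no natural number equals it. (A toy illustration; the names are mine, and real constructions use equivalence classes rather than a canonical form.)

```python
# An integer as a pair of naturals (a, b), meaning a - b.
def normalize(a, b):
    # (a, b) and (a + k, b + k) encode the same difference;
    # pick the representative where at least one side is 0.
    m = min(a, b)
    return (a - m, b - m)

def sub(a, b):   # "subtraction" of naturals that now always succeeds
    return normalize(a, b)

def add(x, y):   # (a - b) + (c - d) = (a + c) - (b + d)
    return normalize(x[0] + y[0], x[1] + y[1])

print(sub(3, 5))                   # → (0, 2), i.e. -2
print(add(sub(3, 5), sub(7, 0)))   # → (5, 0), i.e. 5
```

The same closure trick, with pairs meaning ratios instead of differences, gives the rationals.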
On multiplication, it's better not to think of multiplication as repeated addition, but instead to think of it in terms of areas. Taking that approach, it becomes easier to consider, e.g., (x² + 2x - 1)×(3x² - 4x + c), since we can write the terms along the sides of a rectangle and divide the rectangle into the products of the terms.
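The rectangle picture is exactly a grid where every term of one factor meets every term of the other. A short sketch of that grid as code (my own illustration, with polynomials as coefficient lists, lowest degree first, and c fixed to 5 for concreteness since the comment leaves it symbolic):

```python
# Multiplication as an area/grid: cell (i, j) of the rectangle holds
# the product of term i of p and term j of q.
def poly_mul(p, q):
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b   # sum the sub-rectangles of equal degree
    return result

# (x² + 2x - 1) × (3x² - 4x + 5), coefficients lowest degree first:
print(poly_mul([-1, 2, 1], [5, -4, 3]))  # → [-5, 14, -6, 2, 3]
```

That output reads as -5 + 14x - 6x² + 2x³ + 3x⁴, each coefficient being the total "area" of the cells along one diagonal of the grid.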
One of the things I did in my math for liberal arts majors class I used to teach was also to do a fictionalized version of the expansion of the concept of numbers from *N* to the whole numbers to integers to rationals to algebraic numbers to reals to complex numbers. I say fictionalized because whole numbers and negative integers historically come after (positive) rationals, algebraic numbers and transcendental reals, but for pedagogical purposes, it's easier to go in order of increasing supersets.
From what I read, negative numbers took a while to be accepted for the same reason, so you are not alone ;)
IMO negative numbers, complex numbers for circles, etc. can be thought of as practical overloads. If we agree that the symbol minus means a debt, we can use it like that. We could agree on *5 as meaning something, and if enough people find it useful, in 200 years we will teach it in school as obvious.
I think negative numbers are obviously useful (you can add up positive and negative numbers and get a balance, they have an easy way to be plotted, etc.).
So I'm no philosopher of math, but the intuitive way I think of it is that negative numbers are like borrowing a place holder.
Think about how electrical charge and currents work at the physical level. There's electrons, which have negative charge, or holes, which have positive charge. Holes are just an empty place an electron can go, rather than an extant particle.
Similarly, when we think of negative numbers in relation to counting numbers, we're just using a notational trick to keep track of a place holder or hole, a spot where a unary count can potentially go later to cancel it out.
There's a pretty fascinating book named Quantum Computing Since Democritus by Scott Aaronson, one of the leaders in that field. The idea of the book is "Could the ancient greeks have discovered quantum mechanics?"
Much of the higher level math in the book is past my familiarity, but the central theme is clear and intuitive: if you take ordinary probabilities, generalize them to allow negative probabilities, and then generalize those to allow complex numbers, out pops quantum mechanics quite naturally.
Why do these two generalization steps make sense? What the heck is a negative probability of an event? It's exactly the notational borrowing in the same sense as above. Why generalize to complex numbers? That's trickier, but I think of it from two directions: 1. It allows you to model partial constructive and destructive interference of probabilities, rather than just simple union or intersection. 2. The complex numbers are algebraically closed, while simpler number systems are not. So it feels natural that our ultimate number system for modeling nature would need to extend all this way.
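The interference point can be made concrete with a toy two-path setup: real probabilities only accumulate, while complex amplitudes can cancel before the Born-rule squaring. A sketch of the idea (my own illustration, not from Aaronson's book):

```python
# Toy interference: a "particle" reaches a detector via two paths.
# Classically we add probabilities; quantum-mechanically we add complex
# amplitudes and only then square (Born rule: P = |amplitude|^2).
import cmath

amp_path1 = 1 / 2 ** 0.5                          # amplitude via path 1
amp_path2 = cmath.exp(1j * cmath.pi) / 2 ** 0.5   # path 2, phase-shifted by pi

# Classical picture: probabilities just accumulate
p_classical = abs(amp_path1) ** 2 + abs(amp_path2) ** 2   # 0.5 + 0.5

# Quantum picture: amplitudes destructively interfere before squaring
p_quantum = abs(amp_path1 + amp_path2) ** 2               # ~0

print(round(p_classical, 6), round(p_quantum, 6))   # prints: 1.0 0.0
```

With a phase shift of 0 instead of pi, the same sum gives constructive interference (probability ~2× the classical single-path value), which ordinary unions of probabilities can't express.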
I realize some math heavy folks would find my way of thinking of this a bit hand wavy, but it really has helped me cut through the confusion and mystery. It also fits in very well with a bayesian perspective on probabilities.
This, the "fields are what's real" perspective on physics, and bayesian epistemology in general have greatly simplified the way I think of these big idea question topics.
I believe the concept of negative number doesn't arise naturally from counting, you need equations – even though we teach it all at once to kids when introducing the number line.
You don't really need equations, you just need missing elements, or debt. Do I have extra bricks, just enough (0 extra) of them or am I missing some? The case of missing bricks is naturally modelled with negative numbers.
edit: another intuition is that ordering is a property of numbers. Positive numbers are ordered in one direction. Can numbers be ordered in other directions?
If i'm at a party and everybody wants 2 beers, then if there is 1 person I need 2 beers (1 * 2), 2 people need 4 beers (2 * 2), but 0 people need 0 beers (0 * 2).
“Zero people” and “needing” is an oxymoron: nothing (zero) can’t be associated with any natural action (like “needing”). That’s what I was trying to say in my not-so-clear comment above.
Writing this down, I realized we’re still trying to put into fancier words what the pre-Socratics clearly understood 2,500+ years ago. And speaking of the Greeks, it’s too bad that Wolfram didn’t mention Plato by name in the first few paragraphs; the chair example is practically taken from him (expecting a Parmenides quote would probably have been too much).
I think I understand what you're trying to say. Let me try to motivate algebra in less explicitly algebraic terms for you:
Zero is the algebraic concept of nothing. While it refers to no physical thing, its existence in algebra is necessary to describe several mathematical laws, and several of its algebraic properties derive inherently from its being the concept of nothing.
Let's define both addition and multiplication [1]. Addition is an abstract representation of combination: you combine a pile of 2 things and a pile of 3 things to get a pile of 5 things. Now zero is the model for what you can combine with anything else without doing anything: combining a pile of 0 things (that is, nothing) with a pile of 3 things leaves you a pile of 3 things. Negative numbers represent undoing a combination: combine a pile of 5 things with a pile of -3 things (i.e., "take 3 things from the pile") and you're left with a pile of 2 things. Negative numbers and zero may not have a direct physical analogue, but by introducing them for algebraic purposes, things actually become simpler: you use the same terminology and logic to deal with both adding and removing things, or perhaps with concluding that in the end there's no net effect.
Now, multiplication is scaling. Half a pile of 2 things is 1 thing. Scaling and combining interact with each other, too. Taking half of a pile of 3 things and half of a pile of 5 things is the same as taking half of a pile of 8 things. Or I can say that doubling a pile and then adding another of the original is the same as tripling the pile (i.e., 2x + x = 3x).
This is where things get interesting. We can do nothing by adding a pile of something and immediately taking it away, leaving us with what we started (i.e., 0 = a·x - a·x). From above, we can also see that that is the same as adding a pile whose scale is 0 (i.e., a·x - a·x = (a - a)·x). Simplifying the equation a bit, we end up with 0 = 0·x: multiplying by 0 must yield 0 to make both addition and multiplication make sense. So the concept of nothing times anything yielding nothing isn't a requirement of nothing itself, but it's a requirement of how addition and multiplication works, and how nothing itself interacts with those operations.
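The chain of equalities in the paragraph above, written out as a single derivation (standard ring algebra):

```latex
\begin{aligned}
0 &= a\,x - a\,x   && \text{add a pile, then take the same pile away}\\
  &= (a - a)\,x    && \text{scaling distributes over combining}\\
  &= 0 \cdot x     && \text{since } a - a = 0
\end{aligned}
```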
Incidentally, the deeper you dive into mathematics, the more important you realize the concept of 0--of nothing--actually is. The most powerful ways to describe operations are based on how they arrive at doing nothing in interestingly nontrivial ways. And things that don't have ways to do nothing tend not to be very interesting structures to look at.
[1] I'm alluding to vector spaces here, although by glossing over the difference between scalars and vectors, this could also be viewed in terms of rings.
That’s over a thousand years, and that’s only the use of zeroes as a placeholder inside numbers. The use of a lone ‘zero’ symbol for the number zero seems to have taken over 2,000 more years (https://en.wikipedia.org/wiki/Brahmagupta#Zero)
As a kid, I recall that the thing that really made negative numbers click for me was underground floors, particularly when pressing on elevator buttons.
Because multiplication is just the number of times you add something to 0. If you add 2 to 0 3 times you get 6, if you add 2 to 0 0 times you get zero.
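A minimal sketch of exactly that definition; the zero case falls out because the loop body simply never runs:

```python
def times(a, n):
    """Multiply by adding a to 0, n times (n a non-negative integer)."""
    total = 0
    for _ in range(n):   # range(0) runs zero times, so times(a, 0) == 0
        total += a
    return total

print(times(2, 3), times(2, 0))   # prints: 6 0
```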
I’ve mentioned in a comment above, I find nothing natural in doing operations related to zero i.e. related to nothingness, meaning that I see “adding” and “zero times” as an oxymoron (you cannot associate an action, “adding”, to nothingness, i.e. to “zero”).
Well by that logic if you have nothing, there's nothing you can do to it, so you will always have nothing, which reconciles quite nicely with multiplying numbers by 0 resulting in 0.
I think humans could predict solar eclipses and solve quite a few differential equations before they managed to give that answer. The normal answer to that question was “It hasn’t rained this week” or even “What do you mean? It hasn’t rained this week”.
https://en.wikipedia.org/wiki/Brahmagupta#Zero: ”The Brāhmasphuṭasiddhānta is the earliest known text to treat zero as a number in its own right, rather than as simply a placeholder digit in representing another number”
> I find nothing natural in doing operations related to zero i.e. related to nothingness, meaning that I see “adding” and “zero times” as an oxymoron (you cannot associate an action, “adding”, to nothingness, i.e. to “zero”).
One way to picture multiplying “n times” is as a check-first loop, where each iteration tests the condition before the body runs. But some folks might feel like a loop should always just do something first and then consider whether it should repeat, i.e.
do
{
    // ... body ...
}
while (condition);
From that sort of perspective, zero might seem kinda unnatural and contrived, sorta like
if (!zero)
{
    do
    {
        // ... body ...
    }
    while (condition);
}
In other words, the fallacy might lie in the assumption that doing something multiple times means doing it once and then possibly more times.
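To make the contrast concrete, here is a multiply built each way, with Python standing in for the C-style pseudocode above; only the check-first version handles zero without the extra guard:

```python
def times_check_first(a, n):
    # while-style: the condition is tested before the body ever runs,
    # so n == 0 naturally yields 0
    total = 0
    while n > 0:
        total += a
        n -= 1
    return total

def times_do_first(a, n):
    # do/while-style: the body would run once before any test, so the
    # zero case needs the explicit "if (!zero)" guard from above
    total = 0
    if n != 0:
        while True:
            total += a
            n -= 1
            if n <= 0:
                break
    return total

print(times_check_first(2, 0), times_do_first(2, 0))   # prints: 0 0
```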
"The brain does much more than just recollect; it inter-compares, it synthesizes, it analyzes. It generates abstractions.
The simplest thought like the concept of the number one has an elaborate logical underpinning; the brain has its own language for testing the structure and consistency of the world."
I guess one of the most fundamental differences between Wolfram’s model for fundamental physics and traditional physics is that Wolfram’s doesn’t have the concept of measure or of a continuum at the fundamental level. Space and time, according to Wolfram’s model of the universe, are emergent properties of ‘the network’. Without such things as space and measures, there are no numbers in the fundamental “equations” that drive the system.
While I am not a big fan of Wolfram’s physics (I think it’s a bit backwards that he took something that he knows very well and somehow finds that it’s how the universe works), I kind of like the idea that space and time emerge from something more computational.
I think that’s what the article should have been focused on...
>I think it’s a bit backwards that he took something that he knows very well and somehow finds that it’s how the universe works
That's in a sense how every model works. All theoretical models of the world are human inventions that aid in making sense of the world in terms familiar to us and nothing more. If you get good results from thinking the world is made of atoms, then the world is made out of atoms. When someone comes along and explains the same thing with strings, then the world is made out of strings. Models are just 'manners of speaking'. It might very well be that you have a dozen entirely different, but equally accurate fundamental ways to talk about a thing.
It's so sad to see that he is taking an interesting question and turns it into a sales pitch of his computational universe theory.
It's as if Stephen Wolfram's mental horizon has been getting more and more restricted over the years, since he tries to frame everything he sees in terms of his physical theory.
Or maybe he has an interesting theory he understands deeply, and thinks it's valuable to show how this theory can give insight into the problem in question. Why be so cynical?
I had that view initially. And then I followed his writings for quite a few years and it started getting old.
And long. Loooong. His sales pitches seem to always go in circles: computational irreducibility here, causal graph there, namedrop Wolfram Language a few times with pretty pictures, make sure that "in our models" Einstein's theory and quantum mechanics are mentioned as special cases and "surprisingly everything works out beautifully". Yeah, no.
Time to move on. I've read enough of his 20-mile-long self-praising sales-pitch novels. He is just going in circles.
Repetition or iteration is fundamental to mathematics. Symbolic mathematics is iteratively applying axioms and schemas to elements of a language. Computation is iteratively applying rules or functions to sets. The correlation between numbers and iterations is strong; zero iterations is identity, one iteration is the unit of computing/application/doing-something. More iterations yield ordinal numbers, and there we are at numbers being inevitable.
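That "numbers as iteration counts" view is precisely the Church encoding from the lambda calculus: the numeral n is the function that applies f n times, with zero as the identity. A standard construction, sketched here in Python:

```python
# Church numerals: a number n is "apply f, n times".
zero = lambda f: lambda x: x                      # zero iterations: identity
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application

def to_int(n):
    """Decode a Church numeral by counting with ordinary integers."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(zero), to_int(three))   # prints: 0 3
```

Addition and multiplication then arise exactly as the comment describes: adding means chaining iterations, multiplying means iterating an iteration.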
In theory a universe could be entirely continuous with no need to "compute" in any discrete steps but quantization in our universe suggests that numbers also have a place in correspondence to quantized aspects of reality.
‘If a lion could speak, we could not understand him’.
- Wittgenstein
So even things we understand as fundamental truths of the universe are so deeply rooted in human experience that it's possible another living entity could be so different than ourselves that the capacity to sympathize with one another about certain things may simply not exist. Would a lion understand a corsage? Would an alien know what 12 x 12 means? The universe is just one hella big ¯\_( ͡° ͜ʖ ͡°)_/¯
When we train neural networks, we don't tell them what to think and how to think about it. We mostly focus on overall structure, layers and we want useful outcomes. And yet we see lots of patterns that repeat ways in which we think.
I think if a lion could speak, we'd understand him, because we're much closer to "a speaking lion" ourselves than we realize. Our intelligence didn't just happen at the last mile between ape and humans. Our intelligence has been emerging from the simplest animal there is throughout our entire evolutionary history.
And even then we see birds, octopuses and so on use tools and solve problems very similarly to us, though they branched off from us much earlier. There are many different types of minds that can evolve, but I believe the fundamentals will always be the same: they're fundamentals of information-processing systems. And by the way, we know dolphins, parrots and so on can do basic math, like counting and translating a count to another set of objects. So they understand numbers; that's what numbers are at their most basic.
We marvel at how enormous the universe is, and how varied life in it can be, how different the minds of aliens may be. But in that seemingly humble pondering, there underlies something very arrogant. We think our mind is special. We think it's unique. And other minds will think extremely differently.
I think we'll figure out sooner or later, that our mind is basically inevitable, the broad strokes will be replicated for every species throughout the universe. This doesn't mean it'll be immediately easy to figure out aliens and their culture. But those are the details, not the fundamentals.
The way I see the universe is that all this complexity is just the interplay of a finite few axioms (the fundamental laws of physics). Given enough time and scale, the seemingly unique complexity converges back into a few (relatively speaking) patterns. If that wasn't the case the universe would be pure chaos.
Wouldn't it suck if you and I and the rest of the universe are the inevitable result of a tweet-long equation, and any change to the equation renders it unviable.
So everything is just one formula, and me picking my nose right now came from that formula.
Related to this I have often wondered if childhood amnesia is (by a crude analogy) a "format problem".
In this analogy, the lion is our younger selves. Perhaps our brain changes so much that we cannot understand (or access) what we were thinking or experiencing anymore.
If we imagine that long term memories are encoded in a way that they can later be interpreted actively by the structures of the brain, we could expect that major developmental changes in brain function might impact the ability to interpret long term memory formed before the change.
Physical trauma, the major reorganisations of infancy and language acquisition all prompt changes in organisation and processing that could be what renders earlier memories inaccessible.
This (admittedly hand-wavy) idea seems to mesh well with the observed details of the phenomenon. Frequently accessed memories may be re-encoded, leading to some continuity. Children themselves experience fairly continuous long-term memory. Types of memories where the relevant processing might not have changed as much (smell, visual memory) might be more resilient. We would expect age of language acquisition to play a part. Etc.
While the post covers quite a bit of ground, it feels (to me) like it conflates knowledge representation, language, biological systems (i.e., the messiness of implementation), computability, and realism.
Regarding numbers in particular, there are, for all practical purposes, uncountably many mathematical truths that apply equally to numbers or to other abstract (non-numerical) mathematical ideas.
I would rather see depth in one area or another, rather than a conflation of ideas providing food for thought. Otherwise there are far too many variables to consider for a worthwhile analysis.
> While the post covers quite a bit of ground, it feels (to me) like it conflates knowledge representation, language, biological systems (i.e., the messiness of implementation), computability, and realism.
I understand that many of those are well-developed fields which exist on their own terms, with standard methods, standard questions and standard approaches to moving towards answers.
The article jumps between these fields to ask, and grope for an answer to, a simple question that in many ways can't be asked or answered within any one of them.
One thing to consider is that present day computers can follow the mechanical production of mathematical propositions close to completely. But computers have a lot of trouble producing or following arguments like this, in "natural language", which have a definite logic to them but whose operation is not based on only explicit, codified rules.
Edit: To me, this sort of speculation is what philosophy actually should be doing. The questions that are "ill-defined but compelling" are the questions that have led to significant intellectual progress: how Zeno's paradox led (or at least was related) to the invention of calculus, how Einstein's thought experiments led to relativity, etc.
Because of the limitation of language there's only a countable number of mathematical truths that can be proven or written down. So for all practical purposes there's countably many.
There are countably many proofs, but a proof does not entail just a single truth. Many truths can be entailed by a single proof; in fact, an uncountable number of truths can be entailed by one.
That does not mean that all truths can be written down or enumerated, but strictly speaking, the fact that the set of all proofs is countable is not sufficient to conclude that the set of all truths entailed by those proofs must also be countable.
None of this should be taken to violate Gödel's incompleteness theorems.
Finally, it's worth mentioning that not all formal systems are limited to finite proofs. There are formal systems where theorems as well as proofs can be countably infinite in length and where there are uncountably many proofs. These systems, known as infinitary logics, are often reducible to second-order logic and hence are incomplete.
No, because proofs have to consist of a finite number of words. Thus there are only countably many proofs of anything. In particular, there are only countably many reals between 0 and 1 which can be expressed in a finite number of words.
Are there a finite number of words? Language seems to grow and adapt to new concepts as needed, perhaps there is an infinity of linguistic descriptions available to us.
It's not. It's a single proof about a set, a set that's assumed to be uncountable in standard ZF set theory.
The "axiom system" (supposedly) contains a countable number of axioms. But these too are constructs of set theory. We still create, one by one, proofs of theorems about axiom systems with infinitely many axioms - so we have a countable/enumerable set of such theorems.
The proof systems we can see or touch all have this enumerability property. Perhaps you could change that with an analogue computer that a person could input "any" "quantity" into, but that's outside math as things stand.
Do you mean the proof that 0.25 >= 0 and the proof that 1/e >= 0 count as the same one, because there's a more general proof that a set of values including those is >= 0? But then where do you draw the line? When can you consider 2 proofs different enough to count as different ones?
I think you have a slightly stricter definition of "a proof" than me. I would consider a proof that all the numbers in (0,1) are positive to also be a proof that the number 0.5 is positive, as well as the number 1/e, and Champernowne's constant.
Since the original question was about uncountably many mathematical truths I would say we have one proof that proves uncountably many mathematical truths.
It is an abstract proof of a generator for concrete proofs of specific assignments to variables. The potential is uncountable, but only a countable subset will ever be invoked.
I don't know what it really means for a proof to be invoked, and I also don't really like the idea of separating proofs into concrete and abstract proofs. Either it proves something or it doesn't.
Numbers are the consequence of conservation laws. If you relax the definition of the natural number as something that can be stable in time and space, you will see that another system is possible. Gosh, even the periodic table is a good example of what is possible if no strict proton-oriented association is in mind.
> How many berries do we need to collect for each.
A handful for each.
> How big a foundation is needed for this building.
What building? You haven't even built the foundation yet.
And so on. You can do without numbers. And in fact people did all of the above without measuring quantities in any meaningful way in the history of humanity.
We use the natural numbers as an abstraction to understand computation. Arithmetic and number theory have isomorphisms to other areas of mathematics. The proofs-as-programs correspondence shows that math and computation are the same. So I like to think numbers are how we perceive the computational aspect of reality.
Arguably, number came from money. Not just the counting, but the generalization/fungibility - money can be exchanged for anything (unlike barter); number can represent a quantity of anything.
Trade and money have more direct survival advantages than number, providing an evolutionary gradient for improving the cognitive capacity that supports this generalization.
Numbers are not fundamental. What is fundamental is laws of arithmetic. 1 + 2 == 2 + 1. Laws like that can only be expressed in some language, which we call "numbers". But only by knowing and using such laws "numbers" become useful.
So in some weird sense numbers allow us to define the laws and the laws define numbers. No?
IIRC (edit: and as alluded in the article) there are languages which count as "one, two, three, many".
But one, two and three are also innate (immediately recognizable) quantities, are they not? So does a person using such a number system actually count, or only recognize and categorize?
Eh, I don't know: something that moves uses energy, and something that uses energy needs to replenish it. A natural question becomes how many recharges traveling a path needs. Of course you can think about it in terms of positions, but the stops themselves are intrinsically countable.
To me the concept of numbers is inherent in the dualistic minds we all have. If there is "me" and "other", there is 1 and 1, together making 2... a grouping of similar "others" is 3, and so on. It's simply just our nature.
Odd, I was thinking about this earlier today. I came to the conclusion that an axiomatization of math sans numbers doesn't make sense because you have to have a certain number of axioms. I'm certain a counterargument could be made though.
Well [Kind](https://github.com/uwu-tech/kind) has only one "axiom" (the lambda), so, I don't know? This is a really inspiring thought and I'm glad people are debating it for the content and not attacking Wolfram just because.
Goedel showed that 'less than', 'equal to', and 'more than' (ie pairing) are essential. Individual natural numbers are cute but not compelling. Even if God created them.
One of my favorite thought games is trying to imagine an alien species who started off with a different way of counting/measuring things. Say, maybe going with complex numbers right away.
I've always thought of numbers, and the broader system of mathematics as equivalent to the Newtonian understanding of physics - an adequate tool to explain observed reality - but the edge cases that break the system are starting to pile up, and we are just waiting for someone to discover the Quantum/General Relativity theory of mathematics.
As someone who has recently started diving deeper into math, I find that very interesting and something I was totally unaware of. Could you point me to some of these edge cases?
In the beginning was the primordial distinction between darkness and light, nothing and something, before and after. Its name was the Unit, its faces named zero and one.
And with the Unit came the Successor, the primordial operation, for what is cleaved may ever be joined. From one comes two, from two comes three, and ever on until forever and always, with each successor given a name as a number.
And with these numbers came a set comprising them, and the set was Natural and good, and from it came many wondrous things.
For from repetition of the Successor came Addition, and the set was closed under Addition, and the Counter saw that this was good.
And from repetition of Addition came Multiplication, and the set was closed under Multiplication, and the Counter saw that this was good.
And from repetition of Multiplication came Exponentiation, and the set was closed under Exponentiation, and the Counter saw that this was good.
But if a thing can be done it can be undone. What is given can be taken away. If there is Addition there must be Subtraction. A shadow fell over the face of the Counter for under Subtraction the set was not closed.
Yet the set of Natural numbers had its closure under Subtraction, and this closure was another set named Integers, and the Counter saw that the Integers were good.
But if Addition of a Natural number has an inverse, so too must Multiplication by a Natural number, and this inverse was Division. A shadow passed again across the face of the Counter for under Division by a Natural number the set of Integers was not closed.
Yet the set of Integers had its closure under Division by a Natural number, and the closure was another set named Rational numbers, and the Counter saw that the Rational numbers were good and rejoiced at their scope, for between any two Rational numbers was an infinity of other Rational numbers, each with its own name.
But if Addition and Multiplication by Natural numbers have inverses, so too must Exponentiation, and indeed, so must the combination of Addition, Multiplication, and Exponentiation in a polynomial with Integer coefficients, and this inverse was the finding of Roots. A shadow passed again across the face of the Counter for almost never were the Roots of polynomials Rational.
Yet the Roots of polynomials with Integer coefficients gave rise to a new set, the set of Algebraic numbers, and the Counter saw that the Algebraic numbers were good and rejoiced at their scope, for the Algebraic numbers have complexities that delight and amaze, and each has its own name.
And yet.
Almost no number is Algebraic.
Almost every number belongs instead to a Transcendental realm where there are many terrors and almost nothing can be named.
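The Counter's successive shadows can be replayed in ordinary Python (a sketch; the final check is a finite spot-check, not a proof of irrationality):

```python
from fractions import Fraction

# Naturals are not closed under subtraction...
print(3 - 5)                     # -2: an Integer
# Integers are not closed under division...
print(Fraction(1, 3))            # 1/3: a Rational
# Rationals ARE closed under +, -, *, and / by nonzero values
r = Fraction(1, 3) + Fraction(2, 5)
print(r)                         # 11/15, still an exact Fraction
# But roots escape the Rationals: x^2 = 2 has no Fraction solution
# (spot-checked over small numerators/denominators; Fraction ** 2 is exact)
print(any(Fraction(p, q) ** 2 == 2
          for p in range(1, 20) for q in range(1, 20)))   # prints: False
```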
"We take in some visual scene. But when we describe it in human language we’re always in effect coming up with a symbolic description of the scene."
The human mind works in a qualitative manner; we need symbols to translate qualitative perception into quantitative abstraction. This started as an economic and social organization need (geometry), but later evolved into mathematics and came to overcome the shortcomings of natural language in describing reality (philosophy became science).
Numbers are just symbols that map human perception to a reality that is inherently quantitative.
I fail to grasp what Mr. Wolfram tries to explain here, but it looks to me as if he is regressing into philosophy.
All structures in our universe are trees^. Numbers are a metalanguage for describing those trees. It is possible that there are other independent universes that we can't perceive, but I'd expect any aliens that we eventually interact with to be operating only in the treeverse, and so also be fluent in a language like our numbers.
^ Perhaps there are other independent universes out there that we don't perceive, but human brains are trees and all structures we can perceive and communicate about also are trees. We live in a treeverse.
There is a brazilian tribe (piranhã) that knows no concept of quantity besides one, two and many. Also, in their language, verbs are not inflected for tense. This is probably due to their lifestyle, which needs no long-term planning, discussions about the past, or managing multiple instances of the same resource.
Basically every single claim about Pirahã (not Piranha) needs to be taken with a huge grain of salt. There's just not enough people who have studied the language and the claims are so strong and unparalleled that we really ought to have more evidence between making any conclusive statements.
Hmmm... I stand corrected. Although I lived in the Amazon region for a few years (1993 to 2002 in Rondônia), I never formally studied any native language, mostly blindly believed the wikipedia article on the Pirahã people[0], and probably mixed in things from the Hopi language time controversy.
Thanks for such info.
Ironically, my English was mostly learnt from people from Europe who came there to conduct research. This is amazing and sad at the same time. But the fact that some tribes can whistle names, words and entire phrases, and use that skill to communicate while hunting in the jungle, is something I was told by a native speaker; I can't remember which tribe or language it was, probably Caripuna or Suruí.
I'm not claiming that what Everett et al. are saying is wrong, just that the claims are so spectacular that I'd like to see more evidence before I believe them.
He's using a technical definition of irreducibility [https://en.wikipedia.org/wiki/Irreducibility] for which the claim is true. It doesn't mean "cannot be expressed more concisely at all".