Many valued logic arises quite naturally in category theory. If you take usual set theory and change the sets to fibred sets you can easily get a subobject classifier with, e.g., four truth values. However, these don't tell us much about "truth" in the social sense.
Actually, my impression of what category theory has brought to mathematics is that structure itself seems more fundamental than logic; it is as though logic arises because of structure too (and not just the converse of logic leading to structure).
There are some semi-tenuous, but nevertheless interesting connections between this metamathematical line of thought and contemporary work on "grounding" in metaphysics.
I've long held the suspicion that there is some critical, unargued-for assumption in many of the relevant academic circles here: that if reality 'bottoms out' somehow, it has to bottom out in a monadic fashion. I've often wondered: why can't it be structure all the way down?
----
I should add that many/most grounding theorists likely hold some sort of monadic view. I only mention grounding because it is seemingly closer to a 'structure all the way down' view than the predominant views within analytic philosophy on what is fundamental.
My pleasure. The first three are entries in the Stanford Encyclopedia of Philosophy. I suggest reading the first link (entry titled "Fundamentality") first, as that will help provide context. The second link (entry titled "Metaphysical Grounding") provides a somewhat dated, but solid overview of the literature and is what you should read right after "Fundamentality". I say somewhat dated only because the entry is from 2014, and 'grounding' sort of took off around 2009 and has accelerated past 2014. The third link (entry titled "Ontological Dependence") can be skipped, but if you want additional conceptual and historical context, then I suggest blitz reading it.
The rest of the links are to books or papers/articles which I think you might find interesting.
The very last link is to a rather obscure paper (I randomly found it a year ago) from 1997 that is not on grounding, but is related in some way to the 'structure all the way down' perspective I suggested. The paper's argument is incomplete, attempts to do too many things at once, and presents a rather naive and elementary framework for a 'structure all the way down' view. However, it is (in my opinion) a rare gem. It raises a number of important questions and issues that have (to my knowledge) not yet been addressed by anyone within academic philosophy, or other relevant disciplines for that matter. These questions and issues go unaddressed because the monadic assumption(s) is, seemingly, so deep-seated and fundamental to most people.
The way I see it, either 'things'/'substances'/'elements' are primary and 'relations' are derivative/secondary, or they are on equal footing, or 'relations' are primary and 'substances' are derivative/secondary. There are only three possibilities. I'm inclined to believe either the second or third are true. If you put a gun to my head and made me choose one, I'd say the third is true.
I like this way of looking at things. I've been (as a studied amateur) reading and thinking about this topic for a long time. In some implementations of logic programming, disjunction is analogous to iteration or sequence (do A then do B) and conjunction is analogous to nested loops or map (map all A with all B).
If structure is more fundamental than logic, it's not mysterious why the procedural interpretation of logic programs comes out the way it does - it follows from the structure underpinning the logic.
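A minimal sketch of the procedural reading I have in mind, in Python generators (my own toy reconstruction; real logic-programming engines do this with unification and backtracking, and the goal/binding names here are made up):

```python
# Sketch of the procedural reading of a logic program, using generators.
# solve_or tries each alternative in sequence (disjunction ~ "do A, then do B");
# solve_and nests one goal's answers inside the other's (conjunction ~ nested loops).

def solve_or(goal_a, goal_b, bindings):
    # Disjunction: enumerate A's answers first, then B's.
    yield from goal_a(bindings)
    yield from goal_b(bindings)

def solve_and(goal_a, goal_b, bindings):
    # Conjunction: for every answer to A, enumerate B's answers under it.
    for extended in goal_a(bindings):
        yield from goal_b(extended)

# Toy goals: "x is one of 1, 2" and "y is one of 10, 20".
def pick_x(bindings):
    for v in (1, 2):
        yield {**bindings, "x": v}

def pick_y(bindings):
    for v in (10, 20):
        yield {**bindings, "y": v}

print(list(solve_and(pick_x, pick_y, {})))   # nested-loop behaviour
print(list(solve_or(pick_x, pick_y, {})))    # sequential behaviour
```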
Mac Lane started out in logic before category theory was invented and I think this had a positive influence on his pedagogical style in writing, for example, textbooks.
Does what you say have anything to do with probability theory (e.g.: the set space B given that A is true)? There are of course the basic adjunctions that apply to quantifiers and connectives (e.g.: the adjoints to the preimage). Are the iteration and nesting you mention similar to this?
When I was 18, I "discovered" three-valued Boolean logic by thinking about what it would mean to have three signs in arithmetic. I called them positive, negative and urgative, and had a symbol for urgative so I could write expressions. I worked out truth tables and had a lot of fun.
Later I brought this up with one of my math professors and he encouraged me to study methods of proof so I could understand my results better. To his credit, he didn't say "this is already all worked out"; he sparked an interest that led to me studying a lot more mathematics (like abstract algebra and non-Euclidean geometry) than I had planned when I started college.
When I started thinking about it I made a number "line" that had an origin (0) with positive in one direction, negative in another, and urgative in a third.
The reasoning is that adding a value with a different sign first takes you towards the origin, then onward in the direction of the new sign. The answer to your question is:
-1 + +5 = +4
+4 + urg3 = +1
To illustrate, some other examples:
+5 + urg10 = urg5
-5 + urg10 = urg5
urg5 + -10 = -5
urg5 + +10 = +5
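A minimal sketch of that addition rule in Python (my own reconstruction of the rule stated above; I'm assuming equal magnitudes of different signs cancel to the origin, which the examples don't pin down):

```python
# Sketch of the three-sign addition rule described above.
# A value is (sign, magnitude) with sign in {"+", "-", "urg"} and magnitude >= 0.
# Same sign: magnitudes add. Different signs: the smaller magnitude is "used up"
# moving toward the origin, and the remainder points in the sign of the larger operand.
# Assumption: equal magnitudes of different signs cancel to zero, written ("+", 0).

def add(a, b):
    (sa, ma), (sb, mb) = a, b
    if sa == sb:
        return (sa, ma + mb)
    if ma == mb:
        return ("+", 0)          # assumed: ties cancel to the origin
    return (sa, ma - mb) if ma > mb else (sb, mb - ma)

assert add(("-", 1), ("+", 5)) == ("+", 4)
assert add(("+", 4), ("urg", 3)) == ("+", 1)
assert add(("+", 5), ("urg", 10)) == ("urg", 5)
assert add(("-", 5), ("urg", 10)) == ("urg", 5)
assert add(("urg", 5), ("-", 10)) == ("-", 5)
assert add(("urg", 5), ("+", 10)) == ("+", 5)
```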
I never actually proved that the arithmetic was consistent, or even useful in any way.
Multiplication was much easier to reason about because setting the sign was just the result of a truth table. This is when I figured out that the true/false truth tables extended to the three symbols pretty trivially. I worked through Boolean logic and then got bored with it and forgot about it until that time talking to my Calc 3 teacher during office hours.
I replied to the commenter. I started thinking not about boolean logic, but about a third arithmetic sign. I was thinking, if there are conceptual dimensions we can't perceive, maybe there are directions in those dimensions we also can't perceive.
If this caught your interest, run out and buy Graham Priest's Introduction to Non-Classical Logic. https://www.cambridge.org/ie/academic/subjects/philosophy/ph... It's so good. It's just a pity that there doesn't seem to be video of Graham Priest teaching this material in person anywhere.
Oooh, dat interesting! I wasn't strongly aware of non-classical logics (other than modal and classical). Meeting intuitionistic logic... bloody hell that was very weird for someone brought up on classical logic. I will certainly take a look, thanks.
Right, rage hat on, I'm not buying this. It's an absolute fucking classic on how not to write a maths book. From the PDFs, some crap examples then.
"It is also standard to define two notions of validity. The first is semantic. A valid inference is one that preserves truth, in a certain sense. Specifically, every interpretation (that is, crudely, a way of assign-ing truth values) that makes all the premises true makes the conclu-sion true. We use the metalinguistic symbol ‘|=’ for this. What distin-guishes different logics is the different notions of interpretation they employ."
Wut the utter fuck does that mean? What could "preserves truth, in a certain sense" possibly mean? What the fuck is a metalinguistic symbol anyway? He goes on:
"The second notion of validity is proof-theoretic. Validity is defined in terms of some purely formal procedure (that is, one that makes reference only to the symbols of the inference). We use the metalinguistic symbol ‘|-’
for this notion of validity"
I think he's saying ‘|=’ is the human version of 'therefore' or 'we can deduce', and the formal version is ‘|-’, which corresponds to my understanding but if I didn't have a background in this I'd be fucked. This stuff is not complicated and it's mainly not hard, but it is fucking impossible if you write like this.
Later on we have " ⊃ " which is "material conditional" - da fuq? This is logical implication I think, it is standard in classical logic as an arrow '->' but he neither uses that nor relates it to the arrow symbol, nor even gives it the standard name - I have never ever seen it called a Material Conditional. And he doesn't give its definition so I can't tell. Later he gives a tree form of this I've never seen before but can sort-of understand so I guess it is logical implication.
And how does material equivalence '≡' differ from equality '='? Explain this shit please.
"An interpretation of the language is a function, ν, which assigns to each propositional parameter either 1 (true), or 0 (false). Thus, we write things such as ν(p) = 1 and ν(q) = 0." If I didn't know what this meant I'd be lost.
"A branch is closed iff there are formulas of the form A and ¬A on two of its nodes; otherwise it is open." I'm not sure what he's saying although I know it's actually very simple. It's something like 'if you get x on one branch and ¬x on another then you have a contradiction, which means that branch is junk because it's claiming something is true while the other is claiming exactly the same thing is false; they can't both be true. Except that's a bit wordy and more importantly I'd need to sit down and consider the scope; what a branch actually is and how far back up the contradiction goes.
This is a lesson on how not to do it. I'm getting lost and the 'branch is closed' example is on page 8! I would love to understand the stuff in this but the medium of conveyance, to wit this book, is a car crash. At best it's lecture notes but you'd better have the lecturer around.
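For anyone else bouncing off that passage, here's roughly what "a branch is closed" comes to, as a tiny Python sketch (my own reading, not the book's notation): a branch is the set of formulas along one path of the tableau tree, and it closes as soon as some formula and its negation both appear on that same path.

```python
# Rough sketch of tableau branch closure: a branch is a list of formulas
# (plain strings here); it is closed iff some formula A and its negation ~A
# both occur on it. "~" marks negation in this toy encoding.

def negate(formula):
    return formula[1:] if formula.startswith("~") else "~" + formula

def branch_closed(branch):
    formulas = set(branch)
    return any(negate(f) in formulas for f in formulas)

print(branch_closed(["p", "q", "~p"]))   # True: p and ~p on the same branch
print(branch_closed(["p", "~q"]))        # False: no contradiction, branch stays open
```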
This article links to an article on Aristotle's "paradox of the sea battle" [1].
However, I'm having trouble following it:
Aristotle: if a sea-battle will not be fought tomorrow, then it was also true yesterday that it will not be fought.
But all past truths are necessary truths. Therefore, it is not possible that the battle will be fought.
In classical logic the statement would be either true or false. But the statement "a sea-battle will be fought tomorrow" is neither. You can't say in advance that this statement is true or false; it's a prediction, and it has only a sort of probability of being true or false.
The interesting outcome of rejecting the excluded middle is a constructive logic (and mathematics), where proving that "the statement is false" is itself false doesn't mean that the statement is true (hence only direct evidence of truth counts as a proof).
One solution to this is modal logic, where you have the modifiers "necessary" (written "[]" or as a box) and "possible" (written "<>" or as a diamond). So instead of "a sea-battle will be fought tomorrow", which is neither true nor false, you can only express "it is necessary that a sea battle be fought tomorrow" (which is false, you cannot know the future) or "it is possible that a sea battle be fought tomorrow" (which is true, unless it is impossible, in which case it is false).
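To make the box/diamond reading concrete, here's a minimal possible-worlds sketch in Python (the world names, accessibility relation and valuation are made up for illustration): "necessarily p" holds at a world iff p holds at every accessible world, and "possibly p" holds iff p holds at some accessible world.

```python
# Minimal Kripke-style semantics for the modal operators.
# access[w]: worlds accessible from w; val[w]: atomic propositions true at w.
# All names below are illustrative only.

access = {"today": {"battle_tomorrow", "peace_tomorrow"}}
val = {"today": set(),
       "battle_tomorrow": {"sea_battle"},
       "peace_tomorrow": set()}

def necessarily(p, w):
    # True iff p holds in every world accessible from w.
    return all(p in val[v] for v in access.get(w, set()))

def possibly(p, w):
    # True iff p holds in some world accessible from w.
    return any(p in val[v] for v in access.get(w, set()))

print(necessarily("sea_battle", "today"))  # False: one accessible future has no battle
print(possibly("sea_battle", "today"))     # True: some accessible future has one
```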
> In classical logic the statement would be either true or false. But the statement "a sea-battle will be fought tomorrow" is neither.
I'm not buying it. The statement "a sea-battle will be fought tomorrow" is either true or false. Either it will be fought or it won't. You just don't know which one. It won't "maybe be fought".
Similarly, you don't know whether "a sea-battle was fought 3000 * 365 days ago". You don't have enough information to evaluate the truthfulness of either statement, and can only say what confidence you have the sea battle was/will be fought on the given day.
Today, we make the statement "a sea-battle will be fought tomorrow". Two days from now, the equivalent statement is "a sea-battle was fought yesterday". That statement is either true or false. As far as I can see, this leaves two possible views.
First view: The statement today is true or false, just as it will be two days from now. But today we don't know whether it's true or false.
Second view: The statement today is neither true nor false, but it will become true or false tomorrow, and will therefore be either true or false two days from now.
Pick whichever view you like. The argument is going to come down to differences of (unstated) definition of what it means for a statement to be true.
Roughly speaking, the aim of logic is to develop a formal language precise enough to reason about & access truths about the world. The idea that "a sea-battle will be fought tomorrow" is either true or false is not useful to this aim.
Yeah, and not all past truths are known. You don't know the number of hairs on Aristotle's head, you don't know if the sea battle was going to be fought the day after tomorrow yesterday.
The first tip-off that there's something funny going on is that it's a temporal conditional, and it runs cause and effect backwards in time: if something happens tomorrow, then something was already true today and yesterday. It then uses the normal forward flow to say that if something was true yesterday, it's true today and on all following days.
It's basically self-reinforcing: if it's true, then it's always true. The other unconsidered case is that if it's false, then it's always false.
This is really about 'modality', I'd say. Statements made in the past about a future that is now itself past can usually be verified with certainty, like an old weather forecast. That 100% certainty breaks down, however, once you move the forecast far enough that it reaches into the "real" future relative to our current present: you still have to wait and observe the weather to see whether the forecast comes true.
So this "paradox" is just about how the modality of the statement changes discontinuously once it is really uncertain.
In recent years I've been using a kind of four-valued logic in my daily thinking: TRUE, FALSE, an UNKNOWN of limited contagiousness, and an INVALID of full contagiousness. I feel somewhat estranged after learning there is no such logic among the most commonly used ones.
There's a 5-valued logic usable in common everyday life, originating from some Buddhist philosophy I don't care to remember, that I actually use fine in everyday life, similar to yours. It has:
- {T}: true
- {F}: false
- {not-T && not-F}: neither true nor false (yet, for us): eg. "unknown" or NULL (so far we're in SQL-logic territory :P, still familiar)
- {T && F}: true and false at the same time: ERROR / paradox / invalid / contradiction / exception / malformed or invalid questions
- {}: "ununderstandable/uncommunicable" or "cannot be put in to words", but NOT error/exception/invalid - for a software system this would be "there is a true|false|null|exception value for this but there is no direct access to this information" eg. maybe "the value is somehow stored in a physical artifact or arises as a result of an agent doing and experiencing something, but it can't be communicated as information, you'd have to pass the physical artifact to other agents for them to 'grok it', or to engineer situations where they could have a similar experience" or "you can't explain to someone 'how it is to be inlove' or 'how it is to be on drug X', they need to have the experience or access to the drug themselves'
I'm not sure why pentaleans are not as natural to other people as booleans, since they seem way more intuitive to me when dealing with information from the real world...
I think of it in the four categories they typically teach in digital design classes for electrical engineers:
T - True
F - False
X - Don't care
Z - High-Z, essentially "don't know" or a null input
You can apply "don't care" to the inputs of a logic equation to reduce its complexity. If you know a certain input will never be true while other inputs are false, then you can ignore all the states where that is the case. For example, the ECU in a car will turn on a certain light on the dashboard whenever the wheels slip. The wheels can't slip when the car isn't in drive, and certainly not when the car is off, so the ECU can reduce the logic needed to determine when to turn on that light. Instead of (car is on AND car is in drive AND power going to wheels AND wheel is slipping THEN turn on light), you can reduce it down to (power going to wheels AND wheel is slipping THEN turn on light). The reduced logic would also turn the light on if the car were off while power was going to the wheels and the wheels were slipping, but we know that case is impossible (the engine can't send power if it's not on).
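A quick brute-force check of that reduction, as a Python sketch (the variable names are mine): under the side condition that the wheels can only slip when the car is on and in drive, the full expression and the reduced one agree on every reachable input.

```python
from itertools import product

# Brute-force check of the "don't care" reduction described above.
# Inputs: car on, in drive, power to wheels, wheel slipping (all booleans).
# The impossible states (slipping while off or not in drive) are skipped as don't-cares.

def full(on, drive, power, slip):
    return on and drive and power and slip

def reduced(power, slip):
    return power and slip

for on, drive, power, slip in product([False, True], repeat=4):
    if slip and not (on and drive):
        continue  # unreachable state: ignore it when comparing
    assert full(on, drive, power, slip) == reduced(power, slip)

print("full and reduced logic agree on all reachable states")
```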
The High-Z is more of a "don't know" kind of input or output. It is an undetermined input that is neither true nor false. We still care about it, since it's not a "don't care", but we have no idea what it is. You could build a circuit to react to this state or have the circuit do nothing until it becomes true or false. You can also use this as an output of a circuit.
I understand the idea of {} that you talk about, but I don't know how it works in the context of logical reasoning. Can you give examples on how it interacts with the other 4 values under different logical connectives?
It does not. I think logic itself is limited. Mathematicians have results to the effect that there's stuff in number theory that's "beyond" any logic system invented (so far); something like Math > Logic. Even though intuitively you'd think math is based on logic, it ends up being the other way around: you can end up with algebras deeper than any logic system you'd try to base them on (ask some mathematicians for a better explanation; mine would be wrong), if I understood it well...
I think there's something similar at play in physics and in the real world.
Can only explain with a (likely flawed) computing metaphor: "the value is a pointer that you cannot dereference but opaque sub-systems of your mind can still compute stuff with it (eg. it's not truly unknown)".
If I'd try, I'd say that: there's stuff you can indirectly compute with but can't express logically or communicate in a logical language.
It's not uncommon. Many people use the truth values {T}, {F}, {}, and {T, F} of one of the four-valued Belnap systems taken as partial logic. It is very adequate for linguistic modeling of truth conditional content and propositional attitudes, see for example Muskens's great little book Meaning and Partiality.
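If anyone wants to play with that, the set representation makes the Belnap/FDE connectives almost one-liners. A Python sketch (mine, not from Muskens): a truth value is the subset of {T, F} the proposition is "told"; negation swaps the two components, and a conjunction is told true iff both conjuncts are told true, told false iff either is told false.

```python
# Sketch of Belnap's four-valued logic with values as subsets of {"T", "F"}:
# set() -> neither (gap), {"T"} -> true, {"F"} -> false, {"T","F"} -> both (glut).

NEITHER = frozenset()
TRUE = frozenset({"T"})
FALSE = frozenset({"F"})
BOTH = frozenset({"T", "F"})

def neg(a):
    out = set()
    if "F" in a:
        out.add("T")
    if "T" in a:
        out.add("F")
    return frozenset(out)

def conj(a, b):
    out = set()
    if "T" in a and "T" in b:
        out.add("T")          # told true iff both conjuncts are told true
    if "F" in a or "F" in b:
        out.add("F")          # told false iff either conjunct is told false
    return frozenset(out)

def disj(a, b):
    return neg(conj(neg(a), neg(b)))   # De Morgan, which holds in FDE

print(conj(TRUE, BOTH) == BOTH)       # True
print(conj(NEITHER, BOTH) == FALSE)   # True: meet of the two middle values is false
```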
If I only need the "third case", I personally prefer a bivalent logic with nontraditional predication theory developed by A. Sinowjew (1970) and H. Wessel (1989). It's great fun to point out this system to philosophers who weren't trained very well in logic and are dogmatically convinced that it's impossible to express a third case in a bivalent logic. (Admittedly, that's a very petty motive. Anyway, NTPT will not convince any real intuitionist, because the quantifiers remain classical, too.)
It takes time for me to find and try understanding the materials you mentioned. I'm unlikely to have a meaningful reply to your comment anytime soon. Let me just thank you here :)
Of course not. They are both in German, though, and it didn't help that Wessel refused to change his name when officials of the GDR asked him to, because that would have contradicted his views about proper names. [The best formulation of FOL with NTPT is in the 1999 edition of Logik.]
Wessel, Horst (1989, 1999): Logik. Logos Berlin.
Sinowjew (Zinov'ev), Alexander Alexandrowitsch (1973): Foundations of the Logical Theory of Scientific Knowledge (Complex Logic). Springer.
Sinowjew (Zinov'ev), A. A. (1970): Komplexe Logik. Grundlagen einer logischen Theorie des Wissens. Vieweg.
This sounds a lot like the abstraction that SystemVerilog uses to describe the electronic states that a piece of wire can be in. Namely, SystemVerilog uses 0 for a wire driven by a low voltage, 1 for a wire driven by a high voltage, Z (high impedance) for a wire that is not being powered at all, and X (unknown) for a wire that is driven by a 0 and a 1 at the same time.
X is highly contagious, if you connect any wire to one with an X value, the result will be X. On the other hand, Z wires have their value overwritten by anything (a 0 connected to a Z will result in a 0). However, Z is not a valid _input_ to a logic gate: 1 AND Z = X.
Z is usually used for buses which consist of multiple inputs and outputs connected to the same wire. When nobody is transmitting, the value of the bus is Z. When one device transmits, the bus takes the value of the transmission. And finally, if more than one device attempts to transmit at a given time the result is X and you get what is known as bus contention :)
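For the curious, the resolution behaviour described above can be modeled in a few lines of Python (a simplified toy model of those rules, not SystemVerilog itself): connecting two drivers on a wire and feeding values into a gate behave differently, which is why a 0 driver overrides a Z driver while 1 AND Z = X.

```python
# Toy model of 4-state wire values: "0", "1", "Z" (undriven), "X" (unknown/conflict).

def resolve(a, b):
    # Two drivers on the same wire: Z loses to anything, conflicting 0/1 gives X.
    if a == "Z":
        return b
    if b == "Z":
        return a
    if "X" in (a, b) or a != b:
        return "X"
    return a

def and_gate(a, b):
    # A gate input of Z or X is treated as unknown, unless the other input forces 0.
    if a == "0" or b == "0":
        return "0"
    if a == "1" and b == "1":
        return "1"
    return "X"

print(resolve("0", "Z"))   # "0": the Z driver is overwritten
print(resolve("0", "1"))   # "X": bus contention
print(and_gate("1", "Z"))  # "X": Z is not a valid logic input
```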
The abstraction you mentioned isn't really similar to what I mentioned in parent. INVALID would be like X, I guess, but there is no counterpart of Z in the logic I used daily. Nonetheless it's interesting to learn about the abstraction :)
Lots of insanity here too. Like, “~(~(a)) = a” is no longer a correct rule (because when negated twice, z becomes x). But other than that it’s a cool system.
...that would be more of a belief theory (like Dempster-Shafer) than a logic.
Or maybe what you are thinking of is simply the situation wherein a logical expression is neither valid nor unsatisfiable. Valid would mean it's true in all models (all possible worlds), and therefore provable. Unsatisfiable would mean it's false in all models (all possible worlds), so that its negation would be provable.
so "p or (not p)" would be valid
"p and (not p)" would be unsatisfiable
"p" is neither valid nor unsatisfiable. it may be true, or it may be false. it's a contingency. Maybe that's what you have in mind by "unknown".
There are various mathematical theories for navigating the space between unsatisfiable and valid. If you start thinking about the proportion of possible worlds in which p is true, you are thinking of frequentist probability theory. If you start reasoning about whether p is satisfiable while not-p is also satisfiable, and distinguishing that from the case where p is either unsatisfiable or valid, then you are thinking of possibility theory.
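A small brute-force illustration of those three categories in Python (a sketch; formulas are just functions over truth assignments): enumerate all assignments and check whether the formula is true in all of them, none of them, or only some.

```python
from itertools import product

# Classify a propositional formula (given as a function of named booleans)
# as valid (true in all models), unsatisfiable (true in none), or contingent.

def classify(formula, variables):
    results = [formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables))]
    if all(results):
        return "valid"
    if not any(results):
        return "unsatisfiable"
    return "contingent"

print(classify(lambda m: m["p"] or not m["p"], ["p"]))    # valid
print(classify(lambda m: m["p"] and not m["p"], ["p"]))   # unsatisfiable
print(classify(lambda m: m["p"], ["p"]))                  # contingent
```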
Yes, I use INVALID for scenarios where "p" is unsatisfiable, as well as scenarios where I encounter other types of nonsense sentences. For example, "the uppercase is having a daydream", or a more subtle one, "the country is proud of its own products" when considered literally (as a country is incapable of feeling proud). I should have called it NONSENSE instead.
Yep. I use INVALID in many different scenarios. One type of scenario is encountering someone saying "the uppercase is having a daydream" in a serious discussion, when no one really knows what "uppercase" means there. The usual usage is more subtle. Maybe NONSENSE would be a better name.
Hm. Most of the limited amount I ever knew about this I've forgotten atm, but in the face of general misunderstanding I'm going to chance my arm on the theory that I have a >50% chance of making things better rather than worse. Here goes.
Usually, in philosophical logic, you don't use 3VL or any multi-valued logic to deal with something being unknown. If you don't know whether the proposition P is true or false, and so you don't want to assert either that it is true or that it is false, then you do this simply by not asserting or implying P while also not asserting or implying ~P.

If you want a logic in which you can actually make the statement "it is unknown whether P or ~P" (or more precisely, a logic in which you can capture the structure of that statement: you could always just define the proposition Q to mean "it is unknown whether P or ~P", but that probably doesn't get you anything useful) then you'd probably use some variation on modal logic https://en.wikipedia.org/wiki/Epistemic_modal_logic , not a 3VL.

What logicians actually normally use 3VLs for is to (try to) deal with truth-value gaps or gluts. In a logic with truth-value gluts, P can be both true and false at the same time. In a logic with truth-value gaps, P can be neither true nor false, at the same time. I know right? (Intuitionistic logic is something a bit different again.)
The reason that computer systems like SQL use a third truth-value which sometimes(!) means 'unknown', is, to summarise, because the way those systems handle truth and implication is a garbage fire.
I played with the idea of "complex booleans", but it didn't work very well. The idea was, just as a complex number is a + i b, to let a "boolean" be a + i b, where a and b are each either true or false.
But the problem is, if you set true = 1 and false = 0, and then use multiplication for "and", then (true + i true) and (true + i true) = (1 + i) * (1 + i) = (1 + i + i - 1) = (0 + 2 i), which is not a valid answer. So this "logic" didn't have as many nice algebraic properties as boolean logic.
One idea would be to let {0, 1} be the integers mod 2. Then, AND is multiplication (i.e., a AND b = a b, since a b = 1 iff a = b = 1), NOT is inversion (i.e., NOT a = 1 - a), and OR follows from the previous two definitions (i.e., a OR b = 1 - (1 - a) (1 - b) = a + b - a b).
To get complex values, we can consider the Gaussian integers [1] mod 2. Then, we have (1 + i) (1 + i) = (1 + i + i - 1) = 0 + 2 i = 0. So these integers are a ring where there are nonzero elements that multiply to 0.
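A quick sanity check of that arithmetic as a Python sketch (reducing coefficients mod 2 by hand): on the classical {0, 1} part the operations reproduce the usual truth tables, and (1 + i)² really does vanish.

```python
# Gaussian integers mod 2, represented as (a, b) meaning a + b*i with a, b in {0, 1}.

def gmul(x, y):
    (a, b), (c, d) = x, y
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i, coefficients reduced mod 2.
    return ((a * c - b * d) % 2, (a * d + b * c) % 2)

def gnot(x):
    a, b = x
    return ((1 - a) % 2, (-b) % 2)       # NOT x = 1 - x

def gor(x, y):
    return gnot(gmul(gnot(x), gnot(y)))  # a OR b = 1 - (1 - a)(1 - b)

ONE_PLUS_I = (1, 1)
print(gmul(ONE_PLUS_I, ONE_PLUS_I))   # (0, 0): a nonzero element squaring to zero
print(gmul((1, 0), (1, 0)))           # (1, 0): ordinary 1 AND 1 = 1
print(gor((0, 0), (1, 0)))            # (1, 0): ordinary 0 OR 1 = 1
```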
in {1, 0} representing true/false respectively, * is often used for 'and', and + for 'or'. So there may be a good interpretation if you can sort out what the plus sign in x+iy means - is it an 'or', or something else? Good luck!
Boole himself used '+' for 'or', but restricted himself as to when he was allowed to do it (i.e. there was no "true or true = true"; iirc it was Peirce and maybe others who came up with that).
We stumbled upon them, sort of, with Drupal. Consider using the visitor pattern for access, where each visitor needs to return a value. (Drupal is old, so yes, return values were what you had.) You may want to allow access, deny access, or tell the system you have no opinion at all.
I don't understand what the benefit of overloading truth values is compared to using functions and modal operators, except as a vector for introducing back-door obscurantism.
Fascinating read. I have often found myself using an enum in Java to represent a truth value as true, false or null, where null means unknown. Not ideal, but it keeps things simple.
How is that not ideal? Isn't that a perfectly valid mechanism to implement a 3-valued logic?
Perhaps some people would prefer to wrap a standard bool. In Rust, an Option<bool> would work. An equivalent in Go would be *bool, though then I don't know how you'd cleanly distinguish between a nil pointer and a pointer to false.
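In the same spirit, here's a sketch of the usual Kleene/SQL-style three-valued connectives in Python, with None standing in for unknown (illustrative only, not any particular library's API):

```python
from typing import Optional

# Kleene / SQL-style three-valued logic with None as "unknown".

def and3(a: Optional[bool], b: Optional[bool]) -> Optional[bool]:
    if a is False or b is False:
        return False          # a known False decides the conjunction
    if a is None or b is None:
        return None           # otherwise an unknown keeps it unknown
    return True

def not3(a: Optional[bool]) -> Optional[bool]:
    return None if a is None else not a

def or3(a: Optional[bool], b: Optional[bool]) -> Optional[bool]:
    return not3(and3(not3(a), not3(b)))   # De Morgan, which Kleene logic satisfies

print(and3(True, None))   # None
print(or3(True, None))    # True
print(or3(False, None))   # None
```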