Hacker News
The Art of Abduction (direct.mit.edu)
98 points by Tomte on July 20, 2022 | hide | past | favorite | 54 comments



The types of reasoning:

1. Deduction - given premises and inferential rules, produces conclusions; as certain as possible.

2. Induction - given premises and conclusions, produces inferential rules; statistically probable, but a single counterexample means throwing out the whole rule (e.g. black swans).

3. Abduction - given conclusions and inferential rules, produces premises; no guarantee of certainty.

Any time we retrospectively tell a story of how something came to be, that we did not immediately observe, we're applying abductive reasoning. It relies on the same inferential rules as deduction and induction, but applied in reverse.
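The contrast between the first and third directions can be sketched in a few lines; the rule set and the names `deduce`/`abduce` below are invented for illustration, not taken from any logic library:

```python
# A rule is a pair (premise, conclusion): "premise implies conclusion".
RULES = [("rain", "wet_ground"), ("sprinkler", "wet_ground")]

def deduce(premise, rules):
    """Deduction: from a premise and the rules, derive conclusions."""
    return {c for p, c in rules if p == premise}

def abduce(conclusion, rules):
    """Abduction: from a conclusion and the rules, recover candidate
    premises. There may be several, with no guarantee which one held."""
    return {p for p, c in rules if c == conclusion}

print(deduce("rain", RULES))        # {'wet_ground'}
print(abduce("wet_ground", RULES))  # {'rain', 'sprinkler'}
```

Deduction runs the rules forward and is certain; abduction runs them backward and typically returns several candidates, with nothing in the rules themselves to pick one.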


While the name "abduction" has been chosen to resemble the traditional terms "deduction" and "induction", the name does not make sense.

The name for this kind of reasoning should suggest a reverse direction when compared to "deduction", but "abduction" has the same direction as "deduction".

The preposition "ab" is related to the English "off" and "of" and it can usually be translated by "from". The preposition "de" means the same thing as "ab" (which is why in most Romance languages, like French, "de" corresponds to English "of"), except that it suggests a vertical movement from up towards down, while "ab" suggests a horizontal movement.

I do not know which might have been a better word. Perhaps they should just have stuck with "reverse deduction", since "reduction" is already widely used with other meanings; or maybe they could have used "adduction", which has an opposite direction to "abduction" ("ad" corresponds to English "at").

The names "deduction" and "abduction" make grammatical sense when their direct objects are the premises (i.e. the action is "lead from premises"), while the name "adduction" makes grammatical sense when the direct objects of this action are the conclusions (i.e. the action is "lead at conclusions").


IIRC the original English-language author who coined the term (C. S. Peirce) did use the verb form "adduce" as analogous to "deduce", so perhaps the original offending term, "abduction", is a poor back-formation.


From Latin abductiō (“robbing; abduction”), from abdūcō (“take or lead away”), from ab (“away”) + dūcō (“to lead”)[1]. Equivalent to abduct +‎ -ion.

"Away" and "to lead" indead does imply a reverse direction.

https://en.wiktionary.org/wiki/abduction


It does not imply a reverse direction with respect to "deduction", which was the intention of the name, as used in the book discussed here.

Both "abdūcō" and "dēdūcō" mean "lead away", except that the latter implies that the origin of the movement was somewhere above, i.e. the movement was descendant (the prefix "dē-" in descend is the same as the prefix "dē-" in deduce).

"Abduction" implies a reverse direction only vs. "adduction", which means leading towards something, not away of something, like "abduction".

The words "abduction" and "adduction" are frequently used to describe the movements of the human arms and legs, where the place that is the initial point of the movement for abduction or the final point of the movement for adduction is understood to be the median axis of the body. Thus there are abductor muscles and adductor muscles. (For example the abductor muscles of the thigh raise the thigh laterally, away from the other thigh, while the adductor muscles of the thigh bring the two thighs together.)

"Induction" also implies the same direction of movement as "adduction", i.e. towards something, and opposite of deduction/abduction, but "induction" implies that the final point of the movement is inside the target.


Latin dē, especially in the etymology of deduction, usually means "of" - meaning: regarding, in respect of, concerning, etc. (see de facto, de jure, de dato, de futuro...);

with dūcō - meaning to lead, to guide forming deducere as present active infinitive;

in contrast with the prefix ab, which usually means "from" or "away from", as in "ab ovo", where "de ovo" would mean something regarding the egg instead of originating from or away from the egg (see also ab initio, ab extra, ab hinc...).

Similar to the word absent, coming from Latin absens, absum, where ab means "away from" and sum means "I am", taken together meaning "not present".

Thus in the context of logic where deduce would mean going in one way, abduce would imply going away from that way i.e. reverse direction.


There has never been any usage in which "dē" and "ab" signified opposite directions.

In all usages of both words, they designate the starting point of a movement. Anything that moves, moves away from the starting point of that movement.

The only difference in meaning is that for "ab" there was no information about the direction of movement in space, while "dē" implied that the direction of the movement comes from somewhere above towards somewhere below, e.g. something that falls from a tree.

In later stages of the Latin language there was a tendency to no longer pronounce the final consonants of both "ab" and "ad". So in classic Latin going "from X to Y" would be "ab X ad Y", which in later pronunciation would become the ambiguous "a X a Y". Because of this ambiguity, the formerly almost synonymous "dē" became completely synonymous with "ab" and it replaced "ab" in all usages, so that in the Romance languages going "from X to Y" became "de X a Y".

In classic Latin, the less definite "ab" was usually preferred when referring to real or concrete movements, while "dē" was usually preferred for imaginary or abstract movements, e.g. when saying that a child comes from its parents, a literary work comes from its author or a steel tool comes from a blacksmith.

"ab ovo" meant "from egg" and "de ovo" also meant in older Latin "from egg".

However, in the late Roman Republic, after it became fashionable to educate children in the traditional Greek ways, typically with Greek professors (normally slaves), the necessity of translating Greek texts produced a very large number of new words in Latin, either by creating new composite words following the model of the corresponding Greek words, e.g. "hypothesis" => "supposition", or by giving additional new meanings to existing Latin words, to cover the meaning of the Greek words for which they were substituted in translations.

A very large number of Greek philosophical texts had titles such as "About X", i.e. "Peri X" in Greek. When translating such titles, "dē" has been chosen to translate Greek "peri" (which had a meaning close to English "around" or "about").

After Latin became strongly influenced by the Greek literature, "dē X" became frequently used to also mean "about X", not only "from X". However the sense "from X" has always remained valid, and which sense was meant has to be discovered from the context of "dē X".

One who would tell how someone had thrown an egg to the ceiling and the egg had broken and the yolk had fallen from the egg on his head, would say that the yolk had fallen "de ovo" with the meaning "from the egg", and not with the meaning "about the egg".

In "de facto", "de jure" and many of the similar expressions, "de" does not mean "regarding" or "concerning". It means "from", in the sense "caused by".

"De jure" does not mean "concerning the law", it means "from the law", in the sense that the cause why something has been done or must be done is because the law says so.

"De facto" does not mean "concerning the facts", it means "from the facts", in the sense that some present state has its cause in the facts (i.e. past actions) of some entity, e.g. the x86-64 ISA has become a "de facto" standard ISA for personal computers, because for many years the Intel/AMD CPUs in IBM PC/AT compatible computers have outcompeted everything else.


For others like me that wondered "Wait is proof by induction not induction?"

> Although its name may suggest otherwise, mathematical induction should not be confused with inductive reasoning as used in philosophy (see Problem of induction). The mathematical method examines infinitely many cases to prove a general statement, but does so by a finite chain of deductive reasoning involving the variable n, which can take infinitely many values.

https://en.wikipedia.org/wiki/Mathematical_induction
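The quoted point can be made concrete: for any specific n, an induction proof unfolds into a finite chain of deductive checks. A toy sketch, not a proof assistant; the function name and example property are made up for illustration:

```python
def check_by_induction(prop, step_identity, n):
    """Unfold induction at a specific n: verify the base case, then the
    algebraic step identity at each k from 0 to n-1."""
    assert prop(0)               # base case P(0)
    for k in range(n):
        assert step_identity(k)  # the step used to get from P(k) to P(k+1)
    return prop(n)

# P(n): 0 + 1 + ... + n == n * (n + 1) / 2
prop = lambda n: sum(range(n + 1)) == n * (n + 1) // 2
# The identity behind the inductive step: add (k+1) to both sides.
step = lambda k: k * (k + 1) // 2 + (k + 1) == (k + 1) * (k + 2) // 2

print(check_by_induction(prop, step, 100))  # True
```

The real proof covers all n at once by quantifying over k in the step; the sketch just shows that each instance is plain deduction.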


I think there might be a loose connection. When you want to prove something using mathematical induction, you must come up with an "induction hypothesis". This requires cleverness which could possibly be described as the product of inductive reasoning (in the philosophical sense). Essentially it amounts to finding a general principle that can be specialized to prove each of the inductive steps.

The main difference to me is that, with inductive reasoning (in the philosophical sense), you converge on a general principle but it might be wrong—it is only probable. Mathematical induction is the tool needed to close the gap—to turn a hypothesis which could be wrong into a bulletproof mathematical theorem. Of course, if your induction hypothesis didn't turn out to be right, you won't be able to complete the proof.

I use the Coq theorem prover to write induction proofs all the time, and often you have to try out different induction hypotheses until you get one that finally works. The process my brain goes through to come up with these induction hypotheses feels like "inductive reasoning" in the philosophical sense as I understand it.


Yeah, I think you nailed the reason why they both use the same word: "mathematical induction" is deductive once the proof is finished, but coming up with the proof is absolutely an inductive process.


Thanks, never thought about it this way: it seems like the NP of reasoning. If you know the solution, everything seems much easier.


I was looking at that.

It seems like induction and abduction are pretty much the same thing but framed differently. Abduction looks for the "simplest explanation" and induction uses statistical machinery to argue for an explanation, which, if you break down the statistical machinery, is something like the lowest entropy, the most efficient encoding of the data, the ... simplest explanation.


The term "abduction" was coined by CS Peirce to describe the process that puts hypotheses on the docket to be tried by induction and deduction. Sometimes folks use it to mean something like "inference to best explanation", but that's not how Peirce meant it.

> Proposals for hypotheses inundate us in an overwhelming flood, while the process of verification to which each one must be subjected before it can count as at all an item, even of likely knowledge, is so very costly in time, energy, and money -- and consequently in ideas which might have been had for that time, energy, and money, that Economy would override every other consideration even if there were any other serious considerations. In fact there are no others. For abduction commits us to nothing. It merely causes a hypothesis to be set down upon our docket of cases to be tried.

— CS Peirce (Collected Papers §5.602)


> abduction commits us to nothing. It merely causes a hypothesis to be set down upon our docket of cases to be tried.

This is the perfect phrasing. Consider the contrast with deduction: if I have a premise, P, there are many deductive inferential rules I could apply to that premise to produce a new truth.

E.g. given P, we can infer "not not P", and if we're also given "P implies Q", we can infer Q.

Abduction flips this on its head: we have Q. And Q could be just the double negation of a P we weren't given. Or it could be the consequence of "P implies Q". There's really no way to know with certainty.

This is forensics. It's exactly the reasoning used in courtrooms: from the evidence (or "consequences"), we need to use science and logic to figure out how we got there (i.e. what were the antecedents).

This is why the most famous fictional user of "abduction" is Sherlock Holmes. But really it's what every forensic investigator (including a detective) is doing. Abduction is most common in judicial settings (i.e. when we are trying to assign blame).


Ah,

That is actually an illuminating perspective that cuts through the verbiage.

Abduction is the first "step", the one that produces the ideas that can then be considered by induction or deduction. It is the principle that we have to start somewhere.

Basically, an abductive step is something like "I can think of only ten hypotheses that explain X and only one of them works. I could consider an infinity of possibilities but it's not worth it. I will stick with my working hypothesis".


That's wrong. Induction is the process of empirically producing natural laws.

Our givens are the observed state of the experiment before an event (the antecedent, or "premise") and after it (the consequent, or "conclusion"). These givens are our only observations.

Induction is the process of creating an inferential rule (e.g. a causal hypothesis) that allows us to deduce the given consequent from the given antecedent (e.g. by applying a formula).

Abduction has different givens. In that case, we have observed consequences, and are given a set of inferential rules (e.g. natural laws) that could have produced the observed consequence. We then try to acquire evidence to support theories about which (unobserved) antecedent led to the (observed) consequent, by using one of our given inferential rules.


Your usage of "premise" and "conclusion" as shorthands for "the state of the world at a time t" is somewhat confusing to me, when I hear those words I think of propositions that have truth value, not events/states. You can sort of encode the state of the world at any instant as a proposition which is the conjunction of all possible true propositions describing the world at that instant ("The sky is blue", "Water is wet", "Temperature is 40C",...), but it's not obvious that this is a useful way to think in.

For example, suppose you're a mathematician with a conclusion (conjecture) C and a premise P (axioms); you search for a proof to get C from P, and you find it. That fits your definition of induction. Now suppose you're a pre-Copernican astronomer who has a "conclusion" and a "premise", which are actually just the state of the world recorded at different times, and you want to find a law that gets you the conclusion from the premise, and you find it in a system of epicycles. This too fits your definition of induction.

But those 2 situations are radically different. The mathematician's proof is much more correct and permanent than the astronomer's "proof". If I find another proof of the mathematician's claim, even a very different proof, that doesn't render the first proof obsolete, and it doesn't mean my proof is wrong either. But if I find another law that explains the states of the world the astronomer explained, then unless one of the laws is a special case of the other, either me or the astronomer is wrong, or the world is crazy and somehow obeys 2 sets of laws that aren't reducible to each other at the same time.

In other words, if logical propositions are imagined as a DAG where the edges represent valid rules of inference, then it doesn't matter which path (proof) got you to a node as long as you got there. But if you imagine the states of the world like this, then it actually matters what specific laws got you to a particular node. Maybe actual reality doesn't care and there are infinite systems of physics that all perfectly well explain the universe, but we very much care about path uniqueness, much more than we do in math, chess or other such systems. That's why I think the encoding of world states as propositions is lossy. A state\event is more than the propositions describing it.


> Your usage of "premise" and "conclusion" as shorthands for "the state of the world at a time t" is somewhat confusing to me

I agree, and the correct terms are "antecedent" and "consequent"; a premise is a type of antecedent, and a conclusion a type of consequent. But using those terms right away makes it harder to relate to formal arguments.


Given this definition, abduction seems to me to be the source of many, many arguments between significant others. "I know you! You must think [some random opinion] because of [some past unrelated event]."

In theory, you know the final opinion, and you know how your partner thinks, so you can logically guess what the source of the opinion is.

So abduction is armchair psychology?


Sometimes I understand abduction, other times, not. I'm a bit of a fan of logic programming so I try to describe the three inference modes using definite clauses.

Deduction: we go from facts (positive literals) and rules (definite clauses) to facts:

  swan(alice).   white(X) ← swan(X).
  ----------------------------------
            white(alice)
(If we know that Alice is a swan, and all swans are white, we can infer that Alice is white).

Induction: we go from facts and rules to new rules:

  has_feathers(X) ← bird(X).  bird(X) ← swan(X).
  ----------------------------------------------
             has_feathers(X) ← swan(X).
(If we know that all birds have feathers and all swans are birds, we can infer that all swans have feathers).

Abduction: we go from facts and rules to _goals_ (negative literals):

  white(alice).   white(X) ← swan(X).
  -----------------------------------
            swan(alice)
(If we know that Alice is white and all swans are white, we can infer that Alice is a swan).

The last inference is of course not sound, as such - Alice could be white because she's a yogurt, say. But, if we operate under the assumption that only what we can prove is true (the so-called "closed world assumption") and the only rule we have about the concept of "white" is that "all swans are white", then the reasoning step becomes sound: the only way that we know for Alice to be white is for her to be a swan.

What if we know multiple ways for an "X" to be white? For example, what if we have the rule "white(X) ← yogurt(X)"? Well then, we can make this inference also:

  white(alice).   white(X) ← yogurt(X).
  -------------------------------------
            yogurt(alice)
And now we have two contradictory hypotheses about Alice: either she's a swan, or she's a yogurt^. We can resolve the contradiction when we learn more about yogurts and swans, and Alice. For example, if we have the information that Alice has feathers, we can figure out that Alice is a swan.

The closed world assumption is the simplest way to handle uncertainty (or in any case there are no simpler methods). It is also non-monotonic, in that new facts and rules can change the truth value of facts inferred from a theory. Which is great, because that way we're not stuck with the conclusions of our initial premises.

Anyway, that's my logic-programmer's model for abduction. And it may be horribly wrong at places (my model for logic programming was horribly wrong the first ten years so...).

_______________

^Correction: Actually... alice can be a swan _and_ a yogurt. swan(alice) and yogurt(alice) are not contradictory unless we have some proof that they are.
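The whole swan/yogurt exchange can be mirrored in a few lines of ordinary code. This is a sketch of abduction under the closed world assumption, with invented names; it is not how a real Prolog engine or abductive framework works:

```python
# The clauses: white(X) <- swan(X).  white(X) <- yogurt(X).
rules = {"white": ["swan", "yogurt"]}
# What else each hypothesis would let us prove about the same individual.
consequences = {"swan": {"has_feathers"}, "yogurt": {"edible"}}
# Everything provable about alice; under the CWA, anything absent is false.
known = {"white", "has_feathers"}

def abduce(goal):
    """Candidate explanations for the goal, read off the rules backwards."""
    return rules.get(goal, [])

def consistent(hyp):
    """Under the CWA, reject a hypothesis that entails an unprovable fact."""
    return consequences.get(hyp, set()) <= known

print(abduce("white"))                                # ['swan', 'yogurt']
print([h for h in abduce("white") if consistent(h)])  # ['swan']
```

Adding a new rule for white adds a new candidate, and adding a new fact to `known` can change which candidates survive, which is the non-monotonicity mentioned in the comment.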


Thanks. This is the closest I've come to understanding abductive reasoning and it's because you've explained it in a way my Prolog brain can understand it. Even though I've never written a Prolog program that abduces goals on its own, now I kind of want to try.


Thanks! Note that the first two inferences I show above are valid resolution steps, since they have literals that are complements under unification, like swan(alice) and swan(X)← (the latter being a negative literal, since it's in the body). But the third and fourth inferences can't be made with resolution.

There is a whole bunch of work on abductive reasoning in a first-order framework, but I'm far from an expert on it. Antonis Kakas is a prominent name in the subject. For example, see:

Abductive Logic Programming, A.C. Kakas, R. A. Kowalski, F. Toni

https://academic.oup.com/logcom/article-abstract/2/6/719/942...

Although behind a paywall. And yes, that's that Kowalski. Here's an unpaywalled overview:

Abduction in Logic Programming, Marc Denecker and Antonis Kakas

https://web.stanford.edu/class/cs227/Readings/Abudction%20in...

And here's one on the application to learning:

Abductive concept learning, Antonis C. Kakas & Fabrizio Riguzzi

https://link.springer.com/article/10.1007/BF03037531


Yes, this is fantastic and accurate. It captures exactly the uncertainty that underpins this type of reasoning.

Abductive reasoning is usually an evidence-gathering process, so that we can narrow down the set of inferences that still produce meaningful conclusions.

Thus "once you've eliminated the impossible, whatever remains, however improbable, must be the truth."


It's also a big part of "success" stories. Somebody's subconscious drags up a great idea, or he is plain lucky, and then he gets asked about the how and why. And the subconscious puzzles together a semi-plausible story to explain away the heuristics working in his head and the statistics washing an outlier to shore.


This doesn't make a lot of sense, because when you're trying to infer premises from conclusions, the conclusions are your current premises, and the premises you're searching for will be your conclusions.


Any reason not to just call that “speculation”?


I've long felt that Bayesian logic is the formal undergirding of abductive reasoning, so I'd like to know what abductive reasoning is supposed to be over and above Bayes rule + decision theory.


I thought this was going to be an article about kidnapping. It wasn't.


From the introduction :)

> 1.1 Why You Should Want to Know About Abduction
> If you are consulting this book with malicious intent, looking for ways to expand your criminal pursuits, I encourage you to stop reading and to better your life. (...)


It's a lot of words. People who cannot explain things well tend to use a lot of words.

The author cannot explain things well. You just witnessed the art of abduction!

(Just kidding. It's worth knowing what abduction is and recognizing it.)


Huh, I didn’t know about abduction. It’s the form of logic that you use if you play Wordle. You can’t prove that you have the right answer, only that your answer explains everything that you see.


That sounds strictly inferior to Bayesian reasoning (both to me and to my Wordle program), so I’m curious what advantages abduction has, if any. (If there are philosophers reading this: do you have any good, non-paywalled links defending abduction from the usual Bayesian criticism?)


The book that is the parent submission is classified as "open" and you can click on the pdfs for each section to, well, read each section. It seems as good a source about the benefits of abduction as any.


Oh! I had assumed that it was closed. Thanks!


Do you think that abduction is just a more realistic name for “bayesian inference where I had to make up some of my own priors”?

To a computer program, playing Wordle seems like a perfect application of completely solid Bayesian inference. All the data is there, the space is finite. But humans have to at some level resort to abductive reasoning, since we can't do all that math in our heads. Most people that play Wordle would just tell you "this guess felt like the right one".
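For a concrete sense of why the computer's job is tractable: with a uniform prior, the Bayesian update over remaining words reduces to filtering the candidate list for consistency with the feedback. A toy sketch (the word list is tiny, and this scoring rule ignores the repeated-letter subtleties of real Wordle):

```python
def score(guess, answer):
    """Simplified Wordle feedback: 'g' right spot, 'y' in word, '.' absent."""
    return "".join(
        "g" if g == a else ("y" if g in answer else ".")
        for g, a in zip(guess, answer)
    )

def update(candidates, guess, feedback):
    """Uniform-prior Bayes update: keep words that would give this feedback."""
    return [w for w in candidates if score(guess, w) == feedback]

words = ["crane", "crate", "brace", "trace"]
print(update(words, "crane", score("crane", "trace")))  # ['brace', 'trace']
```

The surviving candidates all "explain everything that you see"; picking among them is where priors (word frequency) or abductive judgment comes back in.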


Could it be more computationally tractable compared to Bayesian inference?


Looks like an interesting book. I've heard abduction described as "analogical reasoning" or "metaphorical reasoning", which is definitely not as rigid as deduction or induction but arguably more important; without analogies I doubt we'd ever learn anything.

If you’re interested in the topic check out the works of Charles Sanders Peirce (he coined the term abduction). He was a logician and semiotician and is still criminally underrated. Other than Wittgenstein, he’s pretty much the only logician I’m aware of who wrote extensively and compellingly about topics outside of logic and was able to make logic itself feel relevant and meaningful in everyday life. Most logicians/analytical philosophers are either entirely silent on other topics or fall completely flat when they attempt to venture beyond logical subjects.

(there are others too, Russell sort of comes to mind, as well as a few others, but Wittgenstein and Peirce are in a class of their own. They are the two writers who I feel not only helped me understand some concept in a field but actually profoundly altered how I think about thinking—I guess I’d include Heinz von Foerster in this category too)


my 2 cents:

Deduction: p( C | do(P,R) )

Induction: p( C | do(P), R )

Abduction: p( P | do(C), R )

where C stands for conclusion, P for premises, R for prior assumptions / remaining possible statements in the universe of discourse not covered by the premises and conclusion.

Deduction can only take values in {0,1}. Deductive arguments are valid when that probability is 1, invalid when that probability is 0, sound when do(P,R) can be shown to be 1, and unsound when it cannot. Typically when trying to show that do(P,R)=1, we may assume or try to show that it can be simplified to do(P). Whether it can or not is often the source of soundness errors.

Inductive arguments are effectively probabilistic reasoning about the posterior probability of the conclusion (i.e. a process equivalent to Bayesian inference, with regard to statements in the universe of discourse and their probability).

Abductive arguments are statements of [high] likelihood, with the intent of exploring/generating/identifying [high-likelihood] data (i.e. useful premises). It can be compared to Maximum Likelihood Estimation in statistics. Quite often, R is simplified to be null (i.e., the universe of discourse is limited to only what we already know). The intent is typically to generate a subset of premises to act as a new, more manageable universe of discourse for subsequent deductive or inductive reasoning, when the original full universe of discourse would be intractable.
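Ignoring the do() operator and treating everything as plain conditioning, the abduction line p( P | C, R ) can be given toy numbers (all invented) with exact arithmetic:

```python
from fractions import Fraction as F

# Prior over premises and likelihood p(conclusion | premise),
# for the conclusion "the ground is wet".
prior      = {"rain": F(1, 10), "sprinkler": F(3, 10), "neither": F(6, 10)}
likelihood = {"rain": F(9, 10), "sprinkler": F(8, 10), "neither": F(1, 100)}

# Abduction as Bayes: p(premise | wet ground).
evidence  = sum(prior[p] * likelihood[p] for p in prior)
posterior = {p: prior[p] * likelihood[p] / evidence for p in prior}

for p, prob in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(p, prob)  # sprinkler 5/7, rain 15/56, neither 1/56
```

Maximum Likelihood Estimation, mentioned above, would instead rank premises by `likelihood` alone and pick "rain".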


Book title is a bit startling/clickbaity, so here's a bit of the summary for context:

> A novel defense of abduction, one of the main forms of nondeductive reasoning. With this book, Igor Douven offers the first comprehensive defense of abduction, a form of nondeductive reasoning. Abductive reasoning, which is guided by explanatory considerations, has been under normative pressure since the advent of Bayesian approaches to rationality.


I was hoping for some edgy economic model wherein ransoms were Pareto-efficient or something, but this was fun, too.


I'm thinking more Coase than Pareto for that particular scenario.


"A novel defense of abductive reasoning" would be a better choice of titles all around.


Interesting cover choice for that title too.


if hoofprints in snow were legible as a book cover, an allusion to the Name of the Rose would be even better :)


That'd be doubly appropriate since Umberto Eco said that C.S. Peirce "is undoubtedly the greatest unpublished writer of our century."

Peirce's Notion of Interpretant (1976)

https://www.jstor.org/stable/2907146


My university philosophy 101 critical reasoning course taught us to use this kind of abduction, or reverse deduction, to take the conclusion of somebody's argument and infer what premises might be needed to support it but are not being stated, or are hidden or unexamined. A very useful skill when in discussion with someone else who studied critical reasoning X'D


For those of you wondering what abduction is, the wikipedia article is decent.

https://en.wikipedia.org/wiki/Abductive_reasoning


The title made me think of Pixar’s Lifted. https://youtu.be/LVLoc6FrLi0


It looks like an ebook that is free; is there an epub to download somewhere?


Most of my comments are abductive reasoning, but I call it breadth-first reasoning or fearless ignorance: an analogy potentially has a similar superficial "shape," and that shape could be a homomorphism in more formal contexts, which gives you a fast path to hypotheses about what must also exist if the analogy were consistent. It's irritating to people with depth in a field, because it ignores their concrete tools and experience and presents them with hypotheses unbacked by any direct experience. Cranks usually think abductively as well; what makes the difference is enough physical domain competence to use it well, instead of just baffling other laymen and annoying more knowledgeable people. If I'm interpreting correctly.


We've been working on abduction at our institute for a while. My take on it is that there is no abduction as a logical inference. Although he portrayed it as a third type of inference in some of his writings, C.S. Peirce thought the same; he emphasized many times that abduction is indeed an art and ultimately a psychological phenomenon.

The problem with abduction is that it doesn't have a justificatory value on a par with those of deduction and induction. Deduction forms the basis of mathematics and is basic enough to be intuitively acceptable when you learn logic. In turn, induction is mostly justified deductively. Finite discrete probability theory is purely deductive. Under the assumption that a process is stochastic (e.g. that a die is fair when it is thrown), probabilities follow deductively. Probability theory for infinite and uncountable domains is a straightforward generalization. There are only a few leaps of faith when doing inductive inference: (1) when you transfer conclusions from a smaller to a larger population, (2) when you apply the law of large numbers, or (3) when you make reasonable guesses about an underlying probability distribution.

There is no such justification for abduction. Even just hypothesis generation, the narrow sense of "abduction", is way more complicated than induction and deduction. Theory selection, the broader aspect of "abduction" that has been labeled "inference to the best explanation" since Harman's horrible 1965 paper, is even more complicated. Both IBE and abduction require and involve deductive and inductive reasoning, but there is no indication that they can be fully justified deductively and inductively. The best you may get is an account that describes some form of theory discovery as a logical or probabilistic description of a supposed cognitive mechanism, and then the discovered theory might be proved deductively or confirmed inductively.

Defenders of abduction have claimed that we select hypotheses not just on the basis of their likelihood but also based on other properties like theory virtues ("loveliness" in Lipton's terms). That might be true but doesn't concern the justificatory value of the inference. In the end, the only hypotheses we're interested in (and should be interested in, from an epistemic point of view) are the ones that are most likely to be true. Hence, theory virtues, loveliness, plausibility, etc., all just have a justificatory value insofar as they increase the subjective probability that a hypothesis is true, and in that respect they are really just heuristics.

Some Bayesian takes on abduction and IBE get this right and I have nothing against them. I just wanted to mention that many, if not the majority of philosophers of science do not consider abduction a third type of inference and are fairly skeptical about it, for the above reasons and a few more.


Thanks for your comment! Very interesting.

> Finite discrete probability theory is purely deductive. Under the assumption that a process is stochastic (e.g. that a die is fair when it is thrown), probabilities follow deductively.

I’m FAR from an expert here but I read that probability theory in general can be based on fuzzy logic, which is purely deductive. Can you please comment on why it needs to be finite and discrete? And why that stochasticity assumption is needed?


I'd say that fuzzy logic is neither purely deductive nor does it represent probability theory unless you make some weird constructions and re-interpretations. Fuzzy logics are logics with infinitely many truth values, usually uncountably many in the interval from 0 to 1. Binary connectives are thus functions from two real numbers in [0, 1] to a real number in [0, 1]. Unlike in classical logic where there are 16 such functions, there are infinitely many such functions for fuzzy logic. These are characterized in different ways (called "norms") and it is very hard to justify them or why you'd pick a particular one. For example, there are different competing definitions of conjunction, and different rules for the conditional are particularly problematic. The rules for the conditional don't have much to do with probabilistic conditioning. That's not to say that fuzzy logic and representations based on it like fuzzy sets aren't useful, these are just better regarded as something else, ways of representing certain forms of non-probabilistic uncertainty or vagueness.

As for the discrete cases: I was merely talking about the justification of descriptive statistics. If you have a 6-sided die, for example, and you make analytic assumptions that it is fair from the way it is constructed, then describing the outcomes of die throws involves only basic combinatorics. The same for all other finite probability spaces. However, in the philosophy of probability it is widely accepted that you should avoid circular reasoning. So you cannot test empirically whether the die is fair, for instance, when talking about the foundations of probability. You either have to assume it or need some analytic model that justifies why fairness follows from the way the die is constructed. In any case, at some point you need to assume that the underlying process is indeed random.

Maybe you're right, though, and probability theory for uncountable domains is equally deductively justified. Since mathematics uses deduction and complete induction in their proofs it seems reasonable to claim that. But I think there are several leaps of faith in the uncountable case that you don't get with descriptive probability over discrete spaces. For example, you cannot "lift" probabilities of complex events from the probabilities of singleton events when the domain is uncountable. The point is that this quickly gets as or more complicated than models of abduction (though arguably in a more "math" way). In contrast, basic descriptive probability theory for finite spaces is IMHO as intuitively plausible as logic. That's why I think it's almost on a par with deduction in terms of justification.

Anyway, in both finite and uncountable domains you do make educated guesses when making statistical inferences from a small to a larger population or when making assumptions about an underlying probability distribution, but you can e.g. estimate errors very precisely and justify the methods in much detail. Abduction mostly doesn't even come with error estimates.

That being said, there is interesting work on model selection that kind of bridges the gap between statistics and abduction. See Burnham & Anderson: Model Selection and Multimodel Inference, Springer 2002.


Intuitively it seems to me that Cox's theorem kind of proves that Bayesian inference is the only correct version of abduction, so to speak.



