Hacker News

I don't get it. I read case I, and I don't see how Jones is 'justified' in believing (e), since his belief is based on a false premise derived from hearsay. But maybe I'm missing some technicality in the meaning of 'justified.'

In any case, "justified true belief" seems like a flimsy basis for epistemology, since we have no independent oracle to ask whether our beliefs are true. So this definition doesn't seem to do anything to help us separate knowledge from non-knowledge.




One assumption there is that testimony can provide you with justification. We are told by the article that Edmund Gettier died earlier this year, and we seem to be justified in believing that. In that case, let's assume, justification and truth match. But this isn't always the case. You may see that the color of the wall is red, for example, and may feel prima facie justified in believing that the wall is red. But it may be the case that, unknown to you, there is a red light projected onto the wall, which is actually cream-colored. In an absolute sense, you are wrong, but your belief is still justified. 'Justified' and 'true' are used for different purposes.


Okay, I buy that: you can have a justified belief that isn't true, if justification is itself subjective/probabilistic rather than deductive.

Then this paper argues that you can have a justified belief that is true, but still isn't knowledge. Why? Because the justification is faulty. Well sure, since we've admitted in the first place that justifications can be faulty. So why would you ever assume that they invariably lead to knowledge? Had the concept of believing the right thing for the wrong reasons not been invented before the 1960s?

I just can't get myself into a mindset where this is a revelatory finding, but maybe I'm too steeped in a Bayesian worldview.


Just to be clear, it isn't that people thought justification (which, yes, can be probabilistic/non-deductive) invariably leads to knowledge, and these aren't supposed to be cases of believing the right thing for the wrong reasons.

Suppose you grant that justification is probabilistic. What's the extra condition that makes justified belief count as knowledge? Pre-Gettier, one might have been inclined to say that it's that your justified belief is actually right—what you believe is true. Gettier's point is that this can't be the full explanation of the difference between merely justified belief and knowledge.

Similarly, suppose you grant that one can have true belief that falls short of knowledge because one lacks "right reasons". Well, "right reasons" is a tricky notion, but suppose this means something like good evidence, or more generally, the sorts of reasons that show you're reasoning in a responsible way. Again, Gettier's point is that good evidence or responsible belief (assuming this isn't limited to deduction from things that you're certain about) isn't sufficient to make true belief into knowledge.

(Fwiw, Gettier wasn't the first person to point this out, but he did it in an especially vivid way which got a lot of people thinking hard about what else you needed to say if you were going to explain the difference between mere justified true belief and knowledge.)


> Suppose you grant that justification is probabilistic. What's the extra condition that makes justified belief count as knowledge? Pre-Gettier, one might have been inclined to say that it's that your justified belief is actually right—what you believe is true.

I would say that what makes it count as knowledge is the quality of the justification. If Jones had seen a written job offer to Smith rather than just hearsay that he would get the job, his belief that Smith would get the job would probably count as 'knowledge.'

Requiring that what you believe be actually true to count as knowledge is begging the question, since that requires either already having 'knowledge' that it's true, or having direct access to objective reality that no one has.


If "quality" means something like strength of the evidence, then couldn't you still have Gettier-like cases, so long as the evidence doesn't entail the truth of the belief? (suppose the HR department printed the wrong name on the offer letter, etc.)

If "quality" doesn't mean this... then the worry is that you've just labeled what you were trying to explain, because "quality of justification" might mean something like "that thing, whatever it is, which makes a belief count as knowledge".

About truth as a condition for knowledge... there's a lot of subtlety here. The claim isn't that you have to first know that P is true in order for you to know that P, or that you have to provide the truth of P as one of the reasons why you know P, or anything like that. It's more of a logical claim about what it means when we say "S knows that P"... knowledge is "factive" in that you can't know P if P is false. At least, that's how philosophers use the word "know", and it's at least one of the common ways that ordinary folk use the word.

(For what it's worth, there's a lot of discussion amongst philosophers about whether anyone who knows that P also has to know that they know that P... plenty of philosophers deny this.)


> If "quality" means something like strength of the evidence, then couldn't you still have Gettier-like cases, so long as the evidence doesn't entail the truth of the belief? (suppose the HR department printed the wrong name on the offer letter, etc.)

It would entail Gettier-like cases if you require knowledge to be true in some absolute sense. But I'm discarding the "true" condition as a requirement of knowledge. Outside of a few domains like math, knowledge of this kind doesn't exist. Knowledge is always subject to revision.


Often we might think we know something and later conclude that we were wrong... but in this case, it's common to say "I thought I knew such-and-such, but I didn't". Also, for what it's worth, most people don't think that knowing that P requires that you're certain that P.

At least, that's one way of using the word "know". So if we're trying to give an analysis of this thing, we can't just decide to discard truth as a requirement of knowledge... the goal is to give an analysis of this thing that people talk about, not replace it with something else.

That said, it sounds like you're sympathetic to a line of thought on which this so-called "traditional" conception of knowledge isn't very useful, and we should focus on the sorts of things that Bayesian epistemology focuses on... something like degrees of confidence.


> Often we might think we know something and later conclude that we were wrong... but in this case, it's common to say "I thought I knew such-and-such, but I didn't". ... At least, that's one way of using the word "know". So if we're trying to give an analysis of this thing, we can't just decide to discard truth as a requirement of knowledge... the goal is to give an analysis of this thing that people talk about, not replace it with something else.

Let's consider a case like that. A student receives a graded history test, and says "I thought I knew that James Madison was the 3rd president of the United States, but I didn't."

Under the "justified true belief" one might say that their knowledge was undermined because they learned it wasn' true. But their teacher, born (say) circa 1970, has no more direct access to the truth of who was the 3rd U.S. President than their students.

So we might say instead that the student changed their mind about their knowledge because a rather flimsy justification for believing that Madison was the 3rd president ("I studied really hard") was kicked out from under them by a somewhat sturdier justification for a contradictory belief ("My teacher says that Jefferson was the 3rd President, and she knows a lot about history.")

> That said, it sounds like you're sympathetic to a line of thought on which this so-called "traditional" conception of knowledge isn't very useful, and we should focus on the sorts of things that Bayesian epistemology focuses on... something like degrees of confidence.

Yes, I'd say so.


> Under the "justified true belief" one might say that their knowledge was undermined because they learned it wasn't true. But their teacher, born (say) circa 1970, has no more direct access to the truth of who was the 3rd U.S. President than their students.

I'm not quite sure what you mean, and I think a bit more precision is helpful here. If one adopts a JTB conception of knowledge, then one wouldn't say that the knowledge was undermined; one would say the student never had knowledge in the first place. Yes, it's also true that the (weak) justification for believing Madison was the 3rd president was outweighed by the (new and stronger) evidence that he wasn't... but that's about the justification, not the knowledge per se.

(Quick note: philosophers tend to use "undermined" for when something removes the evidential force, rather than countering the force. For instance, suppose I see someone who looks exactly like you in the library. Ordinarily that's pretty good evidence that you were in the library, and it would be reasonable for me to believe you were in the library. However, if I found out that you had an identical twin, this would undermine the evidence, and assuming this is all the evidence I had, it would not be rational for me to believe you were in the library.)

For what it's worth, the post-Gettier literature (and the failure to get much clarity about the concept of knowledge) is one of the things that pushed people towards the more Bayesian approach to epistemology.

(Personally I think the Bayesian approach is avoiding some hard but genuine issues about rationality. Specifically, in Bayesian epistemology everything is relative to your priors. But there are lots of probabilistically coherent priors. Are they all equally good? If so, then there's really not much we can say in general about what's good or bad evidence, or what people should or shouldn't believe; it's all pretty relative. I don't think that's right... it seems like one can be irrational yet coherent, otherwise it's hard to say how people like you and me are more rational than people believing massive conspiracy theories. Lots more could be said here, but that's why I and plenty of other folks aren't happy with an "anything goes" view about priors. But then for folks like me who want to say that all coherent priors aren't equally rational, it's hard to say what would make some priors better than others in purely Bayesian terms... so I think we end up back in the realm of traditional epistemology. That said, all this is pretty controversial and this is a pretty quick sketch of an argument, so there's of course lots to be said in response.)
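The "everything is relative to your priors" point above can be made concrete with a tiny sketch (the numbers are made up for illustration): two agents see the exact same evidence and both update by Bayes' rule, so both are perfectly coherent, yet their different priors leave them with very different degrees of confidence.

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | E) via Bayes' rule."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Shared evidence model: the evidence E is 9x more likely if H is true.
like_true, like_false = 0.9, 0.1

skeptical_prior = 0.01   # e.g. starts out heavily doubting H
open_prior = 0.5         # e.g. starts out with no opinion on H

post_skeptic = update(skeptical_prior, like_true, like_false)
post_open = update(open_prior, like_true, like_false)

print(round(post_skeptic, 3))  # 0.083 -- still doubts H
print(round(post_open, 3))     # 0.9   -- now fairly confident in H
```

Nothing in the probability calculus itself faults either agent, which is exactly the worry: if all coherent priors are equally good, the framework alone can't say which posterior is the more rational one.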


Thanks for the very thoughtful reply.

You're right, JTB would say they didn't have knowledge. But why? Is it because it wasn't justified or because it wasn't true? If it's the latter, how can you ever ascertain whether something is or isn't knowledge (i.e. how do you know 'truth')? That's my fundamental problem with this account.

I agree, also, that just defining good knowledge as beliefs based on good priors regresses the problem. I tend to think this regression is unsolvable except in pragmatic terms (what is efficacious when tried against the real world is what is good).


Most folks who do traditional epistemology would say that you could have very good justification yet fail to have knowledge because what you believe isn't true.

And yes, this probably means that in some sense, when you know something, you don't have direct access to the fact that this is knowledge. It's also often the case that you shouldn't be certain that you know. You can't perfectly ascertain whether you have knowledge... though you can have good reason to believe that you know something.

Most philosophers these days deny that knowledge requires either certainty or direct access. For instance, we know lots of things on the basis of testimony and other sorts of non-deductive evidence.

Pleasure chatting with you!


The problem is that if your definition of knowledge requires only three conditions (belief, truth, and justification), then it has counterexamples, namely, the Gettier cases. So we need to either revise our concept of justification (for example, make it so that a knowledge-apt belief is justified only if it is held for the right reasons -- but then, how do we account for justified but false beliefs?), look for ways to supplement the analysis with additional conditions (for example, that the conditions are satisfied in a counterfactually robust way), or reject the idea that knowledge can be analyzed altogether.

To make it more concrete how this sort of thing can get complicated quickly, consider your own explanation of the case. As I take it, you think that the issue is that "the justification is faulty". But how is it faulty, besides not being true (remember, we have reasons to want justification not to imply truth)? And also, there are Gettier-like cases where we cannot say that the justification is faulty or where we can't find false premises (for example, cases where it seems like the problem is that the individuals in question are not in an appropriate environment, see [here](https://en.wikipedia.org/wiki/Gettier_problem#False_premises...)).


Thanks for engaging in the conversation. When I say "the justification is faulty" I just mean it's not deductively air-tight, therefore we can't assume that it leads to truth. In this sense, pretty much all justifications are 'faulty.'

The thing I'm struggling with here, ultimately, is truth. All of these problems rely on some outside, oracular knowledge of what is true, (e.g. in the example you linked there really is a cow/sheep in the field, or Mark is really hiding under the desk). But we have no such oracle to refer to, so trying to ground knowledge in truth seems like a lost cause in the first place. All we can have is more or less certain beliefs.


Sure, from a present perspective on epistemology this may make sense, since we now know about all the troubled and faulty attempts to distinguish proper justifications from incorrect ones after Gettier's paper. However, back then the idea of defining knowledge using JTB was on the table. The knowledge-as-justified-true-belief view is about a definition of knowledge, not a learning method or how to gain knowledge. Many authors would even have rejected the idea of graded belief entirely and said you either believe something or you don't (many still do).

Since you accept the notion of truth (otherwise you couldn't be a Bayesian), you'd have to explain why you reject the realist conception of knowledge inherent to the JTB view. Gettier's paper is one attack on it, arguing for graded belief representations is another type of attack on it. In this context it is worth noting that graded belief and categorical belief are very hard to reconcile because they have different logical properties [1]. It's known as Locke's Thesis and quite a vexing problem.
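To illustrate why graded and categorical belief are so hard to reconcile, here's a minimal sketch (with made-up credences for four hypothetical independent propositions) of the standard pressure point for Locke's Thesis: if "believing P" just means "credence in P is at or above some threshold", then belief fails to be closed under conjunction, even though categorical belief plausibly is.

```python
THRESHOLD = 0.9

# Credences in four independent propositions, each individually "believed".
credences = [0.95, 0.95, 0.95, 0.95]

believes_each = all(c >= THRESHOLD for c in credences)

# Probability of the conjunction, assuming independence.
conjunction = 1.0
for c in credences:
    conjunction *= c

believes_conjunction = conjunction >= THRESHOLD

print(believes_each)          # True: every conjunct clears the threshold
print(round(conjunction, 3))  # 0.815
print(believes_conjunction)   # False: the conjunction does not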

That being said, I share your intuitions. The factivity of knowledge has created more problems in epistemology than the notion of knowledge was supposed to solve. The German word Erkenntnis in Erkenntnistheorie has a meaning closer to learning theory. AFAIK there is no good equivalent to this in English.

[1] http://fitelson.org/coherence/hawthorne.pdf


You're trying to smuggle the "truth" criterion back into the "justification". Any standard of justification that depends on truth is impractical (to say the least) to meet, creates conceptual confusion, and, if applied to the definition of knowledge in this case, makes the "truth" criterion redundant.


Not so. I was applying a strict definition of justified, where justified means "logically necessary." Others have clarified elsewhere in this conversation that this isn't the intent.
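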

As I've said elsewhere in this thread, I already think the truth criterion is redundant, since if you can determine whether your knowledge is true, you don't need to worry about justifying it or believing it.



