A three-page paper that shook philosophy, with lessons for software engineers (jsomers.net)
566 points by jsomers on Jan 15, 2019 | 179 comments



> A philosopher might say that these aren’t bona fide Gettier cases. True gettiers are rare. But it’s still a useful idea...

At least as presented, I see the idea being used to do more harm than good.

Take the first example, with the form not autofocusing. We're already not in a Gettier case, because the author didn't have JTB. The belief that he caused the bug obviously wasn't true. But it wasn't justified, either. The fact that he had rebased before committing means that he knew that there were more changes than just what he was working on between the last known state and the one in which he observed the defect. So all he had was a belief - an unjustified, untrue belief.

I realize this may sound like an unnecessarily pedantic and harsh criticism, but I think it's actually fairly important in practice. If you frame this as a Gettier problem, you're sort of implying that there's not much you can do to avoid these sorts of snafus, because philosophy. At which point you're on a track toward the ultimate conclusion the author was implying, that you just have to rely on instinct to steer clear of these situations. If you frame it as a failure to isolate the source of the bug before trying to fix it, then there's one simple thing you can do: take a moment to find and understand the bug rather than just making assumptions and trying to debug by guess and check.

tl; dr: Never send philosophy when a forehead slap will do.


How is it not a JTB?

Belief: The pull request broke the search field auto focus.

Truth: The pull request did break it. There was an additional reason beyond the pull request unknown to the author, but that's not important to the Truth portion here.

Justified: This is the only one you can really debate on, just as philosophers have for a long time. Was he justified in his belief that he broke autofocus? I think so based on the original JTB definition since there is clear evidence that the pull request directly led to the break rather than some other event.

I think that when claiming it's not a JTB you're choosing to focus on the underlying (hidden) issue(s) rather than what the author was focusing on, which is kind of the whole point of Gettier's original cases. For example Case I's whole point is that facts unknown to Smith invalidate his JTB. In this programming example, facts unknown to the author (that someone else introduced a bug in their framework update) invalidate his JTB as well.


His real belief was not exactly that the PR broke it, it's that the root cause of the break was isolated to his code changes. This is evident from the debugging procedure he described. And that distinction is very important, because that detail, and not some abstract piece of philosophy, is also the real source of the challenges that motivated describing the situation in a blog post in the first place.

What I'm really trying to say is that the article isn't describing a situation that relates to Gettier's idea at all. Gettier was talking about situations where you can be right for the wrong reasons. The author was describing a situation where he was simply wrong.


> His real belief was not exactly that the PR broke it, it's that the root cause of the break was isolated to his code changes. This is evident from the debugging procedure he described. And that distinction is very important, because that detail, and not some abstract piece of philosophy, is also the real source of the challenges that motivated describing the situation in a blog post in the first place.

Yes, but the exact same point can be made about the Gettier case. The problem is inappropriately specified beliefs. The problem with that is that it's impossible ex-ante to know how to correctly specify your beliefs.

For instance, you could just say that the problem with the Gettier case is that the person really just believed there was a "cow-like object" out there. Voila, problem solved! But the fact of the matter is that the person believes there is a cow - just like this person believes that their PR broke the app.


I think I agree with the parent. While this can be made into a Gettier case by messing with the scope of the JTB (pull request broke it vs change broke it), I don't think it really works as intended by the author, and it feels like a poor example in a field teeming with much more straightforward instances.

I can't simplify the explicit examples I have in my head enough to be worth typing up, but the gist is that I can be correct about the end behavior of a piece of code, but be completely wrong about the code path it takes to get there. I have good reasons to believe it takes that code path. But I don't know about, say, a signal handler or an interrupt that leads to the same behavior but does not actually use the code path I traced out.

This happens to me reasonably often while debugging.


I think this is accurate, and not at all pedantic.

The idea that software has 'gettiers' seems accurate and meaningful. To some degree, making and maintaining gettiers is in fact the point of black-boxing. Something like a well-implemented connection pool is designed to let you reason and code as though the proxy didn't exist. If you form beliefs around the underlying system you'll lack knowledge, but your beliefs will be justified and ideally also true.

(One might argue that if you know about the layer of abstraction your belief is no longer justified. I'd argue that it's instead justified by knowing someone tried to replicate the existing behavior - but one form of expertise is noticing when justified beliefs like that have ceased to be true.)

And yet this story isn't about facades breaking down, it's just a common debugging error. Perhaps the precise statement the author quotes is true and justified, but the logic employed isn't. And it's an important difference: being aware of environment changes you didn't make is a useful programming skill, while being aware of broken abstractions and other gettier problems is a separate useful skill.


Agreed. This is not some unusual philosophy case. This is a case of a programmer who should know better (rebase is not exactly a no-op change!) ignoring one of the most obvious possibilities.


I think you're interpreting the author's belief in a way that you want rather than what he actually says. I read the belief based on this statement from the article:

"When I released the new version, I noticed that I’d broken the autofocusing of the search field that was supposed to happen on pageload."

That's it. That's the belief - he broke autofocusing when he released the new version. This was true. The later digging in to find the root cause is merely related to this belief. And yes I agree that Gettier's cases were meant to show that correct belief for the wrong reasons (maintaining the three criteria essentially), but this case meets that intent as well. The author is correct that he broke autofocus via his pull request, and thus JTB holds, but the actual reason for it is not his personal code and thus the Knowledge is incorrect.


Typically, philosophers would not consider a belief about a formal system justified unless that belief is backed by a proof.

In software, for known behavioral specs, you don't have a real justification until you write a formal proof. Just because formal proofs are uneconomical doesn't mean there's some fundamental philosophical barrier preventing you from verifying your UI code. Doing so is not just possible, there are even tools for doing it.

So really, this is not a Gettier case, because in formal systems a true justification is possible to construct in the form of a mathematical proof.

An example of a Gettier case in a software system would be formally verifying something with respect to a model/spec that is a Gettierian correct-but-incorrect model of the actual system.

There are almost no software systems where we have true JTB (proofs), so there are almost none where Gettier cases really apply.

Uncertainty in software is more about the economics of QA than it is about epistemology or Gettier style problems, and that will remain true until we start writing proofs about all our code, which will probably never happen.


Maybe I am arguing the same point as you here, but I am uncomfortable that you are painting the justification criterion as debatable in these situations.

In particular, I think your criterion for justification is too low. The standard for justification is however much is necessary to close off the possibility of being wrong.

I find the JTB concept useful to remind us (1) that the concept of knowledge is an ideal and (2) how vulnerable we are to deception.

As an idea survives rounds of falsification, we grow confidence that it is knowledge. But, as Descartes explained in the evil demon scenario, there is room for doubt in virtually everything we think we know. The best we can do is to strive for the ideal.


> however much is necessary to close off the possibility of being wrong.

This is borderline self-referential with respect to the whole Knowledge definition, though. If you have enough information to remove the possibility of a belief being wrong, then there's no point in defining Knowledge at all. The whole debate around the definition is that humans have to deal with imperfect information all the time, and deciding what constitutes Knowledge in that environment is a challenge.


> Truth: The pull request did break it. There was an additional reason beyond the pull request unknown to the author,

Nope:

> At some point, I did a routine rebase on top of this change (and many other unrelated changes).

> [...]

> (Yes, I should have caught the bug in testing, and in fact I did notice some odd behavior. But making software is hard!)


Philosophy major here. Didn't read the article, but will point out:

The significance of Gettier problems, as we investigated them, is the exposure of an entirely _wrong_ mode of philosophy: philosophizing by intuition. Ultimately, Gettier problems are significant because the textbook Gettier problem works _for philosophers_: it captures their intuitions of knowledge, and then proves that the supposed case of knowledge fails.

Most normal people (i.e., not philosophers) do not have the same intuitions.

After Gettier, analytical philosophers spent decades trying to construct a definition of knowledge that revolved around capturing their intuitions for it. Two examples are [The Coherence Theory of Knowledge][0] and [The Causal Theory of Knowledge][1]. Ultimately nearly all of them were susceptible to Gettier-like problems. The process could be likened (probably) to Goedel's Incompleteness proof. They could not construct a complete definition of knowledge for which there did not exist a gettier-like problem.

Eventually, more [Pragmatic][2] and [Experimental][3] philosophers decided to call the Analytical philosophers' bluff: [they investigated whether the typical philosopher's intuition about knowledge holds true across cultures][4]. The answer turned out to be: most certainly not.

More pragmatic epistemology cashes out the implicit intuition and just asks: what is knowledge to us, how useful is the idea, etc. etc. There's also a whole field studying folk epistemology now.

[0]: http://www.oxfordscholarship.com/view/10.1093/acprof:oso/978...

[1]: https://en.wikipedia.org/wiki/A_Causal_Theory_of_Knowing

[2]: https://en.wikipedia.org/wiki/Pragmatism

[3]: https://en.wikipedia.org/wiki/Experimental_philosophy

[4]: https://en.wikipedia.org/wiki/Experimental_philosophy#Episte...


> The significance of Gettier problems, as we investigated them, is the exposure of an entirely _wrong_ mode of philosophy: philosophizing by intuition. Ultimately, Gettier problems are significant because the textbook Gettier problem works _for philosophers_: it captures their intuitions of knowledge, and then proves that the supposed case of knowledge fails.

And to concretely tie this directly back to software[0]:

Intuition is a wonderful thing. Once you have acquired knowledge and experience in an area, you start getting gut-level feelings about the right way to handle certain situations or problems, and these intuitions can save large amounts of time and effort. However, it’s easy to become overconfident and assume that your intuition is infallible, and this can lead to mistakes.

One area where people frequently misuse their intuition is performance analysis. Developers often jump to conclusions about the source of a performance problem and run off to make changes without making measurements to be sure that the intuition is correct (“Of course it’s the xyz that is slow”). More often than not they are wrong, and the change ends up making the system more complicated without fixing the problem.

[0] https://www.systutorials.com/3525/favorite-sayings-by-john-o...


This is very interesting. Is there a book that discusses stuff like this?


You make a good point, but I think the framing of “justified true belief” may still be useful for the reason you illustrated. It gives a developer a nice way to sanity check their instinct. Instead of “I’m sure my code broke this,” asking “is this belief justified” may help them pause and consider “well I did rebase so I can’t really be sure, I need to eliminate what’s in the diff first.”


Right... but then how do the Gettier exceptions fit into that not-so-original insight?


An example would be the above comment about "There is a cow in that field". You would then ask: did I see the cow move, eat, or do anything else that you would expect a cow to do, rather than just seeing the silhouette?

I generally think the point is that you oftentimes have beliefs about why something broke and that it is important to check your beliefs.


Arguably the Gettier example of the cow falls into the same realm. Cows move, eat grass, and go 'Moo'. Did the observer see it do these things, or just go on the silhouette? If the latter, they were not rigorous enough to have a justified belief.

But then we can get back to a better Gettier case if we suppose that Boston Dynamics happens to be testing their new Robo-Cow military infiltration unit in the field at the time.


What if the facsimile does all of those things? My initial reaction to the example was, "so what"? The guy _knew_ there was a cow but was wrong. The fact a _real_ cow existed behind the facsimile is irrelevant. All knowledge is contingent and relative. How could it be otherwise? What if 100% of all cow facsimiles ever displayed in the past were displayed on a farm with real cows--would that alone be sufficient to justify an inference of a cow nearby? To say that a belief is _justified_ simply begs the question--how can you _really_ know the truth of something?

In the scientific method you can never prove hypotheses, you can only disprove them. 'nuff said.

I find the Gettier problem entirely uninteresting myself. Someone elsethread explained it in the context of technical discourse in philosophy, in which case I could maybe appreciate the issue as a way to test and explore epistemological theories in a systematic, rigorous manner, even if for many theories it's not particularly challenging to overcome.


It's interesting because until Gettier nobody had actually proved that a justified true belief does not constitute what we commonly consider to be knowledge.

It's all very well dismissing it as inconsequential, but several generations of philosophers and scientists have grown up in the post-Gettier world, before which the Justified True Belief account was widely considered to be unassailable. Yes, _now_ we all know better and are brought up knowing this, but Gettier and his huge influence on later thought is the reason why, and it's just that not many people are aware of this.


But plenty of philosophers prior had suggested, if not argued, the contingency of knowledge. Kant famously discussed how knowledge is shaped and circumscribed by our faculties (e.g. sense of space and time).

Positivism was already well developed long before 1963.

I could go on. I've studied enough philosophy to be comfortable with my criticism. But I'll grant that it may not have been until after 1963 that all these philosophical strains became predominant. But that doesn't excuse people for not seeing what was coming.


I agree with your point I think, however all of the Gettier cases that I know of seem to be examples of a particular choice of fact. In the cow example, he had a justified true belief that there was a cow in the field, but if you had phrased it more naturally as 'I know that that black and white shape I see in that field is a cow', that would not have been true.

I suppose the point is just that having a justified true belief purely by chance, almost certainly depends on the justification itself being a false belief, even in true Gettier cases.

Anyway, I find the whole thing fairly unmysterious - I just take 'know' to be a particularly emphatic form of 'believe', and I like your conclusion.


Yes, this was my gut reaction to the Gettier exceptions as well. But you picked out the specific detail that seems to clinch it for me: the framing of the "fact" under question matters greatly.


From the author's first example, the framing of the statement was also critical. Rebasing introduced the bug, and it would be a correct statement to say "something I just did broke autofocus." However, it would be incorrect to state "my code in the last commit broke autofocus."

In many ways, programmers need to be as fussy in their statements as philosophers. Since computers are stupid, and do exactly what you specify (usually...) it is important to pay close attention to these exact details. Assuming that the new code contains the bug is incorrect, and proper debugging requires careful attention to these details.

I've certainly had bugs that were caused by some other, hidden factor like this, and typically the best way to find them is to carefully examine all your assumptions. These may be ugly questions like "is the debugger lying to me?" or "did the compiler actually create the code I intended it to make?" So while these may not be strict Gettier cases (and the author admits this in the article), they nevertheless are fairly common classes of similar problems, and framing them as such does provide a useful framework for approaching them.


The author did have a JTB that his pull request caused the bug. The point is that we hesitate to say that he knew that his pull request caused the bug, because he had the wrong justification. If you take issue with the justification, suppose that he was supposed to be the only one working on the code that day, and that the person who introduced the bug was working from home without telling anyone. He’d then have a JTB but still lack knowledge. If even that isn’t enough justification for you, then you have a problem with the entire JTB definition, and you’re essentially offering a circular definition as an alternative (since the only justification you’ll accept is justification that itself provides knowledge).


I think sometimes the spirit of these observations can be lost in the specifics.

Maybe this doesn't conform strictly to the philosophical definition, but as an analogy I find it succinct and useful.

Many times I've fallen in the trap of absolutely _knowing_ in my head the reason things are the way they are, only to find out later I'm completely wrong. In most cases the evidence was right in front of me, but software is hard and complex and it is easy for even the best developers to miss the obvious.

Putting a term around it, having an article like this to discuss with the team, these are all useful for reinforcing the idea that we need to continually challenge our knowledge and question assumptions.


one countermeasure i use for this is to substitute the phrase "i'm pretty sure that..." in place of "i know...". that gives both you and others the mental room to consider alternative causes. the 5 why's is another more formalized technique: https://en.wikipedia.org/wiki/5_Whys


The auto-complete quandary comes from ambiguity of language. The phrase "Auto-complete is broken" is ambiguous in the article, encompassing at least two definitions.

I know _that_ auto-complete is broken (I see it, the test fails)

but

I do _not_ know _why_ auto-complete is broken (some other dude did it)

But I still think it's very interesting to talk about if for no other reason than that it clarifies terms and usage.


I think it helps to look at the mind as a probabilistic survival engine rather than some truth engine.

If there appears to be a cow in a random field the odds are extremely low that someone put a papier mache cow there. If there’s something that has 50 % chance of being a snake you panic and run because that’s a 50 % chance of dying.

In the case of the author's bug, yes, the change he introduced had a good probability of being the cause. However, he could have increased the probability by going back over commits and confirming that his exact commit introduced the bug. Now the probability goes even higher. But it could still be machine-specific, a cosmic ray, or whatever - but the odds are overwhelmingly low.

In practice causal reasoning also works in a probabilistic fashion.

I have a simple model saying that if a plane's engine is on then it's flying. It's a single-step probability and so it's not very accurate in the real world.

I do a bunch of experiments and say a plane is flying if the engine is on and air is going over its wings faster than a certain speed.

Now we have two correlations connected by a causal model that works in many other cases. Hence the probability of it being correct rises.

But at the same time we should never mistake direct correlation for causality. But in daily life it’s “good enough”.


I'm sad that I had to read so far down the thread to find this comment, which beautifully cuts right through the argument. The problem is the idea of absolute knowledge -- knowledge is probabilistic and contextual. If I'm driving by the field in a car, I may be duped by the cow, but my perception that I'm surrounded by pasture land is more than adequate to support the activity of driving. If on the other hand, I'm Farmer Jones and I'm searching my pasture for Susie the cow, well, I'm not going to be taken in by a statue of a cow for more than a split second anyway.


This is an excellent comment - I'm just disappointed you managed to say it all without the word "Bayesian". The base rate for paper mache cows in fields is very low - one is perfectly justified in assigning a decent probability that a field contains a cow, if one sees a cow-shaped object in it. If you are in a part of the world that has a lot of cows in fields, you will presumably assign an even higher probability. You might even say you're "sure" there's a cow in the field, and act as such for everyday purposes. But don't be fooled - you're not really sure. If someone offers to bet you ten dollars versus your life that there is a cow in the field, you'll likely not take the bet.

It seems that the philosophers were grasping towards a definition of "know" that encapsulated the idea of assigning 100% probability to something, after incorporating all the evidence. From a Bayesian standpoint, this is impossible. You can never be 100% certain of anything. To "know" something in the sense of "a justified, true belief" is impossible, because a belief is never both 100% and justified.

(Note that it is entirely possible to see the paper mache cow and conclude that it is likely only paper mache and that there is not a real cow in the field. Is this belief "justified"?)
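
For what it's worth, here is that update spelled out as a toy calculation (a minimal sketch; all the numbers are invented for illustration):

    # P(cow in field | cow-shaped object seen), via Bayes' rule
    def posterior(prior, p_evidence_if_true, p_evidence_if_false):
        numerator = p_evidence_if_true * prior
        return numerator / (numerator + p_evidence_if_false * (1 - prior))

    prior = 0.30                  # assumed base rate of a cow being in a random field
    p_shape_if_cow = 0.95         # a real cow usually presents a cow-shaped silhouette
    p_shape_if_no_cow = 0.001     # papier mache decoys are very rare

    print(posterior(prior, p_shape_if_cow, p_shape_if_no_cow))  # ~0.998: high, but never 1.0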


I've thought a bit more about it, and concluded that while the above is a neat answer, it doesn't explain the question, and [thewarrior]'s remarks were nearer the mark on that. So here goes.

It's tempting to think of "knowledge" as some relationship between the mind and a single fact. But when we use the word "knowledge", what we actually mean is "an accurate world model" - a set of beliefs. This is the disconnect that Gettier cases are designed to expose - they construct scenarios where someone's mental model is inaccurate or incomplete, yet by sheer luck produce a single, cherry-picked correct prediction. We are uncomfortable calling these correct predictions "knowledge" because as soon as you start probing the rest of the mental model, it falls apart. Sure, they think there's a cow in the field, and there really is one. Ask them any more questions about the scenario though ("what color is the cow?") and they'll give wrong answers.

From this perspective, "knowledge" as a "justified, true belief" is a perfectly coherent concept - the problem lies with the inadequacy of the word "justified" to describe the output of a complex decision procedure that incorporates many beliefs about the world, such that it could be expected to yield many other correct predictions in addition to the one in question, up to some arbitrary threshold.

A thought experiment - suppose you tell the observer that the cow they see is made of paper mache. They no longer believe there is a cow in the field. Intuitively, has their knowledge increased or decreased?


I like the term "probabilistic survival engine". It explains a lot of phenomena in social trends, religion, politics, and even the practice of science.


The first time I heard a brain referred to in this way was in the scifi book Blindsight by Peter Watts. The book is pretty solid and deals with a lot of interesting ideas from cognitive science.


I think what you're saying is correct when it comes to perception with our senses, the kind of knowledge we share with the other higher animals. However, there is a higher level of knowledge that is unique to humans, which is rational in nature and can come up with certain knowledge. We can know certain mathematical and logical truths with certainty, such as 2 + 2 = 4 and that A and not A cannot both be true. We may not get an accurate count of cows in the field because one is hidden from view, but we can know with certainty that if we have X cows in field A and Y cows in field B, we have a total of X + Y cows in the two fields.


I agree with your main points, but I think it's worth pointing out that even things like 2 + 2 = 4 and X + Y cows in fields A and B, all rest on certain definitions and sets of deductions, many of which go back down to basic axioms which have to be assumed (e.g. for arithmetic, see Peano axioms as an example: https://en.wikipedia.org/wiki/Peano_axioms).

- We have a definition of what a cow is, and we know that cows are discrete/physical objects, and have relatively fixed locations (i.e. that they are not like an electron cloud with a probabilistic location).

- We assume that fields A and B in your hypothetical have clear, non-overlapping boundaries.

- We assume that we are working in a fairly normal universe with a fairly standard model of physics, and that due to the way time works in this universe, a cow cannot simultaneously be located in both fields A and B.

- ...

- (this could get really pedantic and go on forever)

The point is, even the things "we can know with certainty", are only as certain as the framework of observations/deductions/axioms/etc. that they rest upon. Almost nothing is certain on its own, without any further layers of reasoning behind it.
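
To make the "it all rests on definitions" point concrete, here is a one-line Lean example (a sketch, assuming a proof assistant is an acceptable stand-in for the axiomatic framework): the certainty of 2 + 2 = 4 is real, but only relative to the Peano-style definitions you started from.

    -- Nat and + are defined inductively (Peano-style); the equality then holds by computation.
    example : 2 + 2 = 4 := rfl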


An unforeseen issue with probabilistic assessments is that the model of the probability space is probably inaccurate.

For example, Nassim Taleb has an argument that IQ is a single-dimensional assessment of a multi-dimensional domain.

I think it's more practical to have a possibility space (where unknown unknowns are a possibility). This removes the need to assess probabilities (which will probably be incorrect) while still being able to permute through the list of possibilities. One can also do logical deductions, based on the possibility space, to assess possible strategies to explore/solve the issues at hand.
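
A minimal sketch of what I mean, applied to debugging (the hypotheses and observations are made up): keep an explicit possibility space, including an "unknown unknown" entry, and prune it with observations instead of assigning probabilities you can't defend.

    hypotheses = {
        "my commit broke it",
        "the rebase pulled in someone else's breaking change",
        "an external service changed behaviour",
        "something not on this list yet",   # unknown unknowns stay represented
    }

    # Each observation rules out some hypotheses; deduction is just set subtraction.
    observations = {
        "bug reproduces on the commit before mine": {"my commit broke it"},
        "bug reproduces with all network calls stubbed out": {"an external service changed behaviour"},
    }

    for observation, ruled_out in observations.items():
        hypotheses -= ruled_out
        print(observation, "->", sorted(hypotheses))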


Denying knowledge makes your arguments work, but knowledge does exist and is a real thing. Are you sitting on a chair? The answer isn't probabilistic, you KNOW the answer. What's your name? Have you ever been to the moon?


That's all well and good until the supposed "chair" falls apart and is revealed to be nothing but a book and some bamboo.

I see OP's comment not as denying knowledge, but rather as clarifying what is meant when people speak to each other of knowledge.


This is a lesson learned well from open-ended systems. An open-ended system is one where input is received to process, but the input is not well defined. The more accepted unknown input becomes the more open the system must be in its rules to process it. The results of processing are:

* expected output from a known input (intention)

* unexpected output from a known input (defect)

* expected output from an unknown input (serendipity)

* unexpected output from an unknown input (unintentional)

For example I maintain a parser and beautifier for many different languages and many different grammars of those languages. In some cases these languages are really multiple languages (or grammars) imposed upon each other and so the application code must recursively switch to different parsing schemes in the middle of the given input.

The more decisions you make in your application code the more complex it becomes and predicting complexity is hard. Since you cannot know of every combination of decisions necessary for every combination of input you do your best to impose super-isolation of tiny internal algorithms. This means you attempt to isolate decision criteria into separated atomic units and those separated atomic units must impose their decision criteria without regard for the various other atomic decision units. Provided well reasoned data structures this is less challenging than it sounds.

The goal in all of this is to eliminate unintentional results (see the fourth bullet point above). It is okay to be wrong, as wrong is a subjective quality, provided each of the atomic decision units is operating correctly. When that is not enough you add further logic to reduce the interference of the various decision units upon each other. In the case of various external factors imposing interference you must ensure your application is isolated and testable apart from those external factors so that when such defects arise you can eliminate as much known criteria as rapidly as possible.

You will never be sure your open-ended system works as intended 100% of the time, but with enough test samples you can build confidence against a variety of unknown combinations.
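
To make the "atomic decision units" part concrete, here is a very rough sketch (the rules are invented, not lifted from the actual beautifier): each unit looks only at the token and state handed to it and knows nothing about the other units.

    def unit_indent(token, state):
        return {"indent_delta": +1} if token == "{" else {}

    def unit_dedent(token, state):
        return {"indent_delta": -1} if token == "}" else {}

    def unit_language_switch(token, state):
        return {"grammar": "css"} if token == "<style>" else {}

    UNITS = [unit_indent, unit_dedent, unit_language_switch]

    def decide(token, state):
        decision = {}
        for unit in UNITS:          # each unit imposes its criteria independently
            decision.update(unit(token, state))
        return decision

    print(decide("{", {"grammar": "js"}))        # {'indent_delta': 1}
    print(decide("<style>", {"grammar": "js"}))  # {'grammar': 'css'}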


How are 2 & 3 from above different from 4?

An unknown input producing correct results is still a problem - the unknown input is the problem.

Therefore, I postulate that anytime an unknown input is possible, the software is defective.


My application is a code beautifier that receives code samples as input. I cannot know of every possible combination of code samples. This does not necessarily mean the application is defective. A defective state is unwanted output.

Another way to think about this is that the more open a system is, the more useful and risky it is. It is useful because it can do more while tolerating less terse requirements upon the user. It increases risk because there is more to test and less certainty the user will get what they want. Part of that risk is that it's hard to guess at what users want, as sometimes the users aren't even sure of what they want.


Say you write some movement software.

You expect that people will move forward, left, or right.

You didn't expect people to try moving backward.

People start moving backward, but the software happens to do the right thing due to how it was written.

Is the software defective because of your missed expectation?


Yes, I would still classify the software as defective. It did not reject backwards movement explicitly. Just because it works by accident (i.e., the consumer of the output doesn't care) doesn't mean the defect has gone away.

To be not defective, the software has to explicitly reject input that it was not designed to handle.

Imagine if the software updated with some changes, and the unknown input now produces an incorrect output. Is the defect introduced with the changes? Or was the defect always there?


> To be not defective, the software has to explicitly reject input that it was not designed to handle.

In some cases that breaks forward compatibility, e.g. the case where there is an unknown XML tag. You could reject the tag or the message, but you'll end up rejecting all future changed inputs.

If the whitelist of acceptable items is large, it may be acceptable to have a blacklist; however, if the above holds, you don't know what you don't know.


The middle ground may be to explicitly flag/indicate/log that an unknown situation has been encountered, and 'handle' that by doing something useful (continuing to work without crashing, preventing known "unknown" data from being processed silently, etc). It may not help with forward compat entirely, but it would be explicitly known (and I'd think would be somewhat easier to modify/extend for known unknowns in the future).
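
A small sketch of that middle ground for the XML-tag case upthread (tag names invented): process what you recognise, skip what you don't, but record it loudly rather than silently.

    import logging
    import xml.etree.ElementTree as ET

    KNOWN_TAGS = {"name", "price"}

    def process(document):
        for element in ET.fromstring(document):
            if element.tag in KNOWN_TAGS:
                print(element.tag, element.text)        # normal handling
            else:
                # unknown input: keep going, but make the gap visible
                logging.warning("unknown tag %r encountered; skipping", element.tag)

    process("<item><name>widget</name><price>3</price><colour>red</colour></item>")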


I've been there, painfully. On my last day on a job, some code I wrote went into production, and the databases started filling up rapidly, to the point where the whole system would halt in a few hours.

Turned out the bug had been latent in the code for 5+ years, predating me. Its data consumption had never been observed before because it was swamped by other data consumption. Changing the architecture to remove that other data brought it to the foreground.

(fwiw, the bug was caused by the difference between 0 and null as a C pointer!)


What would "reject input that it was not designed to handle" look like for an automated car?


When you come around a curve near sunrise or sunset, you may suddenly encounter visual input that overwhelms your sensors. The sun is blinding you. It might overwhelm infrared sensors, too.

If you have alternate sensors, you should trust them them more, and camera systems less.

If you have a sunshade, you should deploy that.

If it is raining, or partially cloudy, the situation may change rapidly.

And perhaps you should slow down, but if you slow down too fast, other vehicles might not be able to avoid you.


You could also argue that your expectations are defective. It is possible to accidentally solve a problem in a correct manner.


Not reliably.

It's not professional to design systems that rely on luck.

"Let's ignore this edge case and hope we get lucky" is not something you want to see in a software specification.


Or simply acknowledge that your initial specs didn't cover enough, update the specs, test the "new" functionality, and call it a feature in the release notes.


Getting lucky is not the same as relying on luck.


Where do you fall on autonomous cars? It’s okay to be anti. I’m just curious


Well, it's harder to tell something's wrong when the output looks right.


The entire world of AI relies on dealing with "unknown" input?


I would say yes.

There's a saying that when people figure out how to make a computer do something well, that it's no longer in the field of AI. I'd say there's some truth in this, in that for many problems we have solved well (e.g. playing chess), the intelligence is not that of the machine, but of the programmer.

I think that in order for a machine to genuinely be intelligent, it must be capable of original thought, and thus unknown input. Known doesn't necessarily mean specifically considered, but that it could be captured by a known definition. As an example, we can easily define all valid chess moves and checkmates, but we can't define the set of images that look like faces.
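
As a toy illustration of "capturable by a known definition" (an 8x8 board and zero-indexed file/rank are assumed): every legal knight move can be enumerated exhaustively, whereas no comparable enumeration exists for "images that look like faces".

    def knight_moves(file, rank):
        deltas = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
        return [(file + df, rank + dr) for df, dr in deltas
                if 0 <= file + df < 8 and 0 <= rank + dr < 8]

    print(knight_moves(0, 0))   # [(1, 2), (2, 1)]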


No, it doesn't. It relies on explicit rules (old school) or statistical inference (new school).

There's a difference between "breaking" unknown input - i.e. non-computable within the system as it stands - and "working" unknown input, which is within expected working parameters.

The latter is business as usual for computing systems.

The former should have a handler that tries to minimise the costs of making a mistake - either by ignoring the input, or failing safe, or with some other controlled response.

It may not do this perfectly, but not attempting to do it at all is a serious design failure.


The propensity for mistaking belief for facts certainly takes daily hits as a software developer. "How come this simple thing isn't working? I thought of everything, didn't I?" After a while you are forced to realize that belief isn't the same as reality.

It seems insights like this don't easily translate into other domains though, like relationships, dearly held political views etc. We prefer to think of them as based on facts, when in all probability they are merely beliefs fraught with assumptions.

Some people might be good at being skeptics in all areas, but I sense most share my own ineptitude here, the reason probably being that any such (incorrect) beliefs don't immediately come back and bite us, as in programming.


The funny thing about development is that, say, 90% of the time you are convinced everything is correct and should be working, and it's immensely frustrating because it's not, so you know you are wrong but are unable to offer yourself a more convincing theory and just get stuck until something clicks. But then there's that 10% of the time where you're actually right. And you don't know which one it's going to be. So you have to calm yourself down like "I know I think I'm right about this but clearly I'm not", but at the same time you have to hold onto that conviction because you're right, damnit. Haha.


As Raymond Smullyan proved, everyone is either inconsistent or conceited.



Hard to translate to other domains without a tool to prove you wrong until you are right.


After a while you are forced to realize that belief isn't the same as reality.

This is what the technique of Rubber Duck Debugging helps with. I wonder if you could translate it to other domains?


Sure it could translate seamlessly to the problem solving domain generally


it does seem to be a 24/7 zen practice.. i'm nearly always wrong so let's focus on checking every assumption with an open mind


My favorite example of this is when someone says "Nobody knows X." As though your own ignorance is as good as everyone's ignorance.

Somebody might know X. And they might know X for all the right reasons. But they probably didn't tell you those reasons, and you probably wouldn't believe them or understand them if they did.


The propensity for mistaking belief for facts certainly takes daily hits as a software developer

I think that can be expanded to the whole human race.


The "cow in the field" example reminds me of two heuristics I like: am I right for the wrong reason?; am I wrong for the right reason?

Being right for the wrong reason is dangerous: it's not easy to spot, and it perpetuates a false sense of security, leaving "black swan events" unanticipated. This might occur during debugging as the article points out, or e.g. during A/B testing of a product.

Being wrong for the right reason is just plain frustrating.


> Being wrong for the right reason is just plain frustrating.

what's an example of being wrong for the right reason? I can't think of any cases where this happens...


In the context of political forecasting, imagine that you are a defence chief who is faced with an unquantifiable external threat, as the US was by Russia during the Cold War. You can predict that this enemy is a very great threat, or you can say that it isn’t much of a threat, but the outcomes are asymmetric. If you say that the threat is a grave one, and strengthen your defences accordingly, then if the enemy attacks, you were clearly right to take the threat seriously. If the enemy doesn’t attack, you were still right, because you can say that the enemy only didn’t attack because of the action you took. On the other hand, if you dismiss the threat as insignificant, and the enemy attacks, then at best your career comes to a sudden and unpleasant end. So therefore, it is always right to over-emphasise the threats, and if you turn out to be wrong, you were wrong for the right reason.[1]

[1] https://wiseinvestment.co.uk/news/antiques-roadshow-tony-yar...


> always right to over-emphasise the threats, and if you turn out to be wrong, you were wrong for the right reason.

if the audience is not receptive to the concept of opportunity cost, then yes. Unfortunately, a majority of people over-estimate the need for security and thus, allow themselves to be fooled into believing that this over-emphasis, no matter the cost, is justified.

Just look at the TSA!


Games are a place this comes up frequently. In poker you will often make the "right" (highest EV) play and still lose a given hand.

Making the highest odds plays (and being able to figure out what they are) over and over again regardless of how the individual hands turn out is how you win.

Obviously, you can also win while making the wrong plays by getting lucky (right for the wrong reasons). Evaluating your play based on the outcome of the hands (did you win or lose) rather than the plays you made with the information you had at the time is called Results Oriented Thinking: https://www.pokerdictionary.net/glossary/results-oriented-th... ... and it is a pernicious mistake (with wider applications than just poker :).
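
A toy version of the arithmetic (numbers invented): facing a 20-chip call into a 100-chip pot with a 25% chance of winning the hand, calling is the right play even though you lose the hand three times out of four.

    pot, call, p_win = 100, 20, 0.25
    ev_call = p_win * pot - (1 - p_win) * call
    print(ev_call)   # +10 chips on average: correct play, yet a losing hand 75% of the time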


My compensation includes company stock. I sell that stock, and some of the proceeds end up in index funds. I do this because I want a diversified portfolio.

My employer's stock is up 28% over the last year, S&P 500 is down 6%.

I guess my goal going in was to reduce variance, so I wasn't "wrong" about anything. 36% more money would be nice, though. (I haven't changed my strategy as a result)


Nate Silver is widely seen as having forecast the election "wrong" because he said Trump had a 20% chance of winning. AIUI his position is that his model was right and 20% was an accurate probability.


The essence of probability is being wrong for the right reasons (just less often, weighted by the odds, than being right for those same reasons).


>> He called them “gettiers” for short. So we used to talk about gettiers all the time, no doubt in part just because it felt clever to talk about them, but also because when you’re a programmer, you run into things that feel like Gettier cases with unusual frequency.

Sometimes I think that is what philosophers are doing - feeling clever - perhaps as a defense against some negative inner problem (psychology is an outgrowth of philosophy after all). The whole cow story stinks of telling someone "you're right, but you're also WRONG! Your perception of reality is BROKEN!". To me knowledge is simply having a model of the world that can be used to make useful predictions and communicate (and some other things). Aside from that, it doesn't matter if your model is "grounded in reality" until it fails to work for you, at which time it can be helpful to realize your knowledge (model) needs adjustment.

One way to resolve the author's first software issue would be to check a diff between what he committed and the previous production revision - this would quickly uncover the changes he "didn't make". This is an old lesson for me - what I changed may not be limited to what I think I changed. It's a lesson in "trust but verify". There are any number of ways to view it, but in the end we only care about ways that lead to desired outcomes whether they're "right" or not.

On a related note, I've found that software is one of the only places where there is a "ground truth" that can be examined and understood in every detail. It's completely deterministic (given a set of common assumptions). I've found the real world - and people in particular - to not be like that at all.


> Sometimes I think that is what philosophers are doing - feeling clever - perhaps as a defense against some negative inner problem (psychology is an outgrowth of philosophy after all).

All science is an outgrowth of philosophy.

It's very frustrating when people look at the obviously trivial and sometimes silly examples that philosophers use to elucidate a problem, and take it to mean that they are interested in trivial and silly things. Being right for the wrong reasons is a common and difficult problem, and some of the solutions to it are really insightful and powerful ideas.

> Aside from that, it doesn't matter if your model is "grounded in reality" until it fails to work for you, at which time it can be helpful to realize your knowledge (model) needs adjustment.

It might matter a great deal if your model is not grounded in reality - there are situations where that can kill you. It also seems like one of the fundamental aims of science, to have theories fail less often.


Exactly, one ontology of knowledge is: pointers (names), categories (matchers/recognizers), models (descriptions of systems).

The first two have all the problems philosophers talk about. But the last one does not. Not even underdeterminism, unless the system of the model is fundamental or fades into history or is a "wicked" problem.


There's a much closer analogy from software development to the cow story. The cow story is confusing because the cow that you see (A) is fake, but the real one (B) you don't know about. So your belief is not a justified true belief because although the real cow exists, the one your knowledge refers to (A) isn't the real one (B).

An intertemporal variant of this is race conditions. There have been lots of problems of the form "(1) check that /tmp/foo does not exist (2) overwrite /tmp/foo"; an attacker can drop a symlink in between those and overwrite /etc/passwd. The file that you checked for is not the same file as you wrote to, it just has the same name. This is an important distinction between name-based and handle-based systems.
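
A sketch of the two styles in Python (paths and function names invented). The name-based version has the classic check-then-use race; the handle-based version asks the kernel to check and create in one atomic step.

    import os

    def racy_write(path, data):
        if not os.path.exists(path):        # check the *name* ...
            with open(path, "w") as f:      # ... then use it: it may point somewhere else by now
                f.write(data)

    def atomic_write(path, data):
        # O_CREAT | O_EXCL fails if anything (including a symlink) already exists at the path,
        # so the check and the create happen on the same object, atomically.
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
        with os.fdopen(fd, "w") as f:
            f.write(data)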


If the real cow (B) was not present, your belief that there was a cow in the field would be justified but not true. Seeing the fake cow (A) justifies your belief that there's a cow in the field. Adding a real cow (B) that you can't see doesn't remove the justification.


Good point, I've edited "justified" to the full phrase "justified true belief"


If the belief is both justified and true, how is it not a justified true belief?


> I could have had a JTB that the code change had caused the emails to stop delivering, but still we wouldn’t want to say I “knew” this was the cause, because it was actually the service outage that was directly responsible.

He is wrong and this is not a gettier in any way. "The code change had caused the emails to stop delivering" is not a JTB, because it is not true. Rather it was that the email server went down.


I don't think any of his code examples are really JTBs because, like you said, it's not "true". In the cow example, you make an assumption based on the papier mache cow that actually turns out to be true through information you didn't have access to. All of his code examples are instead more about him making assumptions that turn out to be false, based on information he didn't have access to (or rather didn't think to access at least).

I don't think this really affects the take home message from the piece, I'm just being pedantic that it doesn't parallel perfectly (which he even acknowledges people may say in the last paragraph).


He simply wasn't speaking with precision. If you replace "code change" with "pull request" in his statement, it's JTB.


No, you're talking about the first example. In the second example he says

> But—gettier!—the email service that the code relied on had itself gone down, at almost the exact same time that the change was released.

So the error was caused by the email service going down, which is completely independent of the code change/pull request.


That is correct - it is then a JTB - but not a “Gettier”. None of his examples are Gettiers.


I don't see any of his examples as Gettier cases. He thought his code caused the autofocus problem; it didn't. He thought someone else's push had broken email, but instead the service happened to go down at the same time. A proper Gettier case would be when you write code that you believe to be correct and it does work, but not for the reasons you think it does. Often this eventually bites when some edge case arises.

I run into this fairly often playing chess. I calculate out some tactic, decide it works, and play the first move. But my opponent has a move I overlooked which refutes the line I was intending to play. Then I find a move that refutes his, and the tactic ends up working in the end anyway, just not for the reasons I thought it would.


Here's an example that comes to mind:

A programmer writing a function refers to a local variable, "status", but thinks they are referring to a global variable. The code works by chance because the variables happen to have the same (fixed) value.

The variable shadowing means that the programmer could quite plausibly be confused and believe that they were accessing the global variable ("justification"). "I know I checked the status variable, like I was supposed to".
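
A minimal Python reconstruction of that scenario (names and values invented):

    status = "ok"                 # the global the programmer believes they are reading

    def handle_request():
        status = "ok"             # oops: a local assignment quietly shadows the global
        if status == "ok":        # "I know I checked the status variable, like I was supposed to"
            return "proceed"
        return "abort"

    # Works today only because the two values coincide: a justified, true belief,
    # but not knowledge. Set the global to "degraded" and this function never notices.
    print(handle_request())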


>A philosopher might say that these aren’t bona fide Gettier cases. True gettiers are rare.

I beg to differ. Besides the examples in programming the author gave, I can very easily think of examples in medicine, police work (e.g. regarding suspects), accounting, and so on...


In science, experiments with a good control are how this problem is removed. And repetition is how you control for coincidences and outlier measurements.

Software engineering has many of these Gettier cases, because most software engineers do not follow the scientific method when investigating a problem!


Many (most?) wrongful convictions are examples, not of Gettier problems, but of similar issues with JTB.


Well, why not write them down for us?!


As if they're difficult to derive on one's own?

You believe X has cancer because he has the symptoms and you can see an offending black spot on their X-ray.

The lab results say the black spot was just a cyst but X indeed has cancer in the same organ.


That sounds to me like (1) not a clear example of a Gettier case and (2) something that would, in fact, be rare.

#1 because one of the reasons for your believing X has cancer is that "he has the symptoms", which (if I'm understanding your example correctly) is in fact a consequence of the cancer he actually has; so, at least to some extent, your belief is causally connected to the true thing you believe in just the way it's not meant to be in a Gettier case.

#2 because (so far as I know) it's not at all common to have both (a) cancer in an organ that doesn't show up in your X-ray (or MRI or whatever) and (b) a cyst in the same organ that looks just like cancer in the X-ray/MRI/whatever. I'm not a doctor, but my guess is that this is very unusual indeed.

So this isn't a very convincing example of how clear-cut Gettier cases aren't rare: it's neither a clear-cut Gettier case nor something that happens at all often.


In the general case it's "I believe X based on signs that would imply X correctly, and I happen to be correct that X holds, but I misread the signs I used to come to the conclusion".

I don't think this is rare -- including in the version of my example.

The only reason for arguing that it's not a "clear cut case" is that I mentioned seeing "symptoms". Ignore the symptoms I mentioned, as they are a red herring; e.g. seeing the mark could cause the belief alone.

Other than that, it's a belief (1), that's justified (2), and true (3) -- while being accidentally justified.

Consider the case of a policeman who thinks someone is dangerous because they think they've seen a gun on them. So they shoot first, and lo and behold, the suspect did have a gun on them -- but what the policeman saw was just a cellphone or something bulky under their jacket.

Or the spouse who thinks their spouse is having an affair because they see a hickey. Their spouse is indeed having an affair (and even has a hickey on the other side of the neck), but what they saw was just some small bruise caused by something else.

Or, to stick with the theme, suspecting domestic abuse, which the victim does indeed suffer, but your guess is based on a bruise they got from an actual fall.


As in, when you write "I can very easily think of examples in medicine, police work (e.g. regarding suspects), accounting, and so on..." but you don't give any, the natural tendency is not to believe you.


That would be the "natural tendency" if I were describing something mysterious and rare that few can fathom. Whereas to me those situations don't really seem far-fetched or hard for someone to come up with on their own.

I didn't write "one can easily" to imply I have some special talent to imagine such situations (and thus had motive to leave examples off to hide the fact that I don't).

I wrote it because I really do believe one can easily find such examples, and didn't even think it worth going into details (since I mentioned medicine, police work, etc., I thought the cases I implied were pretty clear too).

In any case, I gave 3 examples in a comment above.


I can imagine a future where what's true generally describes itself (like terraform on drugs for software :p). Imagine software that is fully self-descriptive and would no longer require engineers to individually interpret what's happening, because the software would tell us. The system would be a graph of every single component and all possible connections between them, and all variants of each component and state that it could be in. When we introduce a change we would be aware, with perfect information, of the effect on the states and the paths between them.

In the example the Mental Model was at a level too shallow: it should have only affected the paths between the autofocus and the user. But the bug necessitated a larger mental model (the author was considering too small a subsection of the graph).

I'd hope in the future we could reach a state where the program could have detected that the framework refactor would have an effect on the autofocus and all other components, instead of it being an implementation detail.
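
As a toy sketch of the graph idea (component names invented), you can already approximate this with a dependency graph and a reachability query: which components can a change to X possibly affect?

    from collections import defaultdict, deque

    depends_on = {
        "autofocus": ["search_field", "framework"],
        "search_field": ["framework"],
        "email_delivery": ["email_service"],
    }

    # Invert the edges: "who is affected if this changes?"
    affected_by = defaultdict(set)
    for component, deps in depends_on.items():
        for dep in deps:
            affected_by[dep].add(component)

    def impact(changed):
        seen, queue = set(), deque([changed])
        while queue:
            for dependant in affected_by[queue.popleft()]:
                if dependant not in seen:
                    seen.add(dependant)
                    queue.append(dependant)
        return seen

    print(impact("framework"))   # {'autofocus', 'search_field'}: the refactor reaches autofocus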


Although many folks have raised the "not Gettier" objection, I would propose that the premise of applying the test to debugging is wrong. Debugging means that your assumptions were already faulty: otherwise the system would not have bugs.

That is, the act of programming means working on an unfinished thought, something that can reflect some beliefs but compromises on being an exactly true expression of them. And so the weight of philosophical reasoning should appear at design time. What occurs after that is a discovery phase in which you learn all the ways in which your reasoning was fallacious - both bugs and feature decisions.


> (Yes, I should have caught the bug in testing, and in fact I did notice some odd behavior. But making software is hard!)

How often have I noticed some "odd behavior" in testing, and later wasn't able to reproduce it? Some nagging feeling that I broke something remained, but since I've deployed a new version (that fixed something else), and I couldn't reproduce the "odd behavior", I tricked myself into ignoring it.

And then I deployed to production, and shit hit the fan.

Now I try to pay more attention to those small, nagging feelings of doubt, but it takes conscious effort.


This reminds me of a recent event at WrestleKingdom 13, a Japanese professional wrestling event where, as you might imagine, pretty much everything is planned and choreographed ahead of time.

In the opening match, Kota Ibushi suffered a concussion. Some doctors came out, carried him out on a stretcher, and took him to the back. As it turns out, this was all planned. The doctors were fake, and this course of events was determined ahead of time. But coincidentally, Ibushi _actually_ suffered a real-life concussion in the match.

Wrestling always has an interesting relationship with reality.


This is precisely why when dealing with bugs I advise juniors to avoid asking the question, "what changed?" Gettier cases are just one problem that you can face when asking that question.

Instead I usually tell them to do it the proper way: start from the bug, and work backwards to understand why that bug is happening. At that point the change that caused the bug becomes obvious, and most of the time we realize that we probably wouldn't have come to that conclusion by looking just at what changed.
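
A toy sketch of that "start from the bug and work backwards" flow. The pipeline here (fetchItems / filterVisible / render) is entirely hypothetical, just enough to show checking intermediate results from the symptom outward:

  // The visible bug: the rendered list is empty. Work backwards from the symptom,
  // checking each intermediate result, instead of diffing recent commits.

  type Item = { id: number; visible: boolean };

  const fetchItems = (): Item[] => [{ id: 1, visible: false }]; // upstream bug: wrong flag
  const filterVisible = (items: Item[]): Item[] => items.filter(i => i.visible);
  const render = (items: Item[]): string => items.map(i => `<li>${i.id}</li>`).join("");

  // Step 0: observe the symptom at the very end of the pipeline.
  console.log("rendered:", render(filterVisible(fetchItems()))); // "" -- empty page

  // Step 1: did render() receive anything to render?
  console.log("after filter:", filterVisible(fetchItems())); // [] -- no, keep going back

  // Step 2: did the filter receive anything?
  console.log("fetched:", fetchItems()); // [{ id: 1, visible: false }] -- found it:
  // the data arrives flagged invisible, so the root cause is upstream of the UI code,
  // which a diff of the UI change alone would never have pointed to.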


This is one of two approaches to the problem of debugging a system. Its advantage is that, assuming the programmer can focus on everything they have seen in the debugger for long enough, they can find where the issue arises.

Its disadvantage is that as systems get larger, it can get exponentially more time-consuming. As programmers we sometimes learn tricks (read: assumptions) to cut down this time, but in the end, the complexity of the system beats all but the very best/most determined.

Consider tracing a bug in this manner through the entire code of something as complicated as an operating system. Most of the code you did not write yourself, have likely never seen before, and have no idea what it does. At each new frame the debugger reaches, you have to spend time understanding what is happening before determining whether this is where the problem occurs, and there are so many frames that it can become difficult to sort through them all.


These are called false positives, and they are a normal occurrence in the life of a software developer. That's why, when testing the root cause, you should test for a false positive as well.
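
A toy sketch of one way to check a suspected root cause in both directions before accepting it. The reproduceBug predicate is hypothetical; it stands for whatever reproduces the failure with the suspect change enabled or disabled:

  function confirmRootCause(
    reproduceBug: (suspectEnabled: boolean) => boolean
  ): "confirmed" | "false positive" | "not the cause" {
    const failsWithSuspect = reproduceBug(true);
    const failsWithoutSuspect = reproduceBug(false);

    if (failsWithSuspect && !failsWithoutSuspect) return "confirmed";
    if (failsWithSuspect && failsWithoutSuspect) return "not the cause"; // bug lives elsewhere
    return "false positive"; // it doesn't even fail with the suspect change in place
  }

  // Example: a bug that fails regardless of the suspect change.
  console.log(confirmRootCause(() => true)); // "not the cause"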


I haven't read the original paper, so maybe the example there is better, but it seems the cow example fails the justified condition. The knowledge is justified if it derives from the evidence, but once we know the evidence is faulty, it can no longer be used for justification, by definition. It seems by extension that any justified true belief can become unjustified by the addition of new information that invalidates the justification on which the alleged knowledge is based.


What you're saying is more or less exactly what the paper was getting at.

It's hard to say based on a short Internet comment, but it sounds like the spot where your disagreement comes from is that you're understanding the word "justified" in a slightly different way from how epistemologists were using it. For example, one of the responses to Gettier's paper was to suggest that maybe the definition of "justified" should be altered to include a provision that invalidating the justification would imply that the belief is false.

So, for example, under that modified definition, the visual evidence couldn't serve as a justification of the belief that there is a cow in the field, because it allows the possibility that it isn't a cow but there still is one in the field. On the other hand, it would work for justifying a belief like, "I can see a cow from here." (Yeah, there's another cow in the field, but it's not the one you think you see.) But, still, that wasn't quite the definition that the mid-century epistemologists who made up Gettier's audience were using.

(ETA: Also, the original paper didn't involve cattle at all. Wikipedia has what looks like a good summary: https://en.wikipedia.org/wiki/Gettier_problem#Gettier's_two_...)


Thanks, I think you're right about how I was understanding the word "justified". I like bringing up philosophical disagreements on HN since it often gets responses like yours :)


Yeah, sorry, though, I realized after I posted that I failed to properly acknowledge that you hit the nail on the head -- I picked that response to Gettier specifically because it matched your criticism.


In the example, the subject does not know that the evidence is faulty.


Sounds like (from the cases presented) an over intellectualisation of “coincidences.”


Another really great work that's related to some of these concepts is "Naming and Necessity" by the philosopher Saul Kripke.

https://en.wikipedia.org/wiki/Naming_and_Necessity

It investigates how we assign names to things and what those names mean and how we can reason about them.


I know mathematical logic but I don't know much about conventional philosophy. The Gettier argument seems to indicate that a system of false propositions used to arrive at a conclusion could be called "justification" in normal epistemology - that seems a bit disturbing. Being "justified" in believing X means just having some reason, maybe utterly bogus but existing and convincing to you, for believing X (and then X becomes knowledge even if X happens to be true for entirely different reasons). How could no one have pointed this out earlier, if that was how the system worked?

Edit: the main practical implication of the argument seems to be that when you have an argument for X and you then get empirical evidence for X, you cannot take that as proof that your argument for X is sound. It might be suggestive of the truth of the argument, but the structure of the argument also has to be taken into account. But that's been a given in scientific and statistical investigations for a long time.


I initially thought this article might be about programming by coincidence [1] or maybe about user experience superstitions [2], but after reading it I wonder if this isn't just about the practice of debugging. Software is complex. When someone begins investigation into a bug, if the fix is not immediately found, it becomes a matter of running down a list of hypothetical causes. As one's experience deepens, both the "running down" and the curating of the "list" become more efficient. IMHO this article is merely about a developer who was unaware of certain hypothetical causes.

1: https://pragprog.com/the-pragmatic-programmer/extracts/coinc...

2: https://www.nngroup.com/articles/quality-assurance-ux/


I actually disagree with the Gettier thought experiment and don’t believe it demonstrates anything interesting.

When you see the cow (but it’s really a convincing model), then in your mind there should be some probability assigned to a variety of outcomes. The main one would be a cow, another might be that you’re hallucinating, and so on down the list, and somewhere on that list the outcome of a cow-like model would appear.

From that point you can go in at least two directions. One would be something like a Turing test of the fake cow... beyond a certain point it’s a matter of semantics as to whether it’s a real cow or not. Or you could say that your “justified true belief” had to apply to the total state of the field. If you believed there was both a cow model and a cow behind it, that would be justified, but the existence of the cow behind the model would not justify the incorrect belief that the model was a real cow, in the sense of not admitting uncertainty over the things you see.
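
A toy Bayesian sketch of that "probability over outcomes" framing; the hypotheses, priors, and likelihoods below are made up purely for illustration:

  // Three made-up hypotheses about the field, each with a prior and a likelihood of
  // producing the observation "that shape looks like a cow".
  const hypotheses = {
    realCow:        { prior: 0.90, pLooksLikeCow: 0.95 },
    cowShapedModel: { prior: 0.01, pLooksLikeCow: 0.90 },
    hallucination:  { prior: 0.09, pLooksLikeCow: 0.50 },
  };

  // Total probability of the observation across all hypotheses.
  const evidence = Object.values(hypotheses)
    .reduce((sum, h) => sum + h.prior * h.pLooksLikeCow, 0);

  // Posterior for each hypothesis given the observation.
  for (const [name, h] of Object.entries(hypotheses)) {
    const posterior = (h.prior * h.pLooksLikeCow) / evidence;
    console.log(name, posterior.toFixed(3));
  }
  // realCow dominates (~0.94) but never reaches 1, which is the point above:
  // the belief stays probabilistic rather than collapsing into certainty.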


Your model of thinking is something like an idealized Bayesian one, not a human one. If your mind decides that something is true with sufficient probability, it then fails to see alternative explanations. That "something" becomes reality to you, not just a belief about reality. But that's not all. Your mind has some ideas associated with the cow you've seen, and those ideas also become beliefs of yours. Implicit beliefs, which you are probably not aware of; you would only notice these implicit beliefs if they began to contradict the evidence in a sharp way. For example, you might implicitly decide that this is a soft-tempered cow that would like to lick you in the face if you came near. That wasn't important when you saw the model of a cow, so you never became consciously aware of your idea of the cow's temper. But it was planted in your idea of reality. Maybe you would have an intention to come near the cow to be licked in the face, and you might be unaware of this intention. The human mind can easily manage such tricks.

And it leads to a funny thing. You saw the model of a cow, and it made you believe that there is a cow in the field and that you saw a cow. Then you might find a heap of poo, and you will strengthen your beliefs further. You might find a lot of evidence, and it will all be explained under the assumption that you saw a cow. And this evidence will strengthen your belief that you will be licked in the face when you come near the cow.

But you didn't see the cow that made this heap of poo. The real cow is pitch black, with horns of gigantic size, and they are really sharp. The real cow has red glowing eyes and it is going to kill you. But before you see the real cow itself, all the evidence pointing to there being a cow will also reinforce the idea of a soft-tempered black-and-white cow. The longer you manage to keep yourself oblivious to the real cow's traits, the more surprised you will be when you find the real cow.


> From that point you can go in at least two directions. One would be something like a Turing test of the fake cow... beyond a certain point it’s a matter of semantics as to whether it’s a real cow or not. Or you could say that your “justified true belief” had to apply to the total state of the field. If you believed there was both a cow model and a cow behind it, that would be justified, but the existence of the cow behind the model would not justify the incorrect belief that the model was a real cow, in the sense of not admitting uncertainty over the things you see.

You're replacing the model it was criticizing with a different model and then saying that it doesn't say anything interesting about your model, so it's not interesting. It's not an argument that knowledge isn't possible, it was an argument against the traditional definition of knowledge as it was almost universally understood at the time.


I’m saying the model it would like to criticize is not an interesting or worthwhile model to talk much about.


It's a bit in the weeds, but I think the author has the wrong JTB. The author deployed multiple changes, and just incorrectly assumed that their PR was the one that introduced the bug. The author had incorrect knowledge about what was being deployed. If something in their deploy process had indicated that in fact only their code was being deployed, then perhaps it's a JTB? But otherwise I think it's just a bit off.

However, the gist of it is correct. We often update dependencies or deploy more than we think we do. We have an "us"-focused view of our code, and keeping Gettier cases in mind helps us break out of that.

Just recently I kept thinking that I didn't know how to write a Jest test, when in fact I was using a version of Jest which didn't support a certain method. It's easy to think it's our fault, when in fact there can be deeper reasons.
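
A minimal sketch of the kind of guard this suggests, assuming nothing about Jest's real API; the object and method names below are hypothetical stand-ins:

  // When a library call "doesn't work", check whether the installed version even
  // exposes it before assuming your own code is at fault.
  function requireFeature(lib: Record<string, unknown>, method: string): void {
    if (typeof lib[method] !== "function") {
      throw new Error(
        `Installed version exposes no ${method}(); suspect the dependency version, not your test.`
      );
    }
  }

  // Pretend this is the older version actually installed in the project.
  const installedTestRunner: Record<string, unknown> = { runAllTimers: () => {} };
  requireFeature(installedTestRunner, "advanceTimersByTime"); // throws: the method simply isn't there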


Interesting to compare this with a simpler term, "red herring": something that throws you off course.


These cases seem to come up often (weekly?) in software development. I wonder how often they come up in other professions.

One common case is when you change or delete a comment, and suddenly something breaks. It couldn't have been the comment... but it was working fine before my edit... wasn't it?


And then as you look closer, you wonder how it ever worked. Hang on, did it ever work?


I lost quite a few hours trying to restore a feature after I made a commit, only to find out that it was broken for weeks already. Or worse, was not even implemented yet.

It's amazing how that just keeps on happening.


This Gettier concept is new to me, but what's certainly not new is trying to wrap my head around errors in code. I'm a relatively new developer, and I've many times asked more seasoned coworkers what they do with all the thinking they perform, and possibly the code they write, during a long error-invalidation process. Say you try fixes b, c, d, e, ..., z (some of which might be objective improvements, now more robust code), and then you finally fix the bug by trying solution A. What do you now do with the code for attempts b through y and, even less clear, all the thoughts that went into those efforts? Just forehead slap and move on?


Considering the fact that so much of programming is error finding, it's useful (and probably necessary) to have a solid heuristic for quickly determining causal relationships.


Or the bugs that disappear when you're debugging and then come back when you're not.


Also known as Heisenbug[0].

[0]: https://en.m.wikipedia.org/wiki/Heisenbug
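
A toy TypeScript illustration of one flavor of Heisenbug, where the act of inspecting a value changes it (here via a getter with a side effect):

  // The "debug output" reads the value, and reading the value mutates state,
  // so observing the bug changes the bug.
  let reads = 0;
  const cache = {
    get value(): number {
      reads += 1; // side effect: every inspection bumps the counter
      return reads;
    },
  };

  function buggyCheck(): boolean {
    return cache.value === cache.value; // false: each read returns a new number
  }

  console.log(buggyCheck());           // false in "production"
  console.log("debug:", cache.value);  // logging it shifts every later read,
  // so the numbers seen while investigating never match the run being chased.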


I have had bugs that were fixed by recompiling the entire application.


I was surprised when I first learned that this was a novel philosophical concept, because I recall reading a Renaissance-age Italian story (maybe from the Decameron?) that talks about this:

Basically, a man sees his wife walking in the town square with the hat of a friend of theirs, and this leads him to believe that she is cheating with that friend. It turns out that the friend had just lent the hat to her in the market, to help her carry some eggs home, and she was going to return it. So she goes and returns it, the husband follows her, and it turns out she actually is cheating on the husband with the friend, but the hat had nothing to do with it.


This hits close to home for me as a possible reason why I could never get good at solving geometry problems, solid geometry especially. Most problems would be trivial if one assumed specific preconditions, but my mind was always wandering around, looking at all potential sides of a problem, and I could never solve anything. To quote the author from my particular pov:

    a problem has multiple potential causes, and you have every reason to believe in one of them, even though another is secretly responsible.
Reminds me that I need to pick up a book and re-learn the damn thing. It really saddens me that I suck at geometry.


I think you should try looking at geometry more as a creation initially than as a problem. In this manner you can see assumptions as just building up simpler worlds with those constraints. I have found this view helps when teaching geometry, as it empowers the mind.


This is surprising.

Russian geometry problems, as I remember them, required creative thinking, which seems to be exactly the opposite of what you are describing.


I don't understand how anyone could have ever thought "justified true belief" was a good definition of knowledge, since the question of "what constitutes justified belief?" is muddier than the question "what constitutes knowledge?" in the first place. Further, even without considering such absurd situations as a real cow hiding behind a fake cow, if you see something that, based on its appearance, almost certainly is a cow, the near certainty doesn't change into absolute certainty just because what appears to be a cow is in fact a cow.


As usual, science has a more practical take on this. Occam's Razor says that if you see a cow shape and hear a cow sound coming from that direction, the most likely explanation is that you're seeing a cow. It retains the possibility that this isn't true; the belief can be falsified in several ways: by examining the cow more closely, by the real cow walking out from behind the fake one, etc.

I think it follows that we never absolutely "know" something. We asymptotically approach knowledge. The scientific method is a way of approaching truth.


I don't think the example of checking in someone else's bug is a very good one. If I checked in something that had code from other people in it, I would see that very easily, and if I did not think there was any way my code could have affected the autofocus, then I would assume that the code I checked in which was not mine broke it.

Matching it to the example of the papier-mâché cow doesn't really work, because the papier-mâché cow hides the real cow, whereas it is very easy to see that your code was checked in along with other people's code.


Somehow, the examples all sound more like a quality and/or testing issue. The workflow seems prone to people rebasing onto a buggy state, and at that point, in a non-trivial system, all bets are basically off. Basically, I need to be able to have a "JTB" that a pull request has undergone enough review and testing before being merged to master that it doesn't introduce such glaring regressions as those cited in the examples. If that cannot be ensured, I'm setting myself up for one goose chase after another...


I learned this word, and I'm scared and amazed by it every time; I think it fits here. There are two worlds: the believed and the true. And when they merge, it's called "peripeteia".


I read the Wikipedia entry. It seems that it could be better explained as a "plot twist": when your confidence in an outcome is overturned, for better or worse.


Peripeteia in modern Greek means simply "adventure".


It might be true that even though I have a JTB about something I might be wrong.

Nevertheless, I think it would be reasonable to act upon a JTB as if it were true. For all intents and purposes it is true, to the best of my knowledge. This does not mean I shut out new information that might make me change my JTB.

And if having a JTB is not knowledge, what is? What can we know? We can always imagine a world where even our firmest JTB might be false. If a JTB is not a good case for using the word "knowledge", I don't know what is.


Always bemused and not a little confused that anyone (most notably, Americans) still spends so much time and energy on analytical philosophy [and its Quine-y assertions about semantics] so many decades after its sort of formal semantics collapsed as a useful way of analyzing natural language.

Linguistics (not to mention comp lit or continental philosophy) departments have an order of magnitude more to say about meaning in natural language, and have had for... decades and decades.

I just don't get it.


I don't like this approach. If everyone were to approach a problem with this mentality, it would conjure doubt in the entire process. Nothing would ever get done. Question: "how do we know if anything exists?" <-- (an extreme example). Answer: "well, we don't, but it doesn't help us with the realities of the problem at hand." I think this idea introduces confusion and does more harm than good.


This reminds me of how some studies turn out to be true, but for reasons other than the ones the author of the study presents. Instead, the truth is either a coincidence or a correlation, not causation. These people go on to write books, entire industries are formed around these hypothetical truths, and it takes years to undo the damage of the original study (e.g. the marshmallow test, the jam experiment).


So just to be clear, a 'gettier' is when something you Believe and have Justification for turns out to be false?

  Actually True
  J B
  1 1 knowing
  1 0 denying
  0 1 lucky hunch
  0 0 sceptic, default position

  Actually False
  J B
  1 1 mis-justification: un/incorrectly-verified 
  1 0 lucky denial of mis-justified
  0 1 superstition
  0 0 sceptic, default position


No, it's when you get the right answer for the wrong reason, like a math problem solution with two mistakes that cancel out in the end result but is still incorrect logic.


Note that most Gettier cases are plays on our intuitions:

What feels like a pointer is actually a category. That is, it feels like it points to one, but it points to many. Like both examples given here: https://en.wikipedia.org/wiki/Gettier_problem .


For the C heads, you mean what feels like a pointer is actually an array? :-)


I had a philosophy lecture last year that included a lot of epistemology (Theory of Knowledge). We talked a fair bit about justified true beliefs, but Gettier only came up in a side note - the professor being more interested in skepticism and the responses thereto. Never would have dreamt of applying that lecture to programming, though.


The article was great. I also find that philosophy and software correlate occasionally. The author's response to the Gettier cases is expected behavior. In fact, it is a blessing: 99% of the time there is a cow; we have experienced that before, and that's the reason for our confidence. We easily solve our problems with this approach.


The first example is exactly the reason why I hate rebases and prefer merges and a complicated history instead. It may be more complicated, but it doesn't sweep problems under the rug.


That's why in some quarters it's called git debase.


My own opinion is that knowledge is always relative to a perspective. It is only valid in a context.

Nothing is absolute.

Example:

1=1 is something I know is true because I know the rules of mathematics. There is no absolute truth to that.


10+3 = 13 when talking about coconuts, but 10+3 = 1 when talking about time[1]

[1] In America anyways.


As software abstractions often put up facades in front of similar abstractions, this is bound to happen to us software developers.


Didn't Descartes cover this with "I think therefore I am."? Everything else is varying degrees of speculation.


My reaction to Gettier cases as a philosophy minor was that J is the only philosophically interesting part of JTB.


“Justified true belief”? All our knowledge is subjective by definition. We don’t even know whether we’re living in a simulation.

Personally I doubt that we’re living in a simulation. But the fact that we could be demonstrates that we don’t have objective knowledge. No cows needed in the field to explain it.

Philosophy might better be called “the history of flawed thinking”


> We don’t even know whether we’re living in a simulation.

I think it's actually worse than this. This scenario suggests that our minds are capable of infallible reasoning yet we may not be able to trust our observations. Really, I don't think we can even trust our own mind, and therefore JTB is undefinable.


This strikes me somewhat similar to saying that we should not use Newtonian mechanics, because we know better that at subatomic level Newton's laws do not apply.

But Newtonian mechanics are still extremely valuable, worth discovering and understanding, and anyone catching a baseball is employing them quite proactively. JTBs are the Newtonian mechanics of epistemology. You can pick them apart at a deeper level and show how they don't really exist, but they are still incredibly useful.


I'm not saying that we should entirely disregard epistemology. After all, belief is necessary to make decisions, and thus to thrive. Although I think people should be mindful that knowledge may not be possible in actuality, just as modern physicists are mindful of relativity and quantum mechanics.


Also, the specific example with a cow is silly. The belief that there is a cow in the field is not really justified. It just seems justified to an observer who can't tell the difference between a model and a real cow.


By what definition? The "notion" of "justified true belief" WAS the definition of knowledge prior to these complications. Just saying "everything is subjective" is more tedious than making meaningful distinctions, in my subjective opinion


I think GP means that the justification of JTB by definition is recursive and thus has no real foundation.

To call something 'true' is to know it is 'true'.

If we are bothering to debate whether knowledge is possible, JTB is unconvincing.


So just more simply “red herring”


Isn’t this just a confounding factor?


Isn't this the same as coincidence?


To me this seems unhelpful. I'd say there is no "knowledge"; there's only belief. And if you defined knowledge as "justified true belief" then you couldn't apply the definition in practice in the real world because you don't know when something is true. But that's philosophy for you: fun (for some people) but not useful.


In an epistemology class once I said to the lecturer "I don't know anything." He said "Don't you know your name?"


Not good enough; point to that which knows. Where is it? Who knows it? If there is a knower of that, where is it? Keep going with this investigation and realize there is no knower, only knowing.

Eastern philosophy nailed this thousands of years ago, and we westerners are to this day totally in the dark. We actively treat the I as a concrete object that really exists as an entity. It does not hold up to closer examination and evaporates entirely the more closely it is questioned.


I know my name just like I make a cup of tea. It's not you who makes my cup of tea, right, so who is it? Well, me—the referent of my name, this person right here.

That's just part of how our language works. It doesn't seem to matter whether I am a "concrete object" or some swirly pattern of becoming or indeed even an illusion! The English word "I" does not refer to an eternal soul or "atman."

If you stare long enough at an ice cream you'll have the marvelous insight that in reality there is no concrete ice cream entity, not least because it melts. Yet people don't go around saying "wake up, there are no ice creams!" Why is that?


That's really not what epistemology is about at all. (Yes, I'm a big Ramana Maharshi fan etc, I know what you're talking about)


Well, given that you don't exist, mind if I eat your sandwich?


Isn't that connaître rather than savoir?

So much philosophy is playing with words.


We do know when some things are true, and when observing the world we can talk about truth under certain assumptions. As in “it’s True the sun didn’t explode yesterday, assuming we aren’t all living in a simulated reality run by aliens in a different solar system billions of years from now the exact day after the sun exploded, or similar possible scenarios”

And while everyone loves to run in circles around the argument “but how can you know with certainty?”, the fact is that I am as certain that this assumption holds as I am that it provides no value at all to continually question whether reality is really real. You’d have to take that as an assumption to ever have any kind of value-adding discussion.

The people who insist we can’t know if anything is definitely true must agree that they can’t know if that assertion is definitely true, so they sort of kill their own argument axiomatically.


You assert that there is no knowledge under the expectation that it is true, then you go on and justify it. So it seems you know that there is no knowledge, a contradiction.

Why deny the word has meaning, just because you can't distill it down to a concise explanation? The meaning of a word can be arbitrarily complex. Knowledge can exist, even if you can't define it, because meaning is not determined by definition. Definitions are simply a mechanism for coordinating understanding, not for demonstrating its existence.

Besides, any argument against the existence of knowledge could be used against belief. Play "taboo" with the subject and don't confine yourself to using ancient terminology to describe the world and these pointless linguistic problems melt away.


You may be right about knowledge (relativistic quantum information theory leans in your favor; hi Ron Garrett!), but your deprecatory definition of knowledge and philosophy covers most humans who have ever existed, not just self-titled philosophers.

Anyway your definition is wanting. A religious scientist has two kinds of clearly different beliefs: faith and knowledge. A mathematician has the same two kinds, under different names: axioms and deductions.

Saying that axioms are the same as deductions is a radical claim.


Sure, that is widely useful.

You can apply that to the real world if you then add social processes and see "knowledge" or "truth" as shared belief among a chosen set of people (or all of humanity). Then you can go down the whole rabbit hole of belief aggregation and voting theory.



