Epistemic legibility: being easy to argue with is a virtue (acesounderglass.com)
242 points by shrikant on March 25, 2022 | hide | past | favorite | 62 comments



Also reminds me of "computational kindness" where, for example, you give people 2 specific date-times to have lunch, rather than asking them to look through their entire calendar and come up with something (it's easier to check two time slots than to think about all your obligations).

Comes from Brian Christian and Tom Griffiths' "Algorithms to Live By: The Computer Science of Human Decisions"

https://boingboing.net/2016/06/17/algorithms-to-live-by-what...


ATLB is such a great book. I need to reread it now that I've learned how to take better notes.


What’d you learn for taking better notes? Always curious how to improve note-taking.


Exactly. Have been hoping to find more books by the same author, but apparently it’s the only one he has written?


One of the authors has another couple of books that sound similarly interesting.

https://en.wikipedia.org/wiki/Brian_Christian


Good point. Thank you.


I didn't see it mentioned, but I think "epistemic literacy" is the other side of the coin. I see examples of people demanding citations for reasoning that is part of a work, of people just not understanding something, or of only really being able to blindly follow based on provenance or authority without trying to reconcile back to what they know.

If there is going to be a standard for legibility, there should be one for literacy too. It's especially important when learning from a debate, where readers and writers (or sources and sinks) switch roles, to understand how different positions are engaging with each other's arguments, not just how they are making them.


> I see examples of people demanding citations for reasoning that is part of a work, of people just not understanding something, or of only really being able to blindly follow based on provenance or authority without trying to reconcile back to what they know.

That’s a tricky one for me because I see a lot of utility in asking for sources:

a) Curiosity. Maybe I really want to know how this person came to believe what they did even if I’m pretty sure they’re wrong.

b) Challenge. A version of the Socratic Method, helping them learn through questioning their assumptions.

c) Learning. A genuine quest for knowledge that might yield more varied or surprising results than Google.

d) Depth. Possibly helping others in a public thread (or the author) realize that it might be an epistemically illegible argument requiring greater scrutiny, discussion or evidence.

I get that asking for sources can also be lazy or ill intentioned, but for the most part I don’t mind it.


I think you misunderstand OP. The purpose of citations is to prove facts. The argument made based on those facts does not need a citation.

Much of the time it isn't fair to ask for citations even for specific facts, but that's a separate problem.


> I think you misunderstand OP. The purpose of citations is to prove facts.

Those cases are covered by b)

But the person you're replying to pointed out other, non-adversarial reasons to ask for citations (which depending on the tone of the question, may be misinterpreted)


This is actually a huge pet peeve of mine: when people ask for a citation for reasoning, and then smugly declare victory when no such citation is provided, as if you even could "cite" analysis.


It's a bit more than a pet peeve for me. It amounts to shirking any ability to perform or judge analysis, instead granting total deference to "authorities" that must be cited to provide credibility.


That's especially fair, as there's a good-faith way of engaging with such things that they've avoided: "Could you expand on this please?"


Even expert analysis isn't evidence. At its best, an expert can give you a sense of the range of possibilities, which might include things you never thought of. But actually knowing what's happening in a specific case requires evidence.

It seems like being clear about what you're doing (are you just discussing possibilities or trying to claim something specific) would go a long way in avoiding such misunderstandings.

(And I of course don't know which cases you're talking about, so I'm just talking about possibilities.)


Source?


Early in college I got the sense from some professors (likely English professors) that I, a student, couldn't be trusted to just state something in a paper. Instead, I was supposed to find an instance of someone else writing that same idea somewhere and quote and cite them.

I was pretty relieved when I got to Philosophy, where no one gave a damn about anything but the argument.


I can sort-of see how that might come about, if the purpose of the philosophy class was to practice making arguments, using philosophy merely as something to argue about. But one thing I have learned about philosophy (and most other things, for that matter) is that if you don't find out whether your idea has been floated before (spoiler: it has, at least if it has any merit), and, most importantly, what responses have already been made, the class will spend all its time splashing around in the well-stirred shallows.

At first, it seems odd to me that your English professors acted in the manner you describe, but I guess most of them have chosen to ride the criticism wagon, rather than the creative one.


They were teaching you a scientific writing mindset. Philosophy is different in this regard, as it is the science of rational thought.


I think you almost have to go a step further than that: you have to teach people about debate, how to debate, and decorum. Increasingly, folks seem to rely on moral argumentation when debating, despite our world becoming ever less a matter of clear right versus wrong. Our world mostly reflects shades of grey, but our argumentation style doesn't reflect that change.


2 + 2 = 4 [citation needed]



Superbly written and argued. Assessing an argument's Epistemic Legibility is a new tool in my toolbox for identifying weak arguments.

> I expect having a handle with which to say “no I don’t have a concise argument about why this work is wrong, and that’s a fact about the work” to be very useful.

This was my favorite quote from the article


I’ve seen “Source?” used as a concise critique against what I consider to be epistemically illegible arguments.


Yeah, but P(Valid Critique | Source) is low. For instance, behold as I use it illegitimately against you: "Would you mind linking to where you've seen this use of 'Source?'?"

Besides, legibility is about the argument - it doesn't make sense to talk about a "Source". The source is me. I am the one making the argument.

Statements of fact aren't arguments. "There is a giant octopus besieging NYC right now". There's no argument there. It is merely a statement.

An example legible argument is:

- There is likely a lot of squid ink covering NYC

- This is likely because there is a positively colossal squid creature floating above NYC at the moment

- Squid creatures (colossal or otherwise) discharge ink when attacked

- We are attacking this humongous monstrosity

In fact, "Source?" works better against a legible argument since you can identify which part needs the extra stuff. This argument is fairly legible and is practically just syllogisms. It's merely the case that a premise is wrong - something which you can conclude by asking for the source for a crucial premise.

With an illegible argument, you could debunk the source of one of the claims and still be unable to trace its impact on the conclusion.

For instance:

It's probably raining in NYC right now because it's kind of the conditions that would cause rain if you know what I mean. But there's also this squid creature in the air. It rains when there's squids. Here, read this article about colossal squid ink rain. We all know that squid rain is a thing.


I believe that this is a key aspect of data-driven leadership, and of the Friday meeting that Google's founders started and many other companies imitated: make it easier for employees to argue against decisions. No organisation is perfect, or even good, so any effort to make complaints louder, clearer, and more structured is incredibly valuable. One grotesque example of where that chain of information is broken is how customer service teams are isolated.


I find some of this author's arguments... epistemically illegible.

For instance, she criticizes James Scott's "Against the Grain" for using a different definition of 'state' than the first one she found on wikipedia, but it is unclear why this is a bad thing. In fact, Scott himself notes that there are many competing definitions of state that are all imperfect in his context; and that ultimately the choice of one or another is rather arbitrary. It is so that he can develop this context that Scott delays his formal definition. In searching for a legible, bold-face, definition of 'state' on page 1, the author has ignored the context surrounding the concept of state as well as its place in the arguments of the book as a whole.

The author suggests she borrowed Scott's sense of legibility in her own term to suggest that legibility has some benefits: that she can easily determine the effects of the failure of a single leg of an argument on the whole, and she can easily 'spot check' to determine if some fact is true or false. But in her application of this concept, especially the section on "Against the Grain," she demonstrates primarily that her approach is causing her to miss nuance, trip over ambiguity, and ignore context. Ironically, this is the exact critique Scott presents of legibility in his other work.


(the author is a woman: https://acesounderglass.com/about-me-2/ )


Oops. Thank you. I need to be more careful.


Any time I hear "epistemic" I am reminded of the stellar paper "In Praise of Epistemic Irresponsibility: How Lazy and Ignorant Can You Be?" by Michael Bishop.

It argues that in numerous settings, even crude, simple linear mathematical models outperform experts in a variety of tasks.

https://www.jstor.org/stable/20118248


Couldn't find it anywhere else, but here's a link: https://philpapers.org/rec/BISIPO. Posted earlier by you.


> Finally I worked my way up to “first walled, territorial state”. Not thinking super hard, I googled “first walled city”, and got a date 3000 years before the one Scott cites. Not a big deal, he specified state, not walls. What can I google to find that out? “Earliest state”, obviously, and the first google hit does match Scott’s timing, but… what made something a state, and how can we assess those traits from archeological records? I checked, and nowhere in the preface, introduction, or first three chapters was “state” defined. No work can define every term it uses, but this is a pretty important one for a book whose full title is Against the Grain: A Deep History of the Earliest States.

Isn't this kind of spiral going to appear when you analyze almost any claim? It's where skepticism comes from. When you get down to it, it's very hard to say a claim is truly "epistemically legible" outside math and the hard sciences (and even there, there are dissenters).

And if someone does manage to write a history that's relatively epistemically legible it's going to be boring as hell. All histories contain narratives and all historical narratives are seriously flawed. What saves history (besides the entertainment value) is that some narratives are better than others. We can move closer to the truth through discovery and analysis of information. At the end, though, we're still left with seriously flawed narratives, just less flawed than the previous ones if history as a discipline is functioning.


Consider the difference between modern Egyptology wrt the history of the pyramids and the "history" of Scientology, Xenu, and alien spirit volcanoes.

The first is far more epistemically legible; the second is pure ideological fantasy (or sci-fi, considering the author).

There's a much smaller gulf between Principia Mathematica and the history of Egypt - differences in recorded historical facts allow for differing interpretation but the possibilities are constrained and have a huge amount of overlap with hard science - we can be almost as sure that Tutankhamen was a real king as that Abraham Lincoln was president of the US, and the certainty of both having been real probably exceeds five 9's. The probability of Scientology being real, as she is wrote, is probably below 0.1e-100.

From a Bayesian perspective, you can be 100% certain of things like your own subjective experience, and from there you can presume certainty for things whose probability or improbability exceed some threshold - you might never know if Tutankhamen was a nice guy or generally a prick, but his existence is founded within enough other certainties that you can (probably) assign that fact the moniker of "100% real" with great success. The same with "Xenu is cult nonsense."
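The odds-form Bayesian updating described here can be sketched concretely. A minimal illustration (the likelihood ratios and the "three lines of evidence" are entirely hypothetical numbers, not drawn from any real historical dataset):

```python
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Update P(claim) given one piece of evidence, where likelihood_ratio
    is P(evidence | claim) / P(evidence | not claim)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical example: start agnostic (50%) about "this king really existed",
# then fold in three independent lines of evidence, e.g. inscriptions,
# a tomb, and contemporary records (likelihood ratios are made up).
p = 0.5
for lr in (10.0, 8.0, 25.0):
    p = bayes_update(p, lr)

print(round(p, 6))  # prints 0.9995
```

Even modest evidence compounds multiplicatively in odds space, which is why "his existence is founded within enough other certainties" can push a claim close to the "100% real" threshold the comment describes.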


If we're talking about a list of facts concerning Egypt then I'd agree. But history isn't a list of facts. If it was it would be unreadable. All histories have a narrative (and usually other things, like a political philosophy and a sense of morality). Against the Grain is an obvious example of that. The author here is trying to reduce Against the Grain to a list of verifiable facts and finds that this is very hard.

Religions are also not lists of facts. Most sophisticated believers (and I assume this applies to Scientologists) have a partly metaphorical understanding of their religion.

If we take some extremely far-fetched history, let's say David Irving's Holocaust denial, I would say that (unfortunately) the gap between Against the Grain and Irving is much smaller than the gap between either history and Principia.


I think we need to define history. You seem to be operating from the definition of history as a narrative anchored to a list of facts, but I consider history to be the collection of facts themselves, from which narratives can be derived. If the narrative is bounded by the limits of the facts (dates, correlated records, carbon dating, artifacts, etc.), then the resulting narrative lacks the potential for inaccuracies. If history is instead a narrative into which facts are selectively slotted, the uncertainty rapidly reaches a point at which interpretation and belief factor into the history as much as or more than dry facts.

I think the trick is requiring that the fundamental narrative of a given history be bounded by science (itself a narrative) in the sense that things are left unknown and unknowable where evidence is lacking or insufficient.

Egyptology is a non-scientific, dogmatic narrative into which facts and conclusions are selectively slotted. The "prestige" resulting from British imperialists fetishizing the subject created a mythos of Egypt as some primal, fundamental expression of the beginning of civilization. This mythos is tended and cultivated in modern times by Egyptians and their cultural elites, with their interpretation of history resulting in notoriously poor handling of any science-based revision.

So there is dogmatic and narrative history, and scientific history. Things like the archeology of Clovis culture result in dry, but scientific history. Few people or institutions will encounter perverse incentives in that field. Egyptian heritage and tourism necessitate cooperation with institutions and corrupting outside influences that derogate the resulting narrative.

In the case of religion, there's a great deal of similarity and overlap with histories of the dogmatic sort and both suffer from the issue of the narrative selecting for ideologically convenient facts, instead of a self limiting construction founded in scientific principle.

The worst part is that science itself can be considered an ideology as dogmatic and fraught with perverse incentives as any other story we attach to dry lists of facts. The difference is in the repeatability of direct consequences - we can trust carbon dating, but educated guessing about culture based on parchment fragments that result in a story about thousands of people at a particular point in time has to be weighted similarly to religion or mythology in terms of factuality.

If you choose to believe in science, then everything becomes lists of facts, whether rigorously repeatable tests like carbon dating, or assertions about the beliefs and actions of people thousands of years ago - or about modern cohabitants of this existence. A Bayesian perspective on the deconstructed narratives as lists of facts gives us a methodology to navigate narratives rationally.

The difficulty is in assigning probabilities to the authenticity of a given fact, so - Lincoln 100% existed, Xenu 100% did not.

The modern narrative of Egyptian history (my own understanding of it) is probably 85% scientific or accurate, and the questionable bits are the ones that directly benefit the individuals or institutions that stand to benefit from corrupted or perverse incentives.

Holocaust denial, like flat earth and Bigfoot, is inherently anti-Bayesian and requires wilfully abandoning easily verifiable dry facts and rigorously scientific information that is trivially accessible. To believe in such stuff you have to embrace an extreme of skepticism that makes navigating reality impractical at best, either believing everything from a particular authority or rejecting everything outside one's own direct experience.

It's an interesting problem, and I'd like to see a Bayesian history of humanity some day, with a rigorously bounded and annotated set of probabilities attached to every part of the narrative portion of historical stories. Maybe GPT-3 or future transformer models could eke out such a system.


> I think we need to define history - you seem to be operating from the definition as a narrative anchored to a list of facts, but I consider history to be the collection of facts of themselves, from which narratives can be derived.

But historians don't write down lists of facts. I think you'll find, as the author does, that if you try to extract from a work of history a list of facts, you're going to end up with things that are highly debatable. Some of these things will be moral judgements, some will be assertions about human psychology and motivations, some will be implicit metaphysical beliefs. It might be worthwhile to write all this down but historians won't do it voluntarily because many of these things are embarrassing to admit.

I would say that sophisticated readers of history already have a somewhat Bayesian perspective. They implicitly attach probabilities to historical narratives and update those probabilities as new information arises. It's not rigorous but it seems crazy to me to expect that it could be made rigorous.

I don't agree with your dismissal of old Egyptology. I think you're buying into an Edward Said/postcolonialism narrative that is terribly flawed. I also don't think you can easily label historical narratives as "anti-Bayesian" or obviously wrong. I think Holocaust denial is obviously wrong, but it's not on the same level as flat-earth.


...Falsifiable? The author uses a lot of words and seems to appreciate brevity, but isn't the entire article summarized by this one word?


Falsifiability is about grounding out in an experiment that can be performed. Epistemic legibility is a generalization of the concept, where the arguer highlights the cruxes of potential disagreement, and points to whatever evidence they feel is convincing--whether that's experiments you can replicate, official records, expert opinions, or their own introspection.


Ah, refutable.


"Refutability" is a more general concept than epistemic legibility, in my opinion. One can (at least attempt to) refute abject nonsense that doesn't actually mean anything by saying "that's abject nonsense, that doesn't actually mean anything."

Other arguments might depend on facts about the world, but attempt to hide those dependencies. A Gish Gallop is one such form of argument. These can be refuted, but the interlocutor has to do a lot of work to highlight the factual errors, and the reasons that the argument depends on them.

An epistemically legible argument does that part of the work for the other party, so the only thing they must do is show which premises are wrong, or which logical dependencies are invalid.

The pre-existing term I've heard that comes closest is the term with the opposite meaning, "Logical Rudeness." https://dash.harvard.edu/bitstream/handle/1/4317660/suber_ru...


This sort of arrogant, unapproachable response is exactly what the article is addressing. You are giving no space, no consideration at all, of someone disagreeing with you.


What exactly do you think you are doing right now then?


I took it as more in the realm of information theory and less about hard science or even content.

Less about "is it falsifiable or correct", i.e. "does this compile".

More "can I easily grok what this function is trying to do even if it isn't compiling".

I see this term as an attempt to define a precursor to "Cogent" or maybe it's an attempt to give "Cogent" a more formalized definition.


I got the additional impression that there are bounds to how freely the reader can interpret the narrative. Thus, freed from endless mental contortion to make the argument work, the reader can assume that they are not too ignorant to understand the wit of the author but are actually able to judge the author as lacking. The author has not made themselves beyond reproach by presenting an irrefutable argument.


Ah, yeah, that makes sense.

I hadn't thought in terms of hostile argument... you see this in verbal argument sometimes, where a person rambles or says vaguely contradicting things to prevent themselves from alienating their fan base.

Ambiguity lets people who endorse you contort their interpretation into something they like.

So I think their advocating for specificity makes sense; it helps ensure that the argument is about some _thing_ and that you're not engaging with some unknown meta game that lives outside the argument itself.


An excellent piece of terminology. Closely related to the concept of "not even wrong", I think?


I had to google that phrase and found that it’s slang for unfalsifiable. Based on that, I would agree with you.

In addition, it sounds like Epistemic Legibility differs [from Falsifiability] by not necessarily requiring an empirical test to refute one or more claims.


That description ("not even wrong") is some of my favorite academic side-eye.


I thought the concept of Epistemic Spot Checks was a great idea, so I was hoping that there might be some on the blog checking materials I had previously read so I could compare my intuitions with the author's hobby, but I didn't find anything.

The About Me says the author gave up the project after a year or so, but I'm still curious about updating my own epistemological toolbox. Does anyone have any suggestions?


Maybe check out other posts on the blog to see how they did it? They've done epistemic spot checks on book chapters. It sounds like a lot of work!

Looks like these blog posts are tagged:

https://acesounderglass.com/tag/epistemicspotcheck/


I did an "epistemic spot check" of the section on Against the Grain, which caught my eye because I just finished reading the chapter on state formation in The Dawn of Everything. Result: It was indeed legible but was oddly false.

I could not find the "well accepted definition" of the state they sourced from Wikipedia [1]. They give the impression that there is an overwhelming consensus as to what the definition is, whereas the Wikipedia page actually states throughout that "there is no undisputed definition of a state" and "there is no academic consensus on the definition of the state", and provides multiple different definitions. I have not read Against the Grain, but it seems entirely reasonable for the author to offer their own definition, though it sounds like they could have been more forthcoming with it.

[1] https://en.m.wikipedia.org/wiki/State_(polity)


I think this is a valuable perspective, but also is a bit narrowly focused.

It's not about the single argument/book/article or whatever; you need an ecosystem of tools, institutions, and data sources that work towards truth. I think this is probably what they are building towards, but I felt it wasn't given enough emphasis.

One of the examples is acoup, citing ancient historians, yet one of his regular points is that those ancient historians weren't really historians as such and were often making an argument, making stuff up to fit whatever they were trying to argue for at the time. So it naturally becomes recursive quickly, if you are citing people who are themselves hard to argue with.


To draw a comparison with software development: how much test coverage does this argument have? And do those tests genuinely handle most expected inputs/outputs and demonstrate correctness?

Similarly, recursively: do the claims that the argument relies upon also have good test coverage?

And, extrapolating beyond the article: how can we identify the flaws with minimal effort and help the authors to correct those (or fork improved arguments and build support for those when they refuse to)?


How different is "comprehensibility" from "epistemic legibility"?

Comprehensible arguments are easier to argue with, and favoured in rhetoric.


A comprehensible argument isn't necessarily easy to test.

One of the first points she makes in the piece!


“That’s gibberish” probably could be better said as “that goes against all factual truths that I’ve learned.”


No, "that's gibberish" means "I can't tell whether it goes against factual truths, because I can't tell what it's actually claiming."

[Edit: And one of the marks of at least some kinds of BS is that it tries to obscure what is actually being claimed, so that you can't disprove it.]


Is this author well known in some community? Their writing is superb, and I have to ask who this is.


This author is certainly known in the rationalist-ish ("less wrong", Scott Alexander etc.) community; whether or not they'd consider themselves a member I don't know. Not all rationalist posts are that well written, but there are certainly more I've enjoyed like that.

I think the author might have heard of Seeing Like a State and the idea of legibility through the book review on slatestarcodex, for example.

EDIT: from their about page, "Before this, I was a software engineer first at a few FAANG companies and later at start ups." I guess that counts as another community of sorts?


No.


Being easy to argue with is quite distinct from being easily brought to argue, or being argumentative about easy things.

Kindness in argumentation means caring more about the right solution than winning. Unfortunately I have to deal with a lot of people who really only want to win, or who will change their basis of argument in order to win, even if it means espousing the view they initially opposed, as if it was their core line of reasoning!

I also deal with people who are only there for the argument. Being devil's advocate is a sport for some.


> Unfortunately I have to deal with a lot of people who really only want to win, or who will change their basis of argument in order to win, even if it means espousing the view they initially opposed, as if it was their core line of reasoning!

This is orthogonal to the author's point. E.g., people who only want to win can make use of epistemic legibility to make their wins clear to themselves and/or others. People who change their core line of reasoning can opportunistically switch between two epistemically legible points.

If in either case the speaker doesn't use what the author calls epistemic legibility, then it is difficult to tell what the speaker means.


Yes, in a strict sense this is all orthogonal. It's riffing down a rathole.

What I call "legalistic" argument is the form which allows two oppositional bases of reasoning to be used simultaneously, because any reason to invalidate the case at hand is acceptable, even if contradictory. Working with ex-lawyers, debate team members, and legal degree graduates, I get this a lot. Also programmers. Well, not the ones who do numerical algorithms; they're used to floating point rounding error.

I think the fundamental problem can also be cast as the problem of language. A word like "few" is a classic example of an undefined value which is interpreted differently by people. To me, a few has to be below 2% to be disregardable. To others, closer to 10%. In numbers less than ten, a few can mean almost half to some people, because three is a "few" of ten and they forget how close 3 is to 4. This also hinges on "almost" and "nearly".

An epistemic line of reasoning would presumably state its basis to approximation?



