Hacker News new | past | comments | ask | show | jobs | submit login

I don't buy this. First, instead of defensive writers, I see people--though I'm not talking about this author--whining that their unclear writing is being "misinterpreted" when it turns out they've implied something dumb.

Second, while I appreciate stylish and direct writing, I'll take bloated but well reasoned writing if I have to. The world is full of writers who "have no ideas, and the ability to express them", but that doesn't help us. Precision and careful thought often push us towards difficult and ugly writing.

Edit: the worst kind of hedge is the one that makes your opinion harder to understand. I'm guilty of adding that pointless "almost" and I've been struggling to overcome that habit for a long time.




My solution to this has been to learn to write like a Bayesian, which is a useful project because it helps me think like a Bayesian.

The first step in this is to recognize the goal: it is not to prove some proposition true or false, but to show that some proposition is more plausible, given the evidence, than the alternatives. The alternatives should always number more than one, because single-alternative arguments are recipes for false dichotomies and oppositional dynamics, neither of which is useful.
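To make that concrete, here is a minimal sketch (my own illustration, with made-up numbers) of what "more plausible than the alternatives" means: Bayes' rule applied across three competing hypotheses at once, rather than a true/false verdict on one.

```python
def posteriors(priors, likelihoods):
    """Bayes' rule over several hypotheses:
    P(H|E) is proportional to P(E|H) * P(H), normalized so the
    plausibilities of all the alternatives sum to 1."""
    unnormalized = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Hypothetical numbers: prior plausibility of hypotheses A, B, C,
# and how strongly each one predicts the observed evidence.
priors = [0.5, 0.3, 0.2]
likelihoods = [0.1, 0.4, 0.4]
print(posteriors(priors, likelihoods))  # → [0.2, 0.48, 0.32]
```

Note that A, despite starting as the favorite, ends up least plausible: the evidence, not the prior alone, drives the ranking, and no hypothesis is "proved," only re-weighted against the others.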

Science--which is the discipline of publicly applying Bayesian reasoning to the results of systematic observation and controlled experiment--is not about proof or truth or falsity. It is about plausibility. This fundamentally changes the goal of any intellectual enterprise.

I liken the philosophers' quest for "certainty" to the alchemists' search for the secret of turning base metals into gold: despite the many interesting things they learned along the way, the goal itself was based on a fundamental misunderstanding of the nature of knowledge, which is evidence-based and therefore inherently uncertain. A certain proposition is one that is immune to any further evidence, conceivable or inconceivable (because what we can conceive has nothing to do with what is real). The name for such propositions is "faith", and to a Bayesian this is an epistemic error.

Once we've abandoned the impossible and wrong-headed goal of turning base metals into gold... err... of achieving certainty... we're in a position of acknowledging our priors (which are explicitly represented in Bayesian reasoning: you can't do it without them) and adducing our evidence. Differences of opinion may come down to differences in priors: "I find your evidence for Israel's war crimes unconvincing because I believe anti-Zionists dominate the international news media." Such revelations at least make it clear what we should be arguing with the person about. If we differ radically in our priors, arguing about the posterior plausibility of a particular proposition is probably useless.
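The point about priors can be shown with a toy calculation (my own sketch, with invented numbers): two observers who weigh the same evidence identically but start from different priors arrive at very different posteriors, which is why the productive argument is about the priors.

```python
def posterior(prior, lik_h, lik_not_h):
    """P(H|E) for a proposition H against its alternative not-H,
    given the likelihood of the evidence under each."""
    return (prior * lik_h) / (prior * lik_h + (1 - prior) * lik_not_h)

# Same evidence for both observers: it is, say, 0.8 likely under H
# and 0.3 likely under not-H. Only the priors differ.
skeptic = posterior(0.05, 0.8, 0.3)   # starts at 5% plausibility
believer = posterior(0.60, 0.8, 0.3)  # starts at 60% plausibility
print(round(skeptic, 3), round(believer, 3))  # → 0.123 0.8
```

The evidence moved both observers in the same direction, yet one still finds H implausible and the other finds it likely; arguing over the posterior without surfacing the priors goes nowhere.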

If you do all this right (I'm still learning, always learning) it won't come across as hedging, but as reasoning. This is the joy of abandoning the alchemy of certainty.





