> “The idea behind the meta-induction is that all of our theories are fundamentally provisional and quite possibly wrong. If we can add that idea to our cognitive toolkit, we will be better able to listen with curiosity and empathy to those whose theories contradict our own.”
One thing I didn’t see addressed anywhere in the article is how the dimension of wrongness through time helps you decide which of you is less wrong.
He knows the timeline is a factor; he cited this quote earlier:
> “Here’s the gist: because so many scientific theories from bygone eras have turned out to be wrong, we must assume that most of today’s theories will eventually prove incorrect as well.”
But then comes the miss: when two people are discussing “scientific” theories and one theory is built on the shoulders of the other, which position is, probabilistically, more likely to be “right” for continuing progress?
I can pass an Ideological Turing Test for a flat earth and turtles all the way down, but it’s not the greatest use of my time.
https://en.m.wikipedia.org/wiki/History_of_the_center_of_the...
Rather, what’s the next thing we need to learn, and the “right” position to learn it from?
Learning builds on learning (this is why ‘discoveries’ tend to happen in many places at around the same time), so recognize where the parties are on that timeline. Fine, you’re probably both degrees of wrong, but which position incorporates the most learning?
That’s the position I want to stand on to discover what’s next.
// PS. Yes, on the bleeding edge, what we know is “true” can flip back and forth. That’s just natural selection at work. Same applies, just be aware where you are on the timeline.
As I see it, this thought process only takes you so far in most areas of importance. The main purpose of an ITT is for those areas where capturing knowledge, and comprehension of it, is trickier.
In the hard sciences, objective truth serves you well, and you can talk of approximating something truer than before. That is something linear: the model gets measurably improved.
Most of everything else (society, the economy, social life, politics, ecology) doesn't, I think, run well on such thinking, to the point that using it is counterproductive: you will not survive by thinking like this. Practical people with rational ideas inserted into the political arena appear to typically get eaten alive. You're left with ogres, hopefully enlightened ones. It's likely the spectacularly convoluted nature of human politics is responsible for our biological evolution: like A/B testing with millions of lives. There's a nice essay musing on the cruelty we inflict on ourselves, Meditations on Moloch, at Slate Star Codex.
Objective truths remain in all of those areas (society, economy, social life, politics, ecology), but they're the kind of truths that are not especially helpful. There are these cyclical patterns, emergent phenomena.
One day the model will be Feudalism, another day Liberalism, another day Communism, another day another -ism, but what is really going on? So maybe some of the time some people are wrong, but more often the new -ism is saying "incomplete model", and if your society decreases in complexity it may even start to say something like "reduce model": deregulate, even promote anarchy.
The most appropriate model is that of a Garden. You may grow blackberry plants or oak trees, but you can't really think of "how do I optimize it all" because bluntly you're not that intelligent and nobody is that intelligent. Instead you get weeds, experiments, patterns of growth, environmental conditions, incentives, bad weather - all those joys. In one way you may read scientific literature on optimizing particulars - but I wouldn't say that's what Gardeners really do unless their only goal is to produce spuds - and even then any farmer will update you on just how much of that is outside their control. It's like that with Society but we're in a worse devil of a position. Talk about legacy code, yikes.
tldr; You're maybe not so much looking for "the truth" as wondering "how do you get a benevolent gardener" while still dealing with the slugs :-).
> “A whole lot of us go through life assuming that we are basically right, basically all the time, about basically everything: about our political and intellectual convictions, our religious and moral beliefs, our assessment of other people, our memories, our grasp of facts. As absurd as it sounds when we stop to think about it, our steady state seems to be one of unconsciously assuming that we are very close to omniscient.”
>
> Why do we feel this way?
I feel this way because I've spent the last 34 years of my life gathering evidence, forming the views that I hold, and testing them against alternative interpretations.
I don't think it's unreasonable for an adult to believe that their well-considered views are correct, barring strong evidence to the contrary.
I do my best to have "strong views that are weakly held". Yes, I'm confident that I'm correct in most things, but I'm very interested in hearing someone tell me I'm wrong and why, and I am open to changing my views if confronted with strong evidence of their incorrectness. Honestly, it's rare that this happens, but it does happen.
>I don't think it's unreasonable for an adult to believe that their well-considered views are correct
Millions of teenage voices suddenly cried out in terror, and were suddenly silenced. Suffering under a "reasonable" but wrong adult is an extremely common life experience. So common that the sheer weight of observation suggests that "well-considered views" are usually wrong, if not completely ridiculous.
Many of my adult coworkers and friends have stated that they are just "making up adulting as they go" -- and I share this observation. Few, if any of us, really understand how things work, why things are the way they are, or if our actions are fair and correct. We would all be much better off if we admitted this poor state of affairs more openly.
>I feel this way because I've spent the last 34 years of my life gathering evidence, forming the views that I hold, and testing them against alternative interpretations.
Inductive reasoning, the process of building up small pieces of evidence and observation into a larger conclusion, is inherently unsound. The conclusions you build from this process are fallible and should be held with suspicion.
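To put a number on that fallibility, here's a toy Bayesian update in Python. Every likelihood ratio below is an invented assumption, purely for illustration; the point is just that piling up confirming observations never buys certainty, and the right counter-observation can still knock a long-held conclusion over.

```python
# Toy Bayesian update: inductive confidence grows, but stays fragile.
# All numbers are illustrative assumptions, not measurements.

def update(prior: float, likelihood_ratio: float) -> float:
    """Posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

belief = 0.5  # start undecided about some view

# Ten confirming observations, each twice as likely if the view is true.
for _ in range(10):
    belief = update(belief, 2.0)
print(f"after 10 confirmations: {belief:.4f}")  # ~0.9990, but never 1.0

# One decisive disconfirmation, 10,000x more likely if the view is false.
belief = update(belief, 1 / 10_000)
print(f"after 1 disconfirmation: {belief:.4f}")  # ~0.09
```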
Weakly held, sure, but why have strong views when you don't need to make a decision? There's value in being comfortable with uncertainty, in collecting interesting questions and not answering them.
One of the tricks from those books about being wrong is that we almost always believe we are right. Every once in a while, we get evidence to change our belief, and then we again believe we are right. So the actual fraction of our existence when we were wrong and realized it is very small. Neat hat trick to help us feel right almost all the time.
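A toy simulation of that hat trick (both daily probabilities are numbers I made up; the shape of the result is the point, not the exact figures):

```python
import random

random.seed(1)

P_GO_STALE = 0.01  # daily chance a correct view silently becomes wrong (assumed)
P_DISCOVER = 0.02  # daily chance you notice a wrong view is wrong (assumed)
DAYS = 365 * 40    # forty years of holding one view at a time

wrong = False
days_wrong = 0
days_feeling_wrong = 0

for _ in range(DAYS):
    if wrong:
        days_wrong += 1
        if random.random() < P_DISCOVER:
            days_feeling_wrong += 1  # the brief jolt of realizing you were wrong
            wrong = False            # ...after which you feel right again
    elif random.random() < P_GO_STALE:
        wrong = True                 # wrong, but blissfully unaware

print(f"days actually wrong: {days_wrong / DAYS:.0%}")          # roughly a third
print(f"days feeling wrong:  {days_feeling_wrong / DAYS:.1%}")  # well under 1%
```

With these made-up rates you spend about a third of your days actually wrong, but well under one percent of them feeling wrong.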
The teenager argument is sooo stupid. Yes, he wanted X as a teenager; now he isn't one and wants Y. It's a different but related personality; rights and wrongs just don't ever enter the picture.
Unless he either overcommits as a teenager ("I would never grow up to want Y") or becomes a hypocrite later ("X was never a good idea").
I'm not actually arguing against the whole point. But the argument itself might get your idea discarded even before it gets tried on its merits. Think of it as Bayes for ideas.
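In case "Bayes for ideas" is opaque, here's a made-up worked example (every probability below is invented): a bad argument is genuine evidence against the idea, so the idea gets dismissed before its merits are ever examined.

```python
# How a bad argument sinks an idea before the idea itself is judged.
# All probabilities are invented for illustration.

p_good = 0.10                  # prior: most pitched ideas aren't good
p_bad_arg_given_good = 0.30    # good ideas are sometimes argued badly
p_bad_arg_given_bad = 0.80     # bad ideas are usually argued badly

# Bayes' rule: P(good | bad argument)
p_bad_arg = (p_bad_arg_given_good * p_good
             + p_bad_arg_given_bad * (1 - p_good))
posterior = p_bad_arg_given_good * p_good / p_bad_arg

print(f"P(idea is good | argument was bad) = {posterior:.3f}")  # ~0.04
```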
I'm honestly not sure if that title is intentionally referencing Less Wrong or if it was an accident. I'm leaning towards intentional, since this seems to be far too close to what I'd expect from the rationalist community (steelmanning, etc) for him not to have heard of it...
Almost certainly intentional. Caplan is very close to the rationalist community, and rationalists quickly embraced his idea of the ITT and occasionally ran public ones.