The practice is not "do good science using the correct methods". The practice is "call for increased education spending for *waves hands* reasons". Listen, this is the old Internet trick: people show up demanding rigorous models and evidence when they see something they don't like, but demand nothing of what they do like.
Ha. It's a trick to point out that there are well-known, easy-to-understand, rigorous models to measure this kind of thing, which the author has declined to use?
Why bother with spreadsheets at all then? If the author is going to do a bunch of arithmetic to convince us that some conclusion is correct, shouldn't we ask that he actually perform relevant and useful arithmetic?
1 + 3 = 4 See? I just proved that we should spend eleventy brazillion dollars on K12 education next year.
Okay, so is his claim that "nobody cared about my spreadsheets" simply incorrect, because people cared very much about them (by pointing out that they are useless, since they don't do causal inference) ... or what's going on? :o
> but has anyone looked at the spreadsheet yet to see for themselves? I think we're proving him right so far, haha.
But we aren't.
The author has described his method in words. My contention is that the method he described does not pass the bar for good science. If my contention is correct, then it simply does not matter what is or is not in his spreadsheets.
Here's an analogy: If I told you that I determined the safest car on the road by measuring the distance to the moon, and that I had very very good spreadsheets for measuring the distance to the moon, would you really bother to check my spreadsheets?
I don't think you would. I think you would (correctly) say, "Hey, that's a really dumb way to figure out the safest car on the road and I'm not going to bother to check your math."
Now, I don't think that the author has done something quite that orthogonal to his stated goal, but I also don't think it's close enough that I want to bother checking his math.
That's ... not a fallacy. It's simply "extraordinary claims require extraordinary evidence" in action. (Eg. if your Bayesian prior is already edukashion gud, and someone says giv muniez to teacherz, you can say yes, ok, dollar. But if someone says no no, it's a waste of resources, you can say okay, the burden of proof is on you now, have fun.)
I'm not claiming it's a fallacy, my dude. I'm claiming it's a trick.
Since the prior is uninformed, it's obvious this isn't "extraordinary claims require extraordinary evidence", since both claims are extraordinary. It's just that one is politically desirable. Ain't fooling nobody with the attempt at rational cover.
Caplan shouldn't be surprised at the reception of his work, because people have this prior.
Of course as you say high priests of rationality examine their own biases as they reach for the book, and by the time they flip it to read the recommendations on the back they are already empty vessels for unadulterated pure non-chill-filtered high-octane data and nothing else. But most people working in public policy are just lowly humans. :/
How much evidence you'd demand to change your mind depends on whether your prior was 50%, 90% or 99.999999999%. You shouldn't need more evidence to change back to 50/50 than it took to get you to your current beliefs - otherwise you'd have two equal cases and prefer the one you heard earlier for no reason other than that you heard it first.
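To make that concrete, here's a minimal sketch of the point in log-odds terms (my own illustration, nothing from the article or its spreadsheets): a Bayes update with likelihood ratio L shifts your log-odds by log2(L) bits, so the evidence needed to go from 50% to some posterior is exactly the evidence needed to come back, with the sign flipped, and an extreme prior like 99.999999999% costs vastly more to dislodge than 90%.

```python
import math

def log_odds_bits(p):
    # Log-odds of probability p, measured in bits.
    return math.log2(p / (1 - p))

def evidence_needed(prior, posterior):
    # Bits of evidence (log2 likelihood ratio) required to update prior -> posterior.
    return log_odds_bits(posterior) - log_odds_bits(prior)

# Symmetry: moving 50% -> 99% takes about +6.6 bits; moving 99% -> 50% takes -6.6 bits.
print(evidence_needed(0.50, 0.99))           # ~ +6.6
print(evidence_needed(0.99, 0.50))           # ~ -6.6

# Extreme priors are far more expensive to dislodge than moderate ones.
print(evidence_needed(0.50, 0.90))           # ~ +3.2 bits
print(evidence_needed(0.50, 0.99999999999))  # ~ +36.5 bits
```

In this bookkeeping, "how much evidence it took to get here" and "how much it would take to get back" are literally the same number, which is the symmetry the comment above is pointing at.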
Of course. I'm simply saying that Caplan should know that people are entrenched in the pro-education position (because this has been the norm for centuries, because nigh certainly everyone in the education policy sphere has gone through many years of the usual schooling and higher ed, and virtually all of these policy experts are pro-education).
Plus, despite his many spreadsheets, it seems his model is not able to do causal inference, which means his tables are very verbose narrations or appendices to his arguments in prose, not a decisive proof.
> to change your mind
Unless one is in a state of 100% confirmation bias, ideally every little piece of evidence counts. Sure, we are not perfect walking-talking sentient infinite-resolution mathematical distributions, so in practice we simply discard a lot of incoming information, as we have meta-(meta?)-heuristics that we depend on. (Eg. if our friends and family and the news and even random blogs claim X, we start to take it seriously; but still, if it doesn't really affect us, we won't bother paying attention. And so on.)