Totally bonkers: economists using science to approach "social science" issues. The social sciences should be ashamed that this is not the norm; we should all be startled that this is a new thing, and that evidently people have just been using a system of high-fives and good wishes to solve the world's social problems.
Define "science". For most topics in economics it's basically impossible to an RCT. You can't say "Let's turn the US into a centrally planned economy, and see what happens." RCT based macroeconomics, or trade economics are basically impossible. Instead people have long relied on observational data, and did the best they could to handle the issues this caused.
RCTs didn't start with Duflo. (Duflo isn't even the first to win for RCTs -- Kahneman and Smith won in 2002 for experiments.) Experimental economics dates back to the 70s, but it always suffered from the same problem as psychology -- most experiments were conducted on students, and the interventions were always small-scale.
RCTs in development economics are much bigger scale because there are rich NGOs willing to spend big money on measuring the efficacy of interventions, and willing to work with economists to do it. This is not without controversy. A development RCT involves an economist from a rich country flying to a poor country, and then running an experiment on the inhabitants of that country. Not everyone thinks that's okay.
The RCTs also rely on the fact that economists come from countries far richer than the ones they study, so interventions that would be prohibitively expensive at home are affordable there.
Can astronomers reproduce the Big Bang? Can biologists reproduce the Cambrian explosion? Can geologists reproduce the end of the Mesozoic Era?
It's harder to know things we can't do experiments on, but we can still know them. In economics, there is a rich tradition of relying on "natural experiments", where something like a natural disaster or a law change lets researchers examine the effects. This is how it was shown that the effect of minimum wage increases on employment is very small. The financial crisis falsified an entire school of macroeconomics.
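For the curious, the standard tool for analyzing natural experiments like the minimum-wage case is difference-in-differences. Here's a minimal sketch in Python, with made-up numbers (not the actual study's data):

```python
# Hypothetical average employment per store, before and after one state
# raises its minimum wage; a neighboring state with no change is the
# comparison. All numbers invented for illustration.
treated_before, treated_after = 20.4, 20.6  # state that raised the wage
control_before, control_after = 23.3, 23.4  # neighboring state, no change

# Difference-in-differences: the treated state's change, minus the change
# the control state experienced anyway. This nets out shared shocks
# (recessions, seasonality) that hit both states alike.
did = (treated_after - treated_before) - (control_after - control_before)
print(f"Estimated effect of the wage increase: {did:+.1f} jobs per store")
```

The whole trick is that the control state supplies the counterfactual trend you can't observe directly.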
Before we pile on the social sciences, maybe someone familiar with them can tell us why RCTs are so hard to do. There are likely other issues involved that make RCTs very difficult -- I'm guessing some ethical issues at least. I doubt social scientists and economists are just a bunch of idiots or charlatans. Likewise, the recent breakthrough wouldn't be such a breakthrough if it had been easier. I don't have an Economist account so I can't read the rest of the article. Perhaps that was illuminated in the article. Anyways, before we criticize another field, we should at least have a good understanding of it.
You are exactly correct. RCTs are hard, not just for ethical reasons but also logistical ones. It's hard to get the money and authority to conduct an experiment in the first place, and it's often impossible to create a true control group.
"Hard" scientists like to pat themselves on the back for rigor, but they get that because they're studying comparatively simple things. Studying the lives of people is hard, but it's also important. It affects public policy, which in turn affects people's actual lives. That public policy gets created whether it's being studied or not -- the studies are hard, but they're better than guessing, and slowly they can build up a picture that makes them better. It's a bit like medicine: we're not going to stop treating people just because we don't understand the mechanism of action and can't guarantee that it will work.
This breakthrough is about finding ways to use the many villages found in poor countries to even attempt an RCT, and coming up with mathematical ways to account for the fact that the trials aren't really randomized. Aid had previously been given based on people's best guesses about what would work, which maximizes the value of the aid when the guesses are correct, but makes it hard to tell when they aren't. Aid has been beset by misguided theories and a lack of measurement -- good intentions, but often ineffective.
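To see why village-level trials need extra statistical care: people in the same village share shocks, so the effective sample size is closer to the number of villages than the number of people. A minimal sketch (made-up data, not any actual study) of the conservative approach, which analyzes at the level you randomized at:

```python
import numpy as np

rng = np.random.default_rng(0)
n_villages, n_per = 40, 50
treated = rng.permutation(n_villages) < n_villages // 2  # randomize whole villages

# Each village has its own baseline (local prices, weather, schools),
# so outcomes within a village are correlated with each other.
village_effect = rng.normal(0.0, 1.0, n_villages)
true_effect = 0.3

village_means = np.empty(n_villages)
for v in range(n_villages):
    people = village_effect[v] + true_effect * treated[v] + rng.normal(0.0, 1.0, n_per)
    village_means[v] = people.mean()

# Compare treated and control *villages*: the unit of inference matches
# the unit of randomization, as cluster-randomized trials require.
diff = village_means[treated].mean() - village_means[~treated].mean()
se = np.sqrt(village_means[treated].var(ddof=1) / treated.sum()
             + village_means[~treated].var(ddof=1) / (~treated).sum())
print(f"estimated effect: {diff:.2f} +/- {1.96 * se:.2f}")
```

Treating the 2,000 individuals as independent would shrink the error bars far below what 40 villages can actually support.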
> It's a bit like medicine: we're not going to stop treating people just because we don't understand the mechanism of action and can't guarantee that it will work.
Yet medicine actually focuses on scientific measurement of effects. They don’t just throw their hands up and go, “experiments that affect people’s lives are too hard.”
Right, and that's what this is about. They're doing the experiments. I didn't say they were too hard; I said they were hard. But it's early days of learning how to do experiments, much like medicine was not that long ago.
There's quite a bit of medicine that "works" but where the specifics of why it works aren't well understood, especially in mental health. One of my friends who works in the mental health pharmacology field told me one of the challenges in the field is measuring the efficacy of those drugs. What do you do? Do you ask someone if they are feeling better or happier? Is that trustworthy? Or is it too fuzzy? Was it the drug that did it or something else? In that regard, they face similar challenges to the social sciences.
I have a subscription. The entire remainder of the article discusses it haha. Let's see if I can paraphrase: 1) it's difficult to frame an economic experiment in a way that prevents bias (my take: economics is not in a lab), 2) what works in Kenya might not work in Guatemala (my take: confounding factors are much greater), 3) ethics of withholding benefits from a group, 4) rich-country researchers assuming they can and should intervene in poor countries' problems, 5) rich-country researchers don't have local context, 6) small experiments on small topics may not apply to, or have an impact at, the global scale at which economics operates.
- Ethics. Example: is democracy good for economic growth? Of course one could randomly engineer coups in some countries, but that's probably not appropriate.
- Cost. Example: how much do people change their labor force participation when taxes change by 1%? The RCT would be "let's give a _lot_ of money to people and see what happens."
- Situations where it's simply not applicable. Example: why did Europe rise to prominence (aka the Great Divergence)? There is not much to randomize here.
Note that RCTs have shortcomings anyway (see for instance [0]).
Physics, the only science where reductionism ever really worked, has sort of ruined it for all the others, where it (mostly) doesn't.
In economics, you are studying vast systems. For the majority of questions, it is impossible to isolate some part of the system and control and measure all the inputs and outcomes. That's probably obvious for macroeconomics: you can't have the Fed raise or lower interest rates based on a random number generator. And even if you could, you would still need a second United States to act as the control group.
It's mostly also true for microeconomics. Consider the difficulty of studying UBI. The largest such studies gave a basic income to a small African village, for a limited time of maybe two years. But the idea, and its opponents' objections, mostly concern the life choices people make, which would require essentially life-long guarantees to study. And even just knowing you are part of such a study, or continuing to live in a society that hasn't otherwise changed, could plausibly alter the outcome enough to render the study meaningless.
> I doubt social scientists and economists are just a bunch of idiots or charlatans.
The vast majority are certainly not. However, idiocy or ill intent is not required to fall prey to the many common causes of inaccurate results. Smart people trying their best to do good work still frequently succumb to errors, and this is especially true in the less 'hard' sciences.
That's why the push for increasing rigor with RCTs and other methods is important and necessary.
I guess RCTs are complicated to do because you often can't generate homogeneous control and treatment groups, so you are forced either to laboriously measure every relevant aspect of each group to standardise, or to invent clever ways of applying your treatment so that most of the effects of the unavoidable differences cancel out.
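One of the standard "clever ways" is stratified (blocked) randomization: measure the covariates you can, group units by them, and randomize within each group, so those covariates are balanced by construction. A minimal sketch, with hypothetical units:

```python
import random
from collections import defaultdict

random.seed(0)
# Hypothetical units with one observable covariate (e.g. village size).
units = [{"id": i, "size": random.choice(["small", "large"])} for i in range(100)]

# Group by the covariate, then randomize within each group, so treatment
# and control end up with the same mix of small and large villages.
strata = defaultdict(list)
for u in units:
    strata[u["size"]].append(u)

assignment = {}
for group in strata.values():
    random.shuffle(group)
    half = len(group) // 2
    for u in group[:half]:
        assignment[u["id"]] = "treatment"
    for u in group[half:]:
        assignment[u["id"]] = "control"

# Balance check: the treatment share within each stratum is ~50% by design.
for size, group in strata.items():
    share = sum(assignment[u["id"]] == "treatment" for u in group) / len(group)
    print(size, f"{share:.0%} treated")
```

Unmeasured differences still have to cancel out through the randomization itself; stratifying only guarantees balance on what you measured.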
The thing is, these complications don't explain why nobody overcame them until Kremer, Duflo et al. started their experiments in the 1990s. Their work appears to be a straightforward adaptation of methods from other fields to studies in development economics, not any sort of technological development. (This is one of the earliest papers cited in the motivation provided by the Nobel foundation: https://pubs.aeaweb.org/doi/pdfplus/10.1257/app.1.1.112 -- it does some linear regression at most.)
With the creation of new technology ruled out as the blocker for performing the experiments, you are basically left with internal and external sociological explanations.
Yes, it would be more appropriate if sociologists and public/political scientists took charge and did these studies, rather than economists. Some of these RCTs are only loosely related to solving market questions.
Is there nothing between RCTs and "high-fives" for estimating causal effects? Economists seem to write awfully long journal articles if they all boil down to high fives.
Piketty's Capital in the Twenty-First Century would be an example of something in between. His main hypothesis is that, unless there is some intervention, wealth accumulates until almost all of it is concentrated among just a few. He uses lots and lots of statistics to support his theory. He shows us that it happens, but is unable to tell us exactly why.
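To be clear, the following is not Piketty's model, just a toy simulation showing how little it takes for concentration to emerge once returns compound: start everyone equal, draw each person's yearly return from the same distribution, and luck alone concentrates wealth steadily.

```python
import numpy as np

rng = np.random.default_rng(1)
wealth = np.ones(100_000)  # everyone starts with the same wealth

# Toy model: i.i.d. multiplicative returns, mean 2% a year, noisy.
# No one has any skill advantage; compounding does all the work.
for year in range(100):
    wealth *= rng.normal(1.02, 0.15, wealth.size).clip(min=0.01)

wealth.sort()
top1_share = wealth[-1_000:].sum() / wealth.sum()
print(f"share held by the top 1% after 100 years: {top1_share:.0%}")
```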
Is that really any better, though? Unless those statistics are shown to have predictive power over money flows, he's just publishing something that fits a model, and metaphorical high fives are thrown around by people who already agree with the hypothesis.
There is a lot of research on how to estimate causal effects from data.
And textbooks: "Causality" by Pearl, about causal models in general, and "Causation, Prediction, and Search" by Spirtes, about how to learn the models from data.
For example, assume the world consists of three random variables A, B, and C. If A causes B and B causes C (the DAG A -> B -> C), then A and C are correlated. But if the model is A -> B <- C, then A and C are not correlated. Conditioned on B, it flips: A and C become correlated in A -> B <- C and uncorrelated in A -> B -> C. So you can falsify such causal models without an RCT.
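A quick simulation makes this concrete (NumPy here purely for illustration): generate data from each DAG and compare the correlations with and without conditioning on B.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Chain A -> B -> C: A and C are correlated overall...
A = rng.normal(size=n)
B = A + rng.normal(size=n)
C = B + rng.normal(size=n)
print("chain    corr(A, C):      ", round(np.corrcoef(A, C)[0, 1], 3))

# ...but become (nearly) uncorrelated once B is held roughly fixed.
near0 = np.abs(B) < 0.1  # crude conditioning: keep only samples with B ~ 0
print("chain    corr(A, C | B~0):", round(np.corrcoef(A[near0], C[near0])[0, 1], 3))

# Collider A -> B <- C: A and C are uncorrelated overall...
A2 = rng.normal(size=n)
C2 = rng.normal(size=n)
B2 = A2 + C2 + rng.normal(size=n)
print("collider corr(A, C):      ", round(np.corrcoef(A2, C2)[0, 1], 3))

# ...but conditioning on B induces a (negative) correlation between them.
near0 = np.abs(B2) < 0.1
print("collider corr(A, C | B~0):", round(np.corrcoef(A2[near0], C2[near0])[0, 1], 3))
```

The observed pattern of conditional correlations rules one model in and the other out, with no experiment run.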
There are many philosophy journals filled with long articles that also have no causal connection with reality. Why would you assume any given academic journal has to be publishing things that make sense, or are useful?
Worth noting that "Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity" was not published in a philosophy journal. It was published in a litcrit journal, which is a very distinct discipline.
Anyway, I think it's more correct to say "It speaks horribly of Hacker News readers' _economics knowledge_." There are some topics here on which the quality of comments is quite poor, but it's probably unreasonable to expect a group of people (specifically "good hackers") to be knowledgeable about _everything_. It is what it is, and you just have to figure out which topics to avoid here.