This year’s Nobel prizes prompt soul-searching among economists (economist.com)
156 points by pseudolus on Nov 21, 2019 | 129 comments



Esther Duflo is the most obviously deserving Nobel Prize in Economics pick in decades. Major substantive contributions to development economics coupled with being absolutely the vanguard of the credibility revolution.

The article seems to motivate criticism of RCTs in only the barest manner: "what of external validity?" No shit, Sherlock, which is why the credibility revolution also argues for frequent and cross-contextual replications. "RCTs present ethical quandaries in re denying the control group the benefits of a presumed good intervention". Again, duh. The problem is these presumed good interventions often aren't, which is why the most resounding impact of RCTs is not proving treatments successful beyond our wildest dreams, but rather finding a series of disappointing and in some cases devastating null effects. Bringing up deworming is especially rich in light of the serious concerns deworming RCTs have raised about its value and effects.

It is also weird to talk about the credibility revolution solely or entirely in terms of RCTs. As economists often say, RCTs are the gold standard, but there exist many silver standards: causal inference writ large, and a broader concern with well-identified quasi-experimental and observational work, has also developed thanks to the same forces and the same people credited here.

Thank god for Esther Duflo. Oh, and her husband is a half decent economist too, I guess.


RCTs are obviously a valuable tool for economics, and they deserve a bigger role than they've gotten historically. After the Nobel Prize, RCTs will probably get more attention than they merit for a while. Ultimately, there's a very limited set of economic topics that they can be used to illuminate. Based on the discussions I've had with economists since early October, I feel like we're about to see a lot of effort to push them to and beyond their limits, at the expense of the silver standards you mention. Mind you, I don't think that's a huge problem, and it's better than the original state of zero RCTs. But academia has its own fashions, just like anywhere else.


I'm not quite sure how to place your comment, maybe because the term credibility revolution is new to me. Yes, RCTs are used to test and provide evidence, and the null results can be very informative. But I think the external validity point made in the article is absolutely fair. Note that I work in RCTs so I'm definitely not opposed to the approach, but I think there are still a lot of improvements to be made on this - in the field people really do generalize findings quite aggressively. This is "practical" because you need to make decisions when setting up new projects and RCTs are incredibly laborious, expensive and take a lot of time to complete. I'm also surprised about your remark about deworming because it is generally touted as one of the big wins in the field of RCTs, by the RCT people - Michael Kremer frequently mentions it and so do Esther Duflo and Abhijit Banerjee in their courses on EdX. The concerns that you refer to don't tend to be mentioned (I think they should be!).

Totally agree that Esther Duflo is fantastic and that the contributions of these people go beyond just RCTs, e.g. Esther Duflo's work on the returns of education in Indonesia.


See [0] for the "credibility revolution"

[0] https://www.aeaweb.org/articles?id=10.1257/jep.24.2.3


Study with Esther now!

https://www.edx.org/bio/esther-duflo

Amazing I can type that. The net ain't all bad.


They offer a "MicroMasters" on EdX [0] basically covering a lot of this RCT work. I can recommend it (I've completed it); I work as a data scientist and have learnt quite a few new statistics tricks (and a lot about causal estimation) from some of the courses.

[0] https://micromasters.mit.edu/dedp/


Abhijit Banerjee was her doctoral advisor. He tutored her, not the other way around. If you get a chance to read their famous book Poor Economics, you will find that almost 70% of the RCTs were conducted in India. A classic Indian bias.


What is it with India and economics? I have personally known many Indian economists, I see a lot of books written in Hindi about finance and other economic topics, and apparently many studies are conducted in India. There is something going on here, maybe a trend that will be remarked on a century from now.


India is quite a bunch of people :) If academic output per capita were the same worldwide, then India alone would contribute about 17% of the worldwide output of financial/economic publications.
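(Back-of-the-envelope, using rough 2019 figures: India's ~1.37 billion people out of a world population of ~7.7 billion gives 1.37 / 7.7 ≈ 17.8%.)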


I guess there is a bias that keeps perpetuating itself. I've had the same experience with Italians. On the other hand, here in Germany economics seems to be quite niche (and terrible), while our CS, Math and Engineering research is world-class.


Do you mean anti-Indian or pro-Indian?


I am in general skeptical that statistical experiments will give us much insight into complex behavior. I think building theoretical models is the better approach to studying complex systems, since then we can explain why a certain behaviour will manifest.

Do astrophysicists use controlled trials to study the universe?


Those models aren't worth much if you don't test them, so these efforts will have to go hand in hand. The way it tends to go now is RCT -> try to explain findings by hypothesizing about mechanisms -> think about an RCT that can test this hypothesis -> repeat. It's a slow and expensive process and not always successful, but I would say that it's progress.


I would say testing is a small part of building models: building models should essentially be driven by theory, not by experiments.


Stare upon the FRB theory wiki in delight, then; here are ~50 different theories describing the same thing: https://frbtheorycat.org/index.php/Main_Page

As a generous guess, at least 40 of these models are wrong, since they all describe basically the same few events with vastly different mechanisms. I guess that majority of interesting-but-wrong models is what you are after?


The space of incorrect-but-reasonable models is infinitely larger than the space of correct models. The way to distinguish between the two is experiment. Experiment is therefore the much more important side of the balance.


Uh, have you heard about astronomy at all? Admittedly it's purely observational measurement, but you'd need a very poor philosophy of science to deny that the careful measurement of radiation is an experiment.

As for trials, the universe itself has provided billions upon billions of different objects to perform measurements on.


There's the "shut up and calculate" thing within particle physics. I don't know how seriously it's taken.


What? "Shut up and calculate" is an interpretation of Quantum Mechanics in general, and is probably most common as a backup to the Copenhagen interpretation.

It doesn't have much to do with theory building at all, as most theory involving QM doesn't need to concern itself with exactly why QM works despite its strange philosophical meaning.


Esther Duflo is so good that she got her husband the Nobel Prize. If Banerjee hadn’t married her and started coauthoring so much with her, Sendhil Mullainathan definitely would have been the third winner.


I find the article and your comment somewhat disturbing. We really get into ethics territory here, and I very much hope that they have ethical review panels such as psychology and sociology have been using for decades.

But that aside, what I find disturbing is the economic couching of basic human questions: do you need provable long-term economic effects to justify providing deworming to poor children? Is that what we have arrived at: wellbeing is not even a concern? A fictional economic greater good is the measure of success, and the health and comfort of poor people is not?

I'd say there is really a need for soul-searching among economists - to rethink why capitalism has brought so much prosperity but also leaves so many people in poverty, and why it has taken such radical forms that in the richest country on earth thousands starve and tens of thousands die of treatable medical issues - and millions more in a global view.


I don't think this is a fair interpretation of the poster above or of Duflo & Banerjee's work. It is an unfortunate, but nevertheless true, statement that in the world today we don't have an allocation of resources which allows for the most basic healthcare for all children, let alone a broader set of healthcare services, free education, clean water, preventing malnutrition, etc.

Those who want to make practical improvements quickly with those limited funds available (from developing country governments, foreign aid, charity, etc.) need to make effective use of those funds. I don't think it is a stretch to say it is a moral obligation to make sure those funds are put to good use.

Doing RCTs is quite key to identifying the most effective approaches whether the goal is lives saved/$, incremental years of education/$, malaria cases reduced/$, etc. With that data, you can direct funds to the most efficient cause in any given area.

Duflo & Banerjee are quite modest about any judgement on whether the focus should be on improving quantity or quality of life and how one should measure quality of life. Their books are quite clear that one's own view of preferences doesn't necessarily line up with those of the people you are trying to help.

And one last point, yes, of course, their work would go through ethics review panels. Basically anything with human subjects does and their work would obviously qualify.


I'm guessing you have a situation where vast numbers of people are not getting some treatment, you choose a manageable small subset to study, and only treat half of those. What's wrong with that? The alternative is not treating everybody in the continent, it's studying something else and treating nobody. Why would a researcher have the money and resources to treat everyone? (It does seem fair to offer the treatment to everyone involved in your trial after it's over.)


These critics should spend some time speaking to the medical research community. These aren't new questions.

It kinda reveals the deep underlying assumptions: "of course we can know the results of these interventions in advance, so we must base our ethics on that"... OK, but if you already know the effects, why are you testing them? You're testing them because you don't know. Unfortunately, there's no royal road to scientific knowledge, and, yeah, that means that some people are going to fail to get positive interventions, and some people are going to get negative interventions. Either that, or nobody gets anything and we just keep blundering on in ignorance. There is no answer where nobody takes any risks and everybody gets the good stuff guaranteed.


"One day when I was a junior medical student, a very important Boston surgeon visited the school and delivered a great treatise on a large number of patients who had undergone successful operations for vascular reconstruction.

At the end of the lecture, a young student at the back of the room timidly asked, “Do you have any controls?” Well, the great surgeon drew himself up to his full height, hit the desk, and said, “Do you mean did I not operate on half the patients?” The hall grew very quiet then. The voice at the back of the room very hesitantly replied, “Yes, that’s what I had in mind.” Then the visitor’s fist really came down as he thundered, “Of course not. That would have doomed half of them to their death.”

God, it was quiet then, and one could scarcely hear the small voice ask, “Which half?”


The way this story was structured makes me unsure if it really happened or not.



Let me guess. The young student was an HN pedant.


That young student's name... Albert Einstein.


> These critics should spend some time speaking to the medical research community. These aren't new questions.

There is a really brutal dynamic involved. It is hard to be uncertain and make a decision. However, when dealing with problems more complicated than assembling a sandwich, it is nigh impossible to be certain about anything.

We all know software so here is a software example - if I walk up to a new computer, fire up a web browser and go to news.ycombinator.com, will it work? Probably, but there are many things that could go wrong - configuration, hardware, new bugs, old bugs, etc. But I can't afford to worry about any of that because I only have 8 hours in a day and I have to just assume it will work out. Usually it does. If it doesn't, then I start exploring what the system is doing and why.

But this happens with everything and quietly trains people to approach complicated situations with great confidence and pick up the pieces if (and only if) someone or something flags that there is a problem. This approach doesn't work for policy but there is a constant influx of people who think this way getting into positions of power and making policy decisions.

It is a fact that these questions of evidence are well known and indeed ancient, but there is constant and heavy pressure on culture to backslide and stop accepting that there is uncertainty that needs to be controlled. Making decisions while being uncertain is hard and by and large has to be learned. Even learning it as a skill is hard because it is about how we behave more than about what we know. Figuring out how to embed those behaviours culturally at scale is a component of why the Enlightenment was a big deal.


I am somewhat bothered by the reverse scenario. Take e-cigarettes, for example: some people are eager to quote statistical data on them, but I feel we should be able to state whether e-cigarettes are harmful or not from a purely theoretical standpoint - an explanation of how they interact with the chemistry of the human body.

We didn't send a man to the Moon on the basis of statistical experiments; we knew it was possible even before the first screw was put into the rocket.

I am fearful that depending on experiments would lead to the same dire credibility issues that plague experimental psychology.


The human body (and biology more generally) is fractal-like in its complexity. One day we may be able to model things accurately enough to not require experimental work, but probably not in our lifetime.


Modelling the basics of a human lung isn't all that difficult. You can see the effects of vaping just by having the vapour pulled into an enclosed space, like they used to do for cigarettes. That would give you the substances created through the vaping process and likely to stick in the lungs. Longer-term effects are drastically harder, obviously, as this doesn't take into account the cleaning mechanisms in the lungs, but for basic verification that something is toxic, that's not a difficult bridge to cross.

Now when you get into the whole body, or things with mutations such as cancer, I agree with you wholeheartedly. While we can understand in a petri dish how to kill something in a human, you've got to worry about things like delivery, hidden single cells, toxicity, and drug proliferation, all of which can be different in every single person with a cancer. In biotech we're still just barely out of the dark ages.

To bring this back to the original topic, we're still in the dark ages with fields like economics. While we understand many of the levers that exist, we have little idea of how and when to effectively pull them.


Models are useful tools, but at the end of the day you’re not going to get a conclusive answer to a question like “what are the impacts of vaping on human health?” without looking at a real life human. At least at current levels of scientific understanding, being able to plug a chemical into an equation and pop out a complete description of the effects (my interpretation of what the gp is asking for) is a bit of a pipe dream.


Knowledgeable medical scientists should be able to explain the effects of vaping as a physiological process, axiomatically. Are we saying our medical scientists do not want to attempt that?


"Vapours" actually were a major thing in medical research once, cured by bloodletting and mercury IIRC.

There are all sort of models still widely believed today. Some are unassailable (evolution, germ theory), some seem right more often than not ("don't eat bacon"), and some seem so right but nothing based on them works ("cholesterol is bad").

So this sort of naive belief in our ability to model the processes in a body or a cell has been out of fashion at least since WW2. It's still used to come up with ideas to try. But even there, purely random exploration of the search space is not consistently worse.


> but I feel we should be able to state whether e-cigarettes are harmful or not from a purely theoretical standpoint - an explanation of how they interact with the chemistry of the human body

I agree with the sentiment, but the truth is ballistics is several orders of magnitude simpler than biology/biochemistry. I find medicine fascinating, but the more I learn about it, the more I am astonished at the huge mismatch between the perception of the field and what we actually know. And I have ever more respect for practitioners.

> Paracetamol was first made in 1877.[21] It is the most commonly used medication for pain and fever in both the United States and Europe.

> How it works is not entirely clear.

https://en.wikipedia.org/wiki/Paracetamol


> I am fearful that depending on experiments would lead to the same dire credibility issues that plague experimental psychology.

Economics already has huge credibility problems.


Theoretically I would agree. Medical science mostly just tries things in clinical studies. Of paramount importance, though, is that the patient makes the decision to take the risk. This is where economics is lacking, in my opinion.

But even if you are sure you got the model right, you still need a long-term study to prove it. The current discussion on e-cigs is mostly about informing people that they are taking a risk because there are unanswered questions.

Putting a man on the moon involved a lot of risk and there were many unknowns, and I am sure the pilots knew that very well. Thankfully, they still did it.

> experimental psychology

had precisely the problem that its patients often didn't really have a choice in the matter. So I think the field deserves its place. They might need to shed a lot of hubris before they can be taken seriously again. I wouldn't put economics on that level, but they often seem to gravitate to similar mistakes.


But the thing with the deworming experiment is they did know it worked. They were studying a procedure known to be beneficial, merely asking "is this beneficial also to our policy goal", and using that as a pretext to deny people a known beneficial medical treatment (for 1-2 years as the system was rolled out).

Which is to say, this seems neither analogous to medicine nor ethical in the same fashion as controls for medicines of unknown efficacy. Maybe, if somehow they had no choice but their "phased roll-out", they have a justification, but otherwise, what were they even considering? Not treating people?

See: https://www.betterevaluation.org/resources/example/Primary_s...


> Maybe, if somehow they had no choice but their "phased roll-out

It seems intuitively obvious, not "maybe, somehow", that bringing medical treatment to thousands of rural communities doesn't happen instantly.


I think what you're missing is that economics has a serious case of physics envy. They like to imagine that their handful of reductionist principles can model a scenario as well as Newton's laws can model n bodies. The experiments are only needed to fine-tune a handful of parameters.

What this pivot towards empirically testing everything represents is a concession that there are no universal economic laws from which outcomes can be derived, just an ad hoc corpus of facts. I see why they don't want to give that up.


The problem with most economic models is they all work great. In the moment. Right up until it is 10 seconds later and something changed. Then they don't.

However, over time they have hidden variables that no one knows how to measure. Say today you are in a bad mood. You go to a store, pick up an item, and go 'nah, not going to buy that'. Then tomorrow you are in a good mood. You go to the same store and buy that item. Your neighbor does the same thing but does not buy it at all. Economics is very bad at figuring out what that even meant. But to know the effect of handing people cash, or taxing more/less, or changing policies, you kind of need to know what it did mean. Experimenting is a good idea, and we have been doing it for a long time. But the models still do not match.

The other issue is that many of these things are huge systems (macroeconomics). You can change one little thing and it has an effect on 3 other things that you did not want. To use the classic microeconomic model, the pizza joint: I raise my price to the point where marginal revenue equals marginal cost (MR = MC). I sell fewer pizzas but my profit is up. Profit does not come from nowhere. My customers paid more, so they can buy less of something else. I bought fewer items from my wholesalers, so their profit is down. The government gets more money because of taxes. One little change touched dozens of other stores/governments/people not even related to me. Getting that model right is tricky, with thousands of hidden variables that behave more like functions.
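For concreteness, here is a minimal sketch of that MR = MC step in Python; the linear demand curve and the cost number are made up for illustration, and the point is precisely that the model optimizes one store in isolation, saying nothing about the knock-on effects:

    # Pizza joint with a hypothetical linear demand curve q = 100 - 2p
    # and a constant marginal cost of $4 per pizza.
    def profit(price, mc=4.0):
        quantity = max(0.0, 100 - 2 * price)  # pizzas sold at this price
        return (price - mc) * quantity

    # Brute-force search for the profit-maximizing price.
    prices = [p / 100 for p in range(0, 5001)]
    best = max(prices, key=profit)
    print(best, profit(best))  # 27.0 1058.0 -- where MR = MC for this demand curve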


Even physics is not so simple as that anymore. The proof that the Higgs boson exists was a statistical test on a sea of noise. And CERN was recently recruiting machine learning experts to help them find other patterns in that noise.

Modern physics uses many of the same methods as softer fields like economics. The problem in economics (and other soft sciences) is the low standard of evidence they're willing to accept.


Though I know the Nobel is focused on theoretical contribution, I found this discussion interesting: https://andrewbatson.com/2019/10/25/who-deserves-the-nobel-f...

The editorial Batson is reacting to (by Yao Yang) makes the point that the largest economic development project that lifted the vast majority of people out of poverty over the past 20 years was orchestrated by China, and the RCTs/small-scale interventions that Duflo et al. won the Nobel for had no role or relevance there. He focuses more on the policies guided by the "classic" development economists like Solow, which emphasize domestic savings and investment.

Batson's post delves into who in China actually was responsible for the economic policy changes that created that development.

Personally, I welcome the addition of RCTs to the economic research toolkit. For too long, economics has wrapped itself up in a mathematically complex knot that bears no resemblance to the real world. Behavioral economics has started to crack that by applying common sense, though too often they have dramatically overextrapolated their hard-to-replicate results.

Hopefully economists can see RCTs as a tool which can be used where appropriate, rather than an entirely new paradigm that must be applied to everything (as they did with highly mathematical economics).


At the end of the day, RCTs and other experiments will help us with allocating the pie, but not with growing the pie itself.

e.g. India, with a population comparable to China's, can only distribute so much with a $2.7T economy, compared to China's $12T economy.

I definitely feel that at least a few Chinese policymakers deserve the Nobel prize for actually lifting people out of poverty. You can debate their methods, but I think most people from Western schools of thought come with an inherent bias, thinking what worked for them over so many decades would've similarly worked for China, which is not really true. This has to do with cultural differences, a huge population, and numerous other reasons.


China has also not suffered a Japan-style economic crash, even though The Economist has regularly predicted one since the early 90s. Even though this is an economic anomaly involving a very large percentage of the earth's population, somehow this development has escaped triggering any serious curiosity in western economics circles.

This lack of serious inquiry in the face of enormous empirical evidence that contradicts theory is what makes Economics a dying field.


RCT? Perhaps it would be helpful to explain what that is.


It's in the second paragraph of the article:

> The prize, awarded in early October, recognised the laureates’ efforts to use randomised controlled trials (RCTs) to answer social-science questions.



RCTs aren't perfect. There's a bunch of problems with them, generalizability being just one. But you know what's even worse? Everything else.

Big questions are great and I'd be all for focusing on them if we were capable of coming up with reasonable answers. But all we've managed are DSGE models and they're about as useful as reading the lines in goat livers.

I'll take the RCTs, thank you, with a side of natural experiments. Duflo is the most significant economist of my lifetime. Thanks to her, economics has some basis for calling itself a science.



Worth noting that The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel is not an actual Nobel Prize. It was established by a central bank so they could award people who justify the existence of central banks.


Do you have any evidence that this was the reason why the prize was established?


I think the only "evidence" there is, is that it was initiated at about the same time the US got off the gold standard.


No, not really worth noting.


Well, it's also manipulative. It wasn't created by Alfred Nobel but the Nobel Foundation does refer to it as a Nobel prize: https://www.nobelprize.org/nomination/economic-sciences/


Economics already existed when Nobel died. It's not an oversight that there is no Nobel prize in economics; he didn't consider it a science worthy of a Nobel prize. That's why his family still tries to get his name off the Riksbank prize.


Do they?

From what I can tell in admittedly fairly brief reading, a single member of the family is trying to remove the name, while the rest of the family simply want it distinguished from the other prizes while not objecting to the use of the name per se.

This statement from Peter Nobel[0], the man actively trying to remove the name, presumably sums up the best arguments he had available against using the name, and makes no mention of other family members trying to deny usage.

[0] https://rwer.wordpress.com/2010/10/22/the-nobel-family-disso...


I know you are just responding to the argument made by the parent comment. But I would argue that at the end of the day, it doesn't matter how the Nobel family feels about the prize. The Economics Prize is a sham regardless, and if the Nobel family participates in that sham, it doesn't make it less so.


Sham? Really? What makes it a sham? They hand out counterfeit money as the prize? Some dudes on a bar stool in Liverpool decide who wins? It's a prize that is awarded by the Nobel committee, that has Nobel's name in the title, that for the sake of shortness we call the Nobel Prize in Economics, even though the full name is something longer. What about that is a sham?


>that for the sake of shortness

This is the part that makes it a sham. You call it that to give it a legitimacy it doesn't have. The entire thing is a project in giving economics as a field a legitimacy it doesn't have.


The Nobel Prizes are prizes endowed by Alfred Nobel. The Riksbank prize wasn't, so it's not a Nobel, and calling it one is a lie.


Yes, it is technically separate from the other Nobel prizes, but no one treats it any differently than the rest of them.


I mean, quite a few people treat it differently; you just have to look as far as some of Nobel's descendants. Wyattpeak linked the article a bit below.

>It is a deceptive utilisation of the institution of the Nobel Prize and what it represents.

https://rwer.wordpress.com/2010/10/22/the-nobel-family-disso...

Not to mention one of its awardees, Hayek.

https://www.nobelprize.org/prizes/economic-sciences/1974/hay...


> but no one treats it any differently than the rest of them

Which is precisely the problem, and the intended effect. It's a propaganda effort.


Economics continues to try and reinvent itself. I’m curious how the example of an RCT in deworming schedules for children counts as Economics. This seems like a field in its death throes. Nobels for ‘credibility research’?


The effect size of an intervention (deworming) on a measured outcome (test scores) is, almost by definition, its efficiency. And that's within economics' wheelhouse whether you're studying paper-clip manufacturing or schools.

But, ultimately, I think the researchers would probably answer with something closer to "I don't care if it's economics. I consider it both interesting and relevant", and so would the Nobel judges.


Yeah, it doesn't matter in the end; we have learned something important. But there is some context to all this: (macro)economics has had a huge influence on the world through monetary policy. This influence is waning due to financial crises and the apparent failure of neoliberalism, among other things.


Economics has an identity crisis. It is trying to be a science when it is an art/humanity. Rather than embracing its nature, it is trying to be what it is not. It's like Christianity trying to be a science via creation science. It simply does not work.


Thanks for this glib comment; it must be true since you asserted it. Frankly, what I find most annoying about this comment is the implicit claim that humans are so complex that we can't possibly study them; that our existing scientific method is inadequate to describe economics.

Unlike any art, economics can make predictions, and we can test the validity of such predictions. That our models are insufficient right now doesn't mean we should cast the whole discipline into the toilet.


The commenters on either side of yours make it look like doubting the validity of economics is the province of people too lazy to make a real argument. I really don't want to take their side because of how they're presenting it. But somewhere in that idiocy there's a decent point to be made.

I've always been a bit uneasy about economics because people's behavior depends on their beliefs, and their behavior as economic agents especially depends on their beliefs about the economic theory that motivated the design of their habitat.

I expect that if a population grew up in a system designed under assumption X and happened to eat well and always have a roof, then it will behave in a way that confirms theories compatible with X.

On the other hand, if they experienced extreme economic strife under leadership that believes in the validity of X, then--as data points--they're more likely to influence economic theory testing in the opposite way.

How can you know whether your predictions are correct because they're objectively true about human behavior, versus them being correct because your sample set has been influenced by the same economic theory that motivated the prediction in the first place?

Is statistics really so powerful that it can eliminate the circularity from the situation?


The trouble with economics, especially macroeconomics, as a science is that it lacks good predictive power. Science is prediction, not explanation. Economic forecasting ought to work at least as well as weather forecasting, and it doesn't.


> our existing scientific method is inadequate to describe economics.

Agreed.

> we can test the validity of such predictions.

Nope.

> we should cast the whole discipline into the toilet.

Agreed.


Yearly reminder that there is no Nobel Prize in Economics. It is the "Nobel Memorial Prize in Economic Sciences", established in 1968, to coattail on the prestige of the original.

Edit: Your downvotes don't change the facts.


They should have just named it something else. Now we have to qualify any sentence referencing the Economics prize each time we say it.


>Now we have to qualify any sentence referencing the Economics prize each time we say it.

They should have to do that, but they don't. Which is precisely the point, and why they didn't name it something else.


I think you are overestimating the intellect of Sweden’s central bankers.


I don't know. "If our discipline had a Nobel Prize, people would consider it more legitimate" isn't really a genius-level plan.


I normally refrain from saying things like "This should be the top comment", but in this case I'd go as far as to say that this comment should get stickied. To understand why I say that, I'd recommend watching this video:

http://www.youtube.com/watch?v=dLtEo8lplwg


Surprised the article didn't even mention the word econometrics, it being the topic and all.


I don't get it...

Isn't RCT what the medical world, psychologists and sociologists have been doing for decades?


I am curious about this too.


It looks like economics is moving from the alchemy stage to the chemistry stage of its evolution.


Note: This article lazily does not make the distinction, but the "economics nobel prize" is not a real Nobel prize.

It was added years later, sponsored by a bank, and is officially called "The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel". It exists purely to legitimize a neoliberal financial ideology and has nothing to do with scientific rigor.


Outside of the financial press this doesn't get nearly the amount of attention it deserves.

Part of the (harsh) reality in the dislike of economics recently has been that the Behavioural Economics folks (yay Ariely!) have made it a point to start introducing experiment-based rigour and protocols. That part I find fascinating at least.


What about informed consent? People participating in experimental drug trials with control groups give informed consent that they may not get the medicine; did these RCT participants do the same?


I'm sorry, but is this really news, or what is it? Criticism of RCTs exists in the biological sciences too. I don't see how criticism of RCTs is news. Is it that the prize winners are just being more scientific about RCTs and meeting headwinds? Is it that mentioning the Nobel is always news?

In the Ebola vaccine trial, they did away with the control groups the moment they realised that the vaccine was effective, sacrificing their nice full statistical analysis for (in this case sensible) common sense. So there are always exceptions to RCTs and there are always cases where they are not the suitable method from the outset.

Is the point of the article that economists are not rigorous enough and that Banerjee, Duflo and Kremer try to be more rigorous, bringing RCTs into economics when they are traditionally more common in the biological sciences?


> not all economic questions can be suitably framed

maybe don't thumb-suck answers to those questions then, as is currently the norm


There’s overemphasis on macro and finance.


Totally bonkers: economists using science to approach "social science" issues. Social sciences should be ashamed that this is not the norm, we should all be startled and surprised that this is a new thing and evidently people have just been using a system of high-fives and good wishes to solve the world's social problems.


Define "science". For most topics in economics it's basically impossible to an RCT. You can't say "Let's turn the US into a centrally planned economy, and see what happens." RCT based macroeconomics, or trade economics are basically impossible. Instead people have long relied on observational data, and did the best they could to handle the issues this caused.

RCTs didn't start with Duflo. (Duflo isn't even the first to win for RCTs -- Kahneman and Smith won in 2002 for experiments.) Experimental economics dates back to the 70s, but it always suffered from the same problem as psychology -- most experiments were conducted on students, and the interventions were always small-scale.

RCTs in development economics are much bigger scale because there are rich NGOs willing to spend big money on measuring the efficacy of interventions, and willing to work with economists to do it. This is not without controversy. A development RCT involves an economist from a rich country flying to a poor country, and then running an experiment on the inhabitants of that country. Not everyone thinks that's okay.

The RCTs also rely on the fact that economists come from coun


Define science the usual way: the essence of science is reproducibility. If you can't achieve that, you are not practicing science.


Can astronomy reproduce the Big Bang? Can biologists reproduce the Cambrian explosion? Can geologists reproduce the end of the Mesozoic Era?

It's harder to know things we can't run experiments on, but we can still know them. In economics, there is a rich tradition of relying on "natural experiments", where something like a natural disaster or a law change allows researchers to examine the effects. This is how it was shown that the effect of minimum wage increases on employment is very small. The financial crisis falsified an entire school of macroeconomics.
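For a sense of how such natural-experiment studies work, here is a minimal difference-in-differences sketch in Python, with made-up employment numbers (the canonical minimum-wage studies compared fast-food stores across a state border):

    # Difference-in-differences with made-up numbers: one state raises
    # its minimum wage ("treated"), a neighbouring state does not ("control").
    before = {"treated": 20.4, "control": 23.3}  # avg. employees per store
    after = {"treated": 21.0, "control": 21.2}

    # Subtracting the control group's change removes the shared time trend.
    effect = ((after["treated"] - before["treated"])
              - (after["control"] - before["control"]))
    print(effect)  # ~2.7 employees per store attributable to the policy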


Before we pile on the social sciences, maybe someone familiar with them can tell us why RCTs are so hard to do. There are likely other issues involved that make RCTs very difficult -- I'm guessing some ethical issues at least. I doubt social scientists and economists are just a bunch of idiots or charlatans. Likewise, the recent breakthrough wouldn't be such a breakthrough if it had been easier. I don't have an Economist account so I can't read the rest of the article; perhaps that was illuminated there. Anyway, before we criticize another field, we should at least have a good understanding of it.


You are exactly correct. RCTs are hard, not just for ethical reasons but also logistical ones. It's hard to get the money and authority to conduct an experiment in the first place, and it's often impossible to create a true control group.

"Hard" scientists like to pat themselves on the back for rigor, but they get that because they're studying comparatively simple things. Studying the lives of people is hard, but it's also important. It affects public policy, which in turn affects people's actual lives. That public policy gets created whether it's being studied or not -- the studies are hard, but they're better than guessing, and slowly they can build up a picture that makes them better. It's a bit like medicine: we're not going to stop treating people just because we don't understand the mechanism of action and can't guarantee that it will work.

This breakthrough is about finding ways to use the many villages found in poor countries to even attempt an RCT, and to come up with mathematical ways to account for the fact that the trials aren't fully randomized. Aid had previously been given based on people's best guesses about what would work, which would maximize the value of the aid given if the guesses were correct, but it's hard to measure when they weren't. Aid has been beset by misguided theories and a lack of measurement -- good intentions, but often ineffective.


> It's a bit like medicine: we're not going to stop treating people just because we don't understand the mechanism of action and can't guarantee that it will work.

Yet medicine actually focuses on scientific measurement of effects. They don’t just throw their hands up and go, “experiments that affect people’s lives are too hard.”


Except there is a direct, causal link between treatment and life and death, or at least quality of life.

Economic or social studies are a lot more nebulous.


Seems weird to accept that "quality of life" is a straightforward measure, but insist nothing of the like can be created for social studies.


Right, and that's what this is about. They're doing the experiments. I didn't say they were too hard; I said they were hard. But it's early days of learning how to do experiments, much like medicine was not that long ago.


There's quite a bit of medicine that "works" even though the specifics of why it works aren't well understood, especially in mental health. One of my friends who works in the mental-health pharmacology field told me one of the challenges is measuring the efficacy of those drugs. What do you do? Do you ask someone if they are feeling better or happier? Is that trustworthy? Or is it too fuzzy? Was it the drug that did it, or something else? In that regard, they face similar challenges as the social sciences.


I have a subscription. The entire remainder of the article discusses it, haha. Let's see if I can paraphrase: 1) framing the economic experiment to prevent bias is difficult (my take: economics is not in a lab), 2) what works in Kenya might not work in Guatemala (my take: confounding factors are much greater), 3) the ethics of withholding benefits from a group, 4) rich-country researchers assuming they can and should intervene in a poor country's problems, 5) rich-country researchers lacking local context, 6) small experiments on small topics may not apply to or have an impact on the global scale at which economics operates.


For essentially three reasons:

- ethics. Example: is democracy good for economic growth? Of course one could randomly engineer coups in some countries, but that's probably not appropriate.

- cost. Example: how much do people change their labor force participation when taxes change by 1%? Here an RCT would be "let's give a _lot_ of money to people and see what happens".

- situations where it is not appropriate. Example: why did Europe rise to prominence (a.k.a. the Great Divergence)? There is not much to randomize here.

Note that RCTs have shortcomings anyway (see for instance [0]).

[0] https://www.nber.org/papers/w22595


Physics–the only science where reductionism ever really worked–has sort-of ruined it for all the others, where it (mostly) doesn't.

In economics, you are studying vast systems. For the majority of questions, it is impossible to isolate some part of the system and control and measure all the inputs and outcomes. That's probably obvious for macroeconomics: you can't have the Fed raise or lower interest rates based on a random number generator. And even if you could, you would still need a second United States to act as the control group.

It's mostly also true for microeconomics. Consider the difficulty of studying UBI. The largest such studies gave a basic income to a small African village for a limited time of maybe two years. But the idea, and its opponents, mostly deal with the life choices people make, requiring essentially life-long guarantees. And even just knowing you are part of such a study, or continuing to live in a society that hasn't changed, is likely, or at least plausibly able, to change the outcome enough to render the study meaningless.


> I doubt social scientists and economists are just a bunch of idiots or charlatans.

The vast majority are certainly not, however, idiocy or ill-intent are not required to fall prey to many common causes of inaccurate results. Smart people trying their best to do good work still frequently succumb to errors and this is especially true in the less 'hard' sciences.

That's why the push for increasing rigor with RCTs and other methods is important and necessary.


I guess RCTs are complicated to do because you often can't generate homogeneous control and treatment groups, so you are either forced to laboriously measure every relevant aspect of each group to standardise, or to invent clever ways of assigning your treatment that ensure most of the effects from unavoidable differences cancel out.
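One standard trick of that sort is stratified randomization: randomize within groups that share a known confounder, so both arms end up balanced on it. A minimal sketch in Python, where the "village size" categories and the counts are made up for illustration:

    # Stratified random assignment: balance a known confounder
    # (a hypothetical "village size" category) across both arms.
    import random
    from collections import defaultdict

    random.seed(0)
    villages = [{"id": i, "size": random.choice(["small", "large"])}
                for i in range(40)]

    # Group villages by the confounder, then split each stratum in half.
    strata = defaultdict(list)
    for v in villages:
        strata[v["size"]].append(v)

    treatment, control = [], []
    for group in strata.values():
        random.shuffle(group)
        half = len(group) // 2
        treatment += group[:half]
        control += group[half:]

    # Each arm now has roughly the same mix of small and large villages,
    # so outcome differences are less likely to be driven by village size.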

The thing is, these complications don't explain why nobody overcame them until Kremer, Duflo et al. started their experiments in the 1990s. Their work appears to be a straightforward adaptation of methods from other fields to studies in development economics, not any sort of technological development. (This is one of the earliest papers cited in the motivation provided by the Nobel foundation: https://pubs.aeaweb.org/doi/pdfplus/10.1257/app.1.1.112; it does some linear regression at most.)

With the creation of new technology ruled out as the blocker for performing these experiments, you are basically left with internal and external sociological explanations.


Yes, it would be more appropriate if sociologists and public-policy/political scientists took charge and did these studies, not economists. Some of these RCTs are only loosely related to solving market questions.


Is there nothing between RCTs and "high-fives" to estimate causal effects? Economists seem to run awfully long journal articles if they all boil down to high fives.


Piketty's Capital in the Twenty-First Century would be an example of something in between. His main hypothesis is that unless there is some intervention, wealth accumulates until almost all of it is concentrated among just a few. He uses lots and lots of statistics to support his theory. He tells us that it happens but is unable to tell us exactly why.


Is that really any better, though? Unless those statistics are shown to have predictive power over money flows, he's just publishing something fit to a model, and metaphorical high fives are thrown around by people who already agree with the hypothesis.


There is a lot of research on how to estimate causal effects from data.

And textbooks: "Causality" by Pearl about causal models in general, and "Causation, Prediction, and Search" by Spirtes et al. about how to learn the models from data.

For example, assume the world consists of three random variables A, B, and C. If A causes B and B causes C (the DAG A -> B -> C), then A and C are correlated. But if the model is A -> B <- C, then A and C are not correlated. Conditioned on B, though, A and C are correlated in A -> B <- C and not correlated in A -> B -> C. So you can falsify such causal models without an RCT.
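You can check this with a few lines of simulation. A minimal sketch in Python/numpy, assuming linear mechanisms with Gaussian noise (an arbitrary choice; any functional form shows the same pattern):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Chain A -> B -> C: A and C are correlated.
    A = rng.normal(size=n)
    B = A + rng.normal(size=n)
    C = B + rng.normal(size=n)
    print(np.corrcoef(A, C)[0, 1])  # clearly nonzero (~0.58)

    # Collider A -> B <- C: A and C are independent...
    A = rng.normal(size=n)
    C = rng.normal(size=n)
    B = A + C + rng.normal(size=n)
    print(np.corrcoef(A, C)[0, 1])  # ~0

    # ...until you condition on B (keep only samples with B near 0).
    mask = np.abs(B) < 0.1
    print(np.corrcoef(A[mask], C[mask])[0, 1])  # clearly nonzero (negative)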


There are many philosophy journals filled with long articles that also have no causal connection with reality. Why would you assume any given academic journal has to be publishing things that make sense, or are useful?


Why would you not? I don't even like philosophy, and I wouldn't assume they're just publishing nonsense.


Because of "Transformative Hermeneutics of Quantum Gravity".


Worth noting that "Transformative Hermeneutics of Quantum Gravity" was not published in a philosophy journal. It was published in a litcrit journal, which is a very distinct discipline.


[flagged]


Looks like it's actually downvoted now.

Anyway, I think it's more correct to say "it speaks horribly of Hacker News readers' _economics knowledge_". There are some topics here on which the quality of comments is quite poor, but it's probably unreasonable to expect a group of people (specifically "good hackers") to be knowledgeable about _everything_. It is what it is and you just have to figure out which topics to avoid here.


HN has an anti humanities/social science bias that often rears its ugly head in discussions such as these.


TBH I've found that nonsense gets upvoted quite regularly here.


There is no Nobel prize for economists, specifically anyway. There is a Nobel Memorial Prize in Economic Sciences, but that was created in 1968 by Sweden's central bank, and has nothing to do with Alfred Nobel other than being from Sweden and "borrowing" his name. I believe the Nobel foundation now manages that award, but there are still only 5 "real" Nobel prizes.


The Denver Broncos aren't a real NFL team. There is a team in Colorado that goes by that name, but it was created in 1960 by the AFL, and has nothing to do with the NFL other than being an American football team. I believe the NFL now includes that team, but there are still only 16 "real" NFL teams.


More accurate to say that there are five "original" ones and one added later. It seems that at this point it has equal standing, in that the Nobel foundation administers it and it is presented in the same ceremony.


The Wealth of Nations was published in 1776, 57 years before Alfred Nobel was born. He was aware of the field of economics and didn't think it was worthy of a prize.

Fields such as mathematics and philosophy were also around in Nobel's time and he didn't think they were worthy of a prize either. The difference is those fields aren't associated with an organization that literally prints money to buy their way in.


> He was aware of the field of economics and didn't think it was worthy of a prize.

I doubt you have evidence of this, beyond the simple fact that he didn't personally establish a sixth prize. If such evidence existed, I think it's unlikely that the Nobel Foundation would've agreed to administer the prize in the first place.

That said, at this point, I'm not sure what the difference would be anyway. The Nobel prizes (including the memorial one) have become a globally admired celebration of human achievement, the personal beliefs and shortcomings of the 19th century arms dealer who established the prizes notwithstanding.


Whether there are "real" Nobel prizes depends on what you think the key property of a Nobel prize is. If it is "was originally established by Alfred Nobel", then you are meaningfully correct.

If the key properties are "signals the same sort of contribution to human understanding, is administered by the same org as the other Nobel prizes, and has the same credibility", then you are merely technically correct.


The Nobel prize in economics has consistently higher quality and more prestige than the literature or peace prizes.


Nobody doubts it's the weakest prize of the lot (alongside the peace prize, although you'll probably remind me that's not real either).

They seem to be perennially awarded to academic staff at the University of Chicago for coming up with a new way of pricing something or interpreting the markets. Nothing earth-shattering or outside the realm of orthodox economics (beyond a few exceptions such as game theory).


They shouldn't worry too much. The economics Nobel was a fake, political, after-the-fact addition to the real set of scientific prizes, along with the peace prize.


Completely different cases. The peace prize has existed since the beginning. The economics "Nobel prize", though, was added after the fact by a bank and should not be considered legitimate.


> Nobel Prize in Economics

No such thing exists.




