On this topic: It's always bugged me when behavioral economists start going on about "irrational human behavior." Many times they are simply not accounting for cognitive or long-term costs. Using your precious energy on too many system 2 calculations will not help you live a good life. So compromises are made. Thinking is hard work. It is entirely reasonable to minimize hard work.
I think "rational" and "irrational" have taken on domain-specific meaning in economics. This is not unusual. Ontologies and DSL are kind-of a way of life now, in any information-theoretic field.
In my space, we refer to some things as "portable" or "non-portable", which has very specific intent that doesn't relate to whether you can pick them up or not. I think loan-words which have close analogies in a few minds rapidly diverge.
So a "rational" actor in economics seem (to me at least) to mean the typical selfish bastard who only acts to maximise their own profit outcome, no matter how its defined, and excludes a green warrior buying a good or chattel to NOT use it, or somebody buying it to give to charity, or buying it to round up another charge to get north of a shipping fee but its a nonce purchase and has no intent or purpose..
It's a narrow, domain-specific meaning. I think that word doesn't mean what you think it means. Inconceivable!
>So a "rational" actor in economics seem (to me at least) to mean the typical selfish bastard who only acts to maximise their own profit outcome, no matter how its defined, and excludes a green warrior buying a good or chattel to NOT use it, or somebody buying it to give to charity, or buying it to round up another charge to get north of a shipping fee but its a nonce purchase and has no intent or purpose..
That's not true at all. Rational is defined as someone acting to maximise their utility (roughly, satisfaction), which is capable of encompassing "a green warrior buying a good or chattel to NOT use it, or somebody buying it to give to charity, or buying it to round up another charge to get north of a shipping fee but it's a nonce purchase and has no intent or purpose" perfectly fine.
I’ve never known of (or have been unaware of) a formal economic way of talking about this. Years ago I came up with my own definitions. “Altruism” can be defined as the part of your utility that increases because someone else’s utility increases. If you get no personal satisfaction out of something intrinsically but get satisfaction because someone else is happier, that is 100% altruistic. If you get some satisfaction intrinsically but also some from the external benefit, you could tease out where it lies between 0 and 100%.
This also leads to another concept of “Malice” which would be the positive utility you get from others losing utility.
The crux to me is that utility is often talked about as only what you intrinsically get. Doing selfless things isn’t without benefit, it’s just without direct benefit, and I have seen little ability to quantify it in economic terms.
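To make my informal definitions concrete, here's a toy sketch (my own notation and made-up numbers, nothing from the economics literature):

    # Toy decomposition: your total utility gain splits into an
    # intrinsic part and a "vicarious" part driven by the other
    # person's increased utility.
    def altruism_share(intrinsic_gain, vicarious_gain):
        # Fraction of your gain that comes from the other person's
        # gain: 1.0 is 100% altruistic, 0.0 is not altruistic at all.
        total = intrinsic_gain + vicarious_gain
        return vicarious_gain / total if total else 0.0

    print(altruism_share(0.0, 5.0))  # 1.0 -> purely altruistic
    print(altruism_share(2.0, 2.0))  # 0.5 -> halfway in between
    # "Malice" would be a vicarious term that is positive when the
    # other person's utility *falls*.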
>“Altruism” can be defined as the part of your utility that increases because someone else’s utility increases. If you get no personal satisfaction out of something intrinsically but get satisfaction because someone else is happier, that is 100% altruistic.
Wouldn't altruism be getting no satisfaction from increasing someone else's utility but doing it anyways?
> So a "rational" actor in economics seem (to me at least) to mean the typical selfish bastard who only acts to maximise their own profit outcome, no matter how its defined, and excludes a green warrior buying a good or chattel to NOT use it, or somebody buying it to give to charity, or buying it to round up another charge to get north of a shipping fee but its a nonce purchase and has no intent or purpose.
This is completely unrelated to what economics means by referring to a rational actor. A rational actor is just one who, given a choice, will take the action that produces the all-inclusive result they most prefer. You're talking about someone whose only preference is having the greatest amount of money, but economics contemplates all possible preference sets, which is why it measures benefit in "utils".
So what you're saying is that, within the domain of study, it means something which, in common parlance, nobody understands. Because when any politician gets on their hind legs to bray about rational actors and efficient markets, they sure as hell aren't talking about "utils" to the taxpayer, are they?
Let me ask you a question. Do you think the word "rational" in rational actor, rational investor, and rational market has exactly the same meaning? Do you think, as people commonly understand these terms (I don't mean economists), the answer would be the same?
"Because when any politician gets on their hind legs to bray about rational actors and efficient markets, they sure as hell aren't talking about "utils" to the taxpayer, are they?"
Do you even mean this as a serious argument? Can your favorite field survive politicians' (politician, from "polis", city, and "-tician", "person": 1. professional liar 2. scumbag 3. one who professionally holds office) handling of their terminology?
Nor can I find it a terribly serious argument that "common people", i.e., people who aren't in the field and don't know the definition, don't know the definition. That's just shy of begging the question, except there's a small sliver of people "in the field" who don't yet know the definitions, called students.
I'm trying my best to unpack your words into some sort of actual argument but I can't find it.
> politician, from "polis", city, and "-tician", "person"
I know you didn't really mean this, but polit/ic/ian -- "polit" is from polis, city; "ic" is a Greek adjective-forming element; and "an" or here "ian" is a Latin adjective-forming element. "Polit" is the only part of the word that bears any semantics (in the etymology). The Greek root for person would be "anthrop", which isn't present.
(The root for "man", an adult male, is "andr", which is why it's so hilarious to Italians that English speakers think Andrea is a girl's name.)
> Do you think the word "rational" in rational actor, rational investor and rational market, has exactly the same meaning?
Rational actor and rational investor, the same. Rational market, different. But I only know the term "rational market" through its use in the fixed expression "the market can stay irrational longer than you can stay solvent".
> Do you think as people commonly understand these terms (I don't mean economists) the answer would be the same?
That is because economics did not allow for bounded rationality. They're talking about an optimally rational agent with perfect knowledge. Also known as a spherical cow.
Ultimately, rational is supposed to mean reasoned, not optimal. Heuristics can be rational if applied with reason and devised with good valid reasoning.
Even such a domain-specific definition is borderline useless, because there is no way to constrain the "utility" which rational agents are supposed to maximize -- which makes the concept of a rational agent tautological, and not at all predictive. ("People behave that way because they behave that way." I might encode my observations in a utility function, but that doesn't suddenly help me generalize to other situations.)
The other thing is that the limited situations which psychologists/economists measure people in are very artificial, and there's no reason for people to ace the experimenter's metric.
Responding to grandparent: It's not just that people are efficient and not quite correct -- they're probably more correct and the scientists have no way to tell.
"If you judge a fish by it's ability to climb trees, it will live it's whole life thinking it is stupid."
Put another way, even if you take the strict definition of a rational (optimal) agent, in probably any non-trivial or non-contrived corner case, for any arbitrary set of actions, there exists some utility function and information constraint that can justify that arbitrary set of actions.
Have a set of "irrational actions"? Add in information and computational constraints and the necessity for heuristics, the exact right kind of risk-aversion / novelty-seeking behavior (or play with the second or higher derivatives of your utility function), add in social dynamics like signaling, repeated games (or incorporate mental burdens that negate the impact of repeated games), etc. There exists some formalized universe where your agent is indeed rational.
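As a trivial sketch of how mechanical that rationalization is (toy code, invented data):

    # For ANY observed choices, define utility = 1 for whatever was
    # chosen and 0 otherwise. The agent is now "rational" by
    # construction -- and the model predicts nothing beyond the data
    # it was fit to.
    observed = {("apple", "pear"): "pear", ("pear", "fig"): "fig"}

    def utility(option, menu):
        return 1.0 if observed[menu] == option else 0.0

    for menu, chosen in observed.items():
        assert max(menu, key=lambda o: utility(o, menu)) == chosen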
It's really quite simple. Have you ever made a decision and then later thought "even without knowing what I know now, I made a bad choice"? That's irrationality-- given the same information, making two different choices.
> So a "rational" actor in economics seem (to me at least) to mean the typical selfish bastard
Well, a rational actor works to maximise their personal preferences effectively. You've accidentally slipped in an assumption that all people are fundamentally typical selfish bastards with a corresponding set of preferences, but I suspect you don't actually assume that personally, given your invoking of green warriors.
A green warrior or charitable giver is still a rational actor, and adequately modeled as a normal source of demand. It just happens they need to be acting rationally in the context of a larger system than themselves, or a basic law-of-the-jungle style effect will optimise them out. Green warriors themselves often talk about preserving the planet so people can live on it, which is clearly a personally rational goal. Charitable people will often say similar things.
Apropos of you talking about idiosyncratic domain-specific usage of terminology: I'm not sure what you mean by "information theoretic" here. I've always heard it used to refer specifically to the information theory of Claude Shannon.
I probably randomly misused a term I overheard in some bar some time.
(I have a great picture of Claude Shannon's hand holding a literal mechanical mouse: not a modern computer mouse, but a simple robot mouse he built to do maze-solving. I wish I could find it; it's such a nice example of dropping words and names into another context: "Claude Shannon's mechanical mouse" probably means something completely different to what many people think. https://www.google.com/search?q=claude+shannons+mechanical+m... shows things I think it was taken from. It's a kids' book on computing from the 1960s.)
> It's always bugged me when behavioral economists start going on about "irrational human behavior." Using your precious energy on too many system 2 calculations will not help you live a good life. So compromises are made. Thinking is hard work. It is entirely reasonable to minimize hard work.
I think you're missing the value in finding that some things are irrational. Of course no one assumes people make perfect decisions 100% of the time, and of course there are cases where making the better decision isn't worth it - but it's incredibly useful to know in which cases people usually make "wrong" decisions, in order to, when necessary, find antidotes. Wrong is a synonym for irrational here, and is a placeholder for "contrary to what the person would have chosen to do, had they had all the information".
Take the "planning fallacy" - a pretty well known bias in which people underestimate how long things projects will take. Knowing that it exists is what leads people who want to get the correct answer to use "tricks" to get a more real answer (like taking the so-called "outside view", and estimating how long a project will take based on how long similar projects took in the past).
The problem is that it's not always clear that people are being irrational. Take the planning fallacy. Is it bias, or is it politics? Your boss and/or customer might say they want accurate estimates, but when you give them one, they say "that's not good enough". Before long, you implicitly know this, adjust your estimates, and hope for the best. You'll just make excuses and ask for forgiveness later. Now this isn't great business, and it's an irrational system, but the individual players might all be acting rationally. I wish I had a better example where the bias / "irrational behavior" was purely rational. They do exist in other cases, but I can't think of any for yours.
That's something a bit different and doesn't mean that. A revealed preference means that, if you say you never eat donuts, but when observing your behavior we see you buying and eating donuts, then we know that you actually do eat donuts.
A bias, on the other hand, is if we ask you whether you want to eat 2 or 3 donuts, and you answer 2, but then after further study we see that the amount you eat depends on, say, the size of the plate in front of you. Then we know that there's something influencing your decision which you may not be aware of and which, theoretically speaking, "shouldn't" influence your decision.
Revealed preferences don't mean that everyone makes perfect decisions all the time. It just means that, if you want to find out what people actually want, then the best way to do it is to observe what they actually do.
The other advantage is that revealed preferences tend to be fairly rational. Not 100% and there are systemic issues in them, to be sure, but they're often quite good. In fact I'd say they're reliably better than their criticisms.
That is, given a revealed preference and someone claiming they've identified a bias, I start from the presumption that the revealed preference is actually rational and the explanation of the bias is wrong - as opposed to "well, the article is clearly correct", which I see online all the time. There's a bias bias: the presumption that if someone explains why something is biased, that's it, case closed, no further analysis needed. I see that bias in myself. It's hard to overcome, which is why I (try to) start by presuming the revealed preference is correct.
People's explanations of their revealed preferences are, by contrast, hot steaming garbage. People are terrible at explaining their reasons. So terrible that I think it actually plays into bias bias, because if you take people's explanations seriously they sound incredibly irrational, so it's merely a matter of trying to explain their irrationality. It seems so fruitful. But while studying patterns in rationalization is probably an interesting topic of its own, studying rationalization shouldn't be confused for studying what people do. In general, I (try to) start out by discarding someone's claimed reasons for doing something, unless they do a very good job of convincing me they're actually good at introspection and have made an at least halfway serious attempt at it.
There is value in understanding our biases. Yes, I think it is fair to call them biases, but not necessarily irrational--in the long term.
People are complex psychological creatures, and this whole issue of rational or irrational behavior is compounded by the fact that people can /learn/ system 2 patterns of thinking so well that they become system 1 patterns. Those are essentially habits.
So with your rational mind, it is worth identifying any biases you want to codify in your system 1 brain to minimize cognitive costs. But that's an option. And I think we have that option because there is a giant tradeoff between spending time and energy thinking versus getting a good enough result (out of the many decisions and processes we have to carry out each day).
(With respect to the planning fallacy, I suppose there are benefits to underestimating the time it will take. If projects are really much harder than expected, it may never be emotionally worth starting them because they are too monumental. So we have a bias that makes us more optimistic, so we actually start doing stuff, which helps us in the long run. What I mean is that you can spin it as a good thing. It may be a wrong intellectual assessment, and sometimes that matters immensely, but it is also quite possibly rational for humans to have this bias because it helps them get more stuff done. I'm not a behavioral economist or a psychologist, to be fair.)
Irrational isn't merely wrong. You can be rational, and still wrong, because you didn't have all the necessary information. Irrational is wronger than wrong - it means making a wrong decision despite having adequate information, or making a wronger decision than what the information you had should allow.
One shortcoming in economics is the inability to model cognitive costs. This is best visible in game theory - no model of cognitive costs has gained traction in the last few decades despite the desire for one.
The closest that I recall is representing strategies with finite state machines and having a preference for strategies requiring fewer states. A main difficulty there is mapping strategies to FSAs.
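Roughly, the idea looks like this (a toy encoding of tit-for-tat from memory, not from any particular paper):

    # A repeated-game strategy as a finite state machine: each state
    # fixes an action, and the opponent's last move picks the next
    # state. Tit-for-tat needs only 2 states; fewer states is a crude
    # proxy for lower cognitive cost.
    TIT_FOR_TAT = {
        # state: (action_played, {opponent_move: next_state})
        "nice": ("cooperate", {"cooperate": "nice", "defect": "mad"}),
        "mad":  ("defect",    {"cooperate": "nice", "defect": "mad"}),
    }

    def play(fsm, opponent_moves, state="nice"):
        actions = []
        for move in opponent_moves:
            action, transitions = fsm[state]
            actions.append(action)
            state = transitions[move]
        return actions

    print(play(TIT_FOR_TAT, ["cooperate", "defect", "cooperate"]))
    # ['cooperate', 'cooperate', 'defect']
    print("state count (cognitive cost proxy):", len(TIT_FOR_TAT))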
Transaction costs and cost/benefit analysis of information are a thoroughly studied part of economics. I'm not even an econ major, and it was covered in my college CS education.
The capacity to make rational decisions is a limited resource, so making locally slightly irrational decisions is, globally, the rational strategy. There's a number of predictable ways we make locally irrational decisions, though, and there's a lot of value to be extracted from studying those.
Behavioral economists recognize two kinds of biases: cognitive and emotional. Cognitive biases are the kind you refer to - when someone makes a suboptimal decision because of failing to do proper research, failing to properly take on new information, etc.
But then there are emotional biases, for example the loss aversion bias. Investors tend to hold on to their losing investments to avoid realizing a loss and sell their winning investments to lock in a profit. This is irrational - investors achieve subpar returns because of this - and often cannot be resolved with education.
Accounting for cognitive or long-term costs - using the concept of bounded rationality - would not make emotional biases disappear or make them invalid.
This reminds me of the supposed irrationality of emotion. An old popular view pits rationality against emotion: the more rational you are, the less emotion rules decisions, the better. The fallacy I see in this is that a rational human would acknowledge her/his humanity, and judge, in agreement with empirical evidence, that systematically suppressing emotion leads to certain psychoses and emotional disorders that will ultimately make her/him less rational. That is, to be more rational, one must allow oneself to be a little irrational once in a while.
In agreement, the cognitive cost of being rational all the time is too high, and is, after all is done, irrational.
Carried to the extreme, this could lead a behavioral economist to report, "the subject made the perfectly rational decision not to perform any mental effort, and so was eaten by a shark".
You say, "Many times they are simply not accounting for cognitive or longterm costs." How do distinguish those times from the other times when a person is being rational?
Which makes a case for outsourcing these calculations to machines.
I wonder if, when we're able to do that through better human-machine interfaces, we will become more rational decision makers.
The author, Gerd Gigerenzer, is a giant in social psychological research into bounded rationality, and especially heuristics. His books are worth a look. He provides an interesting counterpoint to Kahneman and Tversky, other giants in the field whom you may know better. As usual, maybe start with Wikipedia: https://en.wikipedia.org/wiki/Gerd_Gigerenzer
For all that Gigerenzer is a giant in social psychological research, the article doesn't half sound like two fields talking past each other. The entire purpose of behavioural economics is not to reinvent Gigerenzer's own field but to point out that humans frequently deviate from the "rational expectations" outcomes predicted by orthodox economic models. Modelling deviations from rational choice theory is not "all that keeps it erect"; it's its raison d'etre in critiquing a field where rational choice theory and unsystematic error is the default assumption. The crux of the "great rationality debate" is that Kahneman is talking about rationality as "making correct predictions with random error" (or, in terms of policy implications, "making predictions which lead to preferable outcomes to those which could be achieved by any other conceivable model") and finding examples where humans do not do this, whereas Gigerenzer is talking about rationality in terms of whether human intuitions and heuristics are justifiable in an evolutionary sense (and improvable upon by learning).
Gigerenzer is undoubtedly correct that Bayesian reasoning is something that can be learned by many people, and yet all that Kahneman needs for the implications of his statement that "the human mind is not Bayesian at all" to be relevant in behavioural economics terms is for some portion of the population to reach systematically different conclusions from the correct Bayesian one using an inferior heuristic. (A point Gigerenzer essentially demonstrates by citing a study which shows how outcomes improved after gynecologists already motivated to make correct predictions were taught Bayesian reasoning.) Perhaps it's sloppy wording on the part of Kahneman that Gigerenzer is taking exception to, but the core claim is not that humans cannot learn statistics, but that at a population level, some humans will continue to rely on less accurate heuristics which deviate from those predicted by rational expectations models in a systematic [and predictable] manner and which are not simply eliminated over time due to financial incentives to be correct.
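(To make the Bayesian point concrete with a screening-test example of the kind Gigerenzer teaches with -- the numbers here are illustrative, not from the study:

    # Given a positive test, what is P(disease | positive)?
    prevalence  = 0.01  # P(disease)
    sensitivity = 0.90  # P(positive | disease)
    false_pos   = 0.09  # P(positive | no disease)

    p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
    posterior  = sensitivity * prevalence / p_positive
    print(f"P(disease | positive) = {posterior:.0%}")  # ~9%

Untrained intuition tends to answer 80-90% here; that gap is exactly the "systematically different conclusion from the correct Bayesian one".)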
The one claim which really ought to be of serious concern to behavioural economists is that their experimental studies frequently mistake random error (acknowledged by 'rational expectations' orthodox economics) for systematic error, so it's perhaps unfortunate that the Kahneman example Gigerenzer highlights in this paper as not being replicable is an abstract demonstration of the availability heuristic with zero direct economic implications.
> Gigerenzer is undoubtedly correct that Bayesian reasoning is something that can be learned by many people, and yet all that Kahnemann needs for the implications of his statement that "the human mind is not Bayesian at all" to be relevant in behavioural economics terms is for some portion of the population to reach systematically different conclusions from the correct Bayesian one using an inferior heuristic. (A point Gigerenzer essentially demonstrates by citing a study which show how outcomes improved after gynecologists already motivated to make correct predictions were taught Bayesian reasoning). Perhaps it's sloppy wording on the part of Kahnemann that Gigerenzer is taking exception to, but the core claim is not that humans cannot learn statistics, but that at a population level, some humans will continue to rely on less accurate heuristics which deviate from those predicted by rational expectations models in a systematic [and predictable] manner which are not simply eliminated over time due to financial incentives to be correct.
Indeed, the possible result that only some people are irrational would be very good for the paternalistic policy makers. In fact, it would be better than the result that all people are irrational; if all people are irrational, what gives some irrational people the right to guide other irrational people’s lives?
Economists are always oh-so-sorry to inform us that some people just can’t run their own lives without the help of the “free” market or the government. So very sorry. Happily they know some technocrats that will bravely shoulder the burden of being Bayesians.
It might be my ignorance of the topic, but it seems to me that behavioral economics has a major blind spot with respect to the computational and informational complexity costs of decision-making processes (which are as real as any other kind of cost).
This could also be because it's more "interesting" to come to conclusions like "here's a pernicious irrational bias that most people have" instead of "people are essentially rational within their informational and computational limits".
Also robustness costs -- many heuristics that get poked at by champions of behavioral economics are behaviors that tend to give improved results when reasoning with noisy inputs (and, in particular, inputs with unknown levels of noise or even malicious distortions), compared to a formal decision theory that doesn't handle low-quality inputs well.
These limitations are only a concern when people attempt to suggest that behavioral economics is telling us to rework how we live our lives. Generally such claims are unjustified extrapolations from very basic studies which in no way support the pop-economics advice. Like if someone did some experiments and determined that submersing people in water often killed the test subjects, and then other people ran to the presses recommending that no one drink water anymore because water was proven to kill in scientific studies... :)
I'm not sure why the focus here is on behavioural economics specifically. Ignoring decision making complexity is really the domain of traditional microeconomics. If anything, behavioural economics is the path by which economics as a whole has the best chance of digging itself out of the hole it is in, because it at least acknowledges that you have to look at humans a little bit rather than just inventing theories from whole cloth.
I think you're correct in your assessment, but there's more to it. Specifically, we must remember that evolution finds more useful solutions over time, but not necessarily the most useful solutions. A local minimum of cost can be found while ignoring an even lower point of cost on the other side of a hill. It would be ridiculous to expect us to have found the most rational course of action in a given situation, even accounting for the mental costs you mentioned.
Complexity costs of decision making are actually at the core of behavioral economics; for instance, one of the biggest insights (which sounds really obvious in hindsight) is that people stick to the defaults. You can see this insight everywhere, from opt-out dark patterns to default opt-in organ donor systems, but according to classical economics, something being default opt-in or default opt-out shouldn't matter, because people will take the time to make the most optimal decision possible with all possible information.
Most of the talk of 'rational' vs 'irrational' is because it is in opposition to the classical economics assumption that people act perfectly rationally with perfect information. The fuller explanation a behavioral economist would probably give you would be something about how the human mind is optimized for a very different environment and takes shortcuts appropriate to that environment that are not helpful in the current one.
One of my favourite "bias-biases" is the one that is now very, very popular: the idea that people's largely accurate perception of some reality is instead a bias that is the cause of that reality (see "stereotype threat" and related), which is as close to a "wet roads cause rain" fallacy, and magical thinking[1] as you can get.
The author's point seems to be "if you ask people questions whose real answers correspond to informal statistical judgments, then they will give the right answers."
This isn't really that informative, but the strongest form of this argument is "informal statistics is better in a lot of real-world cases." This sounds to me quite a bit like the case the behavioral economists make -- that these intuitions aren't crazy, or stupid, but they do exist. They were constructing tests specifically to isolate them, and succeeded.
A further point is that these kinds of situations where intuitions are challenged happen regularly. I don't know what behavioral economics says specifically about causes, but it seems to me they take a fairly neutral view: such situations can occur by chance, due to hostile intent, or by mistake. Whether the outcome is positive or negative depends as well. The point of "nudge policy" (opposition to which seems to be the main point of the article) is to see about making these situations positive -- that is, arranging the world through policy such that acting on intuition actually yields positive outcomes. I don't think the author argues successfully for the point that, since everyone is rational and no one ever gambles when presented with the opportunity, arranging policy such that intuitive choices yield positive outcomes is therefore unwarranted interference.
> governmental paternalism is called upon to steer people with the help of “nudges.” These biases have since attained the status of truisms. In contrast, I show that such a view of human nature is tainted by a “bias bias,”
Is it me, or does this sentence (from the abstract) seem very non-academic? It uses the first person, and seems rather political.
It is clearly political, but economics is inherently political - it justifies how society is organised. I would put it that someone having firm ideas of how an economy should be run doesn't in any way disqualify them from making a point and if an economist doesn't have firm opinions on how to organise an economy ... well, that would be fine, but surprising.
The abstract is putting forward a very important point in light of the environment that has developed where governments are expected to 'fix' the economy when a 'crisis' occurs. Both of those words are highly, highly political and also central to the practical branch of modern economics.
The core of this abstract is that there is evidence people have 'fine-tuned intuitions about chance, frequency, and framing' compared to what is currently believed by economists. If that is true, then that should have a bearing on the quantity and quality of regulation being recommended by economists.
In my experience using first person in abstract or even in the manuscript is field- as well as individual- dependent. Prof. John Cochrane (when he was at U of Chicago) advised not to use royal pronoun "we" when it's a single author paper. In business management you can see a mix of use of "we" even when the paper is written by a single author. Some journals like Journal of Marketing require all the abstracts in third person (e.g., the authors find xyz instead of we find xyz) but don't have any rules about using I/we in the manuscript.
It is more efficient to go to the gas station and fill your tank all the way. If you do, you are less likely to accidentally run out of fuel somewhere, you can choose better where you fill up for the best price, and you do not waste time that could be devoted to other profitable endeavors.
So why is it rational for the poor to only put in a few dollars at a time? It is not often the case that they don't have enough money to fill the tank; no, they don't fill it all the way up because, come end of the month, gasoline in the tank is not as liquid as cash, and other needs may arise. Filling your tank is a statement about the predictability of your financial needs and the availability of credit to offset hard needs.
If you can afford a car, you are not poor.
I recommend spending some time in, e.g., the Democratic Republic of Congo, Mozambique, Uganda, Yemen or Malawi, to learn about poverty.
Poverty entails non-optimal choices.
Looking at the drug addicts around where I live, non-optimal choices often lead to relative poverty too.
There are poor people living in wealthy countries like the US, too, and especially in the case of the United States, its lack of public transport infrastructure basically means that you need to have a car even if you're poor; otherwise you can't get to your job, plain and simple.
I'd say a similar phenomenon has started happening in Western Europe, too; see the recent "gilets jaunes" movement. The French Government wrongly presumed that increasing the cost of gas by a few euro-cents wouldn't matter because, like you said, they thought that "if you can afford a car, you are not poor". But reality bit them in the posterior: lots and lots of poor French people have had to move out to the suburbs or even exurbs, because the downtown areas of cities like Paris or Bordeaux are expensive af, so the poorer masses had to go wherever the real estate was cheaper. And when you live 30 to 50 km outside of Paris, you do need a car.
The 'poor' living in wealthy countries like the US or France are not poor in the sense of the Democratic Republic of Congo, Mozambique, Uganda, Yemen or Malawi, where by and large there are not even paved streets to drive on. It's deeply misleading to compare poverty in the developing world with how the bottom third of the US live. Think available health care, education, etc.
No one said they were poor in that sense. However, they are still poor, even if they are not starving. The economic stress of living small paycheck to small paycheck is poverty.
Sure, if you took their several dollars to another, poorer country they would be considered well off, but they can't get there and spend it. They are here, paying for food and lodging that take almost their entire income.
> The economic stress of living small paycheck to small paycheck is poverty.
No it's not.
There is a gigantic qualitative difference between living in a western country "paycheck to paycheck" with enough money to afford driving a car (not to mention all the other benefits that come with living in a first-world country: top health care, pension, unemployment insurance, working primary schools, working secondary schools, low corruption, etc.), in comparison with the crushing poverty one finds in poor countries: no streets, no schools, civil war, high corruption, no running water, no electricity, etc.
What cognitive benefit do you gain from conflating two entirely distinct phenomena? It's like calling both a heart attack and the flu "a heart attack", because both are unpleasant.
Once more, I invite you to spend some time in very poor countries.
This is not new. Behavioral economics has always recognized biases as the result of heuristics -- rules of thumb that are mostly correct, especially with regard to our evolutionary history. However, in specific, predictable contexts, they can be very wrong and lead to a person being manipulated.
For example, the anchoring bias is actually a useful heuristic in many situations. If you were asked about the length of the Nile and a third person guessed 5000 miles, you would probably guess closer to 5000 miles to integrate the third person's information, which is usually somewhat informative.
The bias comes when an outside marketer manipulates you by saying "how much should the iPad cost? $2000? No! $1000? No! It's only $500!".
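A toy way to see both sides of it (weights and numbers invented):

    # Anchoring as rational updating: shade your estimate toward the
    # other guess in proportion to how informative you think it is.
    my_guess = 3000   # miles, what I'd say alone
    anchor   = 5000   # miles, the third person's guess
    trust    = 0.4    # how informative I think they are

    print((1 - trust) * my_guess + trust * anchor)  # -> 3800.0
    # The *bias* is applying the same shading to anchors that carry
    # zero information, like a marketer's made-up price.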
Ultimately, the reasonable price for an iPad is its cost (including labor, figuratively) - obviously nobody wants to sell at such a price. Everyone wants to get richer, irrationally so.
When Karl Popper popularized the "testability" definition of scientific theory, he had two targets in his sights: Freud and Marx.
Freudians and Marxists were promoting what was, in his view, scientism: a dangerous middle ground where theories and studies appear scientific but aren't.
These two fields (psychology and economics) are still having a lot of problems with that middle ground. On one hand, they aren't willing to cede the science label and join the other humanities, like history and philosophy. On the other, very few of their important theories are scientific. I.e., the big debates in economics and psychology are about theories that will never be tested. They'll fall into and out of fashion due to anecdotes and other intellectual trends.
The Keynesian paradox of thrift, Friedman's monetarism... these will never be tested, confirmed, or denied. They are likely to fall out of and maybe back into fashion. Neither will historical materialism, etc.
Same exact thing in psychology.
It's a more aggressive conclusion than I'm happy with, but... behavioural psychology is pseudoscience.
Every subject has its own slant or “bias”, if you like, based on how it defines itself and the sorts of problems it tackles. That’s certainly worth examining, but maybe not particularly good fodder for takedown articles.
This sentence in the abstract is probably the most revealing about the goals and yes biases of the article’s author: “meaning that governmental paternalism is called upon to steer people with the help of ‘nudges.‘“
In other words, this isn’t really about examining whether the field is productive on its own terms; it’s about picking a political fight under the guise of a methods critique. The author seems convinced that behavioral economic results demand governmental intervention, and so he attacks the premise of the entire field.