They studied dishonesty – Was their work a lie? (newyorker.com)
210 points by chrisaycock 9 months ago | 153 comments




> When she expressed her doubts, the adviser snapped at her, “Don’t ever say that!”

I don't know whether the advisor was referring to collegial decorum in how good-faith research is discussed, or to something more like whisper networks about bad-faith conduct that people find impolitic, or are too scared, to say aloud.

But I did actually once get a snap response like that. I was chatting with another grad student, who'd mentioned a student who'd just arrived and would be working for Prof. X. I hadn't worked with Prof. X, but I happened to see them treat multiple students poorly over time: belittling them, disciplining them in front of groups of others, giving them non-research chores like a personal assistant rather than a research assistant, not letting them pursue their research, etc. So I blurted out, concerned, "Oh no!", and said that X was a bad advisor.

This other grad student surprised me by snapping back at me, sternly, "You shouldn't say that!", and something about reputations. That other grad student's parent was a prominent academic, so I figured they were admonishing me in some decorum that they were brought up in, and which they knew better than me.

It might've been weeks later that the same grad student came back to me, apologetically, and spoke with surprise of how miserable the new student was once they realized the career disaster they'd stumbled into.

Epilogue: A long time later, that grad student, who'd admonished me and then apologized, contacted me about a different professor, because they knew a prospective new student of that professor, they had some suspicions, and they thought that I might know something. The truth was much worse than they suspected, and the student fled after hearing only a little, in vague terms.

BTW, there's apparently a lot of all kinds of poor behavior, but the people doing it are almost never cliched evil, IME. For example, a couple times I saw Prof. X do something kind, and I think probably they had something like a very stern taskmaster upbringing that caused their other side. There was also another one, who was kind to me, but I later learned that they were decidedly unkind to some others, and were actually nudged out. And one of the professors with the biggest body count I saw was actually genuinely warm and charismatic and humble in some ways, and I don't think they realized that they seemed to have emotional/cognitive problems that they let kill other people's careers in an awful way. Off-the-record gossip with grad students and (later) professors will tell you of all sorts of other misbehavior, especially by people who aren't all bad, but very driven and pressured. (Raging narcissist/psycho, however, seems relatively rare, or their trail of bodies doesn't survive long enough to complain about it. Maybe the worst people tend to go for careers with more money and power?) And, back to this article, I once met with Ariely, and he came across as empathetic, down-to-earth, and of goodwill, so -- iff he turns out to have done something academically dishonest -- again, that would seem like a bit of human frailty, in a more wholesome larger picture.

(Note: I've been a grad student a few places, and have talked with people at countless other places, so am not calling out a particular school or person. A lot of people have seemed paranoid about saying anything at all, myself included, so please don't speculate, or I think that would have even more of a chilling effect than already exists.)


This is exactly the result when you have a large power differential. The people in authority may not be intrinsically mean or evil, but they're not getting the proper amount of feedback about their own behavior, and the reason is that they have too much power and the people they're dealing with have too little. Everything about the grad school (esp. Ph.D.) power structure is wrong, giving the student virtually no recourse and the advisor way too much authority.


Exactly.

Power differentials experienced too long can create character flaws, even when originally used constructively and with all good intentions.

I saw the cliche of a professor taking a student's original and significant work, done without their direction, and quickly presenting it as mostly theirs, after adding their critical professorial value by waving the result around for their colleagues to congratulate them on.

The nice short paper that resulted was notable for an author list starting with the professor, followed by at least a dozen of the professor's colleagues, and ending with the student who had solved the problem entirely by themselves.

I also watched a professor's highly evolved and useful ability to dodge and protect his team from endless administrative performative/meta/made-up work in a university department translate badly to a business partial-ownership situation, where their slippery ability to shirk work onto others gave them strong leverage over a project. Leverage that had much worse indirect effects than the obvious direct one of not contributing a lot.

This was someone with many great qualities, but also an internalized entitlement and talent for control.


I'd think that could be managed with the help of faculty peers and the levels of administration, but barriers:

* Students who are mistaken, entitled, and/or lying... is a thing. Which do you trust, and is there a crying-wolf effect applied to students as a whole?

* At the faculty level, they're colleagues, often friends (sometimes more), are more likely to be seeing a better side of people with problems, and will tend to identify with other faculty. (Though there are also internal dramas, but they might unite against the common enemy that is students.)

* The administration can be made up of people with similar problems (especially overconfidence, and aggressive self-interest).

* In some facets of university structure, it can be strictly a business, no vestigial ideals of academia, only branding. Which is a problem both for how they behave in those facets, and for students haplessly trusting the university when a sociopathic facet is involved.


Those are all real complications!

A big one is that students have far more to lose. “Solutions” are generally worse than the problems they address.

Grad-student / professor-advisor relationships are extremely personalized as well as career impacting. And not hot swappable.

Losing an advisor and getting any kind of rep can be disastrous. Degrees can flounder and die in an awkward aftermath.

I have no idea how faculty & peers should help, after the fact. In both examples above, people knew & didn’t approve. But stepping in would have been war - with the probability of everyone regretting everything.

A culture that emphasized ethical student treatment, with specific examples about research and going into business with students, might be preventative.

But on the going into business side, a lot of universities want to parasitically insert themselves into that too.

To me that is a conflict of interest with their educational mandate, but administrations think they must feed to grow & grow to feed.


This reminds me of someone who was athletic, played sports and had some strong opinions about them.

Basically, she liked sports where merit counted and disliked others.

For example, if you're playing a game with a score, or running against a clock, the sport tends to have a lot less BS compared to, say, gymnastics or diving competitions where you impress judges.

I think the trick in academia is optimizing for something similar.


In case anyone hasn’t read it, the Datacolada article demonstrating Ariely committed fraud is a great read and extremely convincing.

https://datacolada.org/98


FYI Data Colada has a GoFundMe to defend against a lawsuit filed by Francesca Gino.

https://www.gofundme.com/f/uhbka-support-data-coladas-legal-...

In early August 2023, Professor Gino filed a lawsuit for defamation against Harvard University, and against Leif, Joe, and Uri personally, claiming 25 million dollars in damages. Defending oneself in court is time-consuming and expensive regardless of the merits of the lawsuit – as First Amendment lawyer Ken White put it to Vox, “The process is the punishment.” Targets of scientific criticism can thus use the legal system to silence their critics.

At present, Leif, Joe, and Uri do not have pro bono representation. The lawyers they’ve spoken to currently estimate that their defense could cost anywhere between $50,000 and $600,000 (depending on how far the lawsuit progresses). Their employers have so far only agreed to pay part of the legal fees. Defending science requires defending legitimate scientific criticism against legal bullying.

Edit: I initially wrote that they met their GoFundMe goal of $350,000, which is true. However, I'm not sure why they set their goal to only $350k when they mention that legal costs could reach $600,000, a figure they have not yet raised.


Discussed at the time:

Evidence of fraud in an influential field experiment about dishonesty - https://news.ycombinator.com/item?id=28210642 - Aug 2021 (51 comments)

(Lots more related links at https://news.ycombinator.com/item?id=37719476)


Great article, nice job. In the response letter Ariely says that he got the data from the insurance company (with which he collaborated on this experiment) and didn't suspect a thing. What level of responsibility is expected from a researcher? What should the consequences be for the researcher in such cases?


> Ariely says that he got the data from the insurance company (with which he collaborated on this experiment) and didn't suspect a thing

The insurance company confirmed the data Ariely represents he got is not the data they sent. Ariely is a fraud.


I'll grant that the motive is more obscure on the insurer's side, but I wouldn't be so quick to take their word for it either.


> I wouldn't be so quick to take their word for it

Dan Ariely is a curious figure to give this sort of benefit of the doubt.


I'd say the same for a random insurance company.


Insurance adjusters are straight-laced to the point of being anal. They go to prison if they're not. I trust them here.


Great article. The "Author Feedback" section at the bottom is interesting. All 4 that are there read exactly like you would hope: dedicated researchers agreeing with the Data Colada analysis and expressing disappointment that they did not catch this error in time. One wonders where the truth is.


My favorite thing is that Gino is now suing DataColada [1], and basically is claiming that 'they were right in every other case and I supported them, but with me they are lying'.

1: https://datacolada.org/113


(Not an excuse in any way)

This data was so shoddily faked that I have a hard time believing someone did this with an intention to deceive. Uniform distribution with a hard cutoff at 50K??
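For anyone who wants to see how blatant that pattern is, here is a rough sketch of the check (synthetic numbers, not the actual study file): genuine mileage data should be right-skewed and trail off, while the suspect block was flat all the way up to 50,000 and then stopped dead.

    # Synthetic illustration only: right-skewed "real-ish" mileage vs. a
    # uniform block with a hard ceiling, each compared against a uniform fit.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    real_ish = rng.gamma(shape=2.0, scale=6000.0, size=10_000)  # plausible skewed mileage
    faked = rng.uniform(0, 50_000, size=10_000)                 # flat, hard cutoff at 50k

    for name, x in [("real-ish", real_ish), ("uniform-faked", faked)]:
        result = stats.kstest(x, "uniform", args=(0, x.max()))  # distance from uniform on [0, max]
        top_share = (x > 0.98 * x.max()).mean()                 # mass in the last 2% of the range
        print(f"{name}: KS distance vs uniform = {result.statistic:.3f}, top-2% share = {top_share:.3f}")

The faked column sits almost exactly on a uniform distribution (tiny KS distance, a full 2% of cases in the last 2% of the range), while anything remotely realistic does not.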


In science we only catch the sloppy fakers, we don't catch the 'good' fakers.

There was a similar case a while ago in spider biology: https://www.nature.com/articles/d41586-022-02156-2

Pruitt had several influential spider biology papers out and when others failed to replicate and dug into the Supplementary Data, they also found lazy patterns.

>When Laskowski dug into data sets that Pruitt had provided for the study, she was shocked to find stretches of data that seemed to have been duplicated, to represent findings for multiple spiders. This questionable data helped to bolster a long-unproven theory that repeated social interactions in a group of spiders cause individuals to behave predictably.

So yeah, who knows how much scientific fraud there actually is; I haven't seen a case of fraud where the data was convincingly faked, which means such cases are either hard to detect or hard to prove.
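A crude version of the check that catches that kind of duplication is just hashing runs of consecutive values and looking for repeats; here's a sketch with made-up numbers (nothing to do with the actual Pruitt files):

    # Illustrative only: flag any run of k consecutive values that appears
    # more than once in a column. Verbatim repeats this long are suspicious.
    import pandas as pd

    values = pd.Series([3.1, 2.7, 5.0, 4.2, 8.8, 3.1, 2.7, 5.0, 4.2, 1.9])  # made-up measurements
    k = 4
    windows = pd.Series([tuple(values.iloc[i:i + k]) for i in range(len(values) - k + 1)])
    counts = windows.value_counts()
    print(counts[counts > 1])   # any window of 4 values appearing twice is worth a closer look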


If you fake and get away with it you're already admitting to a type of laziness, so it expands until finally you are faking it so lazily that you get caught.

Faking data realistically is almost as hard as getting it honestly. I know, I “expanded the data pool” for my eighth grade science project.


Except the "it" in "getting it" is a significant or even bombastic and sexy finding. There's no reliable way to generate those, it's not just about hardness. The more correct and meticulous you are with your data, the more it will generally tell you that the bombastic claim is false. The more meticulous and effortful you are about faking, the likelier it will slip by.

Your suggestion only works in simulated science, like school projects where you are retracing the footsteps of past successful scientists to verify an already known result. There, putting in more work will reveal the effect more clearly, because your teacher already knew how the thing works in the first place. This is totally unlike real science, where we confront the frontier of the unknown.


That somehow loops around to becoming an interesting 8th grade science project.

Do an easy experiment once correctly, then again with falsified data. Present the results side-by-side. Ask people if they can tell you which is which. Then present a bit on what sort of statistics could catch your faked data set.


Yeah, I did the simplistic thing of deciding on a question and an answer that sounded good: I ran five or so tests, and then extrapolated additional sets based on that.

In my defense I would have run more tests had I started when I should have :)


Yeah that is fair, but it's mind-bogglingly lazy/brazen. How much more effort would it have taken to just take an average of a few RAND calls to get a normal distribution instead of a uniform one? Did they not know the central limit theorem??
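For anyone who hasn't seen it: in a spreadsheet that's literally =AVERAGE(RAND(), RAND(), ..., RAND()) dragged down a column. A quick sketch of the same idea in Python (illustrative only, obviously not the study data):

    # Averaging even a dozen uniform draws already looks roughly bell-shaped (CLT),
    # instead of flat with sharp edges like a single RAND().
    import numpy as np

    rng = np.random.default_rng(0)
    single = rng.random(100_000)                          # one RAND() per row: flat on [0, 1)
    averaged = rng.random((100_000, 12)).mean(axis=1)     # average of 12 RAND()s per row

    for name, x in [("single RAND", single), ("mean of 12 RANDs", averaged)]:
        middle = ((x > 0.25) & (x < 0.75)).mean()         # mass in the central half of the range
        print(f"{name}: std = {x.std():.3f}, share in middle half = {middle:.3f}")

A single RAND puts only half its mass in the middle half of the range; the averaged column puts essentially all of it there, which is what a naive eyeball test expects of real measurements. A careful sleuth could still catch it, but it wouldn't scream "uniform with a hard cutoff".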


> I “expanded the data pool” for my eighth grade science project.

My teachers warned against this and told us they would catch us if we tried.


Amazing that whoever faked this data was ever able to get to Harvard, let alone get tenure.

It's just so laughably faked it's not even funny.


> In statements, each disowned any responsibility. Gino, unaware that she was also being investigated by Data Colada, praised the team for its determination and skill: “The work they do takes talent and courage and vastly improves our research field.” Ariely, apparently taken aback, underscored that he had been the only author who handled the data. He then seemed to imply that the findings could have been falsified only by someone at the insurance company.

The insurance company then showed their data and it wasn’t falsified. That’s just amazing. They both falsified data, and when caught, pointed at each other.

Gino was fired but Ariely kept his job then sued Data Colada.

She had claimed it was misogyny and discrimination. She may be right with respect to how she was treated compared to Ariely.

Was it still worth cheating for both of them? Money-wise, absolutely! Add up all their book fees and speaking fees that they'll never have to pay back. They can already retire comfortably.

EDIT: Gino wasn't fired. She was placed on administrative leave. Revocation of tenure was only implied as a possibility.


I used to work at Two Sigma Investments, and back in 2012-2014 or so, after the publication of his "irrationality" books (before the ones about dishonesty), Dan Ariely gave a well-attended talk at the company headquarters (I wonder what his speaking fee was!).

I remember thinking even then that there was something off about his arguments, and I'm not surprised that he has since been exposed as a likely fraud. For example, throughout his talk, he kept making the point that when someone made a self-serving claim or argument, he would "hold on to his wallet," making an analogy to pickpockets. He then concluded his talk with a transparently self-serving argument that the importance of studying irrationality was growing over time because (as just one example, I suppose), the share of deaths attributed to preventable causes (self-inflicted, etc.) was increasing over the decades, making it sound like society is becoming more irrational. This seemed very weak to me, because that's exactly what you would expect if civilization is making progress over time... if science and technology keep eliminating the exogenous causes of death, over time we should be left with just the endogenous ones. Anyway, I thought of raising my hand to ask if I should reach for my own wallet, but was too young and nervous.


I, too, saw through Dan Ariely's dog and pony show. I just didn't want to bring it up because people would think I was just showing off my clearly superior intellect.


> Add up all their book fees and speaking fees that they'll never have to pay back. They can already retire comfortably.

Don't forget their faculty salaries. Duke is a private institution, so we don't have access to his salary, but at an absolute minimum Ariely makes 400K (and probably way more). You don't leave MIT for a chaired professorship at Duke unless it comes with a very, very hefty salary. 25 years of a salary like that will by itself provide you with a good retirement. You don't need to get the media attention of Ariely to benefit from fraud.


Not to mention the hefty consulting fees he’s been getting for years from places like Google. IMO those probably dwarf his faculty salary and money from books.


Salaries in academia are hardly a strong incentive. It's been a long time since a professor's salary would put you anywhere near the top of the middle class. Especially if one considers the time and risk that you spend at the lower rungs of the academic ladder. Typical professor salaries are in the mid 100ks, and you only get to that point after a PhD and several years as a postdoc working 60+ hours a week for maybe $45k.

If your main incentive is financial you would be much better off going into industry; a starting salary straight out of undergrad in a consulting or software development role is on the same level as a professor's salary.


The incentive is that a tenured professorship is a sinecure. At that point you can just produce bullshit and maybe cash in on speaking fees and publishing. Source: TFA. But that's loserthink.

The freedom to fuck around and (probably) never find out is considerable. And if you're actually inclined to apply yourself, that 100-150k a year plus your grad students basically means you're your own Y Combinator, year after year. Tenure is HUGE. It's worth way more than the equivalent salary in industry.


Glassdoor [1] seems to imply full professors at MIT make around 200K.

[1] https://www.glassdoor.com/Salary/MIT-Professor-Salaries-E288...


That's not really relevant. Ariely was an economics professor in the business school at MIT. He then got an offer as a chaired professor at Duke (certainly with a big raise). Economics professors at top schools have very high salaries relative to most fields, but chaired professors with big media profiles get way more than that.

If you want to see what I'm talking about, here's the data for the first full professor of economics at Michigan that popped into my head. His salary last year was $466K:

https://www.umsalary.info/index.php?FName=&LName=shapiro&Yea...

That's normal for the elite schools. There were rumors 10 years ago that Chicago made a 7-figure offer (denied by the department head at the time).

Edit: To be clear, this is what wealthy schools pay. Salaries fall quickly once you exit the top 15 or 20.


Francesca Gino made $1M/year from HBS alone:

https://finance.yahoo.com/news/harvard-professor-raking-over....


So those glassdoor numbers are completely wrong?

Because nothing there reaches 300K.

Is the disparity between disciplines that high?


> Is the disparity between disciplines that high?

At the wealthiest schools, yes. 200K is a high salary in many disciplines. Here's the data for a chaired professor of philosophy:

https://www.umsalary.info/index.php?FName=elizabeth&LName=an...

and history:

https://www.umsalary.info/index.php?FName=juan&LName=cole&Ye...

Still good salaries, but not at the same level.


> You don't leave MIT for a chaired professorship at Duke unless ...

Or maybe he was kicked out of MIT? ;)


That’s the rumor.


Context: before he left MIT, he was (temporarily) suspended for running an unauthorized experiment on people, giving them electric shocks...

https://www.jpost.com/international/dan-ariely-scrutinized-f...


Wasn’t Gino the one who filed the lawsuit?

https://www.gofundme.com/f/uhbka-support-data-coladas-legal-...

Leif Nelson, Joe Simmons, and Uri Simonsohn are professors who together publish the Data Colada blog. In June 2023, they published a series of blog posts (linked below) raising concerns about the integrity of the data in four papers co-authored by Harvard Business School (HBS) Professor Francesca Gino. They waited to publish these blog posts until after the HBS’s investigation concluded, with HBS placing Professor Gino on leave and requesting retractions for the four papers. In early August 2023, Professor Gino filed a lawsuit for defamation against Harvard University, and against Leif, Joe, and Uri personally, claiming 25 million dollars in damages.


I think it's misogynistic to think that women can do no wrong. Gino certainly can stuff her shit where she found it.


I don't know if her claims have any real merit but I suspect it's become part of the unofficial vindictive employee playbook to make a discrimination complaint anytime you're facing termination for cause.

Everyone who isn't a straight white male is a member of at least one protected class. The most we can do is play the whistleblower-retribution card, even if it's a farce.

It appears to be worth attempting-- it costs you nothing and shakes the employer down for a quick and favorable settlement to shut you up. I've seen it done by two ex-employees in the last three years, and there was a recent news article about a woman at some insurer who was submitting timecards for hours worked while caught red-handed being AWOL... she immediately cried discrimination too.


> She had claimed it was misogyny and discrimination. She may be right in respect to how she was treated compared to Ariely.

Ariely has been well-known outside academia since his TED talk and wildly successful book 15 years ago. Surely that offers a good deal of insulation for him, without needing to play the discrimination card too.


Given that he’s famous for presenting faked data, his previous accomplishments aren’t real and shouldn’t get him off the hook.


GP is not attempting to exonerate Ariely, but blunt the accusation of sexism on the part of everyone else (who treated him differently from Gino).


Ariely's book "Predictably Irrational" was an absolutely delightful, short, pop-science read. It so clearly and usefully condensed science into tidbits you could sprinkle into intellectual conversation or incorporate into your lifestyle. Turns out the science behind it might just be trash, but since I repeated the BS with the same unwarranted confidence, people never questioned me.


They are employed at different institutions; only Gino works at Harvard. I don't believe she has been fired yet, either.


You're right, she wasn't fired. It was only implied that it might happen. I misread:

> she would be placed on administrative leave, and that he was instituting perhaps unprecedented proceedings to revoke her tenure.

They are at different institutions. I meant that her claim at her own institution about misogyny seems like lashing out. She falsified data, the evidence is fairly convincing. But in a broader picture, in the media, she has gotten more scrutiny and flak than Ariely, that's what I meant.


> The insurance company then showed their data and it wasn’t falsified. That’s just amazing. They both falsified data, and when caught, pointed at each other.

Can you or anyone give more information on this? That data was definitely falsified at some point, so I don't know what you mean that it wasn't. Are you saying the insurance company gave the four researchers reasonable-looking data, and therefore, the four must have falsified it? Do you have a source?


> They both falsified data, and when caught, pointed at each other.

Why is that amazing?

It's just the default go to. These people don't reach these positions by being honest and admitting their mistakes.

They do it by playing these kinds of games.


Well most of all because their papers are focused on dishonesty and how to nudge people towards being honest.

“It takes one to recognize one…” or however that saying goes. Or another one is how people study their own affliction: a common occurrence in psychology from what I hear.


Another take: https://www.experimental-history.com/p/im-so-sorry-for-psych...

> So what was the scientific fallout of Stapel's demise? What theories had to be rewritten? What revisions did we have to make to our understanding of the human mind?

> Basically none, as far as I can tell. The universities where Stapel worked released a long report cataloging all of his misdeeds, and the part called “Impact of the fraud” (section 3.7 if you're following along at home) details all sorts of reputational harm: students, schools, co-authors, journals, and even psychology itself all suffer from their association with Stapel. It says nothing about the scientific impact—the theories that have to be rolled back, the models that have to be retired, the subfields that are at square one again. And looking over Stapel's retracted work, it's because there are no theories, models, or subfields that changed much at all. The 10,000+ citations of his work now point nowhere, and it makes no difference.

> As a young psychologist, this chills me to my bones. Apparently it is possible to reach the stratosphere of scientific achievement, to publish over and over again in “high impact” journals, to rack up tens of thousands of citations, and for none of it to matter. Every marker of success, the things that are supposed to tell you that you're on the right track, that you're making a real contribution to science—they might mean nothing at all. So, uh, what exactly am I doing?

It's kinda like finding out an athlete has been cheating. You probably want to throw away any of their inspirational talk, but it doesn't affect the rest of your life because it was just a sport to begin with. Somebody was gonna win, somebody was gonna lose, and it matters a whole lot to the players (and fans) but that's it. Except worthwhile science is supposed to actually matter whether it's true or false.


And that is why I don't consider psychology as a whole a science. It simply isn't. Imagine if something like this happened in physics, or mathematics. It would have huge impacts on all work that cited it. In psychology, not so much.

Does this mean psychology is useless? I don't think so, there have been many people helped by the application of psychology, but the sooner we take it out of the same category as, for example, mathematics or medicine, the sooner we can move on. I'd love to be convinced otherwise that we as humanity have a chance of understanding ourselves to the point where we can draw meaningful conclusions about human happiness or other feelings/states of mind. However, I think at this stage our chance of accomplishing this is similar to a medieval blacksmith doing a successful root canal treatment on one of his dental patients.

I'm definitely looking at the advancements in AI with hope that perhaps replicating our own thought processes in machines will enable us to study them more systematically and to understand them, but these are still early days.


It's popular here (and in other more "technically" inclined communities) to dunk on the "soft sciences", but the hard sciences have also had their share of fraud cases (the Schön revelations being some of the most prominent). The discovery of Schön's fraud also didn't lead to theories suddenly being rewritten. This is simply a misunderstanding of how science works and of the contribution/impact of a single scientist. In the end, sure, some minor directions will be affected and people will see whether conclusions still hold, but overall theories will only be marginally impacted.

Something only becomes well accepted theory if it can be reproduced again and again, fraud will come out eventually. That is the beauty of the scientific process.


Definitely agree that this is not just a psychology issue[^1].

I'm not quite as optimistic about the implications, however. The way academia operates is very trust-heavy. This makes sense: no lab can be expert in all fields of study or perform every experiment itself. If something doesn't work for you, the default assumption is that something in your system is incompatible with the previous work, or you're just doing something wrong. So sometimes it takes quite a concerted effort to identify bad science, because legitimate work can be nudged towards being supportive of, and compatible with, the bad science. You basically have no choice but to base your work on what is widely believed to be true if you want grant funding or papers published.

From a purely theoretical/knowledge standpoint, I think I agree - the truth will eventually be accepted. But in fields like biology, this can be at the cost of drugs not working, actually promising avenues not explored, wasted resources - tangible harms.

[^1]: E.g. the recent Alzheimer's controversy, Stanford's president scandal, 5-HTTLPR, etc. Ironically, Ronald Fisher argued that Gregor Mendel's work, the foundation of modern biology, was essentially fraudulent despite being correct.


> Ironically, Ronald Fisher argued that Gregor Mendel's work, the foundation of modern biology, was essentially fraudulent despite being correct.

There's nothing weird about that. You'll see exactly the same phenomenon by going through a low-level mathematics textbook and noticing how many of the problems have small integer solutions. Does that happen in real problems? No.

Does the fact that all the problems are fraudulent mean that the math they're meant to teach you is wrong? No. The math is right; that's why the problems were written up the way they were.


The soft "sciences" aren't science if they aren't reproducible. That's why the Schön fraud was discovered easily. Recently there was the episode of a claimed room-temperature superconductor; neither was theoretical.

Psychology is worse than useless. It is fraudulent. It is religion, except backed by non-reproducible "science". None of the theories matter because they have no real-world function except to replace religion, which would serve the same purpose anyway.

Your idea didn't help anyone. Did it hurt anyone? How will we know? Yet you can be trusted again. Your bridge collapsed? Who will ever trust you to build a bridge again?


Psychology is really the tip of the iceberg when it comes to research misconduct. We've noticed a lot of dodgy papers because of the broad public interest in the field and the overlap with the much more rigorous field of psychiatry, but I have no doubt that there are scores of Stapels and Arielys in most fields of research.

In physics, we've seen an immense amount of recent controversy over LK-99; while the material might have some interesting properties, it is almost certainly not the room-temperature ambient-pressure superconductor that Lee and Kim claim it to be. How many totally bogus physics papers might be lurking in the literature, ignored and mostly un-cited? We don't know, because most of those papers don't make sufficiently interesting or exciting claims to inspire a raft of replication attempts. For all our aspirational beliefs about the scientific method and the power of peer review, most papers are subject to little more than a superficial sniff test.

There are plausible reasons to believe that psychology might have a worse misconduct problem than other fields, but the idea that psychology is uniquely broken is, I fear, simply wishful thinking.


That was some dodgy science (and people not being skeptical enough), but at least nobody outright lied (afaik). And the hype only lasted a few weeks. The Schön case is worse IMHO and similarly points to bad incentives, although at least that one is 20 years old and there hasn't been a similarly large-scale fraud since (again afaik).

https://en.wikipedia.org/wiki/Sch%C3%B6n_scandal



It already is in a completely different category than mathematics; mathematics is a humanity while psychology is a science. Do the standards need to be improved in psychology? Almost certainly, and this rings true for most quantitative social sciences. But research should be evaluated on its own, not by field. There is still a lot of good, solid knowledge being produced through the scientific method in these fields. We just generally lack the unifying theories we know from e.g. physics, because the subject matter is so much more complicated, and the field is much younger. And in a lot of ways, the way academia works is not conducive to producing high-quality, impactful research in these fields.


Mathematics is not a humanity; it is a formal science, unless you are using some very idiosyncratic definition.


Mathematics isn't a humanity but it isn't a science either. It's purer than a science, demanding logical proof, not just evidence.


reminds me of a quote from Sipser's Theory of Computation textbook, towards the beginning. He says something like "in mathematics, not even evidence is enough."


Fair, I'd say it is better to be idiosyncratic and consistent than conforming and incongruent. "Formal science" does not make any sense as a subcategory of science if the definition of science basically omits anything that makes mathematics and philosophy what they are. The "formal sciences" do not use the scientific method, and are basically completely distinct from the natural and social sciences in the ways they generate knowledge. In my language, mathematics is generally linked to the humanities, but I'm not hard set on that.


Most mathematics is built on pure reasoning, not experiments. Calling it a science is a bit of a downgrade.


Most research is kind of irrelevant outside of research, which is why one can get away with dishonesty: no one places big financial bets on the findings being correct.


But the point is that the retracted research is irrelevant even inside research. That is kind of a blow.

It points to the discipline not being coherently built on shared theories but rather being a set of disjointed hypotheses.


> It points to the discipline not being coherently built on shared theories but rather being a set of disjointed hypotheses.

My understanding is that this is one of the hallmarks of the replication crisis in psychology - studies that claim high effect sizes or otherwise high impact results without any plausible mechanism of action to explain them.


A lot of research is irrelevant even to the broader research community (beyond the specific researcher and perhaps the associated teams) - nothing new, I fear.


But if that’s the kind of research that racks up tens of thousands of citations, does that mean all the researchers in the field are simply LARPing?


Doubt it. I cannot speak to that field, but references/citations can have their own dynamics and life. As the OP pointed out regarding the limited consequences of the papers being retracted, the citations could just be "expected" nods to certain things in papers.


Yeah citations can be as simple as "this concept was mentioned in this paper first".


Sure. That's why the banking system is the epitome of the scientific method.


Banks use other people's money... but, yes, actually at the leading places people are quite thorough.


Oof, that article is brilliant. Seldom do I see such an honest perspective, with penetrating insight into the essence of the matter.


In the case of Ariely, it seems that some of the institutions who tried to follow his recommendations found the recommendations ineffective. So in this case practical harm was already done in the form of wasted time and effort.


> The 10,000+ citations of his work now point nowhere, and it makes no difference.

One has to wonder about the practice of citation in modern academia, when not one in 10,000+ is consequential.


People are right to distrust the institutions


Nice to add a very vague statement here.


There's a lot of celebrity chasing going on even in the physical sciences. I see it most in theoretical physics where papers can be published at an astonishing clip. Experiments however take much longer and if the numbers don't line up, they don't get published. In the rare cases that they do, the penalties are severe. But, when a theory or family of theories gets wiped out by a measurement, there's not the same blowback.

I am also concerned by the willingness of so many physicists to cozy up to the oligarchs.


In theoretical physics it's understood that the work is speculative, not empirical - reality always wins, after all. There's no ill will towards people who propose wrong theories of physics, because they are trying to guide where experimental physics should look next. Dan Ariely and Francesca Gino were faking empirical evidence, the thing that, more than anything else in science, determines what is actually true about the world.

  > Experiments however take much longer and if the numbers don't line up, they don't get published.
If by "they" you mean experimental physics results which don't line up with popular theories, you must be talking about another planet than Earth. Physicists love nothing more than a surprising result.


An oligarch is just a label for a rich person you don’t like.

If you have an expensive experiment and need a lot of cash, it makes sense to go with people who have extra cash and an interest in funding science.


People have been pointing out these frauds for a while now.

Aug 2021: https://www.dailymail.co.uk/news/article-9914493/Landmark-st...

July 2023: https://www.npr.org/2023/07/27/1190568472/dan-ariely-frances...

etc etc


As with Gladwell, the lines between research, journalism, and storytelling are becoming increasingly blurred. Dan Ariely's backstory of being a teenage burn victim somehow factors into his research as added credibility. Ideally, the anecdotal should be kept separate from the actual data, but dubious research is given the benefit of the doubt because it confirms what we want to believe through storytelling, which confirms or revivifies a preexisting experience or bias.


This quote from a statement to NPR by The Hartford (the insurance company that provided Ariely with the car odometer data) made my jaw drop.

> The published study data includes two different fonts. All data in Calibri font can be tied to our data while all data of Cambria font appears to have been synthesized or fabricated.

https://www.npr.org/2023/07/27/1190568472/dan-ariely-frances...
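For anyone wondering how you would even check that: the font of every cell is stored right in the .xlsx file, so the forensic step is nearly trivial. A sketch with openpyxl (the filename and column here are placeholders, not the actual workbook's layout):

    # Sketch: print the font used by each cell in one column of a workbook.
    # "study_data.xlsx" and column B are hypothetical, illustrative only.
    from openpyxl import load_workbook

    wb = load_workbook("study_data.xlsx")
    ws = wb.active
    for (cell,) in ws.iter_rows(min_col=2, max_col=2, min_row=2):
        print(cell.coordinate, cell.value, cell.font.name)   # e.g. "Calibri" vs "Cambria"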


If you are into this kind of thing (science going wrong), I recommend the book Bad Science. Also, Science Fictions, by Stuart Ritchie, who also now has a great podcast called The Studies Show.


Related. Others?

Crowdfunding a defense for scientific research - https://news.ycombinator.com/item?id=37393502 - Sept 2023 (47 comments)

Is it defamation to point out scientific research fraud? - https://news.ycombinator.com/item?id=37152030 - Aug 2023 (13 comments)

Harvard professor Francesca Gino was accused of faking data - https://news.ycombinator.com/item?id=36968670 - Aug 2023 (146 comments)

Fabricated data in research about honesty - https://news.ycombinator.com/item?id=36907829 - July 2023 (46 comments)

Fraudulent data raise questions about superstar honesty researcher (2021) - https://news.ycombinator.com/item?id=36726485 - July 2023 (33 comments)

UCLA professor refuses to cover for Dan Ariely in issue of data provenance - https://news.ycombinator.com/item?id=36684242 - July 2023 (131 comments)

Harvard ethics professor allegedly fabricated multiple studies - https://news.ycombinator.com/item?id=36665247 - July 2023 (215 comments)

Harvard dishonesty expert accused of dishonesty - https://news.ycombinator.com/item?id=36424090 - June 2023 (201 comments)

Data Falsificada (Part 1): “Clusterfake” – Data Colada - https://news.ycombinator.com/item?id=36374255 - June 2023 (7 comments)

Noted study in psychology fails to replicate, crumbles with evidence of fraud - https://news.ycombinator.com/item?id=28264097 - Aug 2021 (102 comments)

A Big Study About Honesty Turns Out to Be Based on Fake Data - https://news.ycombinator.com/item?id=28257860 - Aug 2021 (90 comments)

Evidence of fraud in an influential field experiment about dishonesty - https://news.ycombinator.com/item?id=28210642 - Aug 2021 (51 comments)


The nice thing with science is that if you cherry-pick your scientists, you will always find some who "prove you right" ;)


It is a pity that "social science" has the word "science" in it. It is like calling homeopathy a medicine.

Neither science nor medicine should be judged based on these examples.


I think you're naive if you imagine that medicine is innocent of these problems. Medical companies have huge incentives - literally worth billions - to make their results "come in right".


The number of medical researchers who sell their hot prospect to a corporation that is then unable to replicate the initial exciting results is clearly higher than the statistics of small vs. large sample sizes would indicate. This suggests that this problem exists anywhere the incentives are in favor of it.


>It is like calling homeopathy a medicine.

It is medicine, but don't take my word for it:

>allopathic medicine: a system of medical practice that emphasizes diagnosing and treating disease and the use of conventional, evidence-based therapeutic measures (such as drugs or surgery) [1]

>osteopathic medicine: a system of medical practice that emphasizes a holistic and comprehensive approach to patient care and utilizes the manipulation of musculoskeletal tissues along with other therapeutic measures (such as the use of drugs or surgery) to prevent and treat disease [2]

>homeopathic medicine: a system of alternative medicine that treats a disease especially by the administration of minute doses of a remedy that would in larger amounts produce symptoms in healthy persons similar to those of the disease [3]

>medicine: the science and art dealing with the maintenance of health and the prevention, alleviation, or cure of disease [4]

Homeopathy is still a form of medicine, though it is not allopathic medicine. That said, the lines are quite blurred. I know that at least one type of leukemia is treated with a cocktail of arsenic and vitamin A. That treatment is given in a standard hospital setting. By the definitions above, such a treatment is actually homeopathic medicine, not allopathic. Yet, the treatment does indeed work.

[1] https://www.merriam-webster.com/dictionary/allopathic%20medi...

[2] https://www.merriam-webster.com/dictionary/osteopathic%20med...

[3] https://www.merriam-webster.com/dictionary/homeopathic%20med...

[4] https://www.merriam-webster.com/dictionary/medicine


It's always great to see a study made by real experts on the topic.


As they say, "research is me-search".


An exception to the general rule: “if a newspaper headline ends in a question mark the answer is no”

Here the answer is yes. Dan Ariely is a complete fraud and so is Francesca Gino.

There are so many credible accusations of fraud against Ariely that about four of them get treated together very quickly at the end of the article, else the article would be several times longer. If you own his books, throw them away or shelve them with your fiction.

Also note that there are serious behavioral economics researchers who do hard work that isn’t remotely like this “nudging” or “priming” BS.


Nudging is real, but I think it can only really be detected by large-scale web-service A/B tests, not at university scale.
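The sample-size arithmetic behind that claim: nudge-sized effects are tiny, so detecting them takes samples far beyond a typical lab study. A rough power calculation with statsmodels (the effect sizes here are illustrative, not from the article):

    # How many users per arm to detect a "nudge" that lifts a 10% conversion
    # rate to 10.5%, at alpha = 0.05 and 80% power? (Illustrative numbers.)
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    effect = proportion_effectsize(0.105, 0.10)   # Cohen's h for the two rates
    n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8)
    print(round(n_per_arm))                       # roughly 29,000 users per arm

Tens of thousands of users per arm is routine for a web A/B test and completely out of reach for a lab running a few hundred undergrads.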


[deleted]


> and throwing accusations at every chance while appealing to the mob's emotions.

Then argue against the accusations rather than lobbing ad hominems.


There was also a piece in the NYT this morning [0].

Seems like someone was rushing for a scoop (what scoop? This is old news.) or someone’s PR firm is getting results.

[0] https://www.nytimes.com/2023/09/30/business/the-harvard-prof...


The TV show "The Irrational" https://en.m.wikipedia.org/wiki/The_Irrational just premiered in the US.

This show is loosely based on Ariely's work.

As PR for the show ramped up, so did dig-ups like these.


The other reply to your comment mentioned the TV show, but also, Gino's lawsuit was quite recent: it was filed in August.


This piece is very readable, but it's a tiny bit funny how loaded the storytelling is, given the subject matter.

The storyteller wants the reader to think certain things about characters and events, and facts and impressions are cherry-picked and deployed just so, to support that.

I guess persuasion like that might be accepted as part of the storyteller's craft in this kind of journalism (I don't know, especially since the subject matter seems delicate), but I believe that scientific research needs higher standards.


I think it was done somewhat deliberately here. The author first portrays Ariely in one way, and then gradually starts revealing the doubtful and conflicted elements. The other characters are all portrayed more or less neutrally.


Day after day I grow more convinced psychology research is a sham.


"Eschew flamebait. Avoid generic tangents."

https://news.ycombinator.com/newsguidelines.html


The only social sciences research that isn't a sham is the kind that can be used to consistently predict non-obvious things or consistently modify otherwise non-modifiable things.

I don't know of any such research.


They should all be called social studies and not “social sciences”. As you’ve pointed out, they’re not real sciences because the requirement for being a science is having a repeatable theoretical model.

To be fair, they’re not completely useless either. They just need to graduate from being an astrology into an astronomy, but I’m not sure how that’s possible without a near perfect world simulation.


The joke has always been that psychology is the study of small groups of psychology students and their afflictions.


That or Mechanical Turk workers...


As a patient I see the field at best as a set of best practices for theoretical scenarios. But I imagine getting valuable insights out of (troubled and ashamed) people's minds to help them is really hard. Surprisingly, pharmacology in psychiatry works well and reliably for a lot of conditions.


You might like the psych writing over at https://www.astralcodexten.com/archive?sort=top or gwern.net


Despite his positive reputation here, Scott Alexander has similar issues - he does not rely only on gold-standard research.


Including less-than-ideal data in a broader analysis is completely fine as long as you're aware of the data quality issues and adjust your weights accordingly. IMO Alexander does a pretty good job of this - I recall many occasions where his writing references a study while also caveating that reference because of a limited sample size or a slightly implausible effect size or some other qualm.

There's a world of difference between trying to make sense of a topic when none of the available data is all that great, and outright fabricating data to make a name for yourself.


[flagged]


“Science and industry only have our best interests at heart”

Coming from both science and industry, that is absolutely the last thing on their minds. Generally it’s some combination of the bottom line, pursuing whatever is most interesting, and not getting sued.


You're ignoring the universal religious faith in those institutions that has been demonstrated over the last few years. People who question them are conspiratorial dimwits, who are dangerous to society and should be punished.


whooosh


I was 50:50 on this being sincere. It’s hacker news after all.


It really is comical to watch


Science is imperfect, but usually pretty good at detecting bad science. And, as far as I can tell, this is exactly what happened here, so... yeah, this case is a good argument for trusting science.


"trust the science" is usually wielded by people who know nothing about science.

Trusting what a few "experts" say in public now, when it hasn't had time to be tested in the rigorous manner that outs bad science (like Data Colada) is nothing but forced religion.


Good point.


[flagged]


Right except this (alleged) fraud has been going on for a little over a decade. So error correction is always nice (and better late than never) but it also means that any particular finding can be very unreliable. If I'm deciding on a drug to take (or a medical procedure) I need to decide pretty soon; if I'm getting bad information it doesn't help me if in 10 years the field says "we're sorry, our bad".


The problem you mention is very real but do you know of any criteria that actually work better than "trust the science"?


> […] better than "trust the science"?

Probably change that to a gradient:

“Trust science more, when it’s been tested by multiple groups, it’s been fruitfully built on, etc.”

I think it is fair not to overly trust any single paper.

Also worth considering the different qualities of evidence in different papers.

Mathematics has the highest form of evidence, being easier for readers to vet and reproduce themselves.

Human behavior has the lowest, given how impossible it is to isolate all conditions in a human head, and the lack of any reliable overall model of a human mind.


Fair enough.


The point is to not treat it like a religion. Scientific conclusions are not blessed by god as divine and absolute truth. As much as we wish the world was black and white, it's much more shades of gray, and we should each do our part to make sure that religious zealots do not use "the science" as a political cudgel.


Maybe: only trust science that is statistically significant and can be replicated?


I'd say that it's not science until it is replicated, but you are right, this is not always what people mean by "science".


At this point trusting the institutions is being naive and dumb


What do you trust instead? Like, how do you form a conception of reality based on events you don't directly observe?


This question is the best illustration of why it makes me so angry that the institutions have been so untrustworthy. Without the institutions, you end up in the land of can't trust anyone/anything and choose-your-own-reality. It's not going to end well if we can't make serious reform and right the ship. The worst part is there doesn't seem to be any appetite for doing that, just denial and self-interest.


I think people are overthinking this.

In Asia, as a tourist you kinda just assume someone is going to take advantage of you, or try to. You can't be naive about reality; once you accept that, you can figure out what you want to do about it.

Eventually new groups will form as the old ones die, probably violently, as they fight to protect themselves.

I think 100% the institutional corruption has destroyed public trust in them, and the public is right to believe they are untrustworthy.

The recent Unity debacle shows how easy it is to lose trust, and how difficult it is to get it back.


It's the zeitgeist: attack all manner of institutions with absolutely no plan or appetite to rebuild them.


Why should institutions be trusted in the first place? Isn't it up to the institution to maintain their reputation/quality, perpetually? Since when did the public have to bear that weight? For what reason?


Do institutions even try to be trustworthy these days? Everybody is just pushing a narrative.


Isn't the appetite to tear down the castles and towers arguably due to years of frustration following failures to change/reform them? Average people can definitely be annoyed by institutions like the courts or Congress or the WHO, but they have very little agency to effect any change there, much less "rebuild" anything. Arguably it takes a certain idealistic naivety to hang hopes on things like "change it from the inside!" and "something something grassroots". If you're not perfectly satisfied with the status quo, then tearing things down or sowing chaos to change something somehow does start to look rational.


do you think the distrust of institutions is warranted?


Trust no one and nothing. Assume every narrative you're being given of the outside world has been corrupted and distorted to manipulate you towards unknown ends, and that every authority is a liar and a fraud. Stop engaging with the media, pop culture, and modern communications technology as much as possible.

Live your life and accept that there's nothing you can do about any of it. You're being lied to by everyone and everything and you'll never be capable of knowing the truth beyond what your senses immediately tell you (and even they can be fooled) - and it doesn't really matter. We live in a "post truth" era so just pick the lie that suits you the most.


Nobody?

Nowadays the only way to get some legit information is to cross-check multiple conflicting sources. And then apply some gut feeling based on historical track record. Which also needs to be formed by cross-checking multiple sources.


If you “trust nobody” and you aren’t immediately and cripplingly paralyzed by the task of rigorously verifying every piece of information you rely on to live, then you’re not being serious.

How do you trust that your car, electrical wiring, water pipes, food, medicine, and consumer products are safe? Foreign reporting is true? Politicians aren't secretly selling votes?

You have to trust other people. If you "trust nobody" you're just obscuring who you actually trust, and that makes it harder to think about whether you should actually trust them or not.


How do I trust my car? I see plenty of cars of the same model being driven around. I assume my car will act the same.

Electrical wiring and water pipes? When my house was being built, I visited the site almost daily, took tons of pictures and read a ton how the things should be done properly.

Medicine? Read multiple sources upfront and possibly visit multiple specialists. My relatives and I were burned multiple times by not doing this and trusting the first specialist we bumped into.

Regarding foreign reports and politicians, as I said, triage multiple reports. And I'm pretty sure the vast majority of politicians have biases, either paid-for or ideological.


I don't need the level of psychological assurance you do in life. I use judgement. I know my car works because my mechanic seems like he has his shit together and knows what he is doing. I don't get that impression from virtually anyone in the social sciences.


And you make that assessment based on what? I would wager that there is significantly more fraud amongst car mechanics than social science researchers. Let's not even talk about the massive fraud that car companies have engaged in (VW's diesel scandal, Tesla's Autopilot...), but somehow you find them more trustworthy than social science professors?


That's why there is a lot of word-of-mouth in some industries. Or at least review websites. Nobody says „trust the mechanics“. More like there are a few good ones in a sea of average and flat-out fraud.

Meanwhile the „trust the X field“ crowd lacks this component. Any field will have frauds, and word-of-mouth does not replicate easily.


Sure, “trust nobody” is a literal impossibility even if you’re way off the grid like Ted K. But I don’t see many people pushing that sort of extreme ideal as a response to the replication crisis or the fraud coming out of highest levels of soft sciences.

You just end up having to treat sci-news the same way most of us likely already treat regular news media: with heavy suspicion by default for any unfamiliar topic. Reverse Gell-Mann Amnesia, I suppose.

It’s not particularly good for one’s mental well-being, but it’s a rare person who can go back to being blissfully ignorant of something this widespread.

Mini-rant: I just now realized that the Andrew Tates of the world have made it nigh impossible to casually drop a pop-culture reference to the Matrix in these sorts of convos.


Speaking of naive and dumb…


Take your anti Sam Harris elsewhere conspiracy theorist


You could also use your common sense when these results were mentioned and realise anyone believing them was intellectually challenged


> Some behavioral economist is going to win the Nobel Prize—what do I have to do to be in contention?

Just a reminder that there is no Nobel Prize in Economics.


"Kahneman and his partner, Amos Tversky, had pioneered the field of “judgment and decision-making,” which revealed the rational-actor model of neoclassical economics to be a convenient fiction. "

There was nothing to "reveal". Neoclassical economics and the mental model of "rational actors" are no more science than phlogiston or the alchemical conversion of lead to gold. They're worse than those, since they're an ideological construct used to buttress the social order.

Which is why we should understand that whole branches of economics are strange pursuits within the realms of fantasy, detached from reality; and when someone introduces a shred of reality back in, this is hailed as some great achievement.

Reminds me of this segment in "Yes Minister":

https://www.youtube.com/watch?v=KgUemV4brDU



