Harvard dishonesty expert accused of dishonesty (ft.com)
292 points by hansmeierbaum on June 21, 2023 | 201 comments




The FT article says: 'A group of academics who compile the Data Colada blog about the evidence behind behavioural science has started publishing a series of posts in which they say they will detail “evidence of fraud in four academic papers” co-authored by Gino'.

The first two of these blog posts are:

https://datacolada.org/109

https://datacolada.org/110


From the second link:

> students approached this “Year in School” question in a number of different ways. For example, a junior might have written “junior”, or “2016” or “class of 2016” or “3” (to signify that they are in their third year). All of these responses are reasonable.

> A less reasonable response is “Harvard” [...] Nevertheless, the data file indicates that 20 students did so. Moreover, and adding to the peculiarity, those students’ responses are all within 35 rows (450 through 484) of each other in the posted dataset

In addition, most of these 20 very suspicious rows strongly confirmed the authors' hypothesis.

Likewise the first link shows that, in a spreadsheet containing outcomes sorted by treatment group, someone had manually moved rows from the span of rows containing one kind of treatment to a span containing outcomes from a different treatment. These provably manually reordered rows also contained most of the strong evidence for the predicted effect...
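
The detection idea can be sketched in a few lines. This is a toy illustration, not the Data Colada authors' actual tooling: assuming the data were originally entered with a running ID and sorted by treatment group, a row that was later dragged into a different group's block shows up as an ID that breaks the ascending sequence within that block.

```python
def flag_out_of_sequence(rows):
    """rows: list of (row_id, group) tuples, in file order.
    Returns indices of rows whose ID breaks the ascending
    sequence within their own group's block."""
    last_seen = {}   # group -> highest row_id seen so far in that group
    suspicious = []
    for idx, (row_id, group) in enumerate(rows):
        if row_id < last_seen.get(group, row_id):
            suspicious.append(idx)
        last_seen[group] = max(last_seen.get(group, row_id), row_id)
    return suspicious

# Toy data: ID 3 appears after ID 7 inside the "control" block,
# consistent with a row having been manually moved there.
data = [(1, "control"), (2, "control"), (7, "control"), (3, "control"),
        (4, "treatment"), (5, "treatment"), (6, "treatment")]
print(flag_out_of_sequence(data))  # -> [3]
```

(The real analysis relied on richer evidence, such as Excel's internal calcChain metadata, which records the original row order; the sketch above only captures the basic "out of sequence" intuition.)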


It all suggests that, in addition to falsifying data, they had become so blasé about falsifying data that they weren't particularly careful about it. Which suggests that they may have been falsifying data for a long time...


Alternatively, they had so little practice that they weren't good at it.


Perhaps, but I think a first-timer would be able to avoid those mistakes, especially if they were thinking, "oh boy, this is a really bad and risky thing I am doing here, I better double and triple check". On the other hand, if it's a normal (for you) thing you're doing, which you have to do on most of your papers, then on some days you're going to rush it.


Normalized deviance, sure


From the first blog post linked above:

> We discovered evidence of fraud in papers spanning over a decade

> We believe that many more Gino-authored papers contain fake data. Perhaps dozens.


YOU SWITCHED THE SAMPLES.

Ladies and gentlemen, this is my friend, Doctor Kimble…


Not just most. All but one strongly supported the hypothesis.


The first blog post would have been a much better submission than the Financial Times article.


from 110:

> It is worth reiterating that to the best of our knowledge, none of Gino’s co-authors carried out or assisted with the data collection for the studies in this series.

I'm confused; I thought 109 described a paper in which Ariely was the sole person to handle the data. How is this not contradictory?


The article in question has five co-authors. It reports four studies. Data Colada post 109 is about Study 1, which Gino handled.

An earlier Data Colada post -- https://datacolada.org/98 -- is about Study 3 in the same article. It seems that Ariely is the only co-author who might have handled the data for that study.


The Ariely paper contained several studies by a variety of authors. The first post of this series details issues with one of the studies in that paper, which Gino did (but Ariely wasn't involved in). So 'studies' refers to an individual experiment, not the paper as a whole, in this context.


Reading this makes me want some CSI like TV show based on forensics using statistics and excel files!


There are several of them on YouTube, the most popular being related to sports statistics (e.g. SB Nation).


There is this pattern where things are always the opposite of their message:

- Dishonesty expert is dishonest

- Politicians who constantly talk about freedom want to restrict anything that doesn't fit their world view

- Militant anti-gay preachers hire call boys

- Diversity teams are the least diverse


We like these fall-from-grace stories but there's a subpattern to this that can't be articulated so easily. Literal "corruption" comes close.

See also: every freedom fighter that launches a coup to overthrow a dictator, then becomes a dictator themselves within a decade. Or cops that get caught stealing child porn from the evidence locker, despite not being pedophiles by any definition.

Not all these types start this way, or are inherently hypocrites. Repeat exposure to something tends to rub off. You can't rail against the evils of prostitution as the basis of your career without being forced to spend cycles processing what about them is so awful. Thinking about round tits on display, the mechanics of sex, etc. The unthinkable is literally on your mind 24/7, which effectively normalizes the behavior and makes it less outrageous over time. It's a weird cycle of grooming yourself to accept what you purport to hate.

I'm an anti-fraud professional. I'm surrounded by so much of it (seemingly without consequence) that I'm frequently tempted to break bad myself. I never will, but it's a "conscious" decision (as an extremely stubborn Slav) and not because I'm a saint. Temptation just comes with the territory. It wears you down until you fail and make headlines in the most humiliating way.


There is this amazing political theory book called The Dictator's Handbook [1] by Bueno de Mesquita and Smith, which is itself a popularized version of their selectorate theory (I am not sure if that is the correct translation) from Mesquita's book The Logic of Political Survival. I highly recommend it; for me it is one of the most important books I've ever read. It explains exactly what you wrote about.

CGP Grey made a great recap in one of his videos [2], called Rules for Rulers.

[1] https://en.wikipedia.org/wiki/The_Dictator%27s_Handbook [2] https://youtube.com/watch?v=rStL7niR7gs


That is fascinating; thanks for the recommendations!


I cannot prove it with ready evidence, but I suspect there tend to be more high-profile corruption cases during bad economic times, because frustrated people have a psycho-social need to vent their displeasure on perceived causal agents.

TL;DR: If you're a public person, watch your actions during bad times.


> Or cops that get caught stealing child porn from the evidence locker, despite not being pedophiles by any definition.

Has this happened? He thought it was normal porn?


Not quite. Before commercial forensics tools started pixelating and obfuscating images so investigators didn't have to witness CSA constantly, they'd get desensitized to this stuff and occasionally end up spiraling. We're seeing it again with Facebook moderators and AI trainers.

I haven't heard any new cases of exfiltration in a very long time (at worst, just vice cops soliciting prostitutes) so I like to think the technical/procedural changes are working as intended.


Yeah, well, I imagine watching CSA to any great extent messes you up.

I read an article about some poor cop watching and trying to identify victims for all CSA cases in the country where I live. I wonder if they give such a task to some "psychopath"; at least I hope so.


> - Diversity teams are the least diverse

IME DEI is usually a function rolled into HR (although they often contract with diversity training firms), rather than a separate team, except at large companies. Where there are separate DEI teams, they tend to be the most diverse, at least among the white-collar workforce - the requirements for these jobs tend to be quite nonspecific, because they don't really need to do much besides write stuff and give presentations related to diversity. These tasks can, at a pinch, be mostly handed off to interns and ChatGPT, or else involve cribbing from preexisting presentation templates. The actual engineering, PM, etc., teams tend to be less diverse - graduates of specific university programs tend to be less diverse, in the non-euphemistic sense, than university graduates as a whole.


They may be more diverse in terms of race and sexuality, but have very little diversity of opinion and worldview.


Little diversity of opinion and worldview how? (Should DEI teams be hiring people who are against diversity, equity, and inclusion?)


They would benefit from being open to discussion, which they are generally not. There are an absolute plethora of examples. See Peter Boghossian's street interviews/surveys for example.


> Should DEI teams be hiring people who are against [...]

Yes, otherwise you're saying hiring is just a political test.


No it's not, lol. Logistically speaking, it makes zero sense to hire an anti-diversity advocate for a DEI role. Companies only have DEI teams because they want to promote diversity, equity, and inclusion; they wouldn't hire someone who's actively going to detract from that effort.

It'd be like hiring a reckless loudmouth who loves to make people look bad as your head of PR. Literally the opposite of what you need them to accomplish.

(I also find it suspect that equity or inclusion are "political," but that's another matter.)


> I also find it suspect that equity or inclusion are "political,"

The way you say "suspect" seems like an attempt to sway opinion, which seems like a tacit admission that the terms are political.

For instance, inclusion of who? What groups do you divide the world's people into such that you need one of every group for proper representation?

> Logistically speaking, it makes zero sense to hire an anti-diversity advocate for a DEI role.

I think most people can do jobs they don't agree with. An engineer can design a bridge that they didn't vote for, an administrator can follow rules they wouldn't have chosen, etc.

> Companies only have DEI teams because they want to promote diversity, equity, and inclusion;

How should you do that though? Is any diversity of thought allowed? For instance, is anyone allowed to think that affirmative action is ultimately harmful and focus on the diversity and inclusion over "equity", and/or argue for what they feel are more helpful methods?

Is DEI defined as a specific set of practices or a general goal of bettering society?


Generally, the people in support of DEI are fond of the expression "the personal is political" or even "everything is political", so why wouldn't DEI be political? Given that it involves the organization of society and is controversial, DEI seems like the prototypical example of something political. It would be clearer for everyone if DEI openly accepted their role as political commissars.


I have literally never once heard language like that from DEI professionals; in every context I've encountered them, they've basically been an emissary of HR and had a "be nice but don't rock the boat" mentality, for better or for worse.

And I'm speaking as someone who falls into several ""diverse"" hiring categories. As much as I appreciate the philosophy of DEI work, I personally find most of it pretty toothless. There have even been multiple instances of myself and other rank-and-file coworkers trying to have discussions about bigotry at large (not even in the workplace itself), only to be shut down by those very same DEI coordinators under the guise of "let's use kind words instead of unkind words," because it's not good HR-speak to call a spade a spade.

Like, these are people employed by huge corporate entities, not radical freedom fighters. A lot of it is just image work to make sure your company doesn't do or say shit that makes you look bad.


> There have even been multiple instances of myself and other rank-and-file coworkers trying to have discussions about bigotry at large (not even in the workplace itself), only to be shut down by those very same DEI coordinators under the guise of "let's use kind words instead of unkind words

How do you have these conversations such that you're using unkind words? Are you racially stereotyping people as bigots?

> I'm speaking as someone who falls into several ""diverse"" hiring categories.

Yeah, that makes sense. If your intersectionality was switched, those conversations probably would have gotten people fired.

> I have literally never once heard language like that from DEI professionals

Like the "whiteness" thing from the other post. Rules for thee and not for me.

> not good HR-speak to call a spade a spade.

"Although the term has its origins in Greek literature, the subsequent negative connotations with the word 'spade' means that the phrase should be used with caution or not at all." -- Elimination of Harmful Language Initiative, Stanford

You might not be held to the same standard as other employees, if you can freely speak in this fashion...


I had someone from our diversity team say, in a meeting of 100+, that the problem that the team are trying to combat is "whiteness". This coming from a white woman who quite openly said that she'd only recently started identifying as diverse because one of her parents spoke a non-English language.


Oh boy... sounds like a piece of work :P

I know the type you're referring to—people punchdrunk on their own performative "activism" who love to make everything about themselves—and they're pretty cringey, but at least in my experience they're far from the norm on corporate DEI teams. Honestly super ballsy of her to pull the Diverse White Lady move like that... I'd be shocked if no one else on the team gave her a talking-to, unless (1) she's the boss or (2) everyone else on the diversity team is white.

(Which does happen, and isn't great, but also is pretty funny in juxtaposition with the claims you'll hear about DEI teams being radical Marxist plots or whatever. Half the time these teams are just a bunch of HR managers named Linda who get the letters in LGBT mixed up.)


It's a make work job for providing political favors, e.g. politician's niece is given an easy job in exchange for future or past favors.


"Where there are separate DEI teams, they tend to be the most diverse,"

That's not my observation at all.


I've always wondered if ethics classes impart any ethics at all. If you're not an ethical person, why would you become one after taking such a class? And if you're already an ethical person, taking a class in it is pointless.


There is a distinction between understanding and practicing ethics. It's like taking a course on leadership vs being a good leader - some people might have a legitimate desire to be a good leader, but not know the best techniques; taking a course allows them to understand how to be a good leader so that they might become one. As another example, an atheist could study theology not to better practice religion but merely to better understand it.

Someone can likewise have every intention of being ethical, but just be bad at it. Conversely, someone could know ethics well and simply choose to disregard them and do unethical things anyways. An ethicist isn't necessarily someone who practices ethics particularly well, it's someone who understands the concepts of ethics well.


> Someone can likewise have every intention of being ethical, but just be bad at it.

Is there really any empirical evidence that such people exist and could be helped by an ethics course? I think laymen [to ethics] with a desire to be ethical can generally go about being ethical with little more than the Golden Rule. And if somebody hasn't figured out the Golden Rule yet by the time of their mandatory ethics course in college, there's probably something wrong with them (narcissism? psychopathy? some compulsion that makes them listen to intrusive thoughts?) that will make the ethics lectures basically pointless.

The most effective time to teach ethics is probably Kindergarten, plus or minus a few years. I just don't believe in the existence of adults who want to be ethical but don't know how because a college professor hasn't explained it to them.


> Is there really any empirical evidence that such people exist and could be helped by an ethics course?

Yes. Many, many people (including me) don't speak up when we see someone else doing something wrong. These ethics courses not only drill home the point of the importance of speaking up, but helpfully give many mechanisms in which one can speak up anonymously, or otherwise in a protected manner.

A hypothetical of a situation the GP may have had in mind would be a person who is thinking of fudging or taking a shortcut, but doesn't want anything bad to happen. Ethics courses can show how minor deviations in ethical practice can have major impacts (jokingly - https://www.youtube.com/watch?v=1dWjKkF0Zi4 ). These courses can also show that actively stopping work for an ethical reason will not be punished or looked down on in any way. Because of this the person will feel confident in checking their initial impulse to fudge and instead do what's right.

Showing people how they can be ethical is just as important as showing them what ethics are. We don't need to be moral heroes in order to take more ethical action.


If you haven't learned how to speak up before you get to a college ethics class, I honestly question whether you ever will. Normally this sort of principle is drilled into people at a very young age. If it doesn't stick, I don't think a professor telling you to read Kant or Singer is going to do any good.

And you don't, or shouldn't, need an ethics course to tell you that minor deviations from the plan can have unintended consequences that might have ethical implications. Take taxi drivers, for instance: a minor shortcut over a sidewalk could have lethal consequences. Do we expect taxi drivers to know this already, without having taken an ethics course in college? Of course we do.


Honestly I was thinking of the kind of 1 hour "course" that college students and employees get mandated once a year. Those are chock full of useful information and tips. Probably a big change from semester long for-credit college "courses".

Many people learn early in life that to speak up is to get shut down. This can be combated later in life with good examples and policies.


What you are describing is morality, not ethics. You would probably benefit from an ethics course.


M8 I took an ethics course. Thank goodness the professor taught me insightful things like "don't write software that hurts people", I never would have thought of that myself! Don't write code for x-ray machines if I don't know what I'm doing.. got it!

Incidentally I still have my assigned text from that course, right next to me in fact. "Cyberethics: Morality and Law in Cyberspace" Hmmm...


I meant a competent ethics course.


Ethics (as a philosophical field) is about applying some level of rigor to the question "What ought I (and others) do and why?". "An ethical person" is a meaningless statement without an underlying framework of ethics, which obviously is a hotly debated topic, otherwise it wouldn't be an entire field of study.

What studying ethics offers the individual is the tools to analyze their moral beliefs and ensure they are at least internally consistent. That "and why?" is probably the most valuable takeaway from studying ethics. Rather than saying "that act was wrong", a student of ethics can explain, down to axiomatic beliefs, why that act was wrong. If a disagreement between two people boils down to axiomatic beliefs, then you're never going to change each other's minds. But a difference in interpretation can be argued, with the potential for persuasion.

It also gives you a lot of practice justifying your own actions. Useful, but perhaps not exactly altruistic.


An ethics class doesn't make you ethical, any more than a criminology class makes you a criminal. An ethics class isn't about making you ethical (though it may); it's about studying the history and principles of ethics. It's about giving you the foundation to reason about ethics.

Try the trolley problem. Is it okay to divert a trolley to kill 1 innocent person to save 10? What about 5? What about 1? And this is just an abstract thought problem. Now try war, nukes, abortion, death penalty, etc. Everyone has a simplistic and superficial idea of ethics. An ethics class helps you develop a deeper and broader understanding of ethics. To develop the what and why of ethics.


Ethical theory is an interesting subject in its own right. Even if studying it doesn't make you a better person, that doesn't make it useless, any more than studying abstract areas of math or physics is useless.

That being said, some work in practical ethics has had a huge impact. Peter Singer's work on animal ethics started the modern animal rights movement, and his essay "Famine, Affluence, and Morality" has inspired many people to donate a lot more to charity than they otherwise would.


Similar to a dating coach that has hundreds of dates to draw expertise from, making them actually terrible at dating, if the point is to form long term relationships.


Question from a non-student of philosophy:

Why do people use the term "ethical" as though there were a single agreed-upon "right" system of ethics?

My understanding is that there are numerous concepts of right and wrong, some of which are very divisive.

So, for example, there are people who would be widely detested for being "ethical" within unfashionable frameworks.


It depends on your beliefs about the status of ethics. Moral realists believe that there are independently true facts about ethics. So for a moral realist, if "X is ethical" is true, then people who believe "X is unethical" have a false belief. On the other hand, moral relativists believe that ethics is context- or culture-dependent. So for a moral relativist, "X is ethical" has no universal truth value, but must be judged in the context of a certain time, place, and set of cultural practices.


Except that moral relativism ultimately boils down to a universal axiom of "One ought behave in a manner aligning with the moral framework of their setting", offering an inherent contradiction with the belief that there aren't axiomatic ethical rules.

All ethics assert a universal ethical framework. Moral relativism just offers a really complicated one.


>Except that moral relativism ultimately boils down to a universal axiom of "One ought behave in a manner aligning with the moral framework of their setting", offering an inherent contradiction with the belief that there aren't axiomatic ethical rules.

Except it doesn't. Moral relativism doesn't make any claims about the way one should behave. It's entirely consistent with a moral relativist point of view to oppose commonly accepted morality. Moral relativism merely asserts that there is no absolute arbiter of good and evil, no immutable arc of history towards freedom, no inalienable right. Everything is a matter of opinion and consensus, and consensus is always fluid.


Maybe your objection works for a relativist who makes claims like "one ought to tolerate moral practices of other cultures". But I don't think all relativists are making that claim. Some are just rejecting any notion of objective moral truth.


Most people have incoherent answers to any question of right and wrong. Philosophers like to overcomplicate things and throw buzzwords too. I think it's a very difficult conversation full of emotional traps and personal biases. Once upon a time, we all shared the same moral foundation (religion) and did not have this problem.


We were "all" wrong, but at least we agreed!

(Of course it wasn't really "all", either within one community or between communities. People in power were just better at killing the weak and silencing people who disagreed.)


But I didn't say whether this was a good thing or a bad thing. In fact, I'm happy with some heterogeneity in our values, but I feel like you jumping the gun kind of proves my earlier point.


If there are swift and just penalties for unethical behavior, then one does not need to actually learn how to be internally ethical, they just need to learn the social and political framework that's already established.

As you might expect, it's "one of the most popular" courses offered at Harvard.


It's popular because it's a profound topic, and Sandel is an amazing orator who makes the topic interesting, and when something is enjoyable, almost everyone wants to do it.


The fact that the term "ethics" is used so much in a CYA way, or else as transparent rationalization of whatever worldview is currently dominant among the PMC, has given me an almost allergic reaction to that word.


That's overly binary for my tastes. You talk as if there's no nuance to being ethical. Ethics is hard because people are complex and the ultimate effects of actions in the world are hard to predict. An ethics class can't make a sociopath normal, but it can alert normal people to the many important considerations, and the long history of people fucking it up via shallow, too-simple understanding.


One of Jung's more interesting concepts is that of enantiodromia, where things taken to extremes become their opposite. It's also a concept in the I Ching (where the "strongest" lines are the ones changed into their opposite), and in Nietzsche's BG&E.


At least ethicists are about as ethical as other people: https://qz.com/1582149/ethicists-are-no-more-ethical-than-th...


Corporate entities with “innovation” in their titles are most definitely not innovative

e.g. Innovation Managers have most likely never done an innovative thing in their life


And "excellence" initiatives are usually led by people who have never accomplished anything :-)


Good managers are not necessarily good at the task they are managing. There's nothing odd about this. A good manager should have a great understanding of what they are managing, but not necessarily be individually competent at it.


If that were true, why do the most innovative companies not have Innovation Managers?


Because all of their employees were hired in advance to be good at self-innovation-management? Or because a higher-level executive is taking on these duties in addition to, or in lieu of, their other duties? You can do a job without it being your job title.


ITT: we start to understand "Projection"


- Mis/Disinformation experts spread mis/disinformation


Individual researchers have huge incentives for fraud, but large universities are the ones who most need to look inward here. HBS could become a beacon for stronger research practices in this area - requiring study preregistration and data collection methods that carefully trace provenance and changes. If HBS has fewer but better quirky marketing psychology studies, their reputation will do well. However for an individual researcher, the benefits to using extremely rigorous practices are not as clear, while the costs are high. A random grad student is not going to be able to establish a norm of data traceability.

I worked in this area (at Harvard as well) a bit as a grad student and can absolutely understand the temptation for the lighter version of this. If you can drop one outlier group, you get a cool story and a job at a top 10 b-school. Or keep it, get a muddled result, and try again for a better paper next year. I ended up just leaving the field entirely as the whole "our system massively rewards dodgy practices" vibe really bummed me out.


I suspect this is also why large industrial ML labs have mixed productivity. My observation has been that people decide that a problem is important because someone high up thought it was important. The researchers using the methodology that shows the largest offline impact are rewarded - not those with the most rigorous finding.

This creates a strong incentive to fudge the results and never ship anything. I knew of one scientist who figured out you could just fake the entire thing and skip the work part. He coasted for 5 years doing this.


Same goes for politics. Grant money available on partisan issues is quite the motivator too. I was doing research on a researcher and publisher at Rutgers: https://gunviolenceresearchcenter.rutgers.edu/mike-anestis-p... and was surprised to learn how much funding he and his team have received. Literally millions, much more than small market research firms.


I remember a thing a while back where Texas cut family planning funding. Around the same time, maternal deaths skyrocketed.

Tons of research, sponsored by pro-choice medical groups

Turns out the rise in maternal deaths preceded cuts in funding.

Speaking to my relatives in Texas, a bunch of ERs shut down around the same time - which could explain the deaths.

Abortion is a hot-button issue; family planning is, to a lesser degree.

But plain old ERs closing - that’s not political. Just a statistic.


What happened after 5 years? Did he get caught? Or just founded his own unicorn ai startup


I think around year 4 a manager made the mistake of believing the research and pushing hard on it. This in turn became a tracked goal with a multi-year timeline. The individual left before things could fall on them too heavily.


How do you work 4 years on a project before having a tracked goal? Absurd. Sounds like management's fault.


The goals before the 4 years were probably based on results claimed, not making a product. Research management is difficult and efficiently running a lab requires some degree of trust that researchers don't lie.

I think the bigger part of management's fault here is that they should have spent some time on verifying the research before dedicating more resources.


Also setting up mechanisms to verify results and share results. Even a basic mechanism of “check in your code” would have served the purpose of identifying fake work. A code review mechanism can help stop irresponsible work.


“Check in your code” can identify certain kinds of blatant fraud, but it only does so much. For example, you can p-hack by evaluating your result in many different ways and then commit the version that gives the best number. It takes a lot of discipline to prevent this at an organization level. You can even have an emergent group form of this, where people collectively p-hack by individually evaluating their work and randomly finding lottery tickets. I’ve seen this phenomenon extend from research all the way to shipped product changes.
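
The "evaluate many ways, report the best one" failure mode described above is easy to demonstrate with a toy simulation. This is an illustrative sketch with made-up numbers, not a model of any specific lab: even when there is no real effect, letting each analyst pick their best-looking metric out of several inflates the apparent "significant" result rate far beyond the nominal 5%.

```python
import random

def one_experiment(n_metrics, rng):
    """Simulate testing n_metrics independent null metrics at alpha=0.05.
    Under the null, each test comes out 'significant' with probability
    0.05 purely by chance; return True if at least one does."""
    return any(rng.random() < 0.05 for _ in range(n_metrics))

def false_positive_rate(n_metrics, trials=20_000, seed=0):
    """Estimate how often a null experiment yields at least one
    'significant' metric when you get n_metrics tries."""
    rng = random.Random(seed)
    hits = sum(one_experiment(n_metrics, rng) for _ in range(trials))
    return hits / trials

print(false_positive_rate(1))   # honest single test: roughly 0.05
print(false_positive_rate(20))  # cherry-pick best of 20: roughly 0.64
```

The second number matches the analytic value 1 - 0.95^20 ≈ 0.64: with twenty lottery tickets per experiment, "winning" becomes the norm, which is why per-experiment code review alone can't catch this when the selection happens across an organization.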


Incidentally, that's how the best research with the most impact was made in the past: letting brilliant minds play around and supporting their tinkering.


I’ve been involved in discussions over what numbers to give to the Board of Trustees. CFOs go to prison over this.

It’s funny how greedy, for-profit companies are strict about honest accounting, while non-profit universities are fine with dodgy research.


While I strongly think HBS should do much more to prevent fraud, I would also note that they have suspended Gino and seem likely to fire her. So it’s not like they are completely ok with this…


Well you said it yourself, CFOs go to prison for it. I’m sure if as a society we treated academic fraud like financial fraud the situation would be different. I don’t think it’s linked to companies being for profit or not


For similar reasons, I left academia.

I joined academia for the pursuit of truth, and for the glory of Knowing. But it turned out that academia doesn't really do this anymore; it just sells itself as doing these things.

Academia is entirely a “reputation” system, as it turns out, from paper publishing to student-evaluations to “public rankings.” As it turns out, the capitalist marketplace must also contend with these issues, but imho (and somewhat counterintuitively), it does so somewhat more honestly.


Wait a second, the blog which is bringing to light these issues is by academics who ARE seeking the truth.

As someone in academia (a very different field than behavioral psych), I'm very wary of people who make overgeneralised, hyperbolic claims such as:

> I joined academia for the pursuit of truth, and for the glory of Knowing.

Because doing science is not the clear, concise, straightforward exercise people envision.

(Science has major systemic issues, but that's beside my point.)


> I joined academia for the pursuit of truth, and for the glory of Knowing.

I assume they said this from a position of naïveté and then after joining academia realized it was incorrect. I'm not sure we should use that to suggest a defect in character.


> I'm not sure we should use that to suggest a defect in character.

Apologies to the above poster if this was taken as an attack on their character; it wasn't.


To be fair, my field was not in the sciences. Regardless, I left the tenure track due to disillusionment. I don’t presume that others had the same experience, and it is my earnest hope that most people do not.

I was heavily encouraged to give high grades for high reviews and to publish high-volume, quality be damned. I hope that my leaving had some effect on the decision makers at my particular institution. But my impression was that the issue was systemic, and I became very bitter. I thought it better to leave than to remain and further institutionalize that bitterness.


I think your experience is on par with a lot of scientists and academics, the issues you describe are present across nearly every field of science and academia. My comment was not meant to reduce your experience.

However, my main concern is that people enter academia due to some idealistic, romantic and preconceived ideas.

Science (and academia) isn't above the fray of humanity's ugly side. It's done by people, for people; it's a messy, inefficient monster. I wish it weren't, but fixing that would mean taking the humanity out of it.

These are my crazy ideas. I am probably wrong.


I'd like to hear more about how you've found more honesty in the private sector.

"academia doesn’t really do this anymore"

I've heard this from more than one current or former academic. Some argue it's due to market logic entering academia. It sounds like you take a different view?


> I'd like to hear more about how you've found more honesty in the private sector.

vs

> Some argue it's due to market logic entering academia.

People's behaviour is shaped by incentives (amongst other things).

People in academia are rewarded for pumping out quirky studies that find interesting things, regardless of whether they reproduce. So that's what you get.

People in business mostly just want to make money selling some gadget (or providing some service). So they need methods that help reliably produce gadgets that are cheaper or perform better. Because businesses want reliability, you get reproducibility as a side effect.

(Of course, businesses also produce the occasional study where they use the study itself as marketing material. I expect those studies to be just as unreliable and full of wishful thinking as anything academia pumps out: the incentives aren't any better.)


I have no idea what I'm talking about, but I would speculate that looking at how.much.freaking.money these schools have provides a window into what they actually focus on. The focus seems to be making and having a ton of money. This is compounded by the trends we see in fewer tenure positions, worse pay for associate profs, a glut of admin positions and admin pay, etc.

So yeah, I'd say the observation that market logic is entering academia has some surface level reasons to support it.


Ah, you meant it the same way that a thief is more honest than a corrupt politician. At least the first one is honest about what he does.


The person you are responding to is not me, but they captured the essence of my intent, more or less.


> I'd like to hear more about how you've found more honesty in the private sector.

no intellectual wrangling, just make it rain.

and in my experience, the type-A super-go-hards are respectful enough to stab you in the front instead of in the back.


I think this is a little unfair - academia is stuffed full of people seeking truth and doing good work.

However as a PhD student I decided that science academia wasn't for me - not because of out and out fraud - but because I could see success depended on marketing, networking and self-promotion - simply doing excellent quality work was required, but not sufficient.

Some of this is fair - part of an academic's job is not just to discover new stuff, but to educate about its discovery.

Anyway I decided I didn't have a loud enough trumpet and sharp enough elbows.

I'd agree that you find less politics in industry.


Society makes individuals with high intellectual potential attach their ambitions to it, and this process usually begins even before those individuals can see the flaws and untrustworthiness of the society. Those who eventually break their ties to the system are cast out by everyone else.


Ha, did the same and literally had an hour long talk with an emerita faculty member today about how totally skewed the incentive structure is.

People only care about lines on the CV - and are willing to do anything to add another under the “peer reviewed” column. Including, lie, cheat, and steal (ideas).


> But it turned out that academia doesn’t really do this anymore, it just sells itself as doing these things.

What do you mean by 'anymore'? Was there some golden age of bygone glory?


I'd argue there was, to some extent. Psych has always had issues, but the period from the late 1800s to the mid-1970s would characterize the golden age. During this time, science that produced verifiable, reproducible results experienced a huge boom, in part because almost every nation on earth got involved at some point in a life-or-death struggle, which tends to do a pretty good job of cutting through science that doesn't hold water when it's the difference between your country winning and your countrymen being slaughtered.

As proof of this, most of the open questions in fundamental physics are the same ones that were open at the time of Einstein's death.


My null hypothesis for that golden age from the late 1800s til the mid-1970s is that they just found all the low-hanging fruit. See https://slatestarcodex.com/2016/11/17/the-alzheimer-photo/ for background.

Compare also how even second rate people could do first rate work in the last fifteen years by applying deep learning to new domains.

You mention that war has a certain tendency to cut through bullshit. Ideally, market competition also does some of that. See https://gwern.net/backstop for a more general exploration:

> I suggest interpreting phenomena as multi-level nested optimization paradigm: many systems can be usefully described as having two (or more) levels where a slow sample-inefficient but ground-truth ‘outer’ loss such as death, bankruptcy, or reproductive fitness, trains & constrains a fast sample-efficient but possibly misguided ‘inner’ loss which is used by learned mechanisms such as neural networks or linear programming group selection perspective.


To some degree yes. As the world got more globalized and the number of prestigious universities basically stayed the same while the number of people getting PhDs skyrocketed, the competition for jobs at even third-tier schools became outrageous.

You either play the game and survive - often at the cost of your integrity. Or you walk. I walked. No regrets.


World Wars happened and everyone died. US schools had all the money and took off, and the handful of other institutions in the Old World did alright.

Lots of new science to uncover, a lot of which would be basic 100-200 level college courses these days. lotta new things to pioneer. my grandpa did a lot of work with plastics, for example, that are now pretty basic; new stuff then, but now no one thinks about their 0.9 cent plastic lid on their coffee.

lots of mobility but the world wasn't hyper connected, so you could do a trip to somewhere more obscure like Indonesia or Zimbabwe and do research. Now you can just email them the questionnaire.

Years of high schools pushing college now means there are legions of grads, plenty of whom will chase academia. smaller world + growing demand = ruthless academic posturing.


Academics used to be rich people (entrepreneurs or aristocrats) who funded their own work.

Then (late 1800s?) it changed to a profession, and the focus became extracting funding from ignorant patrons.


I think they meant "any more".


> If HBS has fewer but better quirky marketing psychology studies, their reputation will do well.

Provided that there are valuable things to discover in the field of quirky marketing psychology.

But if all they want to do is provide a regular supply of ammo to both sides of internal corporate conflicts on marketing strategy, and promote the consulting brand of their profs, their current approach is optimal.


A very, very long time ago I did some programming to help a friend of a friend analyze a data set. The data set was from public schools and it was being used to inform some policies at the state level. All I can say is, the data submitted from many schools had very disturbing patterns of regularity (as in identical records on many metrics where you would not expect it) ... like 10 or 15 records in a row, in the same rough geographic area (we didn't have exact addresses), with the exact same scores across all subjects as well as reading and math assessments. Basically it looked like someone had copy-pasted rows of data over and over while keeping the original unique ID numbers (end result: different ID numbers representing different students, all having the exact same scores). And guess what ... whenever I saw that pattern, the scores were all very much above average. You didn't see that pattern with below-average scores.
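The copy-paste pattern described above is easy to flag mechanically: look for clusters of records that agree on every field except the unique ID. A hypothetical sketch (the field names and threshold are invented, not from the actual data set):

```python
from collections import Counter

def suspicious_duplicates(rows, id_field="student_id", min_run=5):
    """Return value-tuples shared by >= min_run records with distinct IDs."""
    counts = Counter()
    for row in rows:
        # Key on every column EXCEPT the unique ID.
        key = tuple(v for k, v in sorted(row.items()) if k != id_field)
        counts[key] += 1
    return {key: n for key, n in counts.items() if n >= min_run}

# Toy data: ten "students", eight of whom have byte-identical scores.
rows = [{"student_id": str(i), "math": "97", "reading": "95"} for i in range(8)]
rows += [{"student_id": "8", "math": "61", "reading": "70"},
         {"student_id": "9", "math": "74", "reading": "66"}]

flagged = suspicious_duplicates(rows)
print(flagged)  # one cluster of 8 identical records
```

In real survey or assessment data, exact ties across many continuous-ish fields are rare, so even a modest `min_run` threshold surfaces the kind of runs described here.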

I told the person I was working with about the data and suggested it was fraudulent and she became concerned and raised it with her supervisor. Within about 24 hours I no longer had access to the data sets. And the friend of a friend just said that she didn't need help anymore.

I suppose I could have raised a fuss and contacted a journalist but all I had was columns of data without context. Plus at the time I'm ashamed to say I was playing a lot of World of Warcraft and not inclined to do much else that required effort.


This perfectly matches my experience in similar cases. Just not in academia but in business.

Policy always represents somebody's interests. "Data driven" is an excellent cover when you control the data. You get to eat the cake and look good while doing it.


See also https://en.wikipedia.org/wiki/Dan_Ariely, author of (Dis)Honesty: The Truth About Lies, later accused of data fraud and academic misconduct.


Also Jonah Lehrer, author of pop science books Imagine and How We Decide https://en.wikipedia.org/wiki/Jonah_Lehrer


This guy is featured in the So You've Been Publicly Shamed book, and it definitely seemed like he was pilloried pretty hard. It is interesting, though, which journalists get tarred and feathered and which ones are celebrated.

Certainly https://en.wikipedia.org/wiki/Judith_Miller and https://en.wikipedia.org/wiki/Robert_Novak have massive journalistic ethical errors: the former fabricating news for the US Government and the latter blowing a CIA agent's cover (the Plame Affair). Yet, they are celebrated journalists.

It seems that you have to:

- not be hypocritical (it is a greater sin to preach and sin than simply to sin)

- be younger (established powerhouses can steamroll these accusations)

And to GP: thank you! The number of times Dan Ariely is involved in this data-fabrication stuff does make the whole thing seem a bit fishy. At best, he is bad at trusting people, which undermines any claims he makes, since they could rest on data from fabricators.


Is Miller still a celebrated journalist? I had her down mentally as "noted fraud". Looking at Wikipedia, it looks like she's never done real news again, and has spent her time on what I think of as the "wingnut welfare circuit". And the reactions to her memoir suggest that she's never been welcomed back into the profession: https://en.wikipedia.org/wiki/Judith_Miller#Memoir


She is neither celebrated nor an outright fraud.

She was disgraced because she reported inaccurate claims (on WMDs in Iraq) from government intelligence sources. Specifically she reported them without sufficient skepticism — without checking the sourcing of the claims (I.e. asking questions like “how did the CIA acquire this intelligence”) or giving sufficient credence and attention to counter claims.

Although this is considered bad practice, it is not considered fraudulent. She was indeed talking to highly placed government sources and had reason to believe they were credible.

My opinion: Although what she did was rightly viewed, eventually, as journalistic malpractice, she was clearly pleasing her editors who routinely moved her stories to the front page. These editors also failed to demand skepticism and wider investigation, and while at least one was pushed out, the institution of the NY Times remained credulous about government claims. I routinely see stories where they uncritically repeat government claims — especially where classified intelligence is involved and especially relating to military operations. Therefore I believe Miller is also something of a scapegoat.


It's interesting that Miller left the NYT in 2008, the year Obama won the Presidency, and the NYT shifted from (right wing) Establishment to (left wing) Establishment to stay aligned with the President.


This is kind of surreal. I have just realised that I had merged the authors Johann Hari and Jonah Lehrer into one person. I had always assumed Johann Hari had made a fortunate recovery from the Imagine/How We Decide scandals to write successful books like Lost Connections.

I was feeling guilty for besmirching Johann Hari's reputation in my own mind until looking at his wiki page and finding that HE TOO was involved in both plagiarism and fabrication scandals, like Jonah Lehrer.

Weird. Explains the mental merger I guess.


I read his dishonesty book and now I wonder if any of it is actually wrong (although most of it made sense at the time)


There are few things out there as dangerous as arguments that "make sense." Lots of things can do that, all while being rather flawed.

I am curious to know more on this one. Odd to see so much finger pointing happening. I'm guessing that isn't as rare as I would assume it is?


Is Dan Ariely implicated in any of this? My understanding was that he was cleared.


Sounds like it. Take a look at his wikipedia page. The Gino/Ariely paper that was retracted in 2021 has this to say:

> Dan Ariely was the only author to have had access to the data prior to transmitting it in its fraudulent form

https://en.wikipedia.org/wiki/Dan_Ariely


Previous HN discussion about Dan Ariely’s discredited paper: https://news.ycombinator.com/item?id=28210642


Adding to the irony:

She is the best-selling author, most recently, of “Rebel Talent: Why it Pays to Break the Rules in Work and Life.”


Yet another example for the assumption that all book titles with that sort of semi-colon + second-sentence title are universally lowest-common-denominator crap, at best.


Not that it matters, but that's a colon, not a semicolon


The book comes across as very anecdotal... it is easy to mold anecdotes to whatever point of view you are trying to get across. In their zeal to find hard facts to back up their theories, did they play fast and loose with the rules?


That doesn't seem ironic.

Someone writing a book arguing for breaking the rules is exactly who I would expect to break the rules.

Or is the irony here that someone actually took an idea seriously and followed through on it?


I suppose the irony is in the fact that it evidently didn't pay off because I can't imagine that one book was worth ruining her career.

I think the general thesis is questionable, for work especially, because in business, contrary to the stereotype, being honest and following the rules does pay off in the long run.


As the adage goes, “if you talk about it, be about it.”


Ahhh, they misunderstood something vitally important.

This guy's speciality is Applied Dishonesty, so he's not the abstract sort with just theory and no practice. ;)


Pretty sure Francesca Gino identifies as a she! But then, who knows?

[0] https://twitter.com/francescagino


He’s talking about Ariely, who the first blog post seems to implicate by saying the xls files came from his computer and none of the coauthors were responsible (Gino listed as a coauthor).

Does this implicate Ariely or did I read that wrong?


Maybe he is to blame, but the social sciences in general are inherently vulnerable to this sort of problem. There have been many instances of similar incidents over the past few decades: not just overt academic fraud, but misconstruction of data, Excel errors, poor methodology, publication bias, etc. This calls into question whether anything in the social sciences can really be trusted.


This is just you presenting your preexisting beliefs.

We saw a widely publicized, career-long pattern of false data in biology over the past couple of years following roughly the same structure (just sticking garbage data into Excel spreadsheets). Methodological errors and publication bias are also widespread in CS. When I was in grad school, the method I was told to use to choose tools was to pick the second-best tool in a paper, since the authors' own tool was invariably the best in their evaluation, and those results were unlikely to carry over to my work.

Drawing a further conclusion that whole entire fields of research are simply untrustworthy based on media coverage of bad actors, especially as a nonexpert, is throwing the baby out with the bathwater.

The field that is actually tackling the reproducibility crisis even in a small way is Psychology. This is one reason why there are so many "X doesn't actually replicate" headlines. Yes, replication studies should be even more common. But it is hard to conclude that the social sciences are uniquely horrible when Psychology is actually ahead of the curve here.


Everything you say is true, except the part that implies it's just the social sciences. I don't know of any field of science that has looked into replicability in a serious way, that hasn't found similar problems.


life sciences is just as bad if not worse -- you can read news stories about many cases of outright photoshopped images in biology papers by respected researchers in major journals.

physics has a good reputation, but then you hear of things like apparent fraud in that quantum computing paper about Majorana fermions...

computer science actually seems relatively okay as far as avoiding outright fraud, probably because there's such a strong norm of sharing code. plenty of questionable research practices though.


Life sciences is beyond the pale :( Lots of papers coming out of China are entirely fake and made to order by professional paper forging shops. Doctors have to be published to get promoted there, even if they are just regular hospital doctors and not professional researchers, so it leads to really bad incentives. Even the Chinese government's head of research integrity was found to be engaging in research fraud! But, nobody can do anything about it, as journals aren't interested in becoming research audit organizations. So they just publish floods of fraudulent work discussing non-existent interaction effects on random proteins and stuff.

Public health is in even worse shape. During COVID I was forced to conclude nobody in power was actually reading the papers that were driving policy. They'd regularly have major errors in them, like model code that was full of bugs or where things were stated about the real world as if they were clearly 100% true but the underlying paper stated the models had unrealistic assumptions! (same people making both statements). Universities didn't care about any of this.

I never saw outright fraud in CS but I've seen things that were getting close to the line in cryptography research. Some years ago there were a spate of research papers on Intel SGX that kept claiming they'd broken the security. When examined closely they all used an identical version of GNUtls in the enclaves for their proofs of concept. I got curious about why this specific version of this not very popular cryptography library kept cropping up as the reason for using it never seemed to be mentioned in the papers. Turns out that it's an ancient version, the last version before side channel mitigations were added. The techniques they were claiming broke SGX only worked if you ignored basic cryptographic best practices and used obsolete libraries. And last year there was a non-SGX case where the researchers claimed to be able to extract private keys over the network via side channel attack. The demonstration required the use of a very specific algorithm nobody actually uses, and required the target server to not be doing anything else, and for it to be saturated with traffic that did nothing except invoke this specific obscure algorithm and even then it took days. So this wasn't a realistic scenario anyone needed to worry about but it was presented as if it was.


The primary explanation: http://datacolada.org/109


Andrew Gelman has written about this case, drawing parallels with past cases and asking about the psychology of the fraudsters.

https://statmodeling.stat.columbia.edu/2023/06/21/ted-talkin...


He has returned to the topic, this time with a focus on those who are capable of exposing the fraud but instead turn a blind eye. He calls this 'passive corruption'.

https://statmodeling.stat.columbia.edu/2023/06/23/its-worse-...


Recent HN post over US holiday weekend: Data Falsificada (Part 1): “Clusterfake” – Data Colada (datacolada.org) | 37 points by malshe 4 days ago | 8 comments | https://news.ycombinator.com/item?id=36374255


First reported in the Chronicle of Higher Ed: https://archive.is/0zOU8


From the original Data Colada article (http://datacolada.org/109):

"We understand that Harvard had access to much more information than we did, including, where applicable, the original data collected using Qualtrics survey software. If the fraud was carried out by collecting real data on Qualtrics and then altering the downloaded data files, as is likely to be the case for three of these papers, then the original Qualtrics files would provide airtight evidence of fraud. (Conversely, if our concerns were misguided, then those files would provide airtight evidence that they were misguided.)"

It does not appear that Harvard found any evidence that they were misguided...


I swear to god the biggest red flag for any academic work is association with HBS. They perfected the art of bsing and writing airport books


> (4) The evidence of fraud detailed in our report almost certainly represents a mere subset of the evidence that the Harvard investigators were able to uncover about these four articles. For example, we have heard from some HBS faculty that Harvard’s internal report was ~1,200 pages long, which is 1,182 pages longer than the one we sent to HBS.

Great example that highlights the tension between the presumption of innocence and the asymmetric bullshit principle. This person should be referred for criminal prosecution for fraud as a deterrent to others. 1,200-page reports to provide proof beyond a reasonable doubt are not scalable.


All anti-corruption departments are either ousted by the growth or integrated into it. Or you are integrated, and failing that, ousted for failed integration. Conclusion: ablative process design. Every time corruption sets in, the system sheds "skin" and starts anew with a fresh layer.


I've lost a lot of confidence in the practice of science in the last decade or so from the Replication Crisis and worse, the response to the crisis from the scientific community (the phrase "methodological terrorist" comes to mind).


We have decided we are a post-truth society. Turns out solving that problem is non-trivial when we rely on experts that are themselves human who view the world as post-truth.


We've always been a post-truth society. I think it's just how humans are. We want easy answers, and if there are none, we invent them. I don't think that the human from 1AD is better than the human from 2023AD for example. What we used to blame on God we now blame on 5G or whatever. Same stuff, different details.

What has changed in the past 2000 years is the availability of information. Nothing is 100% true other than defined universal constants. Everything else is on a spectrum. Those who want to get closer to 100% truth have many tools to help them along. Those who don't care can ask ChatGPT and get something 25% true. You can lead a horse to water, but you can't make it drink. That's just how people are, and AI isn't really changing that.


yeah but we had that really comfortable era for awhile there, where mass information dissemination was centralized in the hands of a few major institutions, so everyone was more or less on the same page—all you had to do was pick up a newspaper, or listen to the radio, or watch news on television.

this gave us the illusion that we were in a world of Truth, because why would these centralized corporations ever lie to us, or massage the facts in any way? it's The News, of course it's inherently trustworthy.

then the Web came about, causing some to see cracks in the foundations. then the iPhone was released, social media took off, and now the news monoculture is almost (but not quite) dead, and we're back to fending for ourselves in this onslaught of information.


We saw the cracks in the foundations but we also saw the foundations.

It cracks me up every time an alt media source throws shit at mainstream media for having journalistic standards that are far from perfect but still higher on an absolute level than said alt media. Retraction policies, always asking the accused for comment, citing sources, the list goes on. Unfortunately, it seems that people are more impressed by performative hatred of mainstream media than they are by the exercise of actual journalistic integrity, so I guess things will just get worse until (I have to hope) we figure it out again. Ah well, so it goes.


"Nothing is 100% true other than defined Universal constants. Everything else is on a spectrum."

Nicely put, because it explains the widespread disappointment in journalism, media, academia that don't even seem to try anymore.


You don't get it. They are the experts. They know better than you, so they are going to say whatever is going to make you do what they want done. You are merely observing the shadows on the wall of a cave. Why should they let you make any important decisions?


I didn't decide that. I don't think most people I know would. Maybe lots of people feel like "others" have decided that, but they'd hardly do it off their own bat.

(Anyway, what do you mean by "truth"?[0])

[0] https://en.wikipedia.org/wiki/Tarski's_undefinability_theore...


I don't know that any given lie(s) means someone believes in "post-truth".

I suspect most lies are just short-term conveniences.


Well how else would she be an expert?


Francesca Gino is a woman.



In situations like this, people should demand credible investigations, then wait quietly while that proceeds -- avoiding trial-by-Twitter and summary execution upon accusation.


You're putting the burden on "people" here. I think instead, the right way to look at it is that institutions should provide both prompt reactive investigations and proactive work to keep investigations from being necessary. The reason people get thoroughly upset here rather than waiting quietly is that this stuff gets ignored or covered up all the time.


What if I said "expect" rather than "demand"?

I agree that one of the problems is that we've sometimes been given reason not to expect an institution to have integrity.

One of the other problems is that public discourse is often lacking in integrity, as well.


I am shocked to see no one has yet said "how can they be an expert if they aren't also a practitioner?" :D


The title is clickbait in the sense that it suggests that we should expect this person to be super honest, and instead they were found to be dishonest.

On the contrary, I think we should expect them to be dishonest, and they are. They literally have a book named “Why It Pays to Break the Rules at Work and in Life”, what did you expect? Of course they're dishonest.


The title accurately represents what happened, wouldn't call that clickbait.

The story is just entertaining.


After all, how can you be an expert on something without experience?


A Clear case of "Practicing what you preach" and "Use your expertise".


Yes, the title is clickbait because you're smarter than everyone else and so are not surprised.


Being surprised that an expert in dishonesty is dishonest is like being surprised that a lot of pyromaniacs become firemen or that a lot of pedos become teachers and priests - it's something they spend a lot of time doing and thinking about, it makes sense that they'd want to do it for a job if they can. No, I don't think spotting this pattern makes me smarter than everyone else.


Do experts in crime commit crimes? Are experts in autism autistic? Are experts in military history bellicose?


"are" betrays a very black and white view of the world. I think they're all overrepresented, yes.


you're implying that an expert in criminology is more likely than non-experts in criminology to commit crimes, and similarly "more likely" for the other two examples?


Someone needs to categorize the various types of comments intended solely to increase the commenter's serotonin levels. Here's more that I see all the time:

- "That's not how X works." <comment ends with no explanation>

- "Anyone back in ___ who was paying even the slightest attention could have seen that ___ was going to happen."

- "This subreddit / board / field has gone to sh_t"


Oh look, her primary field is Behavioral Science. I'm shocked. Really.

/s


I have the feeling that the difference between the social sciences, which have been rocked by the replication crisis pretty hard, and every other part of science, is that the social sciences are aware of their problem. I'm not aware of any field of science that has looked for problems with replicability, that has not found them. Most have not looked.


Hey, if you want a job done right, call in an expert.


TL;DR a number of different parties are stating pretty unequivocally that there is fraud here in her work; however, no evidence is presented for our inspection, so while I don't doubt it, it's just a lot of "he says/she also says". I'm not entirely clear on this but I don't think it's too wrong to say: the fraud concerns, for example, papers based on trials where people are asked to fill out questionnaires where some pledge honesty before and others pledge honesty after and these results are compared against actual insurance company data, and conclusions are drawn as to what makes people be honest.


The evidence is available in the linked blog article from Data Colada, which is the real source of the story.

https://datacolada.org/109

https://datacolada.org/110


And then I’m accused of having trust issues!!


Time perhaps to post this classic from XKCD: https://www.r-bloggers.com/2018/09/xkcd-curve-fitting-in-r/

To anyone who has worked in research, the various strategies it shows would be tragically familiar.


well, he is supposed to be an expert in it. Why are you so surprised?


It’s behind a paywall. Also, just stop trusting anything anybody from a social science says. Pretty much everything they say is nonsense.


Someone once told me that ethicists were often actually psychopaths, that people with a working ethical sensibility were not interested in studying it.

Seems possible.


It's behind a paywall so didn't read the article, but title sounds about right: an expert putting his expertise to use. :D


<https://news.ycombinator.com/item?id=36424373>

In Comments ... Don't be snarky. ... Please don't sneer ... Avoid generic tangents. Omit internet tropes.... Please don't post shallow dismissals, especially of other people's work.... Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.

<https://news.ycombinator.com/newsguidelines.html>

And finally, that would be her expertise.


My apologies for assuming it's a he. I'm not sure exactly which of the rules you think I broke, as I don't think I was snarky (that sounds closest).

I was mostly making fun of the language we humans tend to use sometimes: a car expert applies their expertise to cars, a botany expert to botany, but here it's not an "honesty expert" but a "dishonesty expert"?

I do believe that's an interesting point to make, even if — obviously — at least someone has misread the joke as something else.


HN members are generally presumed to be familiar with standard tools for subverting paywalls (see also HN Guidelines). Or simply reading the comments thread to find such a workaround. In this case the paywall link is now at the top of the comments thread, though it was posted a few minutes after your own comment.

Commenting that you didn't read the article, then making a series of shallow and incorrect assumptions about the circumstances ... seems on the nose for shallow, dismissive snark.

Further, it violates HN's prime directive: "What Hacker News is: a place for stories that gratify intellectual curiosity and civil, substantive comments." <https://news.ycombinator.com/item?id=13108404>.

I'm not a mod, but thought I'd address why your comment might be engendering the response it has.


Part of the guidelines is also to assume best intent: my point was not about the paywall (which was admittedly a bad and superfluous excuse, but I'd rather not spend more time on it).

My intent was to highlight the inconsistency of naming a skill with a negative connotation (dishonesty). How can I more explicitly steer the conversation toward why this skill is called "dishonesty expertise" (rather than, say, "expertise on ethics and dishonesty")?

I still believe my actual point is "intellectually curious". What other skills are there that are like this? Is there a bad-at-sewing expert who's actually good at sewing?


<https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...>

Four replies deep into this thread, are we discussing topics of intellectual curiosity ... or ... something else?

That itself is a strong indicator of the issue here.

I strongly recommend pg's discussion of comments, particularly about their relationship to kudzu:

<http://www.paulgraham.com/hackernews.html>


I mean, it takes one to know one.


Dishonesty is a prerequisite for working for any large, legacy institution.


Whoever is the most honest person on hn, I hereby accuse that person of being a dishonest, impolite dog-botherer.


Seriously, when the goin' gets tough, you don't want a criminal lawyer, all right? You want a criminal lawyer, know what I'm sayin'?

https://www.youtube.com/watch?v=jvlEqAjg8aU


Well, isn't it ironic... don't you think?

(and if you didn't catch that reference, have a look at Prof. Gino's photo.)


That's weak-- accused. Also, who cares if they did lie? Knowing about dishonesty is not the same as positioning oneself as a beacon for morality. This is not front-page HN worthy material.


Reading the material proof and Harvard's reaction, the "accused of" is merely journalistic caution for a red-handed case.

> Also, who cares if they did lie

They didn't just lie; they tampered with collected data used for scientific research, harming their co-authors' work and reputation and giving fake leads to their field of study, wasting the time peers spent trying to reproduce the results. It may also have affected promotions that peers should have gotten, and the way people were managed, based on these behavioral studies.

Also the evidence collection displayed in another reply is technically interesting.


The actual blog posts make it crystal clear that the data was manipulated to make it provide the desired results.

https://datacolada.org/109

https://datacolada.org/110


This blog has a lot of detailed posts if you're interested in this sort of thing: deep dives into how you find fake data (helpful if you need to fake your data better!).
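For instance, one of the checks described in the Data Colada post boils down to asking: if 20 anomalous "Harvard" responses were scattered at random through the dataset, how likely is it that they would all land within a 35-row window (rows 450 through 484)? Here is a toy Monte Carlo sketch of that question; the total row count of 500 is my assumption for illustration, not the blog's exact figure:

```python
import random

def min_window(positions):
    """Smallest contiguous row span containing all flagged positions."""
    return max(positions) - min(positions) + 1

def clustering_pvalue(n_rows=500, n_flagged=20, window=35,
                      trials=100_000, seed=0):
    """Estimate the probability that n_flagged rows, placed uniformly at
    random among n_rows rows, all fall within a span of `window` rows."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        positions = rng.sample(range(n_rows), n_flagged)
        if min_window(positions) <= window:
            hits += 1
    return hits / trials

print(clustering_pvalue())
```

Under these assumptions the estimate comes out essentially zero: such tight clustering basically never happens by chance, which is why the blog treats it as a red flag rather than a quirk.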


Well, this way, we could have a followup story next week:

"Harvard dishonesty expert dishonestly accused of dishonesty"


Hilarious!


This sort of lie can occasionally result in researchers being fired and PhDs being retroactively canceled, it's a fairly big deal.



