Scientific rigor proponents retract paper on benefits of scientific rigor (science.org)
60 points by pseudolus 50 days ago | 52 comments



This is evidence of science working, and self-correcting.

Should we expect everything that gets published to be definitive, the end word, worthy of enshrining on marble and being unerring in perpetuity?

Of course not. There is no more unscientific view of knowledge and of science.

Journal articles are the communications between practitioners. Blog posts before there were blog posts. Sometimes they are revolutionary. Sometimes they are workaday, filling in essential knowledge but not revolutionary. Many are arduous journeys, hated by everyone who touches the manuscripts by the time they get through the process of writing and peer review. Some are wrong.

The scientific literature is not a series of proofs and lemmas, it's an accumulation of evidence paired with contemporaneous interpretation of what that data might mean, and those interpretations should be viewed as fluid.

The data should be forever (unless there is error or fraud). The interpretation is moving.

I don't agree that we need tons more "rigor" in science; instead it would be better to have clearer communication along with less busywork.


I wonder if there is a formal term for this phenomenon: when the regulating system is only noticed because it regulates something correctly, but is criticized because this is the only time anyone hears about it. “Properly functioning systems don’t get any press,” or something like that.


Not a formal term, but a common colloquialism: “No news is good news.”


> I don't agree that we need tons more "rigor" in science; instead it would be better to have clearer communication along with less busywork.

We do, even with a self-correcting system. Spurious interpretations and dodgy conclusions remain in the literature forever, and the signal-to-noise ratio degrades significantly over time. Even if no interpretation is perfect, raising the bar is better. And then, the line between lack of rigour and outright fraud is blurry. It’s better to stay well clear of it, and that requires rigour.


> Even if no interpretation is perfect, raising the bar is better.

Can you explain why? What metric are you using? What is the end goal?

I ask because there are many metrics that I care about:

- pace of discovery

- cost of discovery

- ability to work in directions that have high risk and high reward

What I don't really care about:

- outsiders skimming over the literature, misinterpreting it, and coming to bad conclusions.

Raising the "rigor" bar only seems to be positive for this metric that I don't care about.


Right, the system is self-correcting, but even so it can sometimes take literally decades for the corrections to happen, and in the meantime someone may have tried to build a career on what turned out to be a lie. There really needs to be at least one organization out there putting as much effort as possible into verifying high-impact papers. I've had the thought before that if I were a billionaire, this would be my pet project.



> I've had the thought before that if I were a billionaire, this would be my pet project.

That would be a worthwhile endeavour.



There is a well-known trade-off though. Since publishing is a communication channel between scientists, it only has value as long as it hits some threshold of signal-to-noise. And papers that are badly written, or use sloppy methods, or contain significant mistakes, or of course fraud - all of those add noise. They actively reduce the usefulness of the channel.

Unfortunately, the incentives in worldwide academia encourage publishing at all cost, so as academia grows, so does the noise produced. Hopefully the signal grows as well, and I'm not sure where the ratio is today - but you definitely need to police the medium to keep publishing as a useful channel of communication in science.


> This is evidence of science working

Is it?

Looks more like academic bureaucracy to me

And herein lies a big problem with modern science, the two are conflated. Publishing by itself does little to advance science.

I feel there would be more science done if scientists discussed their work in an online forum rather than through papers.


It couldn't be just any online forum, though. It would have to support scientists' specific needs. It would need to host datasets; make it easy to create and reference charts, diagrams, tables, figures; track post edits and the reasons for them; disincentivize both lengthy and low-content posts; provide advanced search and filter functionality; assist with jargon control; and support consensus building and measurement.

And then there's need for accessibility, for users with physical or mental handicaps, and for users with limited computing and bandwidth resources.


> It would need to host datasets

I haven't seen ieeexplore hosting datasets and yet we pay them to publish our stuff. Heck, most people don't even bother with publishing code or data, because there is very little or even negative incentive to do so.

The system is completely borked, because most people publish papers in order to advance their PhD studies, not their respective fields.


Yes, of course. GitHub works for some of it. Or arXiv.


>Publishing by itself does little to advance science

I strongly disagree. Publishing advances science the same way communication advances culture. Imagine Einstein hadn't published any of his work. How many decades behind in technology and quality of life would we be now?

It's as if any evolutionarily beneficial trait got transferred to the whole population in one generation instead of many. It saves precious time.


Let me put it this way: there is "publishing" and there's Publishing™

Before electronic communications became popular, publications would publish "letters to the editor" which were kinda like an informal paper with a partial discussion or results (some publications still do that, btw).

My point here is that Publishing™ took the place of simply publishing to share results with your colleagues, and now the main objectives of Publishing™ are to dodge picky reviewers and to get your publication score up to get more grants.

Thank FSM for arXiv and for researchers sending "drafts" or "unofficial" versions around.


Considering all the endless meetings, conferences and discussions scientists have to attend and find room to do research in between, I doubt that the issue is a lack of means for them to discuss...

I also highly disagree that publishing does very little to advance science; it puts a relatively self-contained result into the record. Replacing publishing with forums would be the equivalent of replacing documentation with a chat.


Lol yeah, cannot imagine equating published papers and forum posts. I knock the peer review process all the time but it is a significant hurdle that acts as a filter for all sorts of trash.


Yes, I expect every LaTeX document section I read to form a tree of links between thesis text and evidence, one that automatically checks online for citations, counter-theses and their citations, retractions, dependency hell, and citation cycles among ideas. If a claim is undone, I want it to turn red and then fade, making everything I can read without red true forever...


> This is evidence of science working, and self-correcting.

This is a statement that can be said, and is said, every time something changes. Is it true? No, obviously it isn't. Is literally any change, anywhere, evidence of

  - that thing working
  - that thing becoming better
  - that thing being governed by scientific ideals?
How does that make any sense? Actually, 'science' stating both A and not-A is evidence of science lying and/or being dumb.


> This is a statement that can be said, and is said, every time something changes. Is it true? No, obviously it isn't. Is literally any change, anywhere, evidence of

This is a statement that can be said about any self-correcting system. If there is no change, then either we are at an optimum (I don’t see anybody seriously suggesting that about science), or the system is not working. Obviously, change itself is not sufficient as it could be in the wrong direction.

Our scientific understanding improving is a sign of progress. If our ideas are not allowed to change, we’d hit a brick wall pretty quickly.

> How does that make any sense? Actually, 'science' stating both A and not-A is evidence of science lying and/or being dumb.

Science is not a person; it can do neither.


> This is a statement that can be said about any self-correcting system.

It can definitely be said about any organization ever, without a second thought. It changed? Then that is evidence that it self-corrects and improves. Also, the change must have happened in line with scientific principles (because.), so it is supporting evidence for those too.

>> How does that make any sense? Actually, 'science' stating both A and not-A is evidence of science lying and/or being dumb.

> Science is not a person; it can do neither.

Meh. Seen from the outside (how it affects us, how it communicates with us, what it does, what its inner state is), science is definitely something that can lie and/or be dumb. If you open a dictionary, you'll probably find that science is defined as an activity, so science is not even an entity, but that shouldn't matter much.


Reminds me of a conversation with a late friend. We wondered: what would the most academic book possible look like?

It could have: one brief, vague, and hedged statement, supported by a crapton of superscript references -- which constitute the rest of the book.

So like:

"Chapter 1:

Some things about the universe are at least partly knowable.<references 1-100,000>

The End."


The philosopher in me immediately objects, "How can you tell if something is 'knowable'? What constitutes knowledge in this model?"


That's covered in the footnotes...


Reminds me of DFW’s Infinite Jest (although I wouldn’t describe it as brief or vague)


It would not be academic without self-congratulation and lofty promises.


1. The world* is* everything† that is the case‡


Maybe there is a vast dark matter of uncompromising, super-rigorous science happening that is invisible to us because it is too rigorous to meet its own standards and so never publishes anything. But they know things.


I think this may actually be true in a sense. A lot of good science gets published but never widely noticed, just because good science is rarely sexy.


When it comes to testing materials, ASTM publishes and maintains dozens of kilos of fine-print documentation covering laboratory methods and procedures, almost all of which conclude with statistical ratings of their repeatability and reproducibility.

Very few would ever be published without a significant number of labs able to routinely achieve comparable performance.


I have tons of ideas that are interesting and have good preliminary results, but that will never get published because I lack the time to support them well enough to be accepted by my peers (i.e., interesting work without enough rigour to clear a partially self-imposed threshold). Everyone interested in their field of research is in the same situation. Lowering the threshold would increase the number of ideas that get communicated, but it would also increase the likelihood of some of those ideas being bonkers and decrease the signal-to-noise ratio in the literature.


Our math department does publish


There was a prof in my department who routinely passed grad students without a single publication because his standards were extremely high. All of his publications going back to like 2005 were in Science or Nature.


> who routinely passed grad students without a single publication

I really hope these were just Masters level students, because at least they could get the benefit of the doubt that it was coursework or something.

If it's PhDs, I feel so sorry for them: a major part of the degree is learning to communicate research. Without publications, there is no proof they can do research or communicate it.


I know a professor like this, and I feel sorry for his students too.


PhDs, and they did present at conferences (which in my field were not very selective), but yes it does seem like they did not get the whole PhD experience


Why discount the thesis, the traditional evidence of these things?


It depends on the context. I don’t bother reading American theses anymore because there is usually not much of value in them. Even in countries where theses usually mean something, they are often seen as a box-ticking exercise that needs to be done in the quickest and easiest way possible. If we fail students who don’t have a good thesis, not many would pass.


In my experience, the thesis doesn't properly test the skills involved in being a researcher. All it shows is that you've done some research, and that you can write and talk about it given a large amount of prep time and guidance. Without any publications, it's hard to judge if the PhD knows how to do peer review for instance.

On the other hand, after having gotten through a couple of publications, a couple of seminars, and at least one peer review process, the thesis becomes almost a formality, since by then you've already had plenty of experience writing, dealing with criticism, and presenting your work to experts.


That is like your prior, man.


Behavioral science is too hard to get right. The effect sizes mostly don't seem to be big and it's too hard to run a trial, so to keep a career going scientists are forced to aim for some sexy outcome which isn't there from an underpowered experiment. But almost nothing really matters in that field. It's as good as plastic recycling.

Still, we are forced to study it because there are interventions that have worked in the past: cigarette smoking, teen pregnancies, etc.

And it's useful to know what made those so successful. But maybe the answer is luck.


There are many basic things we know about the brain that can be attributed to behavioral science. For example, there is a ton of solid research from lesion studies providing insight into the function of almost every cortical and subcortical region in supporting memory, language, emotion, motor control, all aspects of perception, attention, etc.


Very misleading choice of artwork showing natural scientists in doubt when it's actually things like behavioral theorists who have orders of magnitude more irreproducible results.


This reminds me of how the Dunning-Kruger results might have been misapplied statistics.


I can imagine a world where they hedged the outcome sufficiently to not have to retract. "methodologies based on those used here may prove beneficial based on preliminary findings, subject to further refinement" type comment.

I mean, without over-egging it, you would not deploy peer review based on the modern evidenced outcomes: the papers are demonstrably not better in volume, they still have to be retracted, and peer review has been massively destructive of career progression for academics because of reification. To me, it's analogous: the model is flawed? Refine the model, don't junk the principle.

I think at one remove I agree with comments which go to "this is evidence that rigor is good, and self-policing is good"


And a prominent study on dishonesty was itself found to have fabricated data [1].

Like too much refined sugar, this is too much refined irony. What is going on?

1: https://www.theatlantic.com/science/archive/2023/08/gino-ari...


Don't lump in actual fraud with incorrect data analysis. The former is far more serious than the latter. In a long career, most scientists (even very good ones) will probably make mistakes in data analysis. Very few will commit outright fraud.


I'm drawing attention to the irony inherent in both these cases. To wit, a study on dishonesty suffering from dishonesty, and a study on rigor suffering from lack of rigor.

And sure, mistakes in data analysis are entirely possible. But there are lines to be drawn, always. Ariely and Gino, and now Protzko, Krosnick and company are not in the category of reasonable and honest mistakes.

On the other hand, even something with widespread effects, such as the mess Excel created for genetics papers, is not something that most people would see as deliberate sabotage.


As icing on the cake, one of the authors of the retracted study written about in Science is named Dr. Perfecto.

https://www.nature.com/articles/s41562-023-01749-9#Sec8


Well, they are experts on dishonesty....


But they were caught!

Wouldn't the best experts on dishonesty get away with it leaving others none the wiser?

Perhaps delichon's hypothesis of "dark matter" dishonesty (post above) should be considered seriously.



