The researchers in this paper use an astonishingly biased "fake paper detector", requiring only two conditions to be met for any paper to be considered "fake":
1. Use a non-institutional email address, or have a hospital affiliation,
2. Have no international co-authors.
And they acknowledge 86% sensitivity and a 44% false-positive rate (i.e., 56% specificity). It's a coin-toss which biases massively against research from outside the US and Western Europe.
No. They use 400 known fakes and 400 matched (presumed) non-fakes to estimate the sensitivity and specificity of their indicator, then apply that indicator to the full universe of papers, and then use the estimated sensitivity and specificity to convert the raw measurement into an estimate of the actual rate of fake papers.
If you know the true prevalence of a disease in a population, and the sensitivity and specificity of your test, you can predict how many positive measurements you will obtain. Vice versa, from the (flawed raw) measurement, given sensitivity and specificity, you can estimate the true prevalence.
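To make that concrete, here's a minimal sketch of that correction (the Rogan-Gladen estimator). The sensitivity and specificity are the article's approximate figures; the 50% flag rate is a made-up placeholder, not the paper's number:

    # Back out true prevalence from the raw positive rate of an imperfect test.
    # Placeholder figures: 86% sensitivity, 56% specificity (44% false positives).
    def true_prevalence(apparent_rate, sensitivity, specificity):
        return (apparent_rate + specificity - 1) / (sensitivity + specificity - 1)

    # If 50% of papers get flagged, the implied true rate of fakes is much lower.
    print(true_prevalence(0.50, 0.86, 0.56))  # ~0.14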
Furthermore, they’re explicitly saying that “red flagging” by their simple indicator doesn’t mean that the paper is fake, but that it merits higher scrutiny.
ETA: I mean, it could still all be bullshit (by virtue of some bias or so), but you’ll need to argue a bit harder to establish that.
ETA2: Actually, not sure that’s what they’ve done. They might have just reported the raw (very bad) measurement (that they call “potential red flagged fake paper”), without doing the obvious next step outlined above, and without applying any confidence intervals. So, it might actually be a pretty crap paper (though possibly technically correct) coupled with some mediocre reporting layered on top. Isn’t basic statistics taught anymore?
I've worked on research estimating prevalence from imperfect tests, and something that concerns me about this study is that they aren't showing the error bars for their estimates. Typically, you would report a confidence interval for prevalence rather than just a point estimate, and the confidence intervals can often be fairly wide. There are two sources of uncertainty here: the assumed probabilistic nature of the diagnostic test, and uncertainty in our estimates of the sensitivity and specificity.
I think this paper by Peter J. Diggle [0] gives a solid methodology. Instead of treating sensitivity and specificity as fixed values taken from sample estimates, you can model each of them as having a beta distribution. In this case these beta distributions can be found using a Bayesian treatment of Bernoulli trials.
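In case it's useful, here's a rough Monte Carlo sketch in that spirit (not Diggle's exact construction; the validation counts and flag rate below are invented placeholders, chosen only to be consistent with the rates quoted in the article): sample sensitivity and specificity from beta posteriors fitted to the validation sample, propagate them through the correction, and read off an interval.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical validation counts: 400 known fakes, 400 presumed genuine.
    tp, fn = 344, 56      # ~86% of fakes flagged
    tn, fp = 224, 176     # ~44% of genuine papers flagged

    # Hypothetical screening result: flagged papers out of papers screened.
    flagged, screened = 5000, 10_000

    # Beta posteriors (flat priors) for sensitivity, specificity, apparent rate.
    sens = rng.beta(tp + 1, fn + 1, n)
    spec = rng.beta(tn + 1, fp + 1, n)
    apparent = rng.beta(flagged + 1, screened - flagged + 1, n)

    # Rogan-Gladen correction per draw, clipped to [0, 1].
    prev = np.clip((apparent + spec - 1) / (sens + spec - 1), 0, 1)

    print(np.percentile(prev, [2.5, 50, 97.5]))  # rough 95% interval for prevalence

Even with these fairly large made-up counts, the interval comes out wide, which is exactly the point about error bars.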
Amazing. Reading more carefully, as FabHK pointed out above, they aren't even applying the obvious correction. They're just reporting the positive rate of the imperfect test. I've implemented Diggle's method [0]. When I have time, I'll see if they've provided enough data to do a proper analysis, and maybe write a blog post about it or something.
> Furthermore, they’re explicitly saying that “red flagging” by their simple indicator doesn’t mean that the paper is fake, but that it merits higher scrutiny.
Then they and Science should change their sensationalist headline. It's ironic that a paper about fake papers is itself promoted with a borderline misleading title.
So it's ok to lie in a portion of your work? Where do you draw the line? I draw it when someone starts communicating. Being wrong is ok, being deceitful isn't.
Is this headline really deceitful though? Certainly the research is flawed, but the statement "[bad thing] is alarmingly common" is basically just a subjective statement that lets you know what position the author is going to argue.
I will never understand why everyone bends over backwards to justify lazy af journalism. This is a magazine that is supposed to do scientific journalism, yet it didn't even mention the points that readers in the HN comments were able to figure out on a cursory look. Peer review isn't just the 3 reviewers who accept or reject something in a journal. It's everyone in the scientific community.
You can't directly calculate both sensitivity and specificity using equal numbers of positive and negative samples unless the actual population has that ratio.
A completely random test given equal populations results in 50% accuracy and 50% specificity. Things don’t look nearly as good if only 1% of the actual population has the condition.
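A quick back-of-the-envelope check of that point, using roughly the article's figures (86% sensitivity, 44% false positives, i.e. 56% specificity):

    # Positive predictive value: of the flagged papers, how many are actually fake?
    def ppv(prevalence, sensitivity, specificity):
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * (1 - specificity)
        return true_pos / (true_pos + false_pos)

    print(ppv(0.50, 0.86, 0.56))  # ~0.66 on a 50/50 validation sample
    print(ppv(0.01, 0.86, 0.56))  # ~0.02 if only 1% of papers are really fake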
So, in other words, the signal they get from it is around 70% of the noise, but it's ok because you can indeed do that with good enough statistics?
They better have a flawless methodology, because any tiny problem is enough to ruin their analysis. And well, just flagging almost any paper not from the EU or US as fraud doesn't usually come together with a flawless methodology.
Paper mills are a $3-4 billion industry, and that money isn't coming from nowhere: there are a lot of fake papers, and the fake paper industry is growing steadily.
So then the question becomes "where are those fake papers being published, and by whom."
You can converge on answers to those questions in a lot of ways. The fake-paper detection method is suggested as one tool to help journals tackle fraud.
If you don't think the conditions are valid, well, ok. But why not? How would you improve on the validation methodology? Obviously having more known fakes would be nice.
Saying the article is "bigoted nonsense" doesn't make a lot of sense without more information (to be fair, I might be lacking crucial context). Are the authors known bigots with a history of pushing bigotry? What I read seemed to be a sincere attempt to improve scientific publication practices by identifying the scope and scale of the fraud problem, while also developing means to address it. That doesn't strike me as bigoted nonsense.
That said, the headline of the article is pretty click-baity, and shame on Science's editors for that.
> The researchers in this paper use an astonishingly biased "fake paper detector"
I haven't looked at the details here, but if you make a prediction model, and that model is robust enough to explain something with great accuracy using only 2 or 3 variables, it's not going to be "biased"; it's just going to be robust and right more often than not using only those few variables (as long as the training data was broad enough).
The metrics used in this paper are... deeply flawed, to the point that the authors admit that they label nearly half of known good papers in a curated sample as "fake" -- and particularly likely to generate false positives for researchers whose institutions don't, say, run their own email systems (as is common in large chunks of the world). Here's a rundown of the flaws from an epidemiologist with a sideline in scientific communication:
You know what's funny? Even if the numbers are hot garbage, they proved the point about how easy it is to publish fake science papers, since it got published.
Kinda similar to those researchers years back who proved how easy it was to get into certain social science journals as long as you copied their ideology.
Well, there is a difference between "fake science" and "tried to do correct science but ended up being wrong". If the second is "fake science", then basically all that Newton ever produced is "fake science".
Ironically, this study has generated a lot of "fake news" in the field of social science. Its conclusions were widely spread mainly by people with ideological motives, and when you look at the study itself, the conclusions are quite different from what the rumors say. For example, the same researchers had tried similar hoaxes before the ones they mention in their study, except that those hoaxes failed to get published, and they "forgot" to mention it. They had no control group, neither "correct articles" nor "articles defending the opposite ideology" (so how can you conclude that these bad articles were published because of ideology if you don't know how many articles get published without being critically reviewed?). They also counted as valid a lot of journals that are pay-to-publish and not seriously used in the field. One of the authors, ironically, ended up supporting platforms that publish conspiracy theories (and was even banned from Twitter). Not that the study should be judged on that, but it's a funny anecdote: the author who, according to some, had the courage to defend real science against bad woke ideology ends up demonstrating that he never cared about real science and is driven by ideology, not science.
There's also a difference between outright fake science, i.e. lies/fabricated data in the manuscript, and bad science, i.e. conclusions that were always going to be "fake" because of bad practices, even though the authors are honest about what they did if you look at the details of the work. Ideally you would minimize both types of bad paper, but the latter isn't too damaging to the system in isolation, while the former can cause a handful of papers to mislead a subfield of science for years. Also, how to screen for and how to systemically discourage these two things could be quite different.
And the first one should be divided further into two categories:
1) A committed deceiver who has been working in this field for years and somehow managed to not get caught (pretty rare).
2) Fake science articles that get published but have absolutely no impact on scientists, because scientists don't progress based on randomly found articles, but by meeting the authors at workshops, exchanging with them, and so on, which makes a one-off fake article with a fake author totally irrelevant.
If you are a junior scientist, the articles you read are mainly the ones recommended by senior scientists around you. If you are a senior scientist, you are part of a community: you know the people who publish, and if you see a random article coming from nowhere, you may read it just in case, but you don't let it mislead you or significantly change your own research just based on reading it.
I think it's a blind spot for some laypeople when they discuss "fake articles being published": they don't realize how little impact "having an article published" has on the field. Presenting it at a workshop and debating it with colleagues does, but the scenario the layperson has in mind will never maintain the illusion.
Unfortunately, it's an open secret that fake or low-effort almost useless papers are very common in every area of scientific research.
Typically, it doesn't affect people working in that specific area - they develop a sixth sense for detecting bullshit papers. It comes with experience but depends on several factors, including the authors' reputation, their institution (for the first screening), what journal/conference the paper was published in, the authors' other work, and sometimes things as simple as how much effort was put into the figures, polishing the text, etc. Some of these things are LLM-proof, some of them are not - e.g. a senior professor I was talking to, who's been getting like 50-100 emails a week from non-English speaking countries (primarily India, China, Pakistan, and Bangladesh), mentioned that the quality of the text in the emails went up significantly almost overnight after ChatGPT was opened to the public. It'll be interesting to see how things change in the next few months/years.
Right - academic papers are written for academicians, who don't have any issues separating good papers and journals from bad. The fact that many journals have set themselves up as, or allowed themselves to become, part of the tenure and promotion metrics game is more of an issue with tenure and promotion. If the requirement for simple metrics disappeared, the fake papers would go away on their own. In any event, it's not really a problem for researchers.
Yup, that sums up the incentive to publish so many papers and chase citations.
Some professor put it in a nice way - the current system motivates us to think of research in terms of LPUs, least publishable units. No matter how established your lab is, you'd try to publish as soon as possible, leading to a lot of papers with not a lot of contribution. If tenure committees and all other systems that gauge academics required people to present, say, only their top 3 or 5 seminal papers, then people would try to put their best work out there without the constant pressure of always publishing - a win-win for everyone. Unfortunately, the ones with the power to make these changes are the ones gaining the most from the current system, so it's unlikely to happen.
I mean it is a problem for researchers though. The blatantly fake paper mill ones (which seem to be the topic of this article anyway) aren't, but scientific fraud or even just minor misconduct from people that know how to mask it can waste a great deal of grant funding and scientist time to figure out.
Like look how many times that 2006 Nature paper on amyloid beta in Alzheimer's was cited, turns out some of the images were completely fabricated.
My favourite example is the grievance studies affair (https://en.wikipedia.org/wiki/Grievance_studies_affair). The authors published, among other things, portions directly copied from Mein Kampf. In one submitted "study", they claimed to have observed thousands of hours of dogs having sex in parks, to study "patriarchal" behavior linked to "rape culture". The entire thing is a horrible indictment of the level of scrutiny undertaken in the various activist and social "science" oriented journals.
I have no sympathy for social science journals, but when you look at the details of this study, it's way less obvious than the rumor says it is. Of the tens of articles they proposed, the majority were rejected. The article "copied from Mein Kampf" took sentences from the book but changed words to create sentences that were scientifically acceptable (for example, turning "this social class is bad and we should avoid it" into "stress is bad and we should avoid it"), which means that the article's content in itself gave no reason for it to be refused.
It's very ironic that this study, which was all about "bad science", has since created a totally fanciful rumor about the real situation.
Similarly, I read the one about dog parks, and if you approach it with the good faith notion that it wasn't just fabricated whole cloth, it reads like an okay-ish Master's level paper being submitted to an okay-ish journal. And features a decent sample size, which is a rare thing in the veterinary literature.
Right. If one can develop a sixth sense about bullshit papers, then so too can an LLM. If you have a bullshit paper you will need to pipe it through an LLM to debullshittify it so that reviewers cannot tell. The reviewers themselves may need an LLM to fight the rising tide of passable bullshit papers. None of that seems productive to me, just throwing gasoline on the dumpster fire of p-hacking, credential inflation, publish or perish, etc.
Love that! I never understood why so many of us use our affiliation's email address in print when we know we'll only be there for another 2-3 years.
Reading the paper it seems like a pretty accurate description. The paper just calls it a "private email" instead of a "non-institutional email". For example (@@@ emphasis is mine):
> To identify indicators able to red-flagged fake publications (RFPs), we sent questionnaires to authors. Based on author responses, three indicators were identified: @@@“author's private email”@@@, “international co-author” and “hospital affiliation”.
> For Studies 1 to 6 we identified two easy-to-detect indicators, where a publication was labelled as RFP: @@@if an author used a private email@@@ and had no international partner.
> Then we combined the two best indicators (@@@“author's private email”@@@ and “hospital affiliation”) to form a classification (tallying) rule: “If both indicators are present, classify as a potential fake, otherwise not” (the “AND” rule) (Katsikopoulos et al., 2020).
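For what it's worth, the whole "tallying rule" as quoted boils down to something like this (the parameter names are mine):

    # The "AND" rule from the quote above: flag only if both indicators are present.
    def red_flag(uses_private_email, has_hospital_affiliation):
        return uses_private_email and has_hospital_affiliation

    print(red_flag(True, True))    # flagged as a potential fake
    print(red_flag(True, False))   # not flagged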
Fun bonus there with the 2020 book citation for the concept of an AND gate in a classifier.
The rule I omitted from the quote was "hospital affiliation". In the paper, they try a variety of combinations of rules, including some where failing any one rule classifies the paper as fake.
The meat of my complaint remains even when they're intersecting with other rules. We should not be incentivizing people to use emails that predictably go dead in O(years). It is quite a common annoyance to read a paper, want to contact the author, and not be able to because the email they listed is dead, requiring searching for where they currently work and trying to find their email at that new place, with mixed results.
Yes, a private email is predictive of a paper being fake, in the literal sense that P(fake | private email) > P(fake | institutional email). I get weird looks at work for using my permanent email address because of it. And probably if we select on that as a way to discard papers, it will initially appear to work, and then start to look like it's working even better, because anyone trying to give permanent contact info will be forced to switch in order to be published/cited/taken seriously. But that's a bad outcome. Also, if you systematize this rule, paper mills will just start using emails that appear institutional, because this is a simple rule to defeat.
> Such manuscripts threaten to corrupt the scientific literature, misleading readers and potentially distorting systematic reviews.
Is treating "the scientific literature" as a single thing perhaps a habit worth giving up?
As convenient as it would be to be able to just blindly trust something because of where it is published, that model hasn't shown itself to be especially robust in other cases (e.g. the news media).
Elsewhere, this is a red flag:
> I trust it because of which aggregator aggregated it
Should we really make an exception for science? I think that academia is a bit biased towards optimism about publisher-based root-of-trust models because scientific publishing is a relatively unweaponized space. Sure, shenanigans happen, but not at the same scale as elsewhere. The fakers are just trying to get another published paper, they're for the most part not trying to mislead. It's only fake news with a lowercase-f.
Sure, let's try to create a medium we can trust, but let's not get our hopes too high about it. That's energy better spent augmenting the ability of a reader or researcher to decide whether to trust a paper based on its content, or based on it having been endorsed or authored by somebody they explicitly (or transitively) trust.
I disagreed with you until the last paragraph. Lots of things authentically just rely on a high degree of trust and I suspect trying to engineer human systems to be zero trust will make them deeply pathological.
But tempering our expectations while working to meaningfully improve on conditions? Aces, all for it.
I agree that zero trust is in most cases a problematic goal. It's really root-of-trust vs web-of-trust that I'm on about here.
If peer review is the product then the trust should be peer to peer. It feels like we're treating the publishers themselves as an authority, which I dislike.
The publishers ostensibly occupy a role of stewardship; I suspect the model must have made sense at one point. I admit it's hard to see them as much more than rent extractors these days.
The nature of trust relationships seems to trend towards aggregation and centralization. Do you have any thoughts on how a web of trust can sustain itself, or is that perhaps not a concern if centralization appears to reflect a network consensus?
There's a belief among some distributed systems folk:
> If your system doesn't have an explicit hierarchy then it has an implicit one.
I think it's hogwash. There are plenty of distributed systems in nature that lack a hierarchy (mycorrhizal networks in the soil of a forest come to mind). Truly distributed systems are possible, we humans are just bad at it.
Or rather, we're bad at designing for it. We do it all the time in our personal lives, we've been doing it for thousands of years, but when we introduce systems that are designed to scale globally, it falls apart and you end up with gatekeepers and renteeism.
Another distributed systems thing: the CAP theorem.
Usually, the systems we design come at the expense of partition tolerance (blockchains, for instance, go to great lengths to assure consistency).
But those fungal networks that I mentioned, they put partition tolerance first, which gives the system a sense of locality that is lacking when you instead focus on consistency.
That same sense of locality is found in natural emergent human social networks, they don't even try to achieve global consistency: if you think Jimbob is an asshat, and your friends agree, that's enough.
So I think the key to sustainable webs of trust lies somewhere in that underexplored design space where we make partition tolerance primary. Rather than building tech to tell people who to trust (think of that padlock icon in your browser) we should respect their autonomy a bit more and make the user experience be a function of that user's explicitly defined trust settings.
One thing I like about this is that it removes the edgelord dynamic. There's no advantage to being the guy who posts the most outrageous stuff that just barely squeaks by the moderator. Instead, everybody can publish, but if you want to be heard as widely as possible you need to be trusted (in whatever domain you're publishing in) by people who are themselves well trusted in that domain.
Experts can be found not by listening to some authority that tells you who the experts are, but instead by following the directed graph of trust relationships until you find a cycle. That cycle is a community of experts in the "trust color" you're querying for. So expertise is more emergent and less top-down.
If you can't agree with somebody about a topic, you can follow this graph and either find a mediator (someone you both transitively trust) or find separate experts who presumably exemplify the disagreement more energetically than you do.
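A toy sketch of what that walk could look like; the graph, the topic labels, and the "follow the most-trusted peer" heuristic are all my own simplifications:

    # Follow outgoing "trust" edges for a given topic until a node repeats;
    # the resulting cycle is treated as a community of experts on that topic.
    def find_expert_cycle(trust, start, topic):
        path, seen = [], {}
        node = start
        while node not in seen:
            seen[node] = len(path)
            path.append(node)
            peers = trust.get(node, {}).get(topic)
            if not peers:
                return []          # dead end: nobody this node trusts on the topic
            node = peers[0]        # follow the most-trusted peer
        return path[seen[node]:]   # the cycle: a mutually trusting expert community

    trust = {
        "you":   {"stats": ["alice"]},
        "alice": {"stats": ["bob"]},
        "bob":   {"stats": ["carol"]},
        "carol": {"stats": ["alice"]},   # alice -> bob -> carol -> alice
    }
    print(find_expert_cycle(trust, "you", "stats"))  # ['alice', 'bob', 'carol']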
Instead of:
> Agree with us or be silent
It would be more of a:
> Here's how we can disagree as fruitfully as possible
Navigating the resulting dataset and deciding what to believe would be left as an exercise to the user, which it already is, but we'd hopefully have given them enough so that we can scale further than our unaugmented trust instincts allow for.
There's unfortunately not much money in building things like this. There's no guarantee that you stay in control of what you've built (the users could just revoke trust in you while still using the software that you wrote) and that tends to be a turn-off for investors.
I've given it a lot of thought, but not much code. I wish I could say that building a proof-of-concept has been difficult, but I'm not even to the difficult part yet, I'm struggling with the boring stuff like time management.
One day I'll have saved enough to take a year off and I'll build that POC.
---
Re CAP, the reason I think that consistency is the problem: it creates high-value targets for corruption. Somebody or something has to arbitrate against whatever alternative would threaten consistency. That's a position of power, and too often it's one that's easier to retain by abusing that power than by taking your role as arbiter seriously.
The power that comes from being an emergent authority on a topic--where the system arrives at consensus not because its design requires it, but because the thing we're agreeing on really has that much merit--that's a different kind of power. You can't squat on it and abuse it, people will just stop trusting you. The only thing to do with that power is to use it to continue striving towards something worthwhile (the difference being that people are now trying to help you). If this was our model for group coordination, I think we'd end up with leaders of a different temperament.
That's what we should all want for power. It should be hard to get and easy to lose.
One option is to provide a (perhaps less prestigious) avenue to publish non-novel or unsurprising findings. I suspect many people “fake” their results so all their effort isn’t in vain.
> STM hasn’t yet generated figures on accuracy or false-positive rates because the project is too new. But catching as many fakes as possible typically produces more false positives. Sabel’s tool correctly flagged nearly 90% of fraudulent or retracted papers in a test sample. However, it marked up to 44% of genuine papers as fake
> so results still need to be confirmed by skilled reviewers
So there is some human review involved, which is presumably how they got to the headline figures that 34% of neuroscience papers and 24% of medicine papers are fake.
Still, flagging 44% of genuine papers as fake doesn't sound very useful. The process only about halves your workload compared to just checking all the papers. In any large-scale rollout they would have to set a way higher threshold, and hope they still catch a useful number of fraudulent papers when using a threshold that detects 10% or 1% of genuine papers as fake.
If they think that people can do research in medicine without having legal access to patients (i.e., some kind of hospital affiliation), they are clueless about medical research. They don't seem to understand how intertwined hospitals, medical companies and academia are now. Lots of relevant physicians are also teachers, direct a research team, or are testers of new products.
They are also tagging all independent, non-affiliated researchers as fake. Do they know how many young people are doing science in universities as temporary collabo-slaves without the right to a proper institutional email? Their detector would tag Einstein and Erdos as fake scientists, for Pete's sake! They just have tunnel vision about how real research works.
I am amazed at how well Alexandra Elbakyan has created and promoted sci-hub to fight these journal cash cows, and appalled at the way these journals have tried to block her. They now digitally watermark every paper downloaded at colleges etc., so they can ID the provenance of leaked copies, meaning she must obfuscate this as best she can. The journals try to punish universities that leak papers to sci-hub.
Give her a wave....
https://sci-hub.se/alexandra
The cash cows, AKA Elsevier et al., need to do more to stem the flow of BS. The problem is that the proliferation of well-crafted but fake papers has grown enormously over the past 25 years, while the cows rely on unpaid editors who are swamped by the duty; it's time for paid scientists to winnow the chaff. Sadly, the cows are a greedy lot.
The only way out is fully open publishing. Back in the day when Nobel was born, journals and papers circulated as near-free resources, with authors mailing free copies on request (and now emailing them, though this is often interdicted by the cows), and journal fees were modest, covering production costs.
Nobel would be (IMHO) royally pissed at the present state.
So I suggest the Nobel Committee introduce a policy that only openly published papers will be read and considered by the committee. This would put a tiger among the pigeons (cows) and change things; say, after Jan 1 2024?
Sadly, this doesn't even include the studies whose authors produce poor experiments and theories, or go out of their way to prove their results, effectively generating additional scientific publication waste we have to sift through to find genuine material, or, worse, material that people then use to create policies impacting large populations that are doomed to fail in the long run. The image this creates for me is building a house on quicksand.
In essence, there's more than enough to deal with regarding bad science that was done in good faith. There have got to be better ways to filter out bad science offered in bad faith.
I think the incentive structure has to be fixed - either making paper mills unattractive, or removing the demand for their services, whichever.
I think filtering out bad-faith efforts is too challenging because the pool of people capable of doing so is so limited; hell, it might take longer to review and reject such a thing than to produce it.
An author and journal rating service evaluating merit over time could help. While poor scientists become evident to competing authors in their respective fields, a policymaker or journalist may not take the time to figure out whether the results have merit, or to look into the background of the authors, before using the results to support their position. In a perfect world, these authors would be filtered out during the "peer review" process, but the process seems... corrupted?
>Sabel’s tool relies on just two indicators—authors who use private, noninstitutional email addresses, and those who list an affiliation with a hospital.
Can someone explain why the affiliation with a hospital is used as a key indicator?
A huge part of the problem is driven by Chinese hospitals. The PRC decided that they wanted China to catch up to the West in science, so they made a rule that to get promoted as a doctor you have to get some papers published in international journals. That applies to all doctors. But they're, you know, busy doctoring and don't have the time or energy to do that. At the same time they'd like to get promoted. So they buy papers.
If people think that 100%-fake papers, with completely made up data and process are bad... wait until people learn how bad 30%-fake papers are, with real cherry-picked data and absurd levels of p-hacking :p
I found one the other day in the area of finance. The Chinese researchers claimed to have discovered a small tweak to a long established indicator which they described as giving a remarkable increase in r-squared value across a cross section of markets.
Sounds great, who wouldn't want to use this? So I implemented it and found that their increase was due entirely to applying a log transform of the input variables. The resulting clusters were tighter, but it had zero predictive capability.
Very disappointing but in my experience, this is not uncommon.
I've read other stuff related to this issue. It seems to me our current system exists in a social reality for which our metrics of authenticity were not designed, and it harms both credentialization -- which is recognized as a problem -- and serious science, in ways that are not readily acknowledged as a problem.
Mendel, the father of genetics, failed to become an accredited teacher. His work on genetics would likely get no recognition in this environment where credentialism is king.
Some guy who knows enough about genetics he created his own home pill to deliver genes into his gut to fix his lactose intolerance is being ignored by the world. Someone recently told me on HN that his video sounds like a scam video of a sort that is common (probably in a redacted comment).
I have a genetic disorder, which fails to pass the credentialism test. For that and other reasons, I didn't bother to say anything like "Sorry you don't know enough about genetics to follow it."
The individual wanted to know where the "studies" and "papers" were. And they likely don't exist and will never exist because there's no profit in it for someone else to try to build on his work.
I don't know how we fix this, but the world has changed and it's valuing the facade of scientific work more than actual scientific work and it makes me want to scream.
> Some guy who knows enough about genetics he created his own home pill to deliver genes into his gut to fix his lactose intolerance is being ignored by the world. Someone recently told me on HN that his video sounds like a scam video of a sort that is common (probably in a redacted comment).
To be honest, I know nothing other than your description and it 100% sounds like either a scam or there are some variables that are not being controlled for. I’m a little shocked that you seem to have fallen for it, unless there is just a lot more to the story…
I don't know why you would be "shocked" that I "fell for it." Most of the world thinks I'm a nutter who imagines I'm getting well from my genetic disorder and dismisses my progress as "placebo effect" -- which would give me a mind more powerful than Darth Vader -- or just deluded bullshit.
So either I understand genetics and medical stuff better than average, or I'm absolutely the kind of fool who falls for bullshit scams on the internet.
My comment was assuming you’re the average person. I don’t know you or your reputation at all and I think it’s unfair to assume some random HN poster like me should. But if most of the world thinks that, who am I to disagree?
Somewhat related anecdote: I'm reminded of a good friend who is preeminent in their field. No one would know them outside of their area of expertise, but anyone within that area of expertise (or who has learned that area of expertise from their college textbooks) knows their name. I got dinner with them over the holidays last year and they lamented that, I'm guessing based on name recognition, they receive a steady stream of communications (letters, email, etc.) from laypeople who always think they have done something amazing previously thought impossible, or that they have a new insight that everyone else ever has missed. Invariably my friend no longer spends time going through these because in every single one of the hundreds of comms they've read, there's always some confounding factor or something basic the writer missed that invalidates everything. I am not an academic, but my impression is that while laypeople like you and me can brute force things and have amazing insights, mostly we're just wrong for some reason that a trained scientist or academic would have spotted immediately.
> I don’t know you or your reputation at all and I think it’s unfair to assume some random HN poster like me should.
I've been on HN nearly 14 years. A fair number of people know something about me, though certainly not everyone.
> But if most of the world thinks that, who am I to disagree?
That's a personal attack, which is a violation of HN rules.
Your entire comment is about me and a justification for dismissing me and not pertinent to the point I'm making.
I provided a source. It's got a nearly 20 minute video. You are welcome to watch it and explain to me what is wrong with it and why it's a scam and I shouldn't believe any of it.
I can’t even imagine giving so much credence to every conspiracy theory or supposed breakthrough that only appears on YouTube that I spend time watching everything. There are only so many hours in the day, time is precious, and the onus is on the person making the claim to support them, not the other way around. It’s like someone telling me to watch everything on YouTube about copper bracelets taking bad energy and putting the onus on me to disprove it, but that’s not how science works.
I will share though that I have heuristics that help me gut check new claims. None of them are infallible but they can give a good indication that saves me watching every layperson video making outrageous claims. Those heuristics include things like getting published in major journals, getting peer reviewed, having large and well thought out studies controlling for confounding variables, having leaders in a field support or coauthor the studies, and more. There are also heuristics in the opposite way, which I won’t get into, but suffice to say that the position of “everyone thinks I’m a ‘nutter’ but this one thing happened to me in one case and it has never been repeated but I believe it and the establishment/BIG GENETICS is trying to keep the research under wraps but here’s a video you can watch that proves everything” is not only not compelling to me, but in fact ticks a lot of the anti heuristic boxes. ;)
You are dismissing a specific source out of hand, apparently without so much as clicking the link provided since he's not a lay person, he's a scientist with a degree in biology.
You aren't required to check out anything at all, but your ugly dismissal without bothering to check the source isn't any kind of meaningful rebuttal of anything.
I'm afraid "new tools" aren't going to "tackle" the problem. There is a source problem (bad incentives, low integrity, people-pleasing behavior), and second-order tools that amplify it (second-order problems).
Adding new tools to 'detect' doesn't solve the original problem; they might reduce the second-order problem, but they do not touch the source problem. These are band-aids trying to stop a flood of bad science.
I'm not sure what this says about my turn of mind - probably too devious. But I wonder if one tack that fraudsters could follow would be to publish a paper with the named author(s) being legitimate scientists, and then include some citations inside the paper to the fraudster's other papers.
You'd need to use some obfuscated correspondence email to complete the loop.
Hi, my name is Adam Day. I was interviewed for this piece in Science. If you are interested to learn more about papermills, I have a popular blog on the subject. https://medium.com/@clearskiesadam Also happy to answer any questions you might have.
>“It will never be a [fully] automated process,” he says. Rather, the tools are like “a spam filter … you still want to go through your spam filter every week” to check for erroneously flagged legitimate content.
Even the article makes it clear that this is just a wide net for an automatic first pass. Of course, it is biased towards countries with lax standards.
The requirement to be introduced into the club by an international (ahem... Anglo-Saxon) partner is very condescending, and scientific colonialism at its best. The burden of the white scientist.
People can do science on local problems without being babysat by a foreigner who, most of the time, will just show up and sign.
Yeah, if the incentives are to publish convincing papers with less emphasis on quality or, you know, good research then this is going to happen and it’s going to happen a lot.
The article you link doesn’t say if the papers are any good. It does suggest that they were in smaller niche journals so I suspect not.
On the upside, I can see the potential though for literature review type papers.
We must always put this in context, and I think we need to be careful about the narratives. Here are a few rules of thumb:
- Realistically the only people who can determine if a work is sound or not are other researchers in that same field.
- Peer review is a weak signal: reviewers are good at recognizing bad papers but not good at recognizing good papers (read this carefully).
- Most papers aren't highly influential, meaning that we don't rely heavily on the results of most works (we rely on them weakly, or purely for citations).
- The more influential a work is the more likely it is to be reproduced and scrutinized.
- Benchmarks are benchmarks, nothing more. Benchmarks are weak signals at best and shouldn't be used to make strong conclusions. Be that a p-value, FID, or even likelihood.
So we have to keep this in mind for a lot of reasons. One is how we talk with the public. Headlines like this often make people grow wary of science. While scrutiny is good, we have a good history of being successful. All processes are noisy, but the cream is more likely to come to the top, and the surface is less noisy. It also tells us who we should be listening to when taking advice and summaries of works. If you believe the news has failed us, then look to the sources.
I see many people who only get their science from news sources that claim scientists are corrupt. I find this odd, especially considering I've worked at national labs and I can tell you that no one there is doing it for the money. You'd have to be a fucking idiot to do science for money. It doesn't pay well, you never get real time off, there is a high barrier to entry, and you are under high amounts of pressure. We're on a forum with Silicon Valley wages: the average physicist wage is 100k, what you'd make with a BS in CS, but you need an advanced degree to work at a lab. Let's try to compare like with like by looking at LLNL. As a PhD physicist you'll make between $150k and $200k/yr. You'll make the same as a PhD computer scientist. Yeah, this seems good, but consider that if you drove 45 minutes west then that would be your base salary, and you'd be making as much again in other compensation. You can easily verify this, and there are plenty of people you can ask for personal experience (I've seen people jump ship often). This doesn't prove that they aren't corrupt, but it provides strong evidence that if these people were motivated by monetary compensation (or even prestige), there are far better opportunities for them.
Another important aspect, which I think is critical to forums like this, is to be careful how you weigh in as a non-domain expert. Opinions are fine and no one should prevent you from having them. But the confidence in your opinion should be proportional to your qualifications. If you're an expert in one domain, I'm sure you're frustrated by how many people discuss your domain as if they knew so much and get so much wrong, and by how wrong answers float to the top of forums (HN and Reddit) while the gems are hidden. This usually comes down to a lack of nuanced understanding. Simple answers are almost never correct. Murray Gell-Mann amnesia doesn't just apply to reading the news. Discussions can be had without teaching. Determine your goals, and ask yourself if the way you are discussing allows you to change your opinion or not. Make sure you're on the same page as others, using the same assumptions (this is a key failure point). I'll argue to go in with care. If you don't, you're just adding to the noise.
This "paper" is bigoted nonsense.
https://fediscience.org/@ct_bergstrom/110357278154604907