A Cornell professor’s trick for getting 1700 peer reviewed publications (andrewgelman.com)
207 points by luu on Dec 5, 2018 | 120 comments



Back in 2012, I worked with a guy in grad school. He was a grad student as well--just one year senior to me. He would recruit new incoming grads to 'collaborate' with him on his current projects or whatnot.

The way he operated usually went like this: he had a hypothesis (usually something trivial, in my opinion). Then he recruited grad or undergrad students to work with him on it. After that, he let his teammates do all the development work (coding) and experimentation (recruiting subjects and sitting with them in the lab to run the experiments). We needed to recruit human subjects because our work was related to the usability of assistive technologies. By the time the data was collected and analyzed, the guy would show up with a vague draft of the paper, in which he had liberally cited his own (and his grad school mentor's) old publications, without knowing a damn thing about what his teammates did. Then we, the teammates who ran the experiments, would fill in the experiment and results sections. He would then write the conclusion and fill in the rest of the holes. After that, he submitted the paper, taking first authorship (because he came up with the hypothesis). That's how he was able to publish >25 papers during 5 years of grad school (until he got his PhD).

Now that guy is an assistant professor--with ~50 publications--at a well-known university in one of the Midwest states of the US, at a very young age (I think he's just over 30). If you look at his publications, he liberally cites himself and his mentors.

Kudos to him, he knows how to play the game.


I'm in awe of his skills and intelligence. Sounds like someone who knows how to work smarter. I would have taken that as a learning opportunity.

*edit: Getting a few down-votes on this one. I was being mildly sarcastic, but on the other hand it takes some skill and smart thinking to pull this kind of thing off. Also, in my experience, bullshitting is a very valuable and necessary skill at work and in life in general. Arguably it gets you to a position where you can do work of actual substance and get recognition for it.


Sounds like you're in awe of someone who has learned to play a game to his own benefit, but to the detriment of the larger institution (i.e. science) of which he is a part.


This seems to be a common thread of "lifehack" types


Going by the original description, "detriment of the larger institution" might be inappropriate. After all, he gave more than 25 other people a chance to publish.

And if we assume that some of those experiments involved master's students, then for a few of them that was their first publication. It looks like he was an amoral presence from many points of view, but you cannot ignore that he probably helped many people get more experience.

Again, I am not claiming that what he did was right, but it is not necessarily a detriment to science either.


If he's OK with doing bullshit work (as opposed to real research) just for the sake of gaming the system, why not just do it in the private sector? Is it about impressing people with a professor title?


Maybe he's waiting for tenure to start doing real research. That's the only winning strategy for this game, as far as I can tell.


You are what you repeatedly do.

It's very unlikely that someone who's spent their entire professional life being one way flips a switch after tenure.


Maybe he's already doing real research, and that stuff is a side "business" to allow him to continue to do so.


Or maybe he's just a scammer.


My understanding is that most labs cite themselves over and over in general, but perhaps this is more advanced.


Yes, they do, and it makes sense to do so. You want to point others to the related work that has been done on the topic, and it's likely that your lab has done a lot of this work.

Of course other groups have also done related things, and you should cite them as well to create an honest image of the research landscape.

In the case the OP mentioned, this is how a lot of research is carried out. The main part that sticks out to me as scuzzy is that he took first author without being closely involved in the work.


Isn't he roughly doing what PIs do? Except they usually take last author.


In CS and ECE, yes, PI is the last author.

But as a PhD grad looking for a faculty position, you will be evaluated by the number of first-author papers you have published.


Name order depends on the conventions of the field, but yeah.


Peer review is broken. The assumption with peer review is that the review filters out bad research in a fair way. The reality is that it is neither fair nor particularly effective in preventing bad research slipping through.

Part of the problem is the review quality is almost as inconsistent as the article quality.

From experience I can tell you that rejecting papers is a lot more work than accepting them. It requires arguing your case in such a way that it survives editor scrutiny. This in turn requires actually reading the paper, doing some background checks, etc. Of course, editors getting involved also means work for them. So negative reviews tend to trigger a lot of work. It's so much easier to just rubber-stamp a thumbs-up.

And editors have the problem of needing to fill their journals with stuff. Nobody reads these things cover to cover. So, there's a lot of filler content that will never be cited that got rubber stamped by reviewers taking a five minute glance at the text. As long as the article is not too blatantly bad, nobody cares. A common practice is to invite accepted authors back as reviewers. So, there's a notion of bad articles leading to equally bad reviews.

It is not surprising that there are researchers who game the system by publishing in friendly publications where the reviewers/editors are known to be friendly. They all need to keep their numbers up.

A possible fix could be to review out in the open. Publish reviews along with the article. Allow scientists to challenge these as well. It's one thing to say "this is probably fine" anonymously; it's another to say, "please publish this with a message stating that I approve and endorse the content of this article."


I am not an insider, but could the review turn into more of an endorsement? Meaning the paper is published and the endorsers are listed as well, e.g. "Reviewed and endorsed by Jill Jones of Cornell University." Organizations could then have clauses such that if you endorsed papers you didn't actually review and test, you would lose your job. I know that seems harsh, but we need to find a way to get "skin in the game". I don't think a paid organization would work, and I think having something at the government level would be worse.


>'Meaning the paper is published and the endorsers are listed as well, e.g. "Reviewed and endorsed by Jill Jones of Cornell University."'

If you read papers from like the 1700s you will see stuff like this. Basically you wanted respectable members of the community to vouch for you.

Say it was some experiment with dogs and blood pressure. It would say "My servant Mr. X recorded the readings I read off the dial while Mayor Y restrained the dog and Pastor Z observed it was correct".


I like this idea, but I think it might make the whole system collapse, with few people willing to review papers at all.


The ‘nobody wants to do job X’ problem has the usual solution: compensation.


That compensation would come at the expense of research grants. This is not the same world as commercial development.


Some journals do offer compensation - I think the American Economic Review used to offer £100. The solution hasn't been widely adopted, though, perhaps because academics enjoy the kudos of being a reviewer (and the chance to savage other people's work anonymously :-P... ). And while you can pay for reviews, it is hard to pay for review quality, which is very subjective.


Also because a price that would significantly incentivise a senior academic is a price that is way too high for most journals to pay. If you have prestige, a decent if not exorbitant salary, bulletproof job security, a project you're passionate about and want to spend time on, and way too many obligations to actually spend time on your project, how much would you take in exchange for adding yet another obligation to the pile, taking time away from what you really want to spend your time doing?


Maybe that's OK? Going a bit slower but with more solid footing seems like a reasonable tradeoff to me, though I'm a layman.


My husband reviews papers. He works a 40h/wk industry job; he reviews papers on Saturday mornings when I talk to other people or do personal projects, pretty much out of the goodness of his heart. There is no way he would ever have time to download the required third party libraries for the average paper in his field, let alone figure out how to build and run it. We're not talking about a highly in demand reviewer here -- he's an average expert with a PhD.

That system essentially can't work unless 1) you make 1st-year grad students do the review and 2) you somehow make people OK with publication delays being extended by months.


Whether peer review is broken or not is not the problem highlighted in this article. This professor is publishing in junk journals that are not rigorous and are pay to play. I receive offers from these spam journals by the tens to hundreds every day; they somehow skip my inbox. The problem highlighted in this article is why Cornell is not doing something about this professor.


This is just an extreme form of what is in fact common practice in the academic world. There are a lot of second and third rate publications that aren't completely horrible that happily accommodate a lot of mediocre articles peddled by mediocre professors on behalf of their mediocre graduate students. We don't all get to publish Nobel prize winning articles in Nature.

Universities are typically more concerned with their professors' ability to raise funding than with their academic integrity. If the numbers look good (money coming in, publications going out), they are not likely to complain or apply much scrutiny. Their assumption is that peer review should be sufficient. As I was arguing, it is not. So that is in fact the core problem. Universities don't care, reviewers don't care, journal editors don't care, publishers don't care. They all benefit from inflated numbers. The system does not work. Cornell is complicit here and part of the problem.


If this were the case, then I'd see these low-tier / scam journals in the CVs of my colleagues. I do not. Perhaps this is only because I am at a well-respected/ranked institution. In terms of funding, NIH, for example, only has you list the 4 or 5 most important publications in your biosketch, so quality, not quantity, would matter there in terms of securing funding.

Yes, lower-tier (but not scam) journals can be the homes of lower-quality work, but most often papers are there because 1) it is a journal for a subdiscipline (Child Development, for example), 2) the work is useful but not groundbreaking (e.g., establishing the validity of a lab's protocol), or 3) the research was conducted well, but the results were not clear or are complicated.

The last point is what often irks me the most about science: the need for clear results to get into a good journal. Journal quality should be determined not by results but by the quality of the methods and research questions. That is why I am a proponent of pre-registration.


Like the rest of economics, it's supply and demand: it depends on whether the journal is elite and has more submissions than space (Nature) or is struggling to fill space.

Also, journals care more about important results than correct ones, until they get burned too much. But much of modern science is too hard to disprove, because the claims are too subtle and complicated.


Why is "X is broken." People say this about interviewing and other stuff too. Yes it is broken, but that is the game. You either play it or if you chose not to you won't get the fancy Professorships and title, or in the professional world you won't get the job you want, the promotion you want etc.


The difference between that and job interviews is that we are forced to pay for this crap via taxes. It is literally a black hole for tax dollars.

I still say cut government funding for research by 90% and watch it get fixed automatically as all the bad actors stop joining (you still probably need to wait for the current ones to die/retire).


I doubt that. For example, Polish science is poorly funded, and yet it’s 99% bad actors gaming the system with bullshit non-research.


It's poorly funded compared to US science, not to Polish janitors.


When I hear "X is broken" I translate it to "We haven't really figured out X, it must be hard"

And usually then things make more sense. We haven't fully figured out interviewing. Interviewing is hard.

We haven't fully figured out vetting academic papers; perhaps vetting academic papers is hard.


20 years ago, Sergey and Larry used the scientific publishing idea of tracking citations to determine document importance and built a better search engine. When people realized that links mattered more than anything, they started spamming links all over the place and the SEO battles began.

Glad to see that the scientists have caught up.


In reality, PageRank is surprisingly resistant to such efforts. SEO typically involves a bit of social engineering and hacking (for example, convincing a well-known website to link to you, or spamming the comments of an unpatched blog engine).

The h-index, the measure used for researchers, is surprisingly gameable, on the other hand. For example, two researchers can buddy up and cite each other across a few hundred publications in junk journals with DOIs, and suddenly become the most prominent scientists of all time. It is mind-boggling that even Google Scholar doesn't use PageRank.
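A minimal sketch (illustrative Python with made-up numbers, not anyone's real tooling) of how cheaply a reciprocal-citation scheme like that inflates the h-index:

  # The h-index is the largest h such that an author has h papers with at
  # least h citations each. Illustrative sketch with made-up numbers.
  def h_index(citation_counts):
      counts = sorted(citation_counts, reverse=True)
      h = 0
      for rank, cites in enumerate(counts, start=1):
          if cites >= rank:
              h = rank
          else:
              break
      return h

  # An honest record: a handful of reasonably cited papers.
  honest = [120, 45, 30, 12, 7, 3, 1]
  print(h_index(honest))          # 5

  # Two authors buddy up: each dumps 100 junk papers into pay-to-play
  # journals, and every junk paper cites all of the other author's junk
  # papers, so each junk paper ends up with ~100 citations.
  junk = [100] * 100
  print(h_index(honest + junk))   # 100, overnight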


It was easier to trick PageRank back in the day.

Today, they cross-validate the search results against DNS queries in a specific region and against data from their tracking scripts embedded in pages. They get a lot of data from ISPs, Wi-Fi access points, their own 8.8.8.8 DNS service, and even their mobile operator, Project Fi.

So, even if you manage to trick the linking part, you'll not pass their other filters.

It's much harder now but still doable provided someone is willing to foot the bill for such an operation.


If you're talking about PageRank anno 2018, then you're talking about two decades of research fighting back against gaming SEOs. I wouldn't say the resulting resistance is all that surprising.


My point was about gaming the score using self-citations. This is almost trivial with the h-index and much more difficult with PageRank. PageRank is superior to the h-index in its resistance to this.


The graph PageRank operates on is fundamentally different from the academic citation graph because it allows cycles, which lets the algorithm converge.

Papers cannot cite future papers, so you are left with trying to value older papers from a number of newer citations of unknown quality, essentially by counting them. Until enough citations become available, the quality of new work is usually proxied by the prestige of the journal, which in turn makes these papers more widely read, influential, and cited, and this perverse feedback drives the whole train wreck of academic publishing.


"When a measure becomes a target, it ceases to be a good measure."



They were certainly doing it 20 years ago. I can think of a professor in my department who was caught doing exactly this. In this case, as back then, I doubt any sanctions will be taken: although the professor is cheating the system, his employer benefits from the deception.


This article's subject does not match the practices of the field.


Scientists were gaming citations before SEO.


Getting increasingly cynical on this topic, given the prevalence of introductions in popular media like 'Professor X is a world-renowned expert with O(1000) publications', which my brain translates as 'Professor X probably barely read everything with their name on it.'

There are precious few exceptions to this rule unless they are very old or are Paul Erdos.

I'd much rather hear 'Professor X's most important contribution was Y.'


> I'd much rather hear 'Professor X's most important contribution was Y.'

It's rather funny to me that for years, Cornell's biggest professorial scandal was Daryl Bem transitioning from groundbreaking psychology work to ESP studies. It was a much-discussed embarrassment for a while, then he started putting out successful ESP findings with uncommonly-good practice and people started worrying about what else you could prove in the same way.

These days, it looks like Bem may come out rosier than some professors there who were still on the rise while his ESP work was causing outrage. Most of Bem's results are disputed at best, but they were actual contributions which led to further study. His methodology and basic integrity have never been seriously indicted, which puts him leagues beyond Wansink and Sternberg.

With Bem, it's easy enough to talk about his most important 2-3 contributions and analyze their merits. The others look more like they were being paid by the word, and that should have been a warning even before the specific problems were found.


I don't know about that. There's no doubt it happens that people artificially inflate their citations, but how often? The fact that cases like this are reported as something exceptional says that they are, most likely, exceptional, or at least surprising to the people who point them out (which again means they are maybe not that usual).

In any case, if this kind of thing is reported in the popular press, blogs, etc, it's because a scholar caught it and reported it, which means there are checks and balances that don't let this behaviour run wild all the time.

And then again, there's going to be differences between fields- some will suffer from this kind of problem more than others, etc. There's no reason to "tar everyone with the same brush".


I didn't mean to imply outright plagiarism everywhere all the time. I think there are many ways to get inflated numbers.

If a PI has 3 collaborator PIs and each PI has 4 students writing a paper every three months then the PIs can get 48 publications a year. Over 20 years, there's your O(1000) publications in a model with no substantial PI time contribution.

Given teaching loads, home life, university admin, and academic duties like reviews, I'd be surprised if a senior academic got 3 days per week to do research. At that rate, even four publications per year with first-author-level contribution would be optimistic. So a more realistic cap on the quality work someone could do in 20 years might be 80 publications.
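A back-of-the-envelope version of that arithmetic, following one reading of the numbers above (illustrative only):

  # One reading of the arithmetic above (illustrative numbers only): papers
  # the focal PI gets added to each year through the collaboration alone.
  collaborator_pis = 3
  students_per_pi = 4
  papers_per_student_per_year = 4      # one paper every three months

  papers_per_year = collaborator_pis * students_per_pi * papers_per_student_per_year
  print(papers_per_year)               # 48 co-authored papers per year
  print(papers_per_year * 20)          # 960 over a 20-year career, i.e. O(1000)

  # Versus a hands-on cap of roughly 4 first-author-level papers per year:
  print(4 * 20)                        # ~80 publications of genuinely own work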


I've worked in a similar system to what you described, with the top-most PI contributing essentially zero to publications directly. However, their fundraising efforts were singlehandedly paying the stipends/salary of 40+ researchers under them. I felt like it was totally fair to include them in the authors list, given that the research wouldn't have been possible without their efforts.

I doubt the academic in the parent comment had O(1000) first-author publications, nor is that mentioned in the article.


No it isn't! That's like including your mom in an author's list because without her raising you, you wouldn't be a researcher.


The grant application usually already laid out the research plan, hypothesis, state of the art etc. Even if there are massive changes later on, the person who wrote the grant application often provided enough scientific input to merit a co-authorship.


If your mom were directly raising funds to support 40 paid positions, she should be included.


In the acknowledgements, not as an author. Especially not as primary author.


This is a totally fair point. There is a contribution from leadership.

But to track back to what was getting me bothered, it's the implication when Professor X is discussed or introduced somewhere that they personally thought up and wrote O(1000) publications.

I know good PIs guide the research activities of their groups as well rather than just creaming off publications.

But while I understand where you're coming from, stepping back and thinking about it, 'authorship as a reward for leadership' could be a sign of unhealthiness in academia. Why can't authorship mean authorship, and leadership be recognised separately? 'Professor X runs a highly successful lab of N researchers, who produced M papers last year, and manages a budget of Z dollars.'


There are people who behave this way. They offer valuable insight and advice, as well as money, and in return get thanked in the paper, and their grant numbers printed there. But not authorship, if they didn't write the paper. If you look up "author" in the dictionary it's pretty clear.

I actually wonder if journals should start offering a second list of names on the first page, or even three to allow both group-leader and technical-work credit. Has this ever been tried?


This seems to be a general trend in film as well. Look at the credits of a recent movie (accountants, administrative assistants, lawyers and the like) versus the very short list of people who actually contributed to an old film. Everyone likes to see their name in lights.


>> If a PI has 3 collaborator PIs and each PI has 4 students writing a paper every three months then the PIs can get 48 publications a year. Over 20 years, there's your O(1000) publications in a model with no substantial PI time contribution.

I don't know how to square this expectation with my experience, as a PhD student (currently). My own problem with my thesis advisor is that I'm concerned that I don't contribute enough to our joint papers, because he's doing much of the job- most of the ideas are his and he writes at least half of each paper, and codes the odd implementation. And he's been doing that for the last 30 years or so (though not with me, obviously!).

I have heard the rumours--that career academics let their students do the hard work and just put their name on the finished paper. However, that presupposes that PhD students are already capable scientists who can be trusted to write a publishable paper entirely on their own, even in their first year. I think that anyone who's been through a PhD, or helped guide someone through theirs, will know how rare that is. Even just figuring out what an original contribution means in your chosen field can take a long time--unless, that is, you have someone at hand who understands the field, knows the bibliography, and can recommend a promising research subject and methods. At that point, that person has already done a third of the work for you: figured out what you should try to publish. The other two thirds are to do the research and actually write up the paper.

Btw, in the UK where I study, the done thing is that the student's name goes first on any joint papers, while that of the advisor, or in any case the most experienced member of the research team, generally goes last. The advisor will still get citations to their name, of course, but so will the people preceding them - and the first author, who is usually the student, will appear as the principal author whenever the names of the researchers are referenced (e.g. in author-year citation formats, in slides, etc.).

I tend to see this as a substantial boost to my own career as a researcher. Maybe even too much of a boost, in a way. I don't like to think I'm riding on someone else's coattails. But, the fact of the matter is that at the start of your research career, inevitably, that's what you are doing.


This varies between fields a lot. In economics, I haven't seen many cases of e.g. thesis advisors rubberstamping their name on to someone else's work - not that it doesn't happen, but it is rare, and it would get you a bad rep if it became visible. In other fields I've interacted with, it seems quite standard for the lab leader or chair to get their name on every paper of the lab, and for papers to be padded with authors who did not contribute that much.


If we are to believe experiments by Dan Ariely, everyone tends to cheat a little bit. It is covered in one of his books, this article seems to contain the summary. https://fs.blog/2012/05/everyone-lies-dan-ariely-explains-wh...


Perhaps, though, this is a three sigma case of a common phenomenon. There are a lot of journals and I expect that they consider that a half-full issue would be bad. There are a lot of paper-writers who are rewarded for more papers.


You should because those are the metrics that actually matter. Everything else is just social clubs essentially. Which is a problem given sheer specialization.

Academics has been infamously cutthroat for a while now. The iron law is that anything which can be gamed for advantages will be.

The system needs updating so that what is gamed for is closer to the actually desired results. I suspect incorporating actual logic into the judgements will be required, which only makes things fuzzier. It is easier for more concrete fields to fall back on industry, or failing that just numbers, if things get too theoretical--say, maximizing energy extraction from a solar system.

You can prove that say Kant has fewer followers than Ayn Rand but that says nothing about philosophical merits.


Holy cow! 1700 publications is one publication every week for 32 years! How could anyone see that number and not realize that it must be a scam?


I've never seen a number that high in my field, maybe 500 or 600 for the big shots, and even that seems wildly unreasonable to me. Feynman had about 85 peer-reviewed papers over his lifetime, which is less than two per year during his career, which is a very reasonable number.

Authorship and citations are basically academic currency, and as such have undergone inflation just like real currency. Professors get their name on a paper as a form of payment for future or past services to other professors (often related to funding). There's just no incentive for anyone to stop this as far as I can tell, and appealing to ethical conduct in authorship and citations is not enough.


That comes from having very many post-docs, collaborations with other labs, and from managing large projects.


>"Authorship and citations are basically academic currency, and as such have undergone inflation just like real currency."

Currencies don't just undergo inflation for no reason. Central banks create that situation on purpose to discourage saving.


But the method they use is printing money, which moves the value of money from those who hold it toward those who get the newly printed money. Same with academic publications, which do that in a decentralized way.

(Another, more natural (Austrian) cause of inflation is a contracting economy, so dollars chase fewer goods.)


If publication of a new article lowers the value of previous articles, that indicates a severe dysfunction (which may indeed be the case).

If understanding/knowledge is being accumulated by a field, new publications should make the papers they cite even more valuable, not less, unlike the case of a fungible currency.


It need not be unlikely. Once you become an acknowledged expert, other people may collaborate with you in parallel. You contribute some of the main ideas, and your co-authors flesh out and write the paper. I do not see anything wrong with this. Currently, Erdos and Shelah are in the 1100-1300 range. Euler published around 400 pages (or papers? I am not sure) posthumously!


> Euler published around 400 pages (or papers? I am not sure) posthumously!

That sounds like people who are not Euler engaging in dubious behavior.


So, mix-and-match existing papers? As a referee, I often do check whether the authors have published similar content before. The only way to beat this simple vetting is if someone pushes the same paper to multiple venues, and they all get accepted at around the same time. This is pretty unlikely.

Incentives for reviewing are broken. Currently researchers have to see it as an intrinsically motivated activity for the good of the field, there are no quality checks per se.


More and more journals are using plagiarism detection software to test for that. My wife recently got caught 'plagiarizing' herself after she copied a paragraph of background material from a previous paper. The journal refused to send her paper out for review until she rewrote the offending section.


Just self-cite?


Unfortunately, some publications use a double-blind review process, where checking whether the authors have published similar content before becomes more difficult: you'd have to figure out who the authors are first, which kind of goes against the idea of a double-blind review process.

But of course, when the double-blind review process is not used, then it makes perfect sense for a reviewer to check the authors' other published work. In journals that require a certain percentage of original work, I don't see any other way.


If all papers were indexed by a full-text search engine, you could simply pick a few sentences that sound unique and search for them across all publications. It's not like self-plagiarism is more important to detect than regular plagiarism :)

Or journals could just submit all papers to the plagiarism-detectors used for undergraduate work, like Turnitin.
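A minimal sketch of the sentence-probe idea (illustrative Python over a made-up, in-memory corpus; a real system would query a proper full-text index, like the ones Turnitin or a search engine maintains):

  import re

  def fingerprints(text, n=3, min_words=12):
      """Pick the n longest sentences as likely-unique search probes."""
      sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
      longest = sorted(sentences, key=lambda s: len(s.split()), reverse=True)
      return [s for s in longest if len(s.split()) >= min_words][:n]

  def find_reuse(submission, corpus):
      """Return (probe sentence, paper id) pairs where the probe appears verbatim."""
      hits = []
      for probe in fingerprints(submission):
          for paper_id, text in corpus.items():
              if probe.lower() in text.lower():
                  hits.append((probe, paper_id))
      return hits

  # Toy usage with made-up texts and a hypothetical paper id:
  corpus = {
      "jones2017": "We recruited twelve participants with low vision and measured "
                   "task completion time under three interface conditions. Results "
                   "showed a significant effect of interface on completion time.",
  }
  submission = ("We recruited twelve participants with low vision and measured "
                "task completion time under three interface conditions. "
                "Our new contribution is a slightly revised protocol.")
  print(find_reuse(submission, corpus))   # flags the reused sentence from jones2017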


> Or journals could just submit all papers to the plagiarism-detectors used for undergraduate work, like Turnitin.

I have some limited experience in this... they do.


I wonder in this day and age if a mind like John Nash would have thrived. Nash had only a handful of publications, but all of them were masterpieces. Same with Riemann.


They likely would get picked by super elite research orgs like IAS, or work outside the system and get grants from people like Simons.

Or have a low-level teaching job like that guy who proved a twin primes result a few years ago.


Has this problem infected mathematics to a significant degree?

I'm sure there's still unhealthy pressure to publish for grad students, but I had the impression that it was comparatively insulated from the usual patterns and receptive to rare-but-superb work.

(It's a good question, though. How would Temple Grandin or B.F. Skinner fare in psychology today?)


Now that's impressive! 1700 peer-reviewed papers. Damn.

A commenter asks why reviewers don't use plagiarism-detection software. With most everything online (and Sci-Hub) it'd be pretty easy.


A commenter asks why reviewers don't use plagiarism-detection software.

Many journals do these days, and won't send a paper out for review unless it passes.


thankfully arxiv.org flags articles with overlapping text:

https://arxiv.org/help/overlap


And the great thing about this system is that it does not inform you beforehand. You cannot check whether your paper will pass, you submit, and then when v1 appears online, it is marked as "having substantial text overlap with works by other authors" etc, with links. You cannot remove this version.

It will also flag overlap with your own works, but obviously it's not exactly a huge stain on your reputation if your thesis seems to borrow some paragraphs from your own papers.


As a software engineer, I spend comparable amounts of time writing my own code and reviewing other people's code. I think it's normal.

Unlike scientists in academia, I'm explicitly (and well) paid to do both; it's in my job description. If I do a sloppy job reviewing and let a bug through, I feel the heat because something breaks. And what I work on is likely much simpler than what scientists work on.


I don't think this professor is exceptional, but rather the norm. As a PhD student in computer science I regularly observed people publishing papers that overlapped 80% with their previous paper, the difference being that paper 1 was titled "Solving problem X using Y" whereas the next one would be "A Y' approach to solving the X problem".


Overlapping in topic, that is natural. Exact text, exact framing. No.


I can tell you with certainty that this is not the norm in psychology research. You will not publish empirical articles in such a manner in any journal anyone cares about. Chapters are time-consuming affairs that are usually the result of an invitation from a reputable figure in the field. They will not accept a copy-paste of another chapter. Usually, the editor will have requests to tailor your chapter to some theme. This Cornell professor is most likely publishing in no-name journals and books that are pay to play...just like the 1,001 spam messages in my inbox offer to me as an active researcher. That Cornell has allowed this man to continue is beyond me, but it would not be tolerated in my department. I admit I despise these articles. They take the odd case and write a story, but leave the public thinking this is typical in science. IT IS NOT.


There are rules and incentives in the 'Academia' game. Among important things: get published, get cited, get your name out there. I am cynically inclined to think the 'system' fosters this, and some people have gotten good at it.

Unfortunately it's become less about 'science and knowledge' and more about getting the grants to survive and eventually be able to work on 'real' research.

However I do not believe this is specific to Academia, any 'game' can be hacked (at least for some time).


Considering his background of Yale, Stanford and 13 honorary PhDs spanning 4 continents (his words) I wonder how far back his gamesmanship goes.


ISTM perhaps the link should be to the blog post [0] from which TFA liberally quotes? At the very least, the references to different colors of text will make more sense.

Perhaps there's something "ironic" about that...

[0] http://steamtraen.blogspot.com/2018/04/some-instances-of-app...


TL;DR: a professor kept recycling his books into perhaps thousands of papers with identical content. It is, however, not clear from the article how all of these got past peer review for decades.


Peer-reviewed publications just care whether the paper under review is any good; they're not archivists who go and search for similarities. In fact, in most cases they barely read the work in any substantial sense of "read".

But the article mentions peers might not be getting the works either: " Bobbie Spellman, former editor of the journal Perspectives on Psychological Science, is confident “beyond a reasonable doubt” that Sternberg was not telling the truth when he said that “all papers in Perspectives go out for peer review, including his own introductions and discussions.” Unless, as Spellman puts it, “you believe that ‘peer review’ means asking some folks to read it and then deciding whether or not to take their advice before you approve publication of it.”"


> In fact in most cases they barely read the work with any substantial sense of "read".

I would not say this applies to most cases.

I have got many papers reviewed, have done many reviews, and have got questions from people doing other reviews, and people usually put quite some effort into it. There are exceptions, but bad reviews are not the norm, and you can usually catch them when it happens and talk with the editor about it (either if you are the author or another reviewer).

Searching for similarities should indeed be part of the job, but it is not always easy, and I agree more work should be done in that aspect.

This may be different in other fields or specific journals but, please, don't generalize.


Although he cited his own work many times, he didn't cite papers which were copies of the submitted paper.


Sure, but plagiarism detection software aimed specifically at the academic market has been around for most of that time.


It seems really pertinent to point out two other profs doing somewhat the same thing. Peter Boghossian & James Lindsay recently did an interview with Joe Rogan. https://www.youtube.com/watch?v=OlqU_JMTzd4

The interview is really interesting.


Academics are highly incentivized to seek quantity over quality of publications. Reviewers are disincentivized from performing thorough peer review (unpaid, tangential relation of paper to area of interest, too many papers to review).

This professor has simply optimized an unethical path through the broken incentive system. He's not alone by a long shot.


And for each such professor who gets discovered, there are a dozen still hidden and a hundred covering for each other, all doing the same stuff, just slightly smarter.

I don't believe anymore that this stuff is a new phenomenon, though. It's more like a group of people trying to discredit universities these days.


From the professor’s website:

His main research interests are in intelligence, creativity, wisdom, thinking styles, teaching and learning, love, jealousy, envy, and hate.


Hey, the dude published 1700 papers. Even if his repetition factor is 3 or 4, that's 400+ unique publications. Let's show some respect ;) Also, in some fields it's acceptable to publish the same material thrice: i) conference; ii) journal; iii) book chapter, each a slightly evolved, expanded, condensed, or improved version of the others (ahem, I'm looking at telecommunications and signal processing).


TL;DR: The man is obsessed with citing his own work—except on the occasions when he does a cut-and-paste job, in which case he is suddenly shy about mentioning his other publications. And, as editor, he reportedly says he sends out everything for peer review, but then doesn’t.


He's merely applying SEO practices. Don't hate the player, hate the game.


I don't understand this. The professor is acting in an unethical manner. Do you mean to say that if something is possible, it is intrinsically ethical? Perhaps I am misinterpreting your comment. He is responsible for his own actions and those actions are clearly outside the range of acceptable ethical behavior for an honorable participant in a research community. Yes, the 'game' also has issues, but that does not excuse the actions and choices of the 'player.'


That's a false dichotomy


Not hate but exposure for all to see.


Hate both.


How can I hire this guy??


"Social science" is an oxymoron. The problem with the social sciences is not only the issue this article points out, but that the vast majority of the work is not reproducible.


My rule of thumb is that if a subject area includes the word "science" in its title, it is less likely to follow the scientific method.

Physicists don't assert that they study "physics science", nor do chemists describe their subject as "chemistry science".

It's only a rule of thumb of course. There are exceptions (computer science, neuroscience, ...) but I think it works in a lot of cases: "social science", "management science", "data science", ...

It's a bit like how if a country includes the word "democratic" in its title, it's less likely to be a real democracy.


Computer science isn't an exception; it's mostly mathematics.


I tend to agree, I was thinking this as I wrote the above.


> My rule of thumb is that if a subject area includes the word "science" in its title, it is less likely to follow the scientific method.

> Physicists don't assert that they study "physics science", nor do chemists describe their subject as "chemistry science".

Fortunately, most of the people in question study "economics", "psychology", "sociology", "anthropology", and so are exempt from this heuristic.


Google Scholar should filter out self-citations. Google Scholar hasn't been updated in like 10 years; I sometimes worry Alphabet will pull its plug.
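The filtering itself would be straightforward if the citation data were open. A minimal sketch of what "filter out self-citations" could mean, assuming (hypothetically) that author lists are available for both the citing and cited papers; this is not Google Scholar's actual data model:

  def non_self_citation_count(cited_paper_authors, citing_papers):
      """Count citing papers that share no author with the cited paper."""
      cited = {name.lower() for name in cited_paper_authors}
      return sum(
          1 for citing_authors in citing_papers
          if cited.isdisjoint(name.lower() for name in citing_authors)
      )

  # Toy data: three of the four citations come from papers co-authored by
  # the cited author, so only one counts.
  cited_authors = ["A. Prolific"]
  citing = [
      ["A. Prolific", "B. Coauthor"],
      ["A. Prolific"],
      ["C. Independent"],
      ["A. Prolific", "D. Student"],
  ]
  print(non_self_citation_count(cited_authors, citing))  # 1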


Yes, Google Scholar is so useful and ubiquitous that it is dangerous. It would be far preferable to have a proper open replacement for it.

The technical issues are probably "easy". The ongoing maintenance costs/hassles could be solved. But the human factors would be very tricky to get right: anyone who maintained it would be under a lot of pressure in their decision-making -- your example of whether self-citations should be filtered is a good example of a decision which would have big effects and would be highly political.

One good thing about Google Scholar is (was?) that we didn't worry that Google engineers would tweak it to benefit Google researchers.


The biggest problem with replacing google scholar is that if you are not google, publishers will threaten to sue you for crawling their website and/or displaying information about their papers on your website. If you are Google you can just counter-threaten to delist them and stop crawling their website entirely. Google is unfortunately one of the only players that can actually offer something like google scholar.


Yes. I wonder why other fields don't use a system similar to HEP's Inspire (http://inspirehep.net/) which can give informative summaries on the researchers and their contributions (e.g., http://inspirehep.net/author/profile/Lisa.Randall.1 and http://inspirehep.net/search?ln=en&p=author:Lisa.Randall.1&o...)


Harvard's ADS (http://adsabs.harvard.edu) has recently been revamped and is very effective in its domain of (astro)physics. Perhaps someone could base a Scholar replacement on that?


Google Scholar has been updated a few times in the last 5 years. As someone who worked on a startup building an academic search engine and who is now a PhD student, I am a pretty big fan of Google Scholar. I too worry that Alphabet will pull the plug.


Academic journals hate him!


There are PhD students at my school who get tasked with peer review and they hardly speak a lick of English. They don't otherwise seem that bright either...



