UC’s termination of Elsevier contract has had limited negative impact (2020) (dailybruin.com)
262 points by ColinWright on March 7, 2021 | hide | past | favorite | 90 comments



At a semantic web meeting some time back, one of the heads of Elsevier research blatantly stated during her talk that "Elsevier is not a publishing company"; I noticed numerous heads rise at this comment. Elsevier sells services. They no longer need to publish, or worry about losing their publishing, because the big journals like Cell etc. are not going anywhere, and because they control enough of the published information that they can do other things.

They also know that their knowledge base is worth massive $; they sell medical diagnostic tools, texts, etc. [0]. It doesn't take a cellular biologist to connect the dots and notice the huge potential conflict of interest: they control publishing (on medical topics), sell diagnostics, and collect $ for all of it. I'd bet money they own insurance companies. Is it malicious? Who knows. Is it obvious that it could be abused? With certainty.

[0]:https://evolve.elsevier.com/education/medical-insurance-bill...


Heard from an Elsevier person: their purpose is to own the market from the production through the distribution and storage of scientific and medical information.

That explains their acquisition of Mendeley, their work on lab notebooks, their acquisition of medical diagnostics systems...


They own LexisNexis too. It's an interesting philosophy where they seem to try to corner information that is free, or should be, and then charge rents on top of it.


Nice. Didn't know about that one so I had to Wikipedia it: "LexisNexis is a corporation providing computer-assisted legal research (CALR) as well as business research and risk-management services." if anybody is wondering.


Wikipedia's description isn't very helpful, sadly - they're like the real-world-data version of Microsoft or SAP: they have a ton of different products that are basically databases of data collected from a ton of different sources. Lawyers use LexisNexis to research legal precedent and to find the contact information of defendants, whether to send cease-and-desists or to know where to serve them papers; universities and public schools use LexisNexis to search newspapers and other media; finance firms use it for various data feeds; and so on.


Oh wow I had no idea, now I want to have a look at it.


When you get done I suspect you will have a strong urge to take a shower and also a newer, deeper understanding of "ignorance is bliss"

Google are amateurs compared to LN....


Yep, I gave it a try today; semantic/faceted searching is really the next level of search... I wish we could do that for general web search.


I work for Elsevier. About 2 years ago there was a big internal push to drive it home that we are now a data and analytics company and not a publisher.


I am pleased to hear it but a rose by any other name would smell as sweet. Elsevier will cease being a publisher first when they stop selling access to journal articles. Until then they are a publisher first and an "analytics" firm second, regardless of corporate messaging.


Elsevier is a publisher, OK, and Google is a PC motherboard builder.


Given this, please consider some internal lobbying to publish all works under a free license, as a non-publisher should not care about others making copies of works it has published.


This is certainly important from a business perspective. For-profit publishing is a dead end. More and more venues are community-owned, and only the highest profile publications actually need the degree of professionalization that requires a large publishing house. By now, it is clear that the publishers cannot win this fight. And we see this in the concessions publishers make. IMHO, publishers can either milk the cow as long as it's possible and then die, or transform their business. My university just announced an Open Access agreement with Elsevier, and to me this looks like a confession on Elsevier's side that they will have to go OA in one way or another, or they'll lose their journals one by one. In my community, Elsevier has already lost journals to community-owned replacements.


> they sell medical diagnostic tools, text etc. [0].

If they actually do that, that is not represented in the link you provided. The whole page is about teaching materials about the field, and one piece of software to _simulate_ "real-world patient data systems".

On their whole website I couldn't find a single product/service that a company would use in their direct operative work (in contrast to training).


Sorry for the crummy link. See the LexisNexis comment above. See https://en.wikipedia.org/wiki/RELX.


Thanks for clarifying!


At least in my field (applied machine learning/optics) and my partner's field (cognitive neuroscience) pre-prints are a massive deal. Not only do they allow you to get your work out earlier, google scholar will typically link your pre-print to the published article eventually so anyone who doesn't have a subscription can get a PDF no problem. This probably nullifies the negative effects of not having access to the publisher's library.

It's to the point now I'd say that all of the grad students and younger PIs insist on posting a pre-print. Pretty great trend especially considering with LaTeX the preprint ends up looking just as good as the published version anyways....


Preprint often looks better. I've had publishers send papers to offshoristan where they redid everything including the graphs to match their layout, introducing tons of errors, typos (yes, really...) and problems that even almost 10 rounds of urgent back-and-forth before the deadline didn't fix entirely.


Same in cryptography, where the research community manages our own preprint archive (http://eprint.iacr.org/). My experience has been that preprint archives are often used to "mark territory" i.e. get credit for an idea quickly so that it can be discussed more openly.


Arxiv was a godsend.



I made a list for linguistics here: https://gkayaalp.com/blog/20200319_lingpreprints.html

Some of them are general social sciences tho.


> Not only do they allow you to get your work out earlier

Why can't you just upload to arxiv and perhaps places like researchgate or academia.edu ?


Uploading to arxiv is a pre-print. That's exactly what the parent is saying that they do. Some journals will forbid this as part of the publishing contract.


That can be circumvented using the standard trick:

https://academia.stackexchange.com/a/119002/7319


Elsevier knows this. We have just finished negotiating a new national contract with them, and to our surprise they agreed to all of our demands. They know which way the wind is blowing and are trying to pivot as quickly as an old lumbering giant can.


Losing UC as a subscriber is a huge own goal for Elsevier. Researchers will get used to: 1. always posting preprints, 2. always looking for preprints first, 3. using scihub to get older papers, 4. citing and sharing material using links to preprints or scihub by default.

They will spread these habits to their students and colleagues and as they move between institutions. UC is huge across all the constituent colleges and many parts of it are extremely prestigious, so UC grad students are the sort of people who will often end up as professors in other institutions.


Researchers around the world are already used to (3) and (4). I would guess also (2) but maybe that depends on age and location, and anyway I can't say for sure. We're definitely not there yet when it comes to (1), so - if you're an academic, please make sure to pre-publish all of your research so greedy corporations don't get their grubby IP hands on them.


The wind is blowing towards gold open access, i.e., the research remains published at Elsevier, it is open-access, but publishing each article costs $1000-$2000 charged to the university via grant money or "subscriptions to publish" in place of the existing subscriptions to read.

I don't think Elsevier is really worried. This gold OA model is just sending the same quantities of money to them, just under a different name, and with the side benefit that everyone can read the published articles.

The real change will be when researchers stop publishing in the journals that they own -- noting that the costs to publish are so high because the prestige of journals means that fair competition can't happen. Sadly, it doesn't look like researchers are willing to take such a step -- or even to stop reviewing for these journals for free (see our pledge in this sense, https://nofreeviewnoreview.org/)


> Sadly, it doesn't look like researchers are willing to take such a step

The incentives are misaligned for that to happen. Who publishes the most? Postdocs, assistant professors, and their grad students. Who has the most to lose from not publishing in top journals? See above.


Curious, what were some of the demands?


The main one was automatic open access, with a CC license, on articles written by authors from our institutions. Better prices and permanent access to some parts of their portfolio were also accepted.


I hope they find a long term archival approach for the literature that’s no longer being paid for.

In CS, there was a transition from older journals to newer publishers around 1980, and much of the literature prior to that is simply inaccessible.


At the Internet Archive, we are working on one aspect of this problem: https://fatcat.wiki/

Other notable efforts, mostly envisioned and led by librarians, are "dark" digital archives (LOCKSS, CLOCKSS, Portico, etc) which usually have a mechanism to flip and "trigger" public access if the publisher vanishes; microfilming and other microform throughout the 20th century; Hathitrust / Google Books; and as others have mentioned, scalable and sustainable (cost-wise) off-campus depositories.


Scientific papers should have more limited copyright protections around distribution. No one except journals benefits from the current situation, and the service they provide around connecting submissions with reviewers seems ripe for replacement if any institution lends their credibility to the matter.


Any research paid for with tax dollars should be public domain.


I've been told academic journals don't even pay peer reviewers. Is that true? Peer review is the whole point of a journal and is what sets them apart from just publishing a PDF on the internet. If peer reviewers aren't even getting paid for their invaluable service, nothing stops them from forming their own independent peer review group.


Yes, there is no monetary compensation. But the currency here is community service. If your CV lists you as reviewer, program chair, technical committee on XYZ, it makes you look important. And you get to see very early who in your field is working on what. It has networking effects and you get to know people (other reviewers and committee members).

That's why academics do it. Most of them anyway. All for "free".


In most cases, yes, peer reviewers are not paid. These companies are pretty much a paywalled PDF hosting service, hence the outrage against them.


I can't actually think of anyone who has been paid for a review in the sciences.

The most I've personally gotten was a nice note from an editor; I've heard rumors of someone who reviewed enough to get some journal-branded swag (a mug or calendar, I think). If you review a book, you might get to keep the copy.

Nobody gets cash.


I've been compensated for one review, and for peer-reviewing a book proposal, gotten some free books.

That is, in fairness, three instances over what is by now probably well over 100 reviews.


Yes!

Literally the copyright for scientific articles should be 1 year. This is how we fix open access.


The NIH and the Wellcome Trust both negotiated effectively this - papers funded by those organizations are open access within 12 months.


OK.

Step one: Renegotiate the Berne Convention.


Just abandon it. International treaties are mostly hindrances to progress because re-negotiating them is considered hard to impossible. States should aim to be free from such obligations and reduce the amount of applicable treaties as far as possible.


No, they should be immediately open-access. Why give commercial publishers a 1-year exclusivity window for free?


My level of cynicism at this point is so high that I'd assume we'd be forced to add narratives to papers until they resemble the five-thousand word life story you find tacked on to a recipe to escape the "just a list" issue. [And breathe]

Although, full agreement if that doesn't become a problem or you have a solution for it already prepared :)


inaccessible?

That's a strong word. Just because something is not digitized doesn't mean it's not there. Have you talked to a reference librarian? Material in archives may be inaccessible because of legal stipulations or just simply unindexed, or writings are known to exist only in manuscript - but "not online" or "in off-site storage" doesn't qualify.


I agree with this statement. Just because something is not there doesn't mean it is not around. There are many scientific journal sites out there. In my field, I have access to journals going back to the 1880s. However, they are split among three sites: JSTOR, Sage, and one of Elsevier's largest competitors (I forget the name). Just talk to the librarian; they have so many resources and will get the journal for anyone who needs it.


This is a growing problem in other fields, especially when universities destroyed the paper versions once they were available online...


At least in biology, a lot is indexed on pubmed.


Any developers who want to help with the scholarly infrastructure that underpins scholarly publishing workflows (DOIs, scholarly metadata and related things), Crossref have a senior dev job opening. This is about metadata, not content, but it's a crucial layer, and we are committed to open data and open infrastructure.

https://www.crossref.org/jobs/2021-02-08-senior-software-dev...

(but get your application in today if you're interested)


This brings an interesting anecdote from when I suddenly had this question of how my seniors working in the 80s and 90s in a small Indian lab managed to get the literature they want (when their small lab was still one of the most advanced institutions in the state). My senior said they’d pretty much have to write up a literature request letter, attach a cheque and mail it to the main library in New Delhi, and they would typically hear back from them after a month or two either with a copy of the article or saying they can’t get it (roughly half the time). They still did good research, arguably even better than what crap gets spewed out now!

Just like Wikipedia and the internet didn’t change everyone’s ability to know things in any meaningful way (actually it made it worse), free access to arbitrary literature was not exactly the limiting step in anyone’s research career in my experience. It’s definitely annoying but once you’re up to speed in a field you are typically happy with just the abstract and if it’s an important enough paper you find a way to get it one way or another. Especially with a good supportive library and ILL system it’s typically a snap.


>Just like Wikipedia and the internet didn’t change everyone’s ability to know things in any meaningful way (actually it made it worse), free access to arbitrary literature was not exactly the limiting step in anyone’s research career in my experience.

Changing everyone's ability to know things is a much taller order than changing some people's ability to know things. I would not be where I am without Wikipedia or the internet. Growing up, I had no educated adults around me. If I grew up without the internet, I do not see how I could have learned many of the things I did, especially due to time and money constraints.

I would certainly dispute the claim that knowing things in a meaningful way (whatever that means) with the internet is worse than without the internet.


> Just like Wikipedia and the internet didn’t change everyone’s ability to know things in any meaningful way

What are you talking about? Wikipedia substantially changed everyone's ability to know things.

In fact, Wikipedia was so good at increasing everyone's ability to know things, I wonder if we're now having to build back a way to synthesize and ponder our desired context for all those things. That problem may be exaggerated due to unregulated app casinos sucking out any surplus time to reflect and wonder, but I think the problem is definitely there either way.


My biggest criticism of Wikipedia is that it refuses to publish anything that isn't supported by a dead tree publication, like the ones Elsevier publishes for a fortune. You could be a prominent, highly respected full professor, but unless your research paper has made it into one of these publications, and actually gone out the door, your research won't be included in Wikipedia. Wikipedia is great for the kind of stuff that used to be published in print encyclopedias, but if you want to find out about cutting edge research, arXiv.org is a better source.


That's like the purpose, man. Wikipedia wouldn't be what it was if it didn't do that. In CS, we have these things called DSLs (domain specific languages) and usually one strives to not make them Turing complete because the restrictions give you advantages in reasoning.

Wikipedia's strength is that it represents the viewpoints shown by (relatively) famous sources.


There is a lot of crap on arXiv as well... So I respect the position of Wikipedia editors of requiring that someone vet the information first. To be fair, many editors of Wikipedia have a better understanding of their fields than reviewers do (at least in my domain).


Maybe the wider adoption of arXiv in other fields (taking off in biology right now) will lead to wiki's acceptance of these sources?


Actually, why isn't there a Wikipedia-like open system for scientific publication?

It doesn't seem to be too hard technically or operationally.


What a strange qualification of "everyone". Surely, someone or some group of people must have benefited greatly. Even in the aggregate the average person must have benefited as well.


> Wikipedia and the internet didn’t change everyone’s ability to know things in any meaningful way (actually it made it worse)

Are you making a claim of knowledge (e.g. epistemology) or of learning (e.g. pedagogy?)

If you're talking about learning, then I would say things like Khan Academy, MOOCs, online course materials, online textbooks and digital libraries seem like powerful counterexamples. Personally I've learned a lot of interesting and practical things from YouTube, GitHub, and HN, not to mention non-paywalled research papers. The internet's enabling of billions of people to get access to thousands if not millions of newspapers, books, television programs, movies, songs, and other sources of information and culture from all over the world, and to talk with each other at minimal cost (compared to traditional telephony), seems like an astonishing achievement.

If you're making an epistemological claim, I would suggest that the internet era allows you to compare more sources more easily, for example more newspapers than you would have been able to read easily in the pre-internet era. Although it's possible that news outlets have become more opinion-based in the US recently, opinion-based (or opinion-biased if you prefer) print news is basically as old as the printing press, and was certainly well-represented in the 18th, 19th and 20th centuries.


Increase the gain, you will increase the signal, and some of the noise as well.

I would not be as extreme, wiki* (data, pedia,...) and Internet made a lot of research possible. Internet enhanced collaborations across the globe (no need to be a big name so people talk to you) and openly accessible data I believe facilitates understanding (ok not deep understanding, but enough for creating common bases for communication).


> free access to arbitrary literature was not exactly the limiting step in anyone’s research career in my experience.

What are limiting steps?


There are a lot...

I agree with that person's point about that not being a limiting factor. Sci-Hub, and all the networks for exchanging publications before it (I've seen proxy accounts and forums traded for this since 2003; unsure what existed before), have helped a lot with that.

Just to list a few points that come to mind:

Environment:

- A university that doesn't provide the necessary resources (equipment, computing resources with adequate expertise)

- Colleagues who are not supportive, or worse, who put you down out of internal competition (for fame, for attracting the most promising grad student of the year, or for funding lines for grad students and postdocs)

- An intellectually poor environment: colleagues who have no understanding of, and no willingness to understand, your field of research

Money:

- Difficulty competing when it takes 80% of your budget to do a tenth of what a big lab (nominally a single person, but really a group of tens of employees) can do with its large funding. This is especially true of the US vs. many other countries.

Sociological:

- Gender (we know that women are not "allowed" to grow as much as men in some domains)

- The university that granted your PhD: I still see people in their sixties being introduced as a graduate of (pick an Ivy), and people ignored in discussions at a table of Ivy League graduates (despite being experts)


Let’s ask scihub to tell us how much traffic they get from uc*.edu ip addresses


Traffic from the US has always been huge on Sci-Hub anyway, so that's going to be hard...

Put yourself in the shoes of a grad student.

What would you choose (ethics aside): navigate to your library's website and do 20 clicks through a shitty interface to finally get a paper, and that's if you're lucky, or just paste a DOI and wait 5 seconds...


When on mobile/iPad, my university (Harvard) makes me two-factor for every paper.


Luckily nearly every university offers their library proxy as a bookmarklet. It's two clicks: 1 to hit the bookmarklet, another to hit login on my autofilled secure sign on page.
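The proxy trick is just URL rewriting: the bookmarklet prepends the campus proxy's login prefix to whatever article you're on. A minimal sketch, assuming a made-up EZproxy hostname (every campus has its own):

```python
from urllib.parse import quote

# Hypothetical EZproxy login prefix -- substitute your own campus hostname.
EZPROXY_PREFIX = "https://login.ezproxy.example.edu/login?url="

def proxify(article_url: str) -> str:
    """Rewrite a paywalled article URL so it routes through the campus proxy."""
    return EZPROXY_PREFIX + quote(article_url, safe="")

print(proxify("https://www.sciencedirect.com/science/article/pii/S0000000000000000"))
```

The bookmarklet version does the same thing in one line of JavaScript against `location.href`.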


My university does (a top national one) and it’s still less reliable and UX friendly than SciHub.


Thankfully some universities have a decently working EZproxy config, but still. Then you hit the paywall, or the non-working website from the crappy publisher that thinks people want to read PDFs in its "enhanced PDF" viewer and requires two more clicks to get a real PDF; during that time the 5 trackers (not kidding, look at what those pages load!) are loading...


In my experience at least, those are edge cases from some super dodgy journals. All the major publishers and journals worth reading work fine.


I'm talking about a major publisher here, Wiley $1.7bn revenue...


What's a DOI?


Digital Object Identifier. It's a persistent identifier that resolves to a URL, designed to avoid link rot; really useful for scientific articles.

https://library.uic.edu/help/article/1966/what-is-a-doi-and-...
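Resolving one is just string concatenation against the central doi.org resolver (10.1000/182 is, as far as I recall, the DOI Handbook's own DOI):

```python
def doi_to_url(doi: str) -> str:
    """Turn a bare DOI into its canonical resolver URL."""
    return f"https://doi.org/{doi}"

print(doi_to_url("10.1000/182"))  # -> https://doi.org/10.1000/182
```

The resolver then redirects to wherever the publisher currently hosts the article, which is the whole point.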


Said behavior is not limited to grad students.


You're right; I started with grad students as an example and lost that framing while rewriting.


You would have to get a 10-year before/after comparison for it to prove anything. Also, comparing a university against itself is interesting but not all that noteworthy; for bonus points, to make the paper actually useful, you could compare against 10 years of traffic from other universities.

But then, Science magazine did a segment on this: https://www.sciencemag.org/news/2016/04/whos-downloading-pir...


You don't really need 10 years. Each researcher at UC might be confounded by their local network, but you could argue that is just a causal variable anyway. As for statistical power, UC has tens of thousands of researchers, and each of those researchers accesses several papers a week. Take the ratio of Sci-Hub vs. non-Sci-Hub papers accessed per week or month and compare to the month before UC's contract with Elsevier ran out. Take the mean and do a t-test. You don't even need a time series!

I suspect plotting sci-hub/non-sci-hub papers accessed per week for several weeks before and after the contract expiry would dispense with the need for any statistical analysis.
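The suggested test can be sketched in a few lines; all numbers below are invented placeholders standing in for real weekly Sci-Hub shares of total paper accesses, and Welch's t statistic is computed by hand from the standard formula:

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Made-up weekly fractions of accesses going through Sci-Hub
before = [0.28, 0.31, 0.30, 0.29, 0.32, 0.30, 0.27, 0.31]  # pre-expiry
after  = [0.52, 0.55, 0.57, 0.54, 0.53, 0.56, 0.58, 0.55]  # post-expiry

print(f"Welch t = {welch_t(after, before):.1f}")  # far above ~2, clearly significant
```

With a shift this large relative to the week-to-week noise, the plot alone would indeed make the statistics a formality.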


Or maybe there's been little noise about the change because everyone was using sci-hub already anyway!

I recently introduced a part-time professor to sci-hub, when he was having trouble getting his vpn connection to work, so he could access a paper in a journal. Open access is just easier.


Does Sci-Hub release geographic group-bys? She should; I have a corporate all-access pass to basically everything, and I still use Sci-Hub.

The racist, classist control of information has to stop. It should basically be a reverse paywall.


Article is from February 2020 although not much seems to have changed since https://osc.universityofcalifornia.edu/uc-publisher-relation...


Good riddance. No need for Elsevier, look elsewhere [1].

[1] https://en.wikipedia.org/wiki/Sci-Hub


yea... because libgen and scihub are a thing


The headline claiming "limited negative impact" does not refer to fact but to the opinion of library staff based upon complaints. The pricing of Elsevier journals and their access limitation policies keep recent research out of the hands of scholars and researchers--information which helps drive future research. Many authors choose to use Open Access journals with very different access policies.


It’s outrageous that Elsevier was extracting millions of dollars from UC alone while providing basically negative value-add (yes, paywalling scientific knowledge is negative). That’s funding for ~100 or more PhD students right there. I recall getting offer letters from Berkeley and UCLA back when I was applying to grad school, and immediately hard-passed them since they said something like funding was not guaranteed beyond the first year.


Probably the only journal among this "top Elsevier journals" list that one might have difficulty finding pre-prints for is NeuroImage; I'll bet the other medical journals will have preprints on medrxiv:

https://blog.typeset.io/top-12-journals-in-elsevier-and-thei...


It has been somewhat frustrating, but there are ways


Part of it is that academics are so busy writing that they don't read all that much. (too busy publishing and perishing)


> The statement added that during negotiations in January 2019, Elsevier proposed an increase in the cost of the UC’s multimillion-dollar subscription while reducing access to several journals

Hahaha, no. Well done UC, and goodbye Elsevier.



