The Arsenic DNA paper exposes flaws in peer review (michaeleisen.org)
193 points by ilamont on Oct 3, 2013 | 98 comments



"we need to get past the antiquated idea that the singular act of publication – or publication in a particular journal – should signal for all eternity that a paper is valid, let alone important"

Most scientists I know (physics) don't have this idea. They know publication is just a step. Most grad students learn at some point that just because it's published doesn't mean it's correct.

Publication is a checkpoint, not a finish line, for any theory or experimental result.


Very true, but he does go on to state:

> If we had, instead, a system where the review process was transparent and persisted for the useful life of a work

That is a very interesting idea. (Reviews are currently confidential and usually anonymous -- only the author and editor see the reviews, and only the editor knows who wrote them.)

Not only would this reduce blatantly petty reviews, it would also give some recognition to the work that goes into reviewing a paper. It's currently an important but very thankless job. It's considered polite to acknowledge reviewers at the end of the paper (even if they remain anonymous), but it's often not done.


I'm going to respond to this in the context of academic computer science (which works off conferences), in particular, the systems subfield. YMMV if you extrapolate this to non-CS science and journals, so take this with a grain of salt.

The bottom line is I, as a powerless grad student, am _very_ comfortable with the level of reviewing competence I have witnessed (and been a part of).

The proposal to deanonymize reviews has a huge flaw: Namely, there's no way I (as a grad student with no political power) will write a critical review of somebody famous if I know I won't remain anonymous.

The risk of blowback is too damn high. (And given the small pool, writing style would probably be sufficient to deanonymize, if my entire trail of reviews were public). There's no way I have the balls to write a critical review of a paper by, say, Robert Morris or Larry Peterson, for some prestigious venue such as SOSP, when I'm asked to write a review. Unless I know for sure that there's absolutely no risk whatsoever of being identified.

Look. Reviewing is thankless and hard. Anonymity exists so that we can be critical and not risk career suicide. Last year at OSDI I received 9 reviews, and they were all exemplary in their thoughtfulness, even the obligatory "strong reject." The problem of "petty reviews" _has_ a solution: the program committee is self-policing, and the reviews are not anonymous within the PC. (For example, at SOSP this year, there are 28 PC members.) You know that most of your reviews come from this gang of 28. You might have one or two externally solicited experts in addition, but that too is not anonymous _to the PC members_. If there's a culture of meanness, the PC members and PC chairs can fix it by internal policing (and they do).


Your point regarding anonymity is highly valid, and I appreciate that you bring it up. There is actually a concept I'm rather fond of that seeks to address this, usually referred to as peer-reviewed peer review [1].

Essentially, anonymous experts review publications in much the manner they do now, with no risk of their identities being revealed. A separate set of review-reviewers then rates the publication-reviewers themselves, scoring them on a standardized quantitative scale (1-10) based on the validity of their claims and the depth of their assessment. In this manner, experts collect an "aggregate reviewer rating" of sorts analogous to an h-index [2], call it an R-index.

This R-index can then serve to select accurate reviewers for future publications. A high R-index could also be included as a positive bullet point on a CV, providing some reward for reviewers to put a high degree of effort into their otherwise unrewarded reviewing work.

A few potential flaws are evident: what is one to do if one's early reviews are scored poorly and one can no longer get new reviews to raise one's R-index? A grace period before scoring could remedy this, e.g., an R-index is not assigned until a reviewer has written more than 10 reviews.

[1] - http://www.frontiersin.org/Computational_Neuroscience/10.338... [2] - http://en.wikipedia.org/wiki/H-index
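The comment leaves the aggregation method unspecified; one natural reading of "analogous to an h-index" is an h-index-style cutoff over the 1-10 review-quality scores. A minimal sketch under that assumption (the function name and aggregation rule are illustrative, not from any existing system):

```python
def r_index(scores):
    """Hypothetical R-index: the largest r such that the reviewer has
    written at least r reviews each scored r or higher, computed over
    the 1-10 review-quality scores described above (h-index style)."""
    ranked = sorted(scores, reverse=True)
    r = 0
    for i, s in enumerate(ranked, start=1):
        if s >= i:
            r = i
        else:
            break
    return r

print(r_index([10, 9, 8, 7]))    # 4
print(r_index([3, 3, 3, 3, 3]))  # 3 -- mediocre scores cap the index
print(r_index([]))               # 0
```

Note that, like the h-index, this rewards sustained quality over volume: a reviewer with many scores of 3 can never exceed an R-index of 3, and the 1-10 scale caps the index at 10.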


I like the R-index idea. It's basically Slashdot's meta-moderation system? I always thought that had some promise. However, it does add even more _work_ to a system that is extremely stressed (writing reviews is so much work! Imagine having to read papers, read reviews, and then review the reviews on top of that!)

But basically existing program committees (in computer science) do serve that meta-moderation purpose (without the empirical R-index metric). The entire PC does get to see the deanonymized reviews of the other program committee members. If someone's just being an unhelpful jerk, they're probably not going to get invited back in the future.


Very good points!

I completely agree that there's a lot of value in anonymity when reviewing, and I don't think it should go away. Blowback over disagreeing is very, very real, and I've experienced it myself (though it was from presenting something at a conference that disagreed with part of a previous publication, rather than as a result of a review).

However, I do think public and optionally non-anonymous reviews could be helpful.

(Incidentally, you had _9_ reviewers for a single publication?? Holy cow... In the geosciences, I've never heard of more than 3, and it's usually only 2. That's one thing that would really help in my field. More reviewers result in a broader review, rather than one particular person's pet peeves.)


To be fair, it's not common to have 9 reviews. OSDI 2012 was particularly thorough (my reviews totaled ~4,000 words). I'm more used to 4-6 reviews, around 3,000 words or so. SOSP/OSDI is probably the most thorough and competently run conference that I know of in all of academia. This makes sense: they only take about 30 papers (it's restricted not just by quality -- the quality bar goes up as more submissions come in, just to keep it exclusive), it's once a year, and systems researchers sync their lives to the OSDI/SOSP submission cycle. It's the "flagship" systems conference, and a lot of advances are first described as OSDI/SOSP papers. For some perspective, the original MapReduce paper was by Jeff Dean and Sanjay Ghemawat, published in OSDI 2004. Last year's best paper was the description of the Spanner distributed database by Google.

Incidentally, I have deanonymized several reviewers already. When you go to conferences and some researcher asks the exact same question and has the exact same opinion as reviewer C, chances are... Sometimes the recognition is mutual as they realize what they just revealed (thankfully that has only resulted in humorous chuckles, at least in my life. But I can see how it could go wrong so easily!)


You might only get 9 reviews though if you pass into the second or third rounds. Controversial papers in the PL conferences I haunt will get more reviews out of necessity; better to add more reviewers to help break ties.


Yes. According to the OSDI 2012 proceedings introduction, all papers in the third round had 9 reviews. No paper was accepted or rejected with less than 51% of the PC members' assent.


That is good and bad; it definitely skews toward safe, non-controversial papers.


Interesting. I don't think I do, or would have, feared blowback from giving a negative review - but I have no way of knowing for sure. I do know that I'm generally reviewing papers because a colleague asked me to, or asked me to be on the PC. So, in that instance, I'm more concerned with doing a good job in the eyes of my colleague.

I do know that I have received many cursory reviews where the reviewer did not fully read the paper. I have often wondered if single-blind the other way would be better: reviewer signs their name, but paper authors are anonymous. While this may fix the crappy review problem, it is susceptible to what you fear.


Some reviewers do (voluntarily) do that[1]. I would seriously consider doing so once I'm tenured (i.e. when I feel I am immune to blowback).

I would hope that those cursory reviews resulted in feedback from the other PC members that fixes the issue (either through improved future reviews, or through not being invited back to the PC).

[1]: http://se.ethz.ch/~meyer/publications/online/whysign/


I've heard of others who do that, too. The difficulty is that those who self-select to sign their own name are probably not part of the problem.


I sign my reviews unless I'm involved in some sort of post paper evaluation. It seems only fair since the reviews aren't double blind (given double blind, I would not sign). I also publish reviews on my own papers as a matter of interesting record.


I agree. But why not make it double-blind? Make the author anonymous to the reviewer too? Wouldn't that make the most sense?


>only the editor knows who wrote [the reviews].

This is not entirely true. I've talked with several people involved in research (math and CS). The ideal is that only the editor knows who wrote the reviews. However, many fields do not have that many people who are qualified to review cutting-edge results. This means that it is possible to get an idea of who your reviewer is based on how they write. Of course, although the people I talked to said they can often figure out who reviewed them, they can't actually confirm that they are right (and I definitely cannot).


Completely agree. I've been in computer science research for a few years, and not knowing who accepted the papers or what the reviews said has always bugged me.

I've always advocated that the reviews behind an accepted paper SHOULD all be public and publicly distributed along with the paper.


This is definitely a benefit to the readers, and potentially (though not definitely) to the community as a whole. However, it's much worse for the reviewers.

I hear so much about how researchers provide papers to journals for free, and review papers for journals for free, and then the journals (are presumably evil to) charge high fees to anyone who wants to read the papers. How much can we pile on the reviewers before they stop doing it?


Good point. But it is a key point that reviewing is part of your job as a researcher, and you MUST take it very seriously. I have received so many poorly written reviews, just a few lines saying "OK for me." I really think that this way of reviewing is wrong. Reviewers should take care in their job, and they should be ready to face a public discussion on why they decided to accept a paper.

Note that IMHO it is OK to be anonymous while rejecting a paper, because otherwise a reviewer may be put under a lot of pressure.


> it would also give some recognition to the work that goes into reviewing a paper

I think you're spot on with this comment. That's exactly what we're building at publons.com. Check it out.


Maybe the general idea should shift toward papers being more of an opening for discussion, where reviewers would be privileged to get the "first posts."


As any scientist will tell you, this is in fact how we (as academics) do view publication. They are the first step, and definitely not the end goal, of academia.


There are huge differences between different fields of science. In some, publication has become the end goal.


There is a transparent review process for the useful life of a published work. Other scientists can read it, test its conclusions themselves, and publish their own papers supporting or critiquing the original work.

If a paper claims a significant result, it will eventually get tested.

If it claims an insignificant result, it might never get tested. That's ok, little harm done because it is insignificant.


Contrast that, though, with the extremely sad state of nutrition, where an absolute difference of 0.25% gets trumpeted immediately to the media as "200% more likely," even though the effects are measured through observational studies with no proper controls. A field where who you are matters much more than the data you're presenting (and where it seems reasonably accepted to throw out data that disagrees with your pet hypothesis).

I recognize that, in many ways, physics has it easy: there are few sciences that reduce as well as it does and that offer ways to isolate variables like it does (even if it requires multi-billion dollar machines and years to make that happen). But it is still a shining exemplar of gradual evidence building theories to find more evidence, and there are many, many fields that aren't anywhere near that.


You're absolutely correct, scientists don't suffer from any misconception that publishing equals truth. The general public, on the other hand, has little understanding of this.


Absolutely. I see peer review as a way of - hopefully - filtering out the most egregious crap, and fixing the worst errors in papers while rounding them off.

I feel all these debates are kind of pointless, because no-one I know treats a published article as gospel. We make up our own minds about the paper.


Nor do the reporters and/or editorials that cite such peer-reviewed papers that agree with their opinions.

Wait, maybe they do understand but they also understand that the general public doesn't understand.


I think this is an artefact of how science is taught: In 1953 Watson and Crick published a paper on the structure of DNA. Everyone smiled and nodded.

In my experience it's more like: Most scientists ignored it. Others were interested but had too much teaching to pay attention to it. A few people got pissed off and tried to prove it wrong. A few other people got excited and tried to build on it.


Physics is an area where almost all papers have pre-prints available freely on arXiv.org... sort of going along with the author's point?


In the biomedical sciences (where the original author is speaking from), there is a dichotomy: yes, most researchers don't believe that a single publication is the final word on a topic and that in theory anything could be retracted or modified, but in practice that's how the system works. Publications--both how many you have and where they are--are the prime metric for promotions, tenure, and overall success. Furthermore, when applying for funding your publication record is taken into account, as well as your promotions, tenure, etc. It is currency in the life of an investigator.

Not to mention that research is all about who comes in first; the incentive to validate published work is by and large not there. I recall a Nature article on data reproducibility: only 11% of the TOP, PEER-REVIEWED PAPERS could be reproduced[1].

[1] http://www.nature.com/nature/journal/v483/n7391/full/483531a...


So true! The health of scientific literature is not in the articles that claim things, but in the follow-up articles that verify and/or disprove.

Skepticism is at the very core of science.

Partly to blame are the "non-fiction/science & nature" book category, and the journalist/writer cross-breeds who chain-write in newspapers and magazines and churn out books from their articles every year.

At first, the really good ones were a welcome addition to science literature, for bringing science into our daily lives. But then they became part of the "if it bleeds, it leads" culture, sans proper journalism and sans scientific thought.

Scientists may make fun of how journal articles are covered in newspapers and blogs, but the negative effects of such coverage are quite real.


The converse also holds true. Just because it isn't published doesn't mean it isn't science. Lots of trade secret processes and such fall under this.



This post takes its lead-in from the Felisa Wolfe-Simon "Arsenic DNA" paper. It was one of the most hyped-up papers in recent memory, somewhat akin to the cold fusion paper by Fleischmann and Pons.

The contrast in the public response to the two incidents is fascinating. While cold fusion became a career-ender for anybody associated with it, and Fleischmann and Pons were pretty much banished, repeated experiments have shown that there is something funny going on with that particular mix of materials (heavy water, palladium, nickel, etc.): excess heat.

With arsenic life, however, follow-up experiments have shown that contamination caused the result. So there is NO arsenic-based life. Yet the lead authors of what was pretty much a gigantic false claim (orchestrated shamelessly by NASA) seem to be doing pretty well, dodging the basic questions with that most famous catch-all: more work is needed (hence more money should be sent my way) to prove the (bogus) claims.


It saddens me that all the replies to your post are nit-picking over the cold fusion side. I never heard that the arsenic-based life paper had been discredited or had anything wrong with it until I saw this link.

The main point is that someone working at Science wrote a bunch of fake papers and ascribed them to fake African scientists (which bugs me a lot, since it might both assume and create discrimination) and showed that a bunch of open access journals don't do peer review properly but he didn't do the same thing to the conventional journals.

Where's the control group?

Any scientist will tell you over drinks about peers who double-publish, fake data, and force anyone whose work they review to cite their own work. I'm reminded of the scandalous story (in the New Yorker a few years ago, sorry no link) of a famous -- Fields Medallist no less -- mathematician who took over a journal and started publishing his own and disciples' work without review to get the jump on rivals -- and does a science get more assumed credit for perfection than Math?


> excess heat

There is no excess heat, it's measurement error caused by miscalculating how much hydrogen recombines in the cell.

If you build a calorimeter that recombines all hydrogen you find no excess (but it's really hard to do accurately).


And, even more tellingly, there are no fusion neutrons emitted.


That's not very telling when what they were claiming was aneutronic fusion.


The Pons / Fleischmann claims about their "cold fusion" experiment had nothing to do with aneutronic fusion. In fact, the definitive debunking of their claims consisted in the failure of other labs, and of P & F themselves, to detect fusion byproducts, including the expected neutrons: http://partners.nytimes.com/library/national/science/050399s...


Cold fusion wasn't a complete career-ender. There are people who replicate the experiment to probe the phenomenon they discovered.

Their mistake wasn't the paper; it was the press conference.


Well, they did get fame!! And Macromedia used to have a product by that name :P ColdFusion!


It was also the code name of the Power Mac 8100.



The cold-fusion story reminds me of the physician who pioneered antiseptic procedures, and the reaction of the medical community to his publication [1].

[1] http://en.wikipedia.org/wiki/Ignaz_Semmelweis


The very first sentence in the section about the response from the medical community:

> One of the first to respond to Semmelweis's 1848 communications was James Young Simpson who wrote a stinging letter. Simpson surmised that the English obstetrical literature must be totally unknown in Vienna, or Semmelweis would have known that the English had long regarded childbed fever as contagious and would have employed chlorine washing to protect against it.

And:

> Indeed, initial responses to Semmelweis's findings were that he had said nothing new.

But:

> In fact, Semmelweis was warning against all decaying organic matter, not just against a specific contagion that originated from victims of childbed fever themselves. This misunderstanding, and others like it, occurred partly because Semmelweis's work was known only through secondhand reports written by his colleagues and students. At this crucial stage, Semmelweis himself had published nothing.

Why didn't he publish until later? Who knows. The fact is, he didn't and it hurt his case.

And, if you read further, you see that Semmelweis simply went crazy later on, with people proposing causes from Alzheimer's to learned helplessness to PTSD. Not a good thing if you are insulting and argumentative to begin with and are proposing a new theory on top of it.


What I find most interesting about the story is the reaction from the medical community.

He was onto something: he had data showing the correlation, and it was a simple method with no contraindications hindering its immediate adoption. Still he was faced with incredulity or irrationality. The ones who didn't vilify him didn't bother to investigate further either.



That's great, thanks.


In the cold fusion debate we sometimes forget the theorists who argue "for" and "against" cold fusion in platonic terms, where it has yet to be completely disproved.


Isn't it supposed to be the other way around: proved?


No?

You don't need proof to say "this class of reaction might be possible at a different temperature; we're not sure yet"

The default assumption is that there are many many ways to accomplish any particular chemical or nuclear change.


His title is a red herring. He admits later that he didn't actually do this. His point is actually a rebuttal to John Bohannon's submission of a fake article to open-access journals, demonstrating that Science magazine makes the same poor errors in review as the open journals.


His title is the hook, not the catch. So it can’t be a red herring.


I actually knew that when I wrote that, somebody was gonna call me on it.


Warning! This title is a total lie! I know it's a lie done intentionally and calculatingly, but most of the time most people don't read past the title and will assume that it's true. I really think this needs to be edited! Even if it 'spoils' the article. Please do this, whoever can!

At the time of writing the posted title here is the same as the article title: "I confess, I wrote the Arsenic DNA paper to expose flaws in peer-review at subscription based journals"


If you want to talk about honesty and integrity in publishing, you shouldn't give your blog post a blatantly misleading linkbait title.


True - but to make a very valid point about the state of peer review in science journals it is perfect.


You needed a demonstration to see that people can easily mislead each other on the internet with bogus headlines?


No, I was disagreeing with you that the post was about "honesty and integrity in publishing".


The Last Psychiatrist talks in his/her blog sometimes about what a joke peer review is. Here is one entry: http://thelastpsychiatrist.com/2011/01/wakefield_and_the_aut...


LP is so sarcastic and snarly that it is difficult to understand what they are trying to say. That's not an effective tone for an article about the problems of misinformation.


There should be an "open source" science journal, where anyone can submit a paper and anyone can peer review or freely read said articles. User accounts could have a vetting process to verify their real-world credentials, and the articles they review would be weighted accordingly. Funding by donations and paid APIs for advanced searches, perhaps? How awesome would it be to be able to freely read a repository of this kind of information? And to be able to accurately say what is or isn't the current "scientific consensus"?

I feel like this shouldn't be too difficult to make.


http://www.plosone.org/

PLOS ONE is this. It is well respected already. The initial invited reviews are mainly a junk filter to establish baseline methodological quality and NOT to evaluate significance.

From their "About us" page:

"Unlike many journals which attempt to use the peer review process to determine whether or not an article reaches the level of 'importance' required by a given journal, PLOS ONE uses peer review to determine whether a paper is technically sound and worthy of inclusion in the published scientific record. Once the work is published in PLOS ONE, the broader community is then able to discuss and evaluate the significance of the article (through the number of citations it attracts; the downloads it achieves; the media and blog coverage it receives; and the post-publication Notes, Comments and Ratings that it receives on PLOS ONE etc)."


I'll just mention for those unaware that the author of the post is one of the founders of PLOS, which is an umbrella organization under which a number of journals (including PLOS ONE) are published.


In addition to the PLOS series of journals, there are also other modern open-access journals popping up like PeerJ (peerj.com) and elife (elife.elifesciences.org). Anyone can go on and download the papers, and usually the datasets used in the papers if you want to replicate their results.


This is something promising: http://www.peerageofscience.org/


http://arxiv.org/

The arXiv focuses on physics, math, computer science, and similar, but it certainly seems to be what you're looking for.


That's not a journal. It's a pre-print archive (hence the name). There's a difference.


Here is a real case of this happening: http://en.wikipedia.org/wiki/Sokal_affair


Did you read the blog post? It discusses a much more up-to-date case of this happening, by a reporter who just recently submitted a fake paper to a bunch of open-access journals, some of which accepted it. He then wrote up the experience and submitted it to Science, which is spinning it as something typical of open-access journals.

More notable than the fact that some fly-by-night, pay-to-play "open access" journals accepted a scientifically and ethically dubious paper is the fact that there now exists a large body of established, high-quality open-access journals that roundly rejected the paper and pointed out its many flaws.

Meta publication bias strikes again.


The point of the Sokal affair seems to lean more towards proving that postmodern cultural studies as a field is lacking in substance than to demonstrating the ineffectiveness of peer review in general.


What bugs me about the Sokal affair is the implication that they didn't catch a bad review process but proved an entire discipline to be a sham (a discipline which they understood relatively little of to begin with).

So, when you get junk accepted in a humanities journal, it's because the field is about bullshitting anyway, but when you get junk into a science journal it's a hitch with the review process?


Was the Sokal paper published with inadequate review? Or was it passed by experts in the field?


All it takes is reading the wikipedia article...

But yes it was and no it didn't. That's always bugged me about the Sokal affair and people holding it up as an example of anything. While the editors did publish the paper, which is a basic problem in and of itself, they claim they wanted big revisions that Sokal refused to make, but then they published it anyway. Since the whole point of that journal at the time was to get wide-ranging viewpoints, I think there is some merit in the editor's claim of a simple betrayal of trust: they assumed Sokal was being straightforward, and the worst that would come of publishing his piece would be a bunch of letters telling him to stick to his day job. In any case, I don't know if we'll ever a definitive story of what really happened, how carefully they vetted the paper, etc.

Regardless, the journal he submitted the paper to was not peer reviewed, so the whole thing demonstrates nothing about the peer review process.


The first part is referring to the Felisa Wolfe-Simon controversy.


The problem is not the opaque peer review system.

Instead of making the reviews transparent, some journals solicit notes from other researchers to discuss the published articles. It is an ethical responsibility to publish notes which criticize previously published articles, regardless of how painful it is for the publisher to publish them.

e.g. http://www.sciencemag.org/content/335/6069/665.summary

Reading an article that falsifies claims of another article is pure scientific joy. And such articles are not uncommon at all!

The problem is, as pointed out in the beginning of the paragraph "we need to get past the antiquated idea that the singular act of publication – or publication in a particular journal – should signal for all eternity that a paper is valid, let alone important".


In legal practice, decisions are tested on appeal or by subsequent tribunals.

While this process is currently conducted by paid-for editorial teams, most practitioners rely on citation tools which for any judgment provide links through to subsequent decisions which have, for example, "approved", "considered", "distinguished" (as in disapplied due to different facts, but this can have the practical effect of confining an earlier decision to its particular facts) or "overturned" that decision.

Although Google Scholar seems to provide some authority measure through the "cited by" tool, I've often wondered why academia (or perhaps I've just missed it) hasn't developed a comparable "precedent" system for research.
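A toy sketch of what such a "precedent" index for papers might look like, borrowing the legal citator's treatment labels; the class, labels, and paper identifiers are illustrative assumptions, not a description of any existing tool:

```python
from collections import defaultdict

# Hypothetical treatment labels, borrowed from legal citators.
TREATMENTS = {"approved", "considered", "distinguished", "overturned"}

class PrecedentIndex:
    """Toy index mapping a paper to later papers that cite it,
    each tagged with how the citing paper treats the original."""

    def __init__(self):
        self._citations = defaultdict(list)

    def cite(self, cited, citing, treatment):
        """Record that `citing` treats `cited` in the given way."""
        if treatment not in TREATMENTS:
            raise ValueError(f"unknown treatment: {treatment}")
        self._citations[cited].append((citing, treatment))

    def standing(self, paper):
        """Summarize how later work has treated a paper."""
        counts = defaultdict(int)
        for _, treatment in self._citations[paper]:
            counts[treatment] += 1
        return dict(counts)

# Illustrative entries only; the identifiers are made up.
idx = PrecedentIndex()
idx.cite("arsenic-dna-2010", "refutation-2012a", "overturned")
idx.cite("arsenic-dna-2010", "refutation-2012b", "overturned")
idx.cite("arsenic-dna-2010", "methods-note-2011", "considered")
print(idx.standing("arsenic-dna-2010"))
```

The point of the design is that the treatment label, not the bare citation count, carries the signal: Google Scholar's "cited by" would score the arsenic paper highly, while this summary would immediately show it as overturned.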


> Any scientist can quickly point to dozens of papers – including, and perhaps especially, in high impact journals – that are deeply, deeply flawed – the arsenic DNA story is one of many recent examples

Dozens of flawed papers out of how many total papers? Numbers are meaningless without a context.


Peer-reviewed scientific analysis has been an issue for some time, if you ask me. As the guy points out, all you have to do is find a few people already receptive to your idea. I'm not really sure why "peer review" is so lofty a concept anyway.


> I'm not really sure why "peer-review" is so lofty a concept anyway.

I agree. Will someone tell pg to turn off the upvotes?


The author doesn't choose the reviewers. It isn't like a webtech company.


There are 250+ journals that have cancer growth as part of their field? That shows you the problem right there.

There is absolutely no way that a researcher in that field can keep up with publications in all of those journals.


Hey Michael Eisen, we get it. You don't like the peer review journal process and having to pay for access to scientific research. If you don't like it, make a better system.

Also -1 for having the worst form of linkbaity title, which is one that is utterly false and misleading, and lying for the first two paragraphs of that blog post.


To me, the title was obviously satire from the moment I read it.

Also, he did make a better system. He co-founded the Public Library of Science (PLOS).


Does PLOS protect against false claims somehow?


A better-ish system has already been made. The post mostly excoriates pay journals for implying that open access journals enable fraud and poor papers and pay journals prevent those sorts of papers. The arsenic-dna paper is used as a counterexample.

The author instead says that ideas about peer review need to change. Although everyone I know has always treated peer review as a continuous process, more support for post-publication review would be nice.


Quote: "So I created a manuscript that claimed something extraordinary - that I’d discovered a bacteria that uses arsenic in its DNA instead of phosphorous [sic]."

This tongue-in-cheek piece would be more plausible if the author could spell "phosphorus".


Really, get away from your computer screen for a while. Get a passport, visit places. Know about the rest of the world, what they wear, what they eat, how they talk, how they write.


Where do they write "phosphorous"? They don't indiscriminately sprinkle 'u' after every 'o'. That would be "phosphourous".


phousphourous?

(https://en.wikipedia.org/wiki/Red_Phosphorous#Spelling_and_e...)

> According to the Oxford English Dictionary, the correct spelling of the element is phosphorus. The word phosphorous is the adjectival form of the P3+ valence: so, just as sulfur forms sulfurous and sulfuric compounds, phosphorus forms phosphorous compounds (e.g., phosphorous acid) and P5+ valence phosphoric compounds (e.g., phosphoric acids and phosphates).


> Get a passport, visit places.

I have: http://www.amazon.com/Confessions-Long-Distance-Sailor-Paul-...

If you had any idea to whom you're speaking, you would realize how utterly absurd your suggestion is. Speaking purely on a probabilistic basis, I know more about the world than you have any chance of knowing.


Yeah, and if you had any idea of who you were criticizing, you might be willing to overlook a spelling mistake. Mike Eisen is a world-class evolutionary biologist, professor at UC Berkeley, founder of PLoS, and an HHMI investigator.


Then he has no excuse. Do you really think academic status excuses academic hubris? Do you really think science is about authority and titles? I could go on like this, but you should know better than to recite someone's titles in a science discussion, and frankly I doubt anything I might say will change your behavior.

The bottom line -- in science, evidence means everything, reputation means nothing. The greatest amount of scientific eminence is trumped by the smallest amount of scientific evidence.


What??? You create a straw man, choosing to ignore a satirical opinion blog posting on the academic review process in OA journals because the author misspells phosphorus. I say maybe you should overlook the spelling error because the author is the founder of the most successful OA journal publisher to date, and an intellectual leader in the field. Then your response is that nope, his status does not excuse academic hubris (for misspelling phosphorus), and therefore you will ignore the whole argument. In addition, you doubt that I'll change my behavior based on fairly vapid monologues of science being based on evidence (and spelling as well I assume). Anyways, this has gone too far down the complete-waste-of-time hole for me. Maybe if you actually had a substantive critique or comment on the original blog post, it would be worth trying to engage you.


But spelling is usually not that important. It's not as if anyone is confused by the incorrect word "phosphorous". While it's nice to have the correct spelling, this is the kind of word that can slip past a spell checker (because "phosphorous" is correct in other contexts); it's not a serious typo.

It's not like silicone / silicon - often misused where people have to guess from context. (No one is getting a silicon breast implant; no one is making silicone chips.)

It's not like mistaking the units, 1.50 mg instead of 1.50 μg, or putting the decimal point in the wrong place, 1.50 μg instead of 15.0 μg.

Having said all that, the Underground Grammarian makes a good point in "Trifles":

> Our educators, panting after professionalism, are little interested in being known for a picayune concern with trifles like spelling and punctuation. They would much rather make the world a better place. They have tried on the gowns of philosophers, psychologists, and priests. That's why, when they think of their "teaching goals," they say those things in the questionnaire. They see themselves as guides to emotional development, instigators of creative capacities, and molders of moral character. When they must attend to the factual content of some subject, they prefer to say that they impart that "general insight into the knowledge of a discipline." Niggling details, like spelling and punctuation, seem base by contrast with those noble goals. Our educators have established for us what may be a genuinely new kind of cultural institution--although it is something like the Austro-Hungarian Empire--that stubbornly avoids those undertakings in which it might succeed and passionately embraces those in which it must probably fail.

(It's a long piece and I didn't know which bit to use as a quote). (http://www.sourcetext.com/grammarian/less-than-words-can-say...)


> But spelling is usually not that important.

Sometimes true, but I've always thought it strange that people who spend much time with computers, and who know a horror story or two about seemingly trivial lexical errors resulting in catastrophes, object to those who correct spelling.

> It's not as if anyone is confused by the use of the incorrect word "phosphorous".

Not true. They're not even the same part of speech. The correct word is a noun, the misspelled word is (or can be) a verb.

In this case it went beyond a simple spelling error -- because the paper was technical and scientific, a different standard must be applied. In science publishing, how one spells technical terms is a deal-breaker. It is to science what "ingenious/ingenuous" means to a liberal arts graduate -- the difference is so great that it's a standing joke.

> It's not like silicone / silicon ...

Actually, to a biologist or a chemist, it's exactly like that.

> Having said all that the Underground Grammarian makes a good point in "Trifles" ...

I'm familiar with the piece and I agree with the sentiment, but this wasn't a trifle.

Writers regard this issue in the same way that musicians regard popular music. And scientists who publish must be writers as well as scientists.


> They're not even the same part of speech. The correct word is a noun, the misspelled word is (or can be) a verb.

The incorrect word is an adjective; it isn't a verb.



