
Is it feasible to generate comprehensive plagiarism analyses for most or all published works?



I wrote a paper this year (after about a 15-year hiatus), and apparently submitted papers (this was to an Elsevier journal) now automatically go through a plagiarism detector of some kind that produces a similarity score that can't exceed some threshold. We actually failed the first time because we'd published the paper on arXiv first, which seems like a big flaw in the system. I don't have much faith in automated detection like that, nor do I support it, but it does happen systematically now.
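For anyone curious about the mechanics: checkers of this kind (iThenticate, I believe, in Elsevier's case) boil down to scoring textual overlap against an indexed corpus and flagging anything over a cutoff. A toy sketch of the idea in Python, using word n-gram Jaccard similarity and a made-up threshold; the real tools use far more sophisticated matching:

    def ngrams(text, n=5):
        # Lowercase word 5-grams; real checkers tokenize more carefully.
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def similarity(doc, source, n=5):
        # Jaccard overlap of the two n-gram sets, in [0, 1].
        a, b = ngrams(doc, n), ngrams(source, n)
        return len(a & b) / len(a | b) if a | b else 0.0

    THRESHOLD = 0.30  # hypothetical journal cutoff

    def flag(doc, corpus):
        # Flags any indexed source scoring above the cutoff -- including
        # the authors' own arXiv preprint, hence the failure described above.
        return [src for src in corpus if similarity(doc, src) > THRESHOLD]

Note the sketch has no notion of authorship: a preprint by the same authors matches just as strongly as a stolen paper, which is exactly the failure mode here.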


From Elsevier's perspective, discouraging authors from releasing free pre-prints isn't a flaw.


It’s a flaw to label it plagiarism.


Not without false positives, which are unacceptable (unless you're willing to put the work saved from plagiarism checking into absolving the innocent).
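To put rough (entirely hypothetical) numbers on the base-rate problem:

    # All figures hypothetical, for illustration only.
    students = 1000
    cheaters = 20            # assume 2% actually plagiarize
    fpr = 0.01               # 1% false positive rate
    tpr = 0.90               # 90% of real plagiarism caught

    false_flags = (students - cheaters) * fpr    # 9.8 innocent students flagged
    true_flags = cheaters * tpr                  # 18 actual cases caught
    print(false_flags / (false_flags + true_flags))  # ~0.35

Even with a 1% false positive rate, roughly a third of flagged students are innocent, so someone still has to do the absolving.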

I was forced to submit essays to a plagiarism checker in high school (well over a decade ago). The company running this checker had, in their infinite wisdom, included a check for writing complexity, on the assumption that a 10th grader who writes like an adult is probably cheating. This was embarrassing for everyone.
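For reference, "writing complexity" checks like that are typically just readability formulas. A sketch using the Flesch-Kincaid grade level (the syllable counter is a crude heuristic of my own; the vendor's actual method is unknown to me):

    import re

    def count_syllables(word):
        # Crude heuristic: count vowel groups. Good enough for a sketch.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def fk_grade(text):
        # Flesch-Kincaid grade level:
        #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

    def suspicious(text, student_grade=10, slack=2):
        # The vendor's apparent logic: a 10th grader writing more than
        # a couple of grade levels "too well" is presumed to be cheating.
        return fk_grade(text) > student_grade + slack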


I had a high school English teacher who became suspicious of a piece of writing I did for this very reason, no automated checking necessary! He called me in to talk with him about the work, probed my knowledge a bit, satisfied himself that it wasn't plagiarism, and complimented me on writing at a high level.

False positives are not necessarily deal-breakers, as long as the teacher treats the situation carefully before taking any disciplinary action.


As long as it's available to students so they can check their work for unintentional plagiarism.



