
I think your impression of how related-work sections get written is overly simplified. Since you are submitting to a conference or journal in your field, chances are high that the reviewers are knowledgeable in the subject area and will point out errors in attributing due credit to related work.

While I agree that there are systemic problems with peer review and with how the science "enterprise" works, there is a fitting analogy from politics, via Winston Churchill: "the worst form of government except for all the others."




I think you misunderstand. I'm not talking about attribution errors. I'm talking about the fact that discoverability and cross-linking are severely hampered by the bias towards previously cited works. This doesn't create errors per se, but it can narrow the scope of inquiry to the point where it becomes detrimental to the institution of science as a whole. It's a naturally emergent silo, but a silo nonetheless.

An NLP-based referencing system would not need to be correct all the time; it would merely need to be helpfully suggestive. As you're writing your paper, it would put tips in the sidebar: "Maybe this is relevant? (Hover to read abstract)". As long as there isn't an intolerable number of false positives, it would be quite a useful tool, I think.
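
To make that concrete, here is a minimal sketch of the kind of suggester I have in mind, assuming you already have a pre-collected set of candidate abstracts and using plain TF-IDF cosine similarity. The function name, threshold, and top_k are illustrative only, not any existing tool:

    # Minimal sketch of a "maybe this is relevant?" suggester.
    # Assumes scikit-learn; corpus, threshold, and names are illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def suggest_related(draft_text, candidate_abstracts, top_k=5, min_score=0.15):
        """Return (index, score) pairs for abstracts most similar to the draft."""
        vectorizer = TfidfVectorizer(stop_words="english")
        abstract_vectors = vectorizer.fit_transform(candidate_abstracts)
        draft_vector = vectorizer.transform([draft_text])
        scores = cosine_similarity(draft_vector, abstract_vectors)[0]
        ranked = sorted(enumerate(scores), key=lambda pair: pair[1], reverse=True)
        # Only surface suggestions above a (tunable) relevance threshold,
        # so the sidebar stays suggestive rather than noisy.
        return [(i, s) for i, s in ranked[:top_k] if s >= min_score]

Anything fancier (citation graphs, embeddings) could slot in behind the same interface; the point is just that it only has to be suggestive, not authoritative.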


Well, actually, I didn't only have attribution errors in mind. Peer review ensures that colleagues will tell you about related work that you don't know about. Sometimes people will tell you that something is related even though you yourself don't think it is. Only with some time and acceptance will you see that the remarks really are related, probably not directly to your own contribution, but to the bigger field that you orient yourself in.

Come to think of it, this is probably the most important argument against an NLP-based "recommender." Personally, I think something like this might be interesting, probably even a great help, but at the end of the day people need to really read a lot of papers, follow the proceedings of their target conferences and journals, and ask colleagues for their bibliographies. This has the added benefit of teaching them how to present their own work in contrast to others', do meaningful evaluations (in the best of all worlds, of course!), and figure out who is doing interesting work and might be valuable to get in contact with. Of course, some parts could be automated, but there is currently no incentive for scientists to do so.

IMHO, it would be a much more important step for CS researchers to publish their code, too, because I frequently come across papers that have no implementation or evaluation at all, and that's really bad, because then the least-publishable unit becomes an idea with nice pictures. Researchers can be very successful with this "publication strategy." Come to think of it, there should be some way to rank scientists other than by the number of publications or their impact; unfortunately, I have no idea what could work instead.


Peer review ensures that colleagues will tell you about related work that you don't know about.

Not really, because other researchers are advancing their careers based on how often and how much they (a) publish and (b) get cited. So the colleagues most likely to be in a position to review your work are those who got cited a lot, and who primarily know about work that already gets cited a lot.

An additional-citation recommender based on relevance, NLP, or something PageRank-like could also be added as a step in the reviewing process. Rather than having only human reviewers suggest further reading, a "machine reviewer" would as well, placing its query results in front of everyone involved in publishing the paper.


I totally agree about the importance of scientists publishing their code. That is critical. It's one of the many parts of the scientific process where the community would benefit from greater sharing.


I think you may mean "the worst form of government except all the others that have been tried."

We need to try more.


I think that is a great sentiment. To find systems that work a lot better than the current one, which is incredibly slow, we are going to need to try a lot of different ideas.

When you are venturing into the unknown, innovation is tough and challenging, but it's also very rewarding.


Unfortunately, trying new ideas when it comes to government sometimes produces millions of victims.

But I agree, we should experiment more. One idea: we should A/B test any new law before introducing it to the whole country.


trying new ideas when it comes to government sometimes produces millions of victims.

Usually only with the forms of government that don't care about millions of victims. It really is a no-brainer that any totalitarian system that intends to make an enemy of a large section of its population is probably going to be pretty awful. So I would strongly suggest weeding those forms out in advance, rather than using the common historical method of allowing them to be implemented through force by the paranoid and insane.

This is less of an issue when it comes to systems of improving feedback in science, unless it somehow enables an aspiring evil genius, or something.



