
You're spot on that citations are essentially the PageRank metric and that's really what "counts", but there's a need for an earlier metric. Journal impact factor is computed from the historical citation counts of the articles a journal has published. The leap of faith is that if you publish in a journal with a history of publishing highly cited work, your work is good and likely to be highly cited too. There are any number of ways to argue against that logic, but that's the way it works.
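For concreteness, here's a minimal sketch of the standard two-year impact factor computation (the data structures are hypothetical, just to show the arithmetic):

    # Two-year impact factor for year Y: citations received in Y to items
    # published in Y-1 and Y-2, divided by citable items from Y-1 and Y-2.
    def impact_factor(cites_received, items_published, year):
        # cites_received: {citing_year: {published_year: count}}
        # items_published: {year: count}
        cites = sum(cites_received.get(year, {}).get(y, 0) for y in (year - 1, year - 2))
        items = sum(items_published.get(y, 0) for y in (year - 1, year - 2))
        return cites / items if items else 0.0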

So yes, what journals typically provide is a leading indicator of quality that you can get before citations come in. There's definite demand for such an indicator, and I certainly believe there's an opportunity to replace the journal with something else.




That makes sense.

I'm curious what you think about variants on the PageRank idea that address the slowness problem you point out. You state in an earlier comment “so if there was a committee that provided that evaluation entirely outside of the publishing ecosystem, that would be great”. You could theoretically have a PageRank-like system where some nodes are papers, some nodes are committees, and (maybe) some nodes are individuals. Then there could be different types of edges, expressing relations like “has cited this paper” or “deems this to be a quality paper” or “recognizes the competence/authority/prestige/whatever of this person/committee”. Then run PageRank on the whole mess.
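As a rough sketch of what that might look like (networkx, with made-up weights per relation type):

    # Papers, committees, and people are all nodes; each relation type
    # gets its own (invented) weight on the edge.
    import networkx as nx

    RELATION_WEIGHT = {"cites": 1.0, "endorses": 2.0, "recognizes": 0.5}

    G = nx.DiGraph()
    G.add_edge("paper_A", "paper_B", weight=RELATION_WEIGHT["cites"])
    G.add_edge("committee_X", "paper_B", weight=RELATION_WEIGHT["endorses"])
    G.add_edge("person_P", "committee_X", weight=RELATION_WEIGHT["recognizes"])

    # Standard PageRank, with transition probabilities proportional to edge weight.
    scores = nx.pagerank(G, alpha=0.85, weight="weight")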


There are certainly other data points, one example being Mendeley bookmarks, that can indicate what research is being highly viewed/discussed. If you know what people are passing around organically, and especially if you know who those people are, then you could apply different weightings to that data. If a leading scientist in the field is digging through your paper, or has started citing it in their unpublished work, that might be a useful indication of quality.
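As a toy example (all names and weights invented), that weighting could be as simple as:

    # Weight each early signal (bookmark, readership event) by the
    # standing of the person it came from. Numbers are invented.
    def early_attention(events, reader_weight):
        # events: list of reader ids who bookmarked/viewed the paper
        return sum(reader_weight.get(r, 1.0) for r in events)

    score = early_attention(
        ["leading_scientist", "grad_student", "grad_student"],
        {"leading_scientist": 10.0, "grad_student": 1.0},
    )  # 12.0: one prominent reader dominates the signal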

The other thing that could make citation data useful earlier might be taking a more complex view of citations. PageRank doesn't weight all links equally, i.e. links from authoritative sources like the NYT count more than links from random sites. I think there's plenty of room to apply similar mechanisms to citation metrics, and maybe some of that could make them useful earlier in the process. If you only have one citation, but it's from a preprint written by a Nobel Prize winner, that should count for more than 10 citations from published articles by random postdocs (although I have a feeling that even that idea wouldn't be accepted without a lot of controversy in academia).
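To make that last comparison concrete (weights entirely invented for illustration):

    # One citation from a heavily weighted author vs. ten from
    # unweighted ones; unknown authors default to weight 1.0.
    AUTHOR_WEIGHT = {"nobel_laureate": 15.0}

    def citation_score(citing_authors):
        return sum(AUTHOR_WEIGHT.get(a, 1.0) for a in citing_authors)

    citation_score(["nobel_laureate"])        # 15.0
    citation_score(["random_postdoc"] * 10)   # 10.0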


Interesting ideas, thanks for the response.



