
> Whereas undergrads are coached to write tests we're coached to publish.

That is the big problem: the sole method of employee evaluation for university professors (and postdocs, anyone in an academic career) is publications.

This encourages bad publications, a focus on research (or, more accurately, publication milling) rather than teaching, and a focus on producing many publications instead of good research.

At least 99% of conference articles are worthless and 85% of journal articles are crap.

I wonder whether the world would not be better served if professors were encouraged to study problems more deeply and target the high-hanging fruit, instead of producing (mostly) worthless research.




What are you basing your conclusion on? How do you evaluate the effectiveness of a researcher, especially in an era where there are often ~200 applicants for a given position? At one point people simply counted publications. But, as you say, many of those publications could be lousy. Thus the h-index was born (http://en.wikipedia.org/wiki/H-index), which counts not just the number of publications but also their impact (citations). It's an imperfect system, but it does put more emphasis on higher-impact (more-cited) publications than on sheer quantity of papers.
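
(For concreteness, here is a minimal sketch of how an h-index is computed from a list of per-paper citation counts; the numbers are made up.)

    def h_index(citations):
        """Return the largest h such that at least h papers
        have at least h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(counts, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    # Example: five papers with these citation counts give an h-index of 3.
    print(h_index([10, 8, 5, 2, 1]))  # -> 3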


> which counts not just the number of publications but also their impact (citations).

But that is completely flawed! Some research areas (ones that are inconsequential but easy to work in) produce a lot of citations. What also happens is that people build research groups that cross-cite each other.


How do you judge that? If you write a paper and no one cites it, is it important (a tree falls in a forest...)? This is a serious question. I agree that the h-index is vulnerable to fads, but if one adds aging (the effect of a citation decays with time, so if people stop citing a paper when the fad passes, the h-index decays; researchers who are post-tenure and intend to stay in the field can afford to think long-term), it's hard to think of a better way. There are so many papers produced (the number of researchers is huge) that I don't see a feasible way of judging research other than using "crowds", i.e. citations.

I suppose one could try using PageRank (with citations as links) as a proxy for citations (I think this has been done) and use it to remove the academic equivalent of link farms. But there are legitimate uses of cross-citing. For example, imagine group 1 with technique A and group 2 with technique B working in a similar problem space. It is natural for them to cite each other when they agree or disagree, and this may happen several times. It's not malicious, just economics at work: even if both groups could afford both sets of techniques and one of them was really good at both, it would make sense for each to pursue only one (comparative advantage).
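
(To make the aging idea concrete, here is a hypothetical sketch, not an established metric: each citation's weight halves every few years before any index is computed from it. The half-life and the example years are made up.)

    import math

    def decayed_citation_weight(citation_years, current_year, half_life=5.0):
        """Sum citation weights, where each citation's weight halves
        every `half_life` years after it was made (illustrative only)."""
        return sum(
            math.exp(-math.log(2) * (current_year - y) / half_life)
            for y in citation_years
        )

    # A paper cited heavily during a long-past fad counts for less
    # than one with fewer but more recent citations.
    old_fad = decayed_citation_weight([2000] * 50, current_year=2013)
    recent  = decayed_citation_weight([2012] * 20, current_year=2013)
    print(round(old_fad, 1), round(recent, 1))  # ~8.2 vs ~17.4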

What is your solution?



