Hacker News
Should we move to a system where every scientist gives grant money away? (johnhawks.net)
108 points by benbreen on April 16, 2017 | 46 comments



Sounds to me like they want to replace the current grant process with something that resembles Google's peer bonus system. Popular sorts will get the lion's share, renegades will get peanuts.

And this would compound existing problems in the tenure system wherein the decision to grant you tenure is in part determined by your competitors at other universities. Imagine, for example, if your promotion at Google were in part decided by Jeff Bezos.

In my own post-doc experience, I caught a National Academy of Sciences member at Johns Hopkins faking data 20 years ago this year, and he decided to threaten me over it. Since no one stood up for me at the time, I left for tech a month or so later. And it's a shame because the method that exposed him was an early form of adversarial search that's now all the rage with Deep Learning. I don't see how a grant system based on a Mean Girls(tm) model of funding allocation would have prevented this.


> In my own post-doc experience, I caught a National Academy of Sciences member at Johns Hopkins faking data 20 years ago this year, and he decided to threaten me over it.

That's a bold claim.

Do you have evidence to support this claim? Or a name? Could you point us to the data set they faked? If others aren't able to reproduce their data (under the same circumstances), that's fraud.


There's no benefit to me in outing the professor at this point so I'm not going to do so. If you feel that undermines my credibility, I'm OK with that. I don't need a lawsuit.

But the reason my own research was never published was that my post-doc advisor demanded my work achieve the same Nobel-prize-worthy result that this professor was claiming to have attained. To put that into perspective, imagine if the requirement to submit a machine learning paper to arXiv were building Skynet. And that was the status-quo requirement for publication, so researchers tended to make bold claims about achieving it, all of which eventually got debunked; there still hasn't been a Nobel prize for the achievement 20+ years later. There has been significant progress, though.

The professor went on to complete a long and successful career but the work in question was exposed as bogus shortly thereafter by a competition that was the equivalent of ImageNet for the field. It was yet another example of predicting the training set in an age when people didn't really understand yet why that would be a problem. And it wouldn't have amounted to data fakery except that said professor denied there was any issue whatsoever when I presented counterexamples to the model located by adversarial search and then proceeded to threaten me professionally, paraphrasing: "Your data could take me down, and if that happens, I'm taking you with me."

It's mostly behind me now, but I bring it up in this thread because I don't see how this new model of funding would have prevented this or a myriad of other issues with science funding.


Ugh, shiver. I've seen a variety of "problematic" data analysis behavior by very senior people, so I have no trouble believing this story. Nor do I blame you for not elaborating to random internet people. Although it is sad that his intimidation was not only rewarded, but apparently intimidated you right out of the field.

But IME, they usually try to cover their bases by (fake?) incompetence, rather than this kind of thing. That is, they claim to be unable to see severe problems even when the evidence is placed right in front of them. Plausible deniability, I guess.

In your example, it would be like "I don't see why we can't build Skynet. And if we claim to have built Skynet and have overstated our case, we can just fix it in future work. Anyway, who cares what we claim as long as we made a good-faith effort and get that next grant funded."

Personally though, I suspect most fabricated data isn't fabricated by the PIs themselves, but by postdocs etc. put under exactly the kind of impossible pressure you're describing. Then, of course, they can take credit for the successes and pretend to be shocked and outraged if the fabrication ever comes to light. The VW scandal probably happens somewhere in science daily.

I agree with your conclusion. Also, the distribution of "credit" or "popularity" is highly exponential in any field, so this proposal would just exacerbate the already rich-and-senior get more rich-and-senior phenomenon.


Whistleblower. Announce it anonymously, or just do it. Seriously, we can't get bullied like this. If we let this go, we will repeat the fraudulent research Harvard scientists did after getting paid by the sugar industry, and the hundreds of other scientists willing to write papers with subjective evidence and conclusions because they were paid to write them a certain way. We need to stand up. I am willing to support you if you have concrete evidence.


You're really underestimating how much this happens. One incident is a drop in the bucket, and there won't be a whistleblower every time.

IMO the only way to fix it, especially in CS type disciplines, is to make it a requirement for publication that all the code is available and easily reproducible.

For other disciplines, I don't know the answer. At its core, this is a problem of very high competition and of failures being unpublishable. Really, it's not a great thing to out highly respected scientists who fabricate -- even though they richly deserve it -- because it erodes public confidence in science. It would be better if we could find a way to prevent these things before they happen.


I totally understand the institutional and establishment power in place, and how many folks out there are in the same boat as this JHU professor. But this is why we need to blow the whistle at every chance we have. We know how policy change takes time at the scale of the universe. It takes forever. This is how corruption is defeated, or how we attempt to defeat it in society. Ideally we take out the top guys, then work all the way down. If we maintain the position that "the wrongdoer is too famous" or "he/she is not alone, so it's not fair to single one out", then don't bother having peer-reviewed journals. I know exactly how corrupt some of the peer reviews can be, to be honest. But that's the whole point. If we don't start holding people accountable, especially the PIs, the bad seed will grow and more bad seeds will blossom. Stop the bleeding. This JHU professor is famous, so what? Grow up and admit the wrong. He/she is just scared of losing his/her honor, and that is exactly what we need to take away: strip the crown and start encouraging people to publish code and datasets openly.

If this professor is famous, you can bet he/she most likely didn't do the actual research. It's almost always the graduate students on the co-author list who did the research, which then gets signed off on by the professors. If a professor who mentors can't admit a wrong, how can he/she be a good mentor? If he/she instructs the students to take a shortcut, well, guess what, this is really bad. Do we want more bad seeds? Lead by example. Let's not do more sloppy and dirty research. Someone who brags about his/her successful research while knowing the data is fake is a sad person. Academia is tough - totally understand that. Professors' tenure and salary are often tied to the amount of grant money they can bring in, and the stress is too much for one to do research with ethics and dignity. But this is why we need to hold people accountable, especially the PIs, and reform research. This is a complex issue, but it takes courage. I hope powerful researchers and the leaders on research committees at various government agencies will start improving the system.


I said nothing about fairness. If I thought it were practical and would help, I'd advocate outing every single one of them. I have less than zero sympathy for them. And I totally agree that it is really screwing up science.

But to use an analogy, if you have a few weeds in your field/garden, you can pull them. If you have a lot, you need a systemic solution like pesticide. Remember also that the most senior people are close to retirement and if eliminated will in many cases just be replaced by other sociopaths.

The worst case scenario for this guy if the whistle were blown would be that he takes early retirement, goes home to enjoy his pension, and probably comes back as part-time/emeritus.

People would say, well, it's bad that he fabricated this one time, but his contributions to the field as a whole outweigh this one minor lapse. That's a big difference between when senior and junior people are caught in these situations. He would still maintain a lot of respect and connections -- more than enough to make good on his threat to totally screw GP's career.


I agree, and I am no saint either. My thought is: don't let a bully win. Sure, he may go home with an early retirement or just hide behind a desk teaching, who knows. But his reputation deserves to be punished. Perhaps no one would remember, but that's a stain he needs to live with. And how can we be so sure his other contributions aren't questionable? I am the type that can't stand a bully coming at me, and I will go head to head. I am sorry you or others have to go through these science bullies. We often criticize people who dismiss the science on climate change, and yet we ourselves can't do true science either.


>Allocating money on the condition that some must be given to other researchers would create several downstream benefits. Scientists who maximize the ability of other scientists to produce their own new and useful results would have a big advantage in this system. Jerks would be punished appropriately.

Feels too naive to me - jerks in powerful posts will be rewarded by this system. If I know you sit on the committee that decides on thing X I'm very interested in (let's say, access to a new computation cluster), I'll contact you and hint at how favorable inclusion of me will result in more of my 'share' going toward you. Jerks tend to thrive in such a system since they look for opportunities where they can control the lever, any lever. It doesn't create a more friendly environment; it amps up science's political game, where people look friendly but stab you in the back, without your knowledge.


Damn, I'm a tenured professor and reading through the comments it seems like I'm the only one who thinks this is a good idea at some level. The reasons I think it probably wouldn't work are things I haven't seen anyone mention, though.

This is really not too different from the current system, except for one critical thing, which is that instead of decisions being based on nepotistic review panels, they're based on everyone. So this has the benefit of everyone getting funding, and decisions on who to fund based on some highly democratized process rather than the old boys club.

The issue you're raising is important, but making funding decisions totally anonymous would help. Of course, I don't know that you could make it totally anonymous, and what you're suggesting would still be an issue, but anonymity would go a long way. Also, the thing you're suggesting would probably still happen anyway even in today's system.

The problem in my mind is deciding who gets to participate in the system. Everyone who's a tenured professor? Tenure-track? What disciplines? My guess is this would just shift the funding decision from a per-project issue to a per-researcher issue.

My favorite idea for remedying the current process is to randomize awards. That is, everyone scoring above the median gets thrown into a pool, and everyone in the pool has an X% chance of being funded.
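
As a minimal sketch of what I mean (the names, scores, and the run_lottery helper are all made up for illustration, and the above-median cutoff is just one possible rule):

    import random

    def run_lottery(proposals, num_awards, seed=None):
        # proposals: dict of proposal id -> peer-review score.
        # Everything scoring above the median goes into one pool,
        # and winners are drawn uniformly at random from that pool.
        rng = random.Random(seed)
        scores = sorted(proposals.values())
        median = scores[len(scores) // 2]
        pool = [pid for pid, s in proposals.items() if s > median]
        return rng.sample(pool, min(num_awards, len(pool)))

    # Toy example: six proposals, fund two of the above-median ones at random.
    proposals = {"P1": 9.1, "P2": 7.4, "P3": 8.8, "P4": 5.0, "P5": 6.7, "P6": 8.0}
    print(run_lottery(proposals, num_awards=2, seed=42))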

Also, I'd like to see "term limits" and more randomization of review panels. Make it more akin to jury duty, where, if an institution receives federal funding, their researchers are all put into a pool, and review panels are randomly selected for fixed periods of time.

Finally, indirect costs need to be eliminated, sharply curtailed, or reorganized to require explicit justification. Republicans are finally drawing attention to this, and they're right: it's slush funding. Universities wouldn't give as much of a crap about funding if it weren't a profit margin for them.


Not to mention, being a "jerk" has precious little to do with your ability to do good research.

Some of the most earth-shattering members of their fields were infamously unpopular with their peers. Kurt Gödel, Georg Cantor and Albert Einstein were all responsible for massive fundamental shifts in the understanding of their respective fields, but were all deeply unpopular outsiders, and would be much less likely to get funded under a system like this.


This proposal is a simple, appealing way of solving the problem, but I doubt it would work well over time, because, like all efforts to control human behavior via a simple set of rules, it will end up being gamed by human beings.

Campbell's Law: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."[0]

Ultimately, I believe, we'll need human moderators to (1) detect and prevent abuse of the rules, and (2) modify the rules when they stop working.

[0] https://en.wikipedia.org/wiki/Campbell%27s_law


Indeed. Especially in such a simple scheme, e.g. if they find people aren't giving away enough money and try to incentivize it, people will just form cycles. If you start doing triangle-detection on the graph (as a stand-in for some arbitrary regulatory measure), they'll just make longer ones.
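
To make that stand-in concrete, here's a minimal sketch of 3-cycle detection on a directed "who gave to whom" graph (the graph data and names are invented, and real collusion detection would obviously need more than this):

    from itertools import combinations

    def find_triangles(gave_to):
        # gave_to: dict mapping each scientist to the set of scientists
        # they passed money to. Returns sets of three scientists forming
        # a directed cycle A -> B -> C -> A.
        triangles = set()
        for a, b, c in combinations(gave_to, 3):
            for x, y, z in ((a, b, c), (a, c, b)):
                if (y in gave_to.get(x, set())
                        and z in gave_to.get(y, set())
                        and x in gave_to.get(z, set())):
                    triangles.add(frozenset((a, b, c)))
        return triangles

    # Toy graph: alice, bob, and carol route their giving in a circle.
    gave_to = {
        "alice": {"bob"},
        "bob":   {"carol"},
        "carol": {"alice"},
        "dave":  {"alice"},
    }
    print(find_triangles(gave_to))  # {frozenset({'alice', 'bob', 'carol'})}

Lengthening the cycle to four or five hops defeats exactly this check, which is the point.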

And beyond pure greed and the political aspect, there are professors that also have pointless grudges.


I wonder if there's not a way to make it more like a library-type system, i.e. a central pool of durable resources.


I think it would have all the problems of lending money to or going into business with family or friends.


If you want to pull together a bunch of money for a large experiment, you're still going to have to write documentation to argue your case, and spend lots of time arguing that case.

Continued funding will require perennial marketing to your peers.

It democratizes science funding, but as in pure democracy, it forces each citizen to spend much of her day at the forum.

My related pet hack: Give researchers a shareholding ability for existing grants, letting them go long and short each project, should they choose to do so, perhaps in an imaginary currency. At the end of each cycle, allow researchers to re-allocate their "capital". The key difference from the article's proposal is that the researchers can retain their "capital" for future use.

It is the shorts I'm most interested in -- people who can accurately predict which experiments won't work in their proposed timelines are the people who I want evaluating grant applications.
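
A rough sketch of the settlement step I have in mind, with invented researchers, grants, and symmetric payoffs (how "success" is actually judged is the hard, unsolved part, so treat the outcomes as given):

    def settle_cycle(positions, outcomes, capital):
        # positions: researcher -> {grant_id: stake}; stake > 0 is a long
        #            (betting the project delivers on its proposed timeline),
        #            stake < 0 is a short (betting it won't).
        # outcomes:  grant_id -> True if the project met its timeline.
        # capital:   researcher -> imaginary-currency balance, carried forward.
        new_capital = dict(capital)
        for researcher, book in positions.items():
            for grant, stake in book.items():
                won = (stake > 0) == outcomes[grant]
                new_capital[researcher] += abs(stake) if won else -abs(stake)
        return new_capital

    capital = {"r1": 100.0, "r2": 100.0}
    positions = {"r1": {"g1": +20, "g2": -10},   # long g1, short g2
                 "r2": {"g1": -20}}              # short g1
    outcomes = {"g1": True, "g2": False}         # g1 delivered, g2 did not
    print(settle_cycle(positions, outcomes, capital))  # {'r1': 130.0, 'r2': 80.0}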


> Continued funding will require perennial marketing to your peers.

I'm not a fan of this proposal either, but how does this aspect differ from the current system, really? Who do you think reviews grant proposals?

If I had a nickel for every PI who came back from a study section saying "We gave high scores to project X, because I know the PI, Y. He does good stuff." (i.e., the decision was made apparently with little reference to what was in the actual proposal), I could afford at least a better parking spot.

I like your idea. One problem, though. It seems to require some kind of post-hoc evaluation of whether a proposal succeeded or not. In its simplest form, this would reward low-impact, sure-to-succeed types of projects which are already a scourge at least at the NIH. The worst kind of grant IMO is not the one that fails, but the one that, after it succeeds, nothing of value was learned.

I think post-hoc grant evaluation is a great idea and needs to be done much more, I just wanted to point out that "success" is multidimensional: successful at meeting its aims, or successful at improving our state of knowledge? The latter is what we want but very difficult to evaluate even qualitatively.


If you just shorted everything you'd come out pretty well on average.


The way this would work out is that you win very small amounts when correctly shorting, but lose big amounts for incorrectly shorting. Shorting everything would work out to a small overall loss.



One quibble that I didn't see addressed but possibly overlooked: who chooses who is considered a scientist worthy of getting the initial block of $$ in the first place? There will still need to be an application for that status at some point.


I was wondering the same. In https://arxiv.org/pdf/1304.1067.pdf they say "chosen according to some criteria like academic appointment status or recent research productivity".


This is more than a quibble. It's a gaping hole in the idea.


Yes, or put another way, supposing we solve that problem, what do we do about granting said status to new scientists? Is it a one in, one out policy?


This definitely seems like a rich get richer sort of situation and will lead to patronage machines.


Why not have part of each grant placed into a fund for replication studies, or even directly allocated to other independent teams to replicate the study? It might serve some fields rather well.


I agree with the posts saying this would just give more to jerks who game the system. But what if there were just equal distribution? You could form coalitions for more resources, or you could just use the small amount for exploratory research. Get 10 people together for a large grant, 100 for a mega grant. This collaboration would be required if you want larger grants.

I would like to see a small pool of grant money go out like this. Or maybe the proposals that score lower and don't win a grant get a 1/10th allocation, with no requirements attached, to do a small-scale study or act as a bridge.

Imagine how amazing technology could be if we swapped the war budget with the health and science budgets.


Everything people object to in socialism would apply to science funding. It would be like giving everyone a percentage of the GNP and hoping their innate altruism would redistribute the money to worthy recipients based on individual judgments of merit.

Those who think scientists would act differently or that scientists aren't otherwise-ordinary human beings with specific areas of talent, should read the history of Nobel Prizewinner William Shockley (a very public racist). Or the ugly and public priority fight that Jonas Salk and Albert Sabin fought with respect to the Polio vaccine, which resulted in neither being considered for the Nobel Prize.


The challenge in research is to create variance.

New mechanisms that explicitly favor assumption-breaking research will generate variance that a scientific community can sample from to generate progress. The impact this proposal will have, by contrast, is to entrench the current paradigm.

New mechanisms should explicitly favor developing researchers rather than made researchers, in order to fund the research that is relatively most valuable - underfunded research. We need mechanisms that optimize for the largest return on the margin, instead of optimizing for name recognition/popularity, which is already Pareto-distributed.


I don't see how this will not lead to a rich-get-richer phenomenon. Scientists who have highly cited papers are by definition already well established (full professors at tier-1 institutes who are already well funded or might occupy an endowed chair). Do they really need tons of grant money to do further research? Under this scheme, they will be the ones getting the bulk of the grant money.

And currently scientists in academic institutions are already giving more than 50% of their grant money as overhead cost to the institution.


They should require that who the funds get disbursed from and to, and the amounts, be part of the public record. The quality of results, and year-after-year funding to the original researchers (the ones who selected who the disbursement goes to), should be tied to the success of the research. This would massively crowd-source successful grant creation, and make the market into a winners' market.

The record would make things like gaming and collusion clearly visible after the fact.


Wouldn't it be better to make the donation piece more like a secret ballot, i.e. make it hard for anyone to find out whom a given scientist funded? That would make quid pro quo arrangements less likely because one party wouldn't be able to determine if the other stuck to the agreement or cheated.


The fundamental problem isn't that grant funding based on peer-reviewed evaluations of research proposals is a bad idea. It's that:

1. The pie has shrunk a lot because federal funding of research has seen a huge decrease when measured as a fraction of GDP.

2. There's also a lot of political pressure to not "waste" public money on research that doesn't yield useful technology NOW.

3. But at the same time, there's an ever-growing supply of people who love doing science. These people are willing to do pretty much anything to get and maintain an academic research position, because, like it or not, academia is still among the best places to do high-impact research.

So there simply isn't enough money to fund all the good ideas, BUT there's political pressure to fund only sure-shot research, and the oversupply means that scientists are doing whatever it takes to stay productive in their academic positions.

The obvious solution is to increase federal funding for research. Accept that not everything is going to pan out as intended. It's not like every product a company makes is going to be profitable, so why should we expect science to work any differently? And then just wait! Some of this research will in fact succeed and we'll see the next internet happen.


References a Science article titled "With this new system, scientists never have to write a grant application again" ...

> In Bollen’s system, scientists no longer have to apply; instead, they all receive an equal share of the funding budget annually—some €30,000 in the Netherlands, and $100,000 in the United States—but they have to donate a fixed percentage to other scientists whose work they respect and find important. “Our system is not based on committees’ judgments, but on the wisdom of the crowd,” Scheffer told the meeting.

> Bollen and his colleagues have tested their idea in computer simulations. If scientists allocated 50% of their money to colleagues they cite in their papers, research funds would roughly be distributed the way funding agencies currently do, they showed in a paper last year—but at much lower overhead costs.

http://www.sciencemag.org/news/2017/04/new-system-scientists...
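
For what it's worth, a toy version of that redistribution dynamic is easy to write down (this is not Bollen's actual model; the scientists, citation weights, and number of rounds below are all invented):

    def redistribute(funding, give_frac, citation_share, rounds=50):
        # funding: scientist -> current budget (everyone starts equal).
        # give_frac: fraction each scientist must pass on, e.g. 0.5.
        # citation_share: giver -> {recipient: weight summing to 1},
        #                 standing in for "colleagues they cite".
        for _ in range(rounds):
            nxt = {s: amt * (1.0 - give_frac) for s, amt in funding.items()}
            for giver, amt in funding.items():
                for recipient, w in citation_share[giver].items():
                    nxt[recipient] += amt * give_frac * w
            funding = nxt
        return funding

    start = {"a": 100_000, "b": 100_000, "c": 100_000}
    cites = {"a": {"b": 1.0},
             "b": {"a": 0.5, "c": 0.5},
             "c": {"b": 1.0}}
    print(redistribute(start, give_frac=0.5, citation_share=cites))

In this toy, money accumulates with whoever sits at the center of the citation graph, which is both the appeal of the idea and the obvious thing to game.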


"If scientists allocated 50% of their money to colleagues they cite in their papers, research funds would roughly be distributed the way funding agencies currently do"...assuming scientists didn't change their citing practices to exploit the new incentive structure.

Still think it's a pretty interesting idea, but they should probably team up with some game theorists.


My limited experience tends to suggest there is a geographic mafia involved as well. If you are on the west coast and just need money, you are probably better off pitching local industry, because the east coast PIs have an older, more established network that effectively locks up a lot of government grant funding.

I have applied for grants that might as well have been written for my group, very niche, and been denied, multiple times. One of my advisors actually told me "Do you know how to deal with the mafia? You make your own mafia."

I consider an NIH R01 to be equivalent to a degree from Harvard: it's the imprimatur, not the money (or, for Harvard, the education), that is different.


I think they would end up finding 8 or 9 dudes playing beer pong at a chalet all year with the money. They wouldn't actually "find" that, but it would be the outcome internally. And, of course, a bunch of doctored papers with fake results.


Crucial question whose answer I didn't catch while reading the articles: how do you determine who is considered a "scientist"?


> ... how do you determine who is considered a "scientist"?

In the same way we do now, using a system that would overlook people like Einstein, a patent clerk who hadn't finished his Ph.D., who published articles having no supporting empirical evidence. In fact, there was little concrete evidence supporting his ideas until 1919 (a solar eclipse), which would eliminate him from consideration for the "scientist" label in modern times.

Or Alfred Wegener, a mere meteorologist who had some crazy ideas about the continents floating around and who was shouted down by the real scientists during his lifetime (reliable evidence for plate tectonics only appeared long after his death).

Ironically enough, both stories support a foundational principle of science -- that evidence trumps eminence.


> Should we move to a system where every scientist gives grant money away?

Who is this we who has the power to completely change the grant system?


Why so many complex suggestions when we have not shown that any system is better than the lottery approach? Unless you can show that your grant allocation system is better than a random number generator, why should we use anything other than dice?


If (academic) politics weren't bad enough in the basic sciences!

(Hopefully HN readers understand the differences, as well as the similarities, between academic and governmental politics. The distinction seems to be lost on the general public.)


One thing that would help stop collusion and gaming: who they select their grant money to go to should be part of the public record. After-the-fact analysis could quickly reveal gaming or other nefariousness.


This sounds a lot like the Valley angel investor system.


We need to reduce administrative overhead.



