
As an admittedly biased basic researcher (who is no longer doing basic research as a day job, in part due to constrained funding), the big lesson here is that funding basic research can yield very large returns.

There will always be people at the margins of funding, no matter where those margins are. Sometimes those people will hit it big for reasons that span persistence, cleverness, collaboration, serendipity, luck, and more.

There are some arenas in which widespread funding of small actors works well and others in which decades of focused investment can yield huge breakthroughs.

Overall, though, if you like the outcomes from basic research and want more of them, fund it.




Part of the problem is that a lot of basic-research funding mechanisms have broken down, or we've lost sight of what they were for, or we as a society aren't open enough to, or supportive enough of, diverse research funding mechanisms.

Just to take one example: at one time, someone applied for a tenure-track job at a university, and the idea was that if they passed the tenure hoops, had the approval of their peers, etc., the university was essentially "funding the person". The public (or private benefactors) trusted a university to appoint faculty who would identify people doing worthwhile research. That is the idea behind tenure, at some level: the public entrusts the institution to fund research, accepting both the benefit and the risk attached to it.

Now, though, this idea has broken down: the current paradigm is for the university to find people who will get funding from the federal government to fund their projects. The university is no longer "funding the person"; it's passing that buck on to the federal government and letting the federal government decide. So while we once had a very decentralized "fund the person" model (in the form of individual states and universities deciding whom to fund), that has started to fail, in favor of a very centralized model where we basically rely on the federal government to fund everything.

There just aren't enough options for people wanting to do research; it's too homogeneous and too narrow in terms of incentive structures. It's like anything else: if you set up a system where there's only one way to do it, you're going to incentivize gaming that system.


> the big lesson here is that funding basic research can yield very large returns.

It can also be somewhat wasteful, depending on what gets funded. Research that generates "very large returns" is probably quite rare and special, in a way that makes the whole notion of "basic research" somewhat less than meaningful as a target for funding.


But that's the rub. I think big discoveries are almost by definition unpredictable: if they were predictable, you wouldn't have trouble finding them, and they wouldn't be big. So if you just fund things that are popular, you're kind of stuck, because while some things are popular because they work, others are just rehashing what's already known.

Part of the problem, I think, is that there needs to be some healthy acceptance of risk in research. I'm not sure how you always draw the line between "things that are just bad ideas" and "things that are unusual", but if you only ever fund the sure bet, you're not going to get anywhere.


> So if you just fund things that are popular, you're kind of stuck

The principled approach is to correct for that by looking for things that ought to be popular by current standards, but are nonetheless underfunded. Yes, this is hard - it's literally trying to beat all other grantors at their own game. It's also supposed to be hard. There was no reason to expect that "funding good work in science" would have an easy, painless solution.


I listened to a podcast a while back that discussed why the US shifting towards this approach (after a period of less constrained funding post-WW2) has led to worse returns on research investment.

The basic premise boiled down to this: scientists already want to work on valuable things; they just have different priorities. Understanding unknowns is motivation enough for them. But that makes the impact of their work also unknown, and scary to funders who see that motivation as wasteful.

Making them justify ahead of time why their work will be useful, to satisfy the risk tolerance of financiers, means they have to reduce that unknown before they even start, and the potential impact with it.


"Understanding unknowns" is a perfectly good motivation for a grant - it just means that instead of seeking a grant to do X, you'll be seeking one for what amounts to a feasibility study on X-like stuff; this is not a bad thing for either 'impact' or 'risk tolerance'. Of course, it goes without saying that work where there are fewer "unknowns" to explore will be funded more directly.


But that assumes X is not itself an unknown, which is kind of the point. "X-like" is an easy pill to swallow for people who want to reduce risk, since they already know X.

If you just want to poke a stick at some lesser known Y, it's harder to appeal to those who want to understand potential returns on their investment.

And it seems there was a time when predictability, and even returns, were less of a priority. I can't recall the podcast (maybe Planet Money), but they mentioned Vannevar Bush pushing for the government to act as a patron for the sciences, taking on risk that private investors would avoid: spend up front to find a new X, then let the private sector step in and develop the X-likes.


I do not think it is "hard", I think it is impossible. The current grants are what they are because they are already trying to do this.


The whole concept of basic research is that it is inherently "wasteful": the vast majority of decent projects (as far as anyone can tell before funding) will not yield anything with practical short-term benefits. The projects with large returns can't be recognized as rare and special before they are done. It is ordinary and natural for projects with real scientific ambition, and the potential to yield very large returns, to be risky and uncertain; and historically it has been overwhelmingly common for the major benefits of basic research to be impossible to know or describe before the research was done or funded.

If you want non-wasteful research, funding only "safe" projects with a clear path to expected results, then you are simply committing never to fund any research with the potential to yield very large returns, and to forgo those returns. That is the key point: attempting to eliminate wasteful research is counterproductive and throws out the baby with the bathwater, so to say. It has a lot of parallels with VC funding: with basic research, just as with startups, you can't tell which projects will "strike gold" before the work is done. You can and should do some basic diligence, but if you throw out uncertain projects, you throw out all of the really good ones as well; the projects whose results you can be quite sure won't be bad are the same projects where you can also know beforehand that they definitely won't be groundbreaking.



