This article articulates a whole slew of issues that I'm sure many HNers with experience of graduate school or beyond have observed and/or experienced firsthand.
All I can say is this: jumping into industry and spending my time on engineering and research of my own, work that'd be too risky for academia and that wouldn't have a clear payoff for a big-co R&D lab, is the sexiest, best, most exciting decision I've ever made.
If my current endeavors pan out, I will actually be able to say I've created software (and an associated business, WellPosed Ltd) that makes innovation on our wee planet happen faster. I'd not be able to attack the high-risk, high-impact area I'm working on right now as a grad student or junior faculty, but as my own wee company, I can!
:-)
(anyone who's intrigued by building better shovels for the data mining gold rush, whether as a user or maker, shoot me a line at first name at wellposed dot com, subject: awesome shovels)
> research of my own that'd be too risky for academia
I don't get this. The whole point of university research is that it is too risky (or benefits will be too far in the future) to make it profitable for companies to pursue. I don't see how anything could be too risky for the university, but not too risky for a business.
Edit: I'm not saying universities are perfect. I'm just saying that they still manage to do research that is too risky for industry. For example: finding the Higgs boson. I don't see any examples of industry doing research that is "too risky for the university."
That should be the case, but it is not, primarily for two related reasons: tenure and funding.
Everything you do as university research is aimed at getting publications that move either you or your adviser towards tenure. The problem is that things that don't work are not usually publishable. This leads to 'fluffing up' results (read enough academic papers and you'll find some hilarity there) and avoiding anything that might not pan out.
Just as important is funding: even a small lab with a few computers and one or two grad students needs funding. And in CS that largely means DARPA or a handful of other government agencies. If your particular research interest involves eventually killing someone, you'll do fantastic. There are of course other sources of funding, but they typically have much smaller wallets, especially the further you go on the 'for the good of humanity' scale.
I've seen countless times where grad students are doing something really interesting, but because it's not going to help anyone get tenure and not going to bring in any funding, these students are strongly encouraged to 'get back on track'.
> If your particular research interest involves eventually killing someone, you'll do fantastic. There are of course other sources of funding but they typically have much smaller wallets, especially the further you go on the 'for the good of humanity' scale.
Unfortunately, being in the EU, I don't think changing this part fixes that many of the problems. EU grants are not about killing people, but typically about cooperation, building understanding between nations, reducing violence, integration of immigrants, that kind of thing. Goals I like more, in principle, than DARPA's. But the actual administration is if anything worse: 100-page proposals via hugely bureaucratic processes (NSF's are at least only 15 pages), unwieldy multi-country consortia, periodic mandatory status update meetings in Brussels, the works. If you can sell what you're doing within that framework and work with it, it's good, but it's still very much about being on the right track to impress the purse-string holders.
> But the actual administration is if anything worse: 100-page proposals via hugely bureaucratic processes (NSF's are at least only 15 pages), unwieldy multi-country consortia, periodic mandatory status update meetings in Brussels, the works.
Tell me about it. I spent 7 years in "academia" in the EU, and 4 of those years were spent on an FP7 (Seventh Framework Programme) project. Man, the amount of administrative baggage we had to deal with is amazing. "Libre research" is a myth: between monthly deliverables, the 6-month reviews, and the half-way project review in Brussels, you could not do a lot of research.
And then you have all the administrative controls. I kid you not, I had to log what I did every day in two places: the EU FP7 timesheets (in Excel) and my institute's own software (a terrible Java program). Sure, the upside was that I got to travel a lot (the project had about 7 participant countries).
Now I have returned to industry. I am at a Silicon Valley company as a "simple" software engineer (even though I have a PhD) and I could not be happier. Moreover, after a couple of months in the position I have made several contacts which are bringing new opportunities.
I am happier at my current "fast-paced" job than I was in academia, where I was just trying to publish papers for the sake of it (that is how it felt).
Don't some of the large software and hardware companies also support research at universities? I think that some of the research labs at my university get supported heavily by companies like Google, Microsoft, Intel and Qualcomm. However, I don't really know the specifics.
They definitely do; I didn't mean to imply that DARPA is the only source of funding. At most universities you can look at Large Company X that operates locally and find it contributing to any department that does the kind of work it needs. For example, I believe Microsoft heavily funds the CS department at the University of Washington.
Of course this doesn't get you out of the trap of having to do research in an area of interest to whatever large company is funding you. You can do research that isn't indirectly getting people killed ;) but it usually has to be pretty clearly aligned with the business interests of the companies funding the research.
Heh, I guess being in the Bay Area really helps with getting a really wide range of companies to sponsor research :P. Looking through some of the bigger research labs, it seems most big companies are represented at some point.
I was just curious because my impression is that CS is rather well funded, especially given it has far lower expenses than anybody else in the college of engineering.
> I don't get this. The whole point of university research is that it is too risky (or benefits will be too far in the future) to make it profitable for companies to pursue. I don't see how anything could be too risky for the university, but not too risky for a business.
I get it. I recall specifically posing a promising, if not clear-cut, research topic to my grad school advisor, and his response was "that's high risk".
Academia is more concerned with the volume of publications than with impact and unique ideas. There are tremendous disincentives in the system for innovation.
You can blame citation-counting and h-indexes for a decent amount of that latter part. If administrators who don't know enough about an area to individually judge the quality of someone's record are mainly using citations and h-index as proxies for quality, then the job of a junior researcher becomes to maximize those two numbers. There are various strategies for doing so, but one of the higher-probability ones is to churn out a lot of papers, and also to make them relatively similar to what other people in the field are already doing (so they'll cite them as related work in their next paper).
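To make that incentive concrete, here is a minimal Python sketch of the h-index (the largest h such that you have at least h papers with h or more citations each). The function name and the citation counts are my own, purely illustrative, but they show why many modestly cited papers can out-score a couple of highly cited ones:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Invented numbers, purely illustrative:
print(h_index([10] * 10))    # 10 -- ten papers with 10 citations each
print(h_index([100, 100]))   # 2  -- two papers with 100 citations each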
> Academia is more concerned with the volume of publications rather than impact and unique ideas.
As a generalization, this is false. Certainly there is more emphasis on publication than is warranted, but most of the top places care much more about impact/reputation than about raw # of papers or citations. If you talk to people about how research labs hire newly-minted PhDs, how faculty are hired, or how tenure decisions are made, everything I've heard suggests that, at least for good places, they aren't just looking at the length of your pub list or the # of citations.
"There are tremendous disincentives in the system for innovation"
I feel that's overstating it but even if not, is it worse than industry? I've found academia to be very accepting of different viewpoints and depending on the field, experimentation is encouraged.
In my view, if an advisor in academia calls something 'high-risk' it's within the context of the overall work. For example, in grad school the aim is to make a contribution to the field and get a PhD. Several students I've known have followed 'promising' lines of work (after disregarding advice to the contrary) and ended up with nothing.
That's the point, but the reality of job, training, and funding structures today is that almost no one in academia can do anything that is not an incremental step on the research of someone who has already managed to become politically established.
Right, but that's pretty much true of industry as well. I just don't see any examples of major scientific breakthroughs coming from industry that aren't equally coming from universities.
His essay seemed to say that the cautious incrementalism and turf-buildout that succeed well with the NSF (where most of the funding for academic ML research comes from) turned out to be less exciting than riding upon the Google juggernaut.
It's hard to argue with that. The applicability of ML technology has attracted companies with deep pockets, and really changed the landscape of research.
He claims universities are increasingly judging professors by their ability to get funding. And, due to budget cuts, funding is only going to areas likely to pay off. "Too risky for academia" would be too far off the beaten path to get funding.
I agree with that, though that's still a somewhat different idea of "risk" than companies have. It's perfectly possible to do "risky" research in academia in the sense of something that may not pan out into practical applications in the next 50 years... if a funding body has made it a priority. If the NSF has decided that Foo Research is a priority and will be funded at $100m/yr for the next 5 years, then it is not risky for the professor to research in that area, but it might be risky research in the sense that it may never pan out to be worth that $500m NSF investment. In industry, that kind of research may never have happened.
What's increasingly hard to do in academia is to take risks individually: if you decide something is under-researched and should be a priority, rather than waiting for a funding agency to decide it's a priority, you'll have tough going.
It's more along the lines of an international lab. Many, perhaps most, of the academics working there are still employed by their home institutions and are funded just as if they were working in a lab in their own university.
In fact, the same can be said for some of the academics working in national labs.
National lab, international lab, whatever. Point is that CERN is still more like a large, government-supported lab than a university.
What's happening at CERN is not at all reflective of the university environment of the OP, which emphasizes small collaborations working on, say, million-dollar projects lasting 3-5 years. As we both know, CERN is nothing like this.
I commented because saying that the work at CERN is "university research", as the comment above did, is mistaken. CERN has no relation to the work done by the OP.
Two enormous successes of academia are mapping the human genome and finding the Higgs boson. Both were hugely speculative at the time the projects were undertaken. If there's something comparable coming out of industry, I don't know about it.
The human genome sequencing didn't really get going until they had to start competing with Celera. In the end, the HGP ended up adopting the same techniques to keep up.
So, that probably isn't the best example to use.
The LHC on the other hand is one example of what the original post was worried about - the trend towards more consolidation, towards funding bigger projects (with fewer people), at the expense of many smaller labs.
This is very misleading. Celera used techniques born from academia to sequence the genome quickly. While it did prove that the method was both feasible and preferable, it did not cause the genome to "really get going." The genome was nearly finished already by the time Celera got close to completion.
The public draft was also of much higher quality than Celera's and, as hindsight shows, also born of better scientific technique. Celera secretly sequenced from (very nearly) only one person, rather than a collection.
If Celera had never existed, the human genome would still have been completed in a similar timeframe and shotgun sequencing would still have been adopted. If nothing else, academics are just as competitive as anyone else, so some lab would have loved to show the feasibility of shotgun sequencing, albeit on a different species.
Disclosure: I worked on the public draft and also had access to the Celera draft right around the time that the genome was "completed". Our lab was using and comparing both the public draft and the Celera draft at the time. We were even fusing the two to get the best possible "up to the minute" draft for a specific chromosome.
The Higgs boson also wasn't a super risky undertaking. It was expected, and the project was a multi-billion dollar affair that was likely to come up with something. A somewhat ungenerous view is that it was just glorified taxonomy.
When you think about the sums involved, we could have done a whole lot of smaller, riskier projects, which may or may not have panned out but had a great deal more potential. Which I guess is your point.
It's not like looking for the Higgs is the only thing people have done at the LHC. For instance, there's another big experiment going on investigating quark-gluon plasma, and the OPERA experiment was using CERN's neutrino beam.
The LHC is a big, expensive piece of infrastructure that groups have to share, but there are multiple groups.
The more important question is: could multiple smaller labs have come up with something more valuable (to science or society) than weighing the Higgs boson (the discovery was expected; the mass was an unknown)?
I mean, all we've accomplished is connecting over a billion people and letting you see the person you want to talk to in real time on a device smaller than a Hot Pocket.
Yeah, I guess industry isn't really accomplishing much these days.
And most of that was driven by academic research 10-50 years ago. The fear is that we aren't currently funding the academic research that will drive technology in the next 20-50 years.
Sibling commenter explained it fine, but to elaborate... your example relies on the existence of wireless networks, LCD technology, assorted networking protocols, cryptography (for secure logins, etc), and video compression algorithms. All of those have been advanced in some part by academic research.
With business, if you don't innovate, you die. University research is usually grant-driven. Grants are the result of filling in the right answers and telling a committee what they'd like to hear. There's much more urgency to be constantly innovating in business.
With business, your innovation absolutely has to turn a profit. This means that important research with less potential profit will die out, e.g. malaria and tuberculosis treatments.
It is for this reason that I believe there is still a place for academia in our society.
Regarding finding the Higgs boson: regardless of whether you agree that it's what they're actually doing, the main people actually building the first quantum computer are D-Wave, a private company.
Citation please? First, what do quantum computers have to do with finding the Higgs boson? Second, is D-Wave not standing on the shoulders of decades of academic research in designing their computer?
> spending my time on engineering and research of my own...
In some fields, this is very easy. It's easy to buy a bunch of cluster time on Amazon EC2 or something and run lots of machine learning/other research applications. This is awesome and is one of the great things moving the field forwards.
It is much more difficult in the physical and life sciences, where starting-out costs are extreme. Many great ideas do not become reality because of how expensive starting a company in those fields is and how difficult it is to raise money there. Lots of VCs are cutting funding in these areas because of the comparatively high risk, and pharmaceutical companies are slashing R&D budgets in order to pick up smaller companies and secure rights to compounds, etc. Funding is very difficult in these fields, and the framework of academia is sometimes the only reasonable way to carry out original research.
A dream of mine has been that I'd move into academia and teach Computer Science in the latter half of my career. I know that I enjoy teaching, and I'd love to be able to devote the remainder of my time to researching topics that interest me in my field. This may be impractical for a lot of reasons, but it has been my dream nonetheless.
So, when I read posts like this it leaves me somewhat disheartened. The reason it is so disheartening is that I agree with the author on many of his points.
The commoditization of education in particular hit home for me. It seems obvious that online programs like Coursera and Udacity are the future. Traditionally, if you had a world-class professor you were left with the fact that his impact as a teacher wouldn't scale. But now he can teach a million students, not just a few hundred a semester. Why settle for a B-level professor at some non-top-10 CS program when you can learn from the people who wrote the seminal textbooks on the subject you are learning?
The feeling I'm left with is akin to the feeling I have with the digitization of books. Again, the benefits make the rational part of my brain see that it is the future. But I can't help feel like some part of the experience is lost in the process.
Is there even going to be a place for me in 20 years when I'm looking to teach Computer Science at a state school?
"But I can't help feel like some part of the experience is lost in the process."
Some of the experience is lost in the process. We need not really dance around or wonder about that; it is. But what will be gained will be greater than what is lost, or we'd go back to the old ways, which we won't.
A mass-produced medium-class bed isn't a magnificent hand-carved bed adorned with various and sundry Greek gods and goddesses. But the former scales, and the latter doesn't, and sitting and wringing hands about the experience lost when your bed isn't gloriously hand-carved is missing the point of the mass-produced bed quite badly.
"Is there going to even be a place for me in 20 years when I'm looking to teach Computer Science at a state school?"
A place? Quite likely. But probably not standing in front of a class giving a monologue, because in 20 years the whole "give a monologue to 50 people barely paying attention, hand out assignments to be done one week later, trickle out feedback about performance two weeks after that" model will be considered laughably quaint, and our grandchildren will ask us why on Earth we ever expected anyone to be educated with such a terrible model. It may not be a state school, either.
A moment of silence for some of the old nuances may be called for, but what is coming is a tidal wave, not a couple incremental advances. It won't be entirely 100% positive, but effectively nobody will be seriously advocating going back to the old ways in 20 years.
Rather than await a moment of silence, perhaps now is the time to carefully see what can be preserved. Somewhere in possibility-space is a model in which great minds teach in only the ways that they are well-suited to. Giving the same monologue year after year will likely die, but the teaching time that frees up could be used for some sort of mentorships or discussions or teaching of subtleties.
The first step is to characterize exactly what is lost moving from a traditional model to a Khan-academy-like model. Then to determine how to focus on those things. Then to experiment.
Unfortunately, the people writing checks don't ask "how could it be better for the same money?"; they ask, "how could it be equal for less money?"[1] Yes, we have an opportunity to make education much, much better. With the exception of a few places like Harvard and MIT, most of it will pass right by the US.
[1] I see a small possibility where the strong teachers' unions get behind this as a future-directed way to save teaching jobs, and that could make all the difference. However, unions aren't too smart at predicting and driving massive upheavals. Instead, they will likely fight any change kicking and screaming.
You will know the true revolution is here when "the people writing checks" has moved far closer to "the consumer".
I actually expect about 80% of the true revolutionary reform to essentially come out of the home schooling movement, and trickle up to the universities, who will hold on to the traditional ways the longest. While what the universities are doing now is a critical step, I don't expect them to ever escape "University lectures and homework, but on a computer!" on their own, except that they will eventually have no choice.
This just means you can't lecture. One thing the digital, scalable lecture isn't going to take away is the importance of one-on-one instruction. In fact it may heighten it.
To me this is the future of teaching for most teachers: whereas currently they may spend ~10% of their time one-on-one with students, in a future model I can see that figure climbing to 80-90% as students struggle with their homework after watching their Coursera/Udacity lectures.
Not sure how you'll feel about that but personally I always found individual instruction to be the most challenging and rewarding part of teaching :)
Agreed. I have done a bit of teaching and I would gladly turn the monologue over to someone else and spend my time working individually or in small groups with students on projects or helping them gain a deeper understanding of the material.
I don't remember the lectures I gave, or the students who fell asleep in my first few tutorials... but I do remember the students who asked me about concepts they didn't fully understand and how great it felt when I could see the idea click in their head.
More of that kind of teaching could be a great thing.
I think it's an overstatement to say that online programs are the future of education. Coursera and Udacity are not selling the same thing that universities are selling, at all, and I find it hard to imagine them affecting higher education substantially.
What do people gain from spending four years in college? Many of them learn something, but most of the value is in the degree, which serves as a signal of competence in the workforce and is a requirement for most jobs. Online programs probably won't provide the same kind of signaling in the foreseeable future. Moreover even students who are primarily interested in learning are unlikely to get as much out of online programs as they do out of college; the rewards structure of college drives many students to work much harder than they would on their own, and (without the signaling reward) online programs can't reproduce that motivation.
Lectures are a very small part of the value proposition that universities make, and of course their functionality is already mostly duplicated by textbooks. Indeed, I'm not entirely sure why American universities primarily teach their students with lectures right now (versus books, video, Socratic method, etc.). I'd guess that the answer is some combination of tradition, students having a fundamental psychological preference for live teaching, and society in general preferring to believe that colleges are selling "education" rather than signaling, motivation, and lifestyle. In particular, I don't at all believe that the dominance of live lecture is driven primarily by a lack of good video materials. And availability of good video materials is the only thing that I see Coursera and Udacity really changing.
So: I'm not sure if there will still be state school professorships in 20 years, but I wouldn't bet against it.
(Note: for the record, I'm a fan of Coursera and Udacity. I also enjoy universities as they are today, though I wouldn't be heartbroken if they changed dramatically.)
As a physics PhD with two postdocs behind me, I find this highlights almost perfectly my decision to leave for industry. When I defended, the plan was to become faculty, but my first postdoc made it clear to me that that goal was never going to make me happy or allow for a balanced life.
I would add to the list one other important factor: getting papers accepted at desirable journals feels less like success for me now than it did when I was younger. The 14th edit of a paper just feels like pointless tedium, and it is time not spent making or testing something. And once you get a faculty appointment all the hands-on time vanishes, and you're more of a professional writer/editor. I still need to be working with my hands.
That said, I'm finding it somewhat more difficult than I expected to transition to industry. I find that a lot of companies don't understand that graduate school and postdoc level research are not like college. I am also frequently treated with skepticism (from the CEO level to HR) that I really want to leave academia after developing such a strong academic CV. There still seems to be a lot of misunderstanding about the differences in environment from both sides.
I also found the post-PhD transition to industry a challenge. I highly recommend Jane Chin's 'PhD [alternative] Career Clinic' [1] for getting a handle on employers' perspectives on PhDs, and for ideas on how to avoid falling victim to some of the common stereotypes.
I was talking to an HR guy from a mid-tier web company who said they were pretty averse to hiring PhDs without proven industry experience. It sounds like they were burned a couple of times by people who couldn't make the transition.
Neither. I'm an experimentalist. It appears to me at this point that if I were computational or a simulationist, the transition would be a bit easier.
Some of my difficulty comes from the fact that I'm limiting my search to the greater NYC area. If finance was something I was willing to do, there would be more opportunities for my background.
Well, I'm based in NYC; let's grab coffee some time later this week or next? I'd be interested in hearing what you're looking for, etc. (and who knows, maybe someone I know is looking for you and doesn't know it!)
Point being: shoot me an email! (Also include some random phrase in the email and in a reply to this message so I know it's you!)
I tried to rationalize my own jump out of academia a while ago, which I made after passing my mid-term tenure review (I'm currently with Mozilla, happily pushing the boundaries of the web).
In doing so I came up with a (super-scientific!) "professional satisfaction formula":
The claims under "Mass Production of Education" are a little misguided:
1. Online education is not "mass produced" in the way he claims; the cost of each video lecture that gets downloaded and watched is incredibly small. Popping up tutorials on YouTube with ads is profitable and sustainable, and that's really all you need for someone who can dedicate themselves to learning more about a topic.
2. I have no idea where he pulls his "winners win" suspicion.
> why wouldn’t every student choose a Stanford or MIT education over, say, UNM?
If we had an open model of education, people would have the freedom to spend a minimal amount of time looking through a plethora of lectures and figuring out which they like best. In the end, you'll hear the best lecturers echoed by the students who watched them. I'm making many assumptions here, and I don't expect you to forgive me for making them. But we can't overlook the assumptions that the author is making either, and there are a lot of them.
3. Supposedly, online education kills the personal connection: an argument that has been made many times. Maybe I should point to a popular counter-example: Khan Academy. A very small group of people who manage to supplement the education of an incredible number of people who want the extra help. They do it for free. They are happy with the results. And the people they do it for are very happy with the results. The argument he's making sounds like there's something to be yearned for, but I don't buy it.
Were it not for this section and his rant on the "Funding Climate" (I really don't want to argue about politics here), I would not have thought this to be the rambling of an angry old man. He may have some valid points, but it's hard not to see this as a giant middle finger to his previous employer as well as venting now that he's at his new job.
All three of his points seem reasonable to me. If people can take online courses in place of traditional university courses, it's no great leap of imagination that a handful of professors will teach many thousands of students. That's not necessarily a bad thing for the students, but many universities would be left with no teaching role - which could be a reason not to stay in academia.
The personal connection in education is more complicated. It is certainly important, and I don't think the Khan academy is a good counterexample (they apparently have plans for personal mentoring, but the focus seems to be the videos). I could imagine a model where an online course by a distinguished professor is combined with face time with a local mentor, but it would still be a narrower experience than going to a bricks-and-mortar university.
None of that is to say that online education is a bad idea - I think the advantages outweigh the downsides, and it looks like the author does too. But we shouldn't ignore the downsides, and the time to think about mitigating them is now.
I've never been in industry, but to me there are similarities between running a lab and running a business.
First you have to convince someone (a funding agency, an investor) that your idea is interesting and will work.
Then you have to manage yourself, other people and other resources to work your plan.
Then you have to show off your results.
The real world is not strictly a meritocracy, not even a good approximation of one. What people (reviewers, customers, the market) like is often not very tightly coupled to the technical merits of the work.
Based on your past success funding agencies (angel investors) will be more or less willing to fund your next venture.
In the end, industry involves a whole lot more money and academia gives you a different kind of satisfaction, but in both I think it is what you make of it: how you manage your time, how much stress you take on, and what kind of work-life balance you strike.
Okay, so there are issues with academia. How do those get solved?
They don't. As far as I can tell, all the problems, bar those that stem directly from funding, have always been there. That's the nature of the institution.
What changed, for CS at least, is that Google provides much of what is attractive about academia, but has all the funding required to make the other roadblocks go away. Google is not like Xerox PARC or MERL or Microsoft Research or other industry research labs. It's a company that's built upon being a research lab. I can believe the heydays of Sun and SGI were probably similar.
The easiest way I describe working at Google to my grad friends is "It's a giant grad lab, except grads are also the ones running it, and they're billionaires." Academia can't compete with that, and it never did. The open question is not how to fix academia (as it never will be), but whether the Google model is sustainable. For as long as the Google model does exist, you will always see a net loss of professors to industry, rather than the other way around.
I dunno about the "never did" part. Google is doing some interesting stuff, but I don't think they're yet up to the peak of academia, like the amount of individual researcher freedom combined with innovation going on in the heyday of the MIT AI Lab. Even grad students were doing totally independent never-been-done research projects! Mere staff members like Richard Stallman had influential active side projects, too.
Google does seem like a great place if you're a senior enough researcher, though. People like Peter Norvig, Ken Thompson, and likely the author of this linked article, seem to get basically 100% freedom to work on whatever they want, with minimal management or job requirements, which is pretty much the ideal position to be in as a researcher. I suspect not all Google employees get Ken-Thompson-level freedom from having a boss, though.
Hehe, this is where it gets interesting. There's a huge number of small to mid-sized companies that give you huge creative latitude for projects. Perhaps the most famous one is Valve (read their handbook, it's amazing!).
I'm actually hoping to replicate both the good bits of academe and the modalities in the aforementioned handbook as I start to pull people into my org, Wellposed. (Because what's more exciting than working with fun, nice, interesting, intelligent folks who do amazing work, plus being paid insanely well?)
It's an interesting model that I do hope catches on. My impression is that it's not currently widespread, but maybe I'm missing all the interesting action. Afaict, in the game industry (which I'm pretty familiar with, being an AI researcher in academia focusing on game design / game mechanics), the more common model is a very highly managed EA-style one. And among smaller firms, creative freedom in design is more common, but technical research isn't that common, because they don't have the resources/budget/interest. The only smallish game company I've seen produce actual published research is LittleTextPeople (some years ago, there was also Zoesis, but it's now defunct).
I'm not sure it's more funding, as much as the way it's doled out. The idea with the way funding has shifted was indeed (in part) to make academia more "real-world" and competitive. Instead of a university getting a big block grant from the NSF, or researchers getting unrestricted funding directly from their university's budget, as used to be common, individual researchers apply for competitive 3-year grants to fund an individual project. In addition, researchers are increasingly expected to fund their students and student expenses (e.g. travel), whereas a larger percentage of that used to be funded by universities. Some of that (particularly at state universities) is outright decreases in funding for research, but some of it is just a change in how it's administered.
The idea, which was plausible, is that projects can then be judged on their merits, instead of one big slush fund that who knows what comes out of. But the unintended, if not unforeseeable side effect is that it adds a huge amount of extra overhead, and incentives to target only "fundable", aka "sellable" research. With NSF funding rates for projects currently running at 5-10% of proposals, and typical large research universities expecting you to have 1-3 of these 3-year grants going at any given time, you need to be submitting 10+ grant proposals a year! And ideally also working your networks to see if you can attract some corporate funding. That's a huge amount of overhead, and it also sucks a lot of the appeal from academia, since rather than the university setting giving you freedom, you're in some sense closer to an independent firm that has to bring in its own funding, constantly chasing the next round of financing lest your lab implode and students go hungry.
There are likely people who will thrive in that environment, but I think it's increasingly going to be people who are skilled at research management and sales. The #1 job is attracting external financing, and the #2 job is heading up a successful mini enterprise with that financing, ensuring the lab is operating well. This is becoming pretty close to explicit. One university I've been at now sends around an internal newsletter ranking faculty by number of dollars brought in so far this year! If that's what you're going to be judged on, why even be in academia?
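As a rough back-of-envelope check on the proposal arithmetic above (using only the ranges quoted in that comment, not real NSF statistics, and a function name of my own invention), here is a small Python sketch of how many submissions per year that workload implies:

```python
def proposals_per_year(concurrent_grants, grant_years, funding_rate):
    """Expected proposals submitted per year to sustain a given number of
    concurrent grants of a given length, at a given per-proposal success rate."""
    new_awards_needed_per_year = concurrent_grants / grant_years
    return new_awards_needed_per_year / funding_rate

# Three concurrent 3-year grants, i.e. roughly one new award needed per year:
print(proposals_per_year(3, 3, 0.10))  # 10.0 proposals/year at a 10% rate
print(proposals_per_year(3, 3, 0.05))  # 20.0 proposals/year at a 5% rate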
When I worked in academic research (NB a long time ago) I worked on pretty large EU funded projects (30 to 40 people, 7 or 8 organisations in multiple countries) and they were a nightmare due to the amount of "management" that comes with projects spending millions or tens of millions.
My feeling was that taking the money (even less money as you wouldn't have the manager/bean-counter overhead) and splitting it amongst a large number of small projects would have been far more productive in terms of actual research output.
I just posted this in reply to another person, but I feel you will understand it better.
I was just part of an FP7 EU project in which 11 organisations from 7 different countries participated... and gosh, the admin burden of that was horrible. I got to be a work-package leader, and unfortunately that means you can't really do a lot of R&D. The amount of forms you have to fill in, all the Brussels reporting, is amazing. Added to that, I think the whole "publish or perish" attitude of academia just is not for me. So what if I only publish one paper from the project? Yeah, everybody thinks I am a loser and that my participation was not successful (nobody says that, but...). It feels as if research is done just with the objective of publishing papers... not only that, but colleagues actually say it: "but you know, we have to see how we are going to translate this research into publications".
On the other hand, I also worked on a smaller 3-year EPSRC project with 3 universities and 3 industry partners. It "felt" better, although the 3 industry partners did not really care (each meeting they sent someone new because the previous person was either out or doing something else, so the new guy didn't know anything about the project and was just there to fill in). In this project, at least, each academic partner was doing whatever they wanted and just presented their work, and at the end of the project, something "practical" came out of it.
Just wanted to say good read. Props for being from UNM. I am an Albuquerque resident, and my father taught calculus at UNM. Nice to see a fellow New Mexican on Hacker News. Sometimes I feel very disconnected from the heartbeat of the technology scene being out here in the desert :\