Academics, we need to talk (matt-welsh.blogspot.com)
141 points by ssn on Jan 7, 2016 | 165 comments



Now, I don't think all academic research has to be relevant to industry.

I'm getting kind of sick of this tactic, where this fake concession is made before going on about how academia isn't designed well enough to deliver to industry.

If authors of articles like this didn't care about the efficiency of academia in delivering research to them, what's the big deal if there are a bunch of people somewhere running around in circles? Hey, at least the CS academics occasionally produce something useful to industry, which is more than I can say about other groups running in circles. Sure, it could be about concern over burning through money, or a real desire to improve the state of CS academia. But if that were the case I'd expect these articles to have categorically different discussion and calls to action. Or - at the very least - to address goals of academia other than performing research for industry.

I've never been an academic, and never want to be, but even I gave up reading after the "(I know, crazy, right?)" line. At that point, I knew for 100% sure that the target audience of this article is not academics. Nobody is that much of a condescending prick to the person they're trying to persuade. This is not a "we need to talk" conversation; this is an "I need to talk at you, so I can show off to other people."


I'm getting kind of sick of this tactic, where this fake concession is made before going on about how academia isn't designed well enough to deliver to industry.

This +1, er +1,000,000.

I would firmly state that academia by definition is not supposed to 'deliver to industry', and not even to society. The whole point of academia is to give the very smartest people (whatever that means) the freedom to explore ideas, so that at some point humanity may benefit from their ideas, experiments, and research.

My point is that it could take any number of generations before the current generation of (what seems to the establishment like quack) research turns out to be instrumental to our understanding of the universe. Galileo Galilei (and Copernicus) for instance. Their ideas (and research) did not sit well with the establishment of the Church (which was pretty much the central authority of everything at the time), but their ideas are now central to our understanding of our Solar system.

edit: formatting.


There is an attitude of entitlement here.

Students spend their money and time to learn something and frequently they get worked like a racehorse, and nobody cares whether what they learn is transferable after the academic system no longer needs them.

If you are not going to be relevant to society the only thing I can do is vote for somebody who is going to cut your research budget.


Some would say that we should divide universities up into more separate institutions for Arts, Philosophy, Fine Arts, Natural Sciences, Applied Science, Math, Social Sciences, Business, Law, Medicine (or is that a Science?) and others, but the point of universities is to be places of knowledge, and to help foster cross-pollination between the disciplines. The point of independent research is to be unfettered by government, business, and indeed wider society as a whole.

Government and business and society should not be fettering independent research, just the same as they should not be interfering with law courts. The point of independent research is for the independent researchers to have the freedom to follow their research even though it might not be popular with the local wood pulp mill pumping dioxins into the local river, and that pollution is going to cause cancer. This research may not be popular with many in the community because it will impact their jobs. This research may not be popular with the government and elected officials because it will affect their tax receipts and donations. This research may not be popular with you because it is going to put your family out of jobs, but it is important research so that we can ultimately make informed decisions.

That is the point of research, information, and knowledge--so we can make informed and reasoned decisions.

I just can not believe the attitude and entitlement of some who think that, just because they pay tax, everything that is not obviously useful to themselves is useless.


Honestly, that makes more sense, which is why PRIVATE industries should create schools in order to foster education relevant to them.

Having organized education that is relevant to the real world is an excellent idea. However, reforming schools is not the appropriate approach. We live in a capitalist society; if they don't like what's around, they should compete. I'm confident people are more than willing to pay $20k a year for a guaranteed job. Code boot camps charge that much for inadequate preparation and people are still clamoring to get in there.

Edit: I agree with you, I'm just so passionate right now. I need to relax.


This is also very dangerous.

Having schools that don't teach for the sake of fostering knowledge and critical thinking but rather just train you to serve in a specific industry/company would be the final blow to higher education in the US.


Why? Vocational schools exist and always have. They are not supposed to replace universities, but complement them.


It is the consumer's choice. I personally believe that conventional universities enable creativity and freedom that a private institute would likely avoid. These aspects are paramount to popular culture and forward thinking. However, if I want to get a job out of school, I'd prefer a guarantee.


That's a tremendously short sighted view to take. It's very hard to judge relevance to society over short time frames.


Do you benefit from the ideas of Copernicus and Galileo Galilei?


And others would vote to continue it.

Relevance is a very broad term, with industry a subset.


>Relevance is a very broad term, with industry a subset.

That's very insightful. It's easy to lose sight of things and be reduced to a particular worldview. The Higgs boson doesn't have to have commercialization potential for us to justify spending a few billion and a few decades discovering it. :) It's an end in itself.


That's how I see it. Pure research is necessary and possible.


Cutting the research budget only makes academia more cutthroat and exploitative. We need to fundamentally get rid of the pyramidal academic job structure.


I think you and Matt are talking about different kinds of research. He's specifically talking about incremental research in computer systems. And he's 100% right that most of such work has zero impact, both over the short term and long term. I didn't read this essay as a condemnation of research in general, especially not of what he describes as "visionary, out-of-the-box, push-the-envelope research." It's talking about "research" that is misguided and is not doing anything meaningful to advance knowledge.


This is exactly what your grandparent (and parent to some extent) comments are arguing against.

There aren't 2 different kinds of research, kind A that is never gonna go anywhere and kind B that is visionary. The vast majority of the advancement of knowledge has been what academia has always valued: probably never gonna go anywhere, but might push the envelope by an inch or a millimeter. The whole reason you have to research a thing is you don't know whether it's gonna go anywhere.

Which isn't how we think in industry, not even in industry R&D. We have to know it'll probably go somewhere, or at least have a small chance of going really far. That's the only way to get a profit in expectation.

But work that will probably go nowhere, or that won't get very far even if it does go somewhere, like most of scientific progress? The point of systems like tenure is to foster that, since industry can't.


I don't think you understand the point being made about different kinds of research. You are correct, it is not "kind that will never go anywhere" and "kind that is visionary".

But I do think we can separate "kind that is incremental improvement on existing systems" and "kind that is visionary change of existing systems". This is particularly true in computer systems research. It's common to see a paper where people tweak a small part of an existing system; I consider that incremental research, and it is necessary. But the value of such incremental research is dependent on the assumptions made by the researchers. Often, those assumptions are informed by impressions of what is important to the wider field of computing, and the author is saying those impressions are often misguided.


> I would firmly state that academia by definition is not supposed to 'deliver to industry', and not even to society.

That, and also, academia is several thousand years older than industry.


If I recall correctly, Maxwell was once asked by a minor noble-person what value his research would lend to industry.


"Mostly I'm involved in mobile, wireless, and systems, and in all three of these subfields I see plenty of academic work that tries to solve problems relevant to industry, but often gets it badly wrong."

If you want to work on problems irrelevant to industry, go nuts.

If you want to work on problems relevant to the industry, it helps to double-check that your problems are relevant to the industry.

The desire to work on problems relevant to industry is coming from academia, not the author.


Since he doesn't really give specifics, it's hard to tell, but I wouldn't necessarily read most academic papers that mention applications (especially "potential" applications) as having an actual desire to work on short-term industry-relevant problems. Yes, papers often have a vague gesture towards, "breakthroughs in [graph algorithm X] have potential uses in [networking problem]", but this doesn't necessarily indicate a deep desire to work on problems relevant to industry. Rather, it more often indicates a deep desire to work on graph algorithms, and external pressure to throw in a mention about potential applications in some hot area. So I wouldn't read too much into it.


Indeed! The question is: are we ready to drop the requirement of listing applications everywhere?

The department where I am studying is big on graph theory and other theoretical CS concepts, and I always feel a bit uneasy when we write or read about applications (on a grant proposal or in a conference paper, usually) and yet none of the authors really cares about applications. They usually care because "it was an interesting problem that some people studied in the past and we can do it better than them".

Yet, as far as I know, nobody can really write that sentence unless it is such a fundamental problem that its usefulness or importance goes without saying. So we are now in a situation where everyone (theoretical CS, applied theoretical CS, and practical CS) pretends their work has actual applications, and I feel Matt is calling them out on it.


(I'm the author of the original blog post.) Ad hominem attacks aside, I do think it's a big deal if a bunch of academics are spending time working on the wrong things, if for no better reason than maximizing efficiency: Don't forget that most academic research is funded by the government. Since I also happen to help decide which research proposals Google funds, I also care that academics are well-aligned with the problems we care about. Clearly we also need to invest in long term bets. But there is a big difference between doing long-term, potentially-groundbreaking research and bad industry-focused research.


>it's a big deal if a bunch of academics are spending time working on the wrong things

At the risk of sounding a little prickly with my comment - isn't it a little presumptuous to write off a certain amount of research as "wrong"? There are plenty of examples of research that didn't have a clear purpose leading to breakthroughs - safety glass, microwave ovens, and Teflon all spring to mind.

Also, focusing on efficiency doesn't really align with academia's purpose, which (to my mind) has more to do with fundamental research. This article about Xerox PARC springs to mind:

http://www.fastcodesign.com/3046437/5-steps-to-recreate-xero...


That's a fair point. I'm trying to draw a distinction between "speculative" research (which might pan out long term) and "industry-focused" research (which tries to solve problems we have today). My concern is not with speculative research -- that's great -- but rather flawed industry-focused research: making incorrect assumptions, failing to deal with the general case, not considering real-world constraints.


> but rather flawed industry-focused research: making incorrect assumptions, failing to deal with the general case, not considering real-world constraints.

I find this point rather condescending. What you term "incorrect assumptions" are often necessary simplifications that reveal the core of the problem at hand. The point is to try and understand what makes problems hard and to develop new ways of solving them; not to deliver to companies like Google ready-made solutions that cater to their every operational need. Don't like that? Too bad. Fund your own research.


No, I mean incorrect assumptions. It doesn't matter if we're talking research in industry or academia; doing research based on flawed (not simplifying) assumptions is bad science.


If you want to make a concrete point about bad science then do it. At the moment however you are not making that point. Your argument is that "the assumptions" (whatever those are; you provide no specifics) made by researchers don't match up with industry expectations; a wholly different point for which I have little sympathy.

In my opinion the scientific authors doing the work are in the best position to judge what assumptions are and are not appropriate for their work. If the science is bad we expect the community to pick up on that via peer review or in a subsequent publication.


I was originally going to list examples of bad industry-focused science in the original post, but decided against it, since I didn't want to offend anyone. Your username is "gradstudent", suggesting you have read a few papers. My bet is that you've read papers where you scratch your head and say, "is that really how things work?" I read lots and lots of those papers - usually they don't end up getting published.


>My bet is that you've read papers where you scratch your head and say, "is that really how things work?" I read lots and lots of those papers

That sort of contextualizes your post as a long-winded statement of "My job is hard".


Honestly, if the academics didn't choose their research, private industry wouldn't have to hire R&D. This sounds more like a ploy to save costs by having academics build for the industry on government money than to hire people to further a company's interests.


> making incorrect assumptions

If it was easy to know when you make incorrect assumptions, nobody would.


Well, the whole point of my post is to give some tips on how not to.


I understood that as more of a lament that a lot of work is being duplicated because academics aren't aware of prior work in the industry. But then again, how are you supposed to know that someone has a solution unless they tell the world about it in a publication?

> It drives me insane to see papers that claim that some problem is "unsolved" when most of the industry players have already solved it, but they didn't happen to write an NSDI or SIGCOMM paper about it.


There are more ways to "publish" work than in academic publications. I'm not sure to what extent it is unrealistic to expect academics to be keeping up with what is going on at industry conferences or on github or by talking to people at user groups or wherever, but the reality will always be that academic publications are not the only, or the most up-to-date, source of information.


Agreed, blog posts and open source projects can be just as helpful as journal-published articles. I didn't intend publications to mean only academic journals.


The problem is that many academics are not aware of anything outside the academic journals, and thus work on "unsolved" problems that already have well known solutions in industry.


I didn't think that's what you intended, but in the quote you included, the author is specifically lamenting the lack of knowledge of things published outside academic journals, not the lack of knowledge of things that aren't published anywhere.


Matt is talking about academics who publish papers showing that System X is Y% better than System Z, under completely nonsensical conditions. They aren't advancing the state of the art, they are mapping the terrain of the compost heap, but don't realize it.


Is he? Maybe he should have said that.


I also care that academics are well-aligned with the problems we care about.

That is not the point of academia, and indeed not the point of why academics are offered tenure with their institutions. Please consider Galileo and Copernicus and the Church of the 15th and 16th centuries. Please consider history.

Google as a business may care about research being aligned with its business, or what 'society' or certain segments of society care about; Google really should not care what academia and academics want to work on. That is the point of, for better or worse, academia: independent research by some of our smartest people.

If Google, and industry, care about what gets researched, they should fund it. They then can choose. Please leave academia and academics to be just that, academic.

Without government funding, support, and indeed the wider academic 'ecosystem', my father would never have been able to be a historian of the Scottish Enlightenment, and produce the seminal book on Adam Smith. And if you don't think that the Scottish Enlightenment is important to our modern understanding of the Universe, then I suppose you have to rethink the contributions of these notable academics, scientists, engineers, and philosophers: David Hume, McLaren, Taylor, James Watt, Telford, Napier--off the top of my head.

My point is that my father's book may not itself be a major contribution to a current revolutionary idea, but it will likely be part of some future realisation about economics, since Adam Smith is a major cornerstone of our current understanding of economic thought.


I'm sorry, but I think this perspective is fairly naive and ignores the reality of how applied science and engineering work in universities today. You're talking about 17th century scientists, but the reality is that in the middle of the 20th century there was a tremendous shift to applied sciences -- computer science being one of those fields -- with the goal of producing useful innovations.

The whole point of my blog post is this: Most academics are trying to do work that is relevant to industry, but many of them are going about it the wrong way. Nobody is saying you have to work on industry-relevant research, but if you're going to try, at least do it right.


>Most academics are trying to do work that is relevant to industry,

That is certainly not what most academics think.


Bullshit.


In other words, academic research is a punt in the dark. Much of it will not yield all that much, but some of it will.

It is kind of similar to what YC and VCs do: give a little bit of funding to a large number of businesses, but hope a (very) small number will pay off big.

Academia is a small investment in a large number of ideas, with the hope of big pay-offs from a very small number. The technology behind the Web was invented by Tim Berners-Lee while at CERN in the early nineties. You might not think that funding in high energy and particle physics would be worthwhile, and yet something completely different and revolutionary for all of humanity came out of it.

We just do not know where these revolutionary ideas come from, but what we do know is that they tend to come from some of the very brightest people.


It's not true that they come from the brightest people; I'd say that's very much a bias.

Much like how YC often says that the best startup founders aren't necessarily the smartest people in a room. Occasionally, being the very brightest can even be counter-productive because in the end the people who are less smart and realize it will strive to work harder.

And that's what research really is, really hard laborious work. Not this fantasy of someone bright sitting in a chair thinking up discoveries. That's why research is hard.


> That's not true that they come from the brightest people, I'd say that's very much a bias.

[Citation needed]

There are such things as talent, insight, and experience, but intelligence makes everything easier, from learning new fields and tools to evaluating and formulating new ideas. Ceteris paribus, more intelligence is better.


That's a bit reductionist about something that inherently resists reduction: how do you define a good problem worth solving?

I believe you presented a logical leap: having all the tools one could ever dream of <-> delivering scientific impact.

Here's a citation: "scientific impact is a decelerating function of grant funding size".

http://journals.plos.org/plosone/article?id=10.1371/journal....
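(To unpack "decelerating", with an illustrative functional form of my own rather than the paper's actual fit: if impact grows concavely with funding, say

    I(f) = c\,f^{\alpha}, \qquad 0 < \alpha < 1, \qquad \frac{dI}{df} = c\,\alpha\,f^{\alpha-1},

then the marginal impact dI/df shrinks as the grant size f grows, i.e. each additional dollar buys less expected impact than the one before.)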


I would further add that Google and similar technological enterprises are some of the richest organisations of all time. I'm not sure about the comparison to the Spanish Empire; however, Google and 'Industry' can well afford to pay for their own research. If you have not been to Seville and other major European imperial cities (Rome, Paris, London, St Petersburg, Berlin, etc.), then I recommend that you do a bit of travelling in Europe and witness the legacy of the dangers of what happens when money and power have no checks.


You can edit your posts if you realize you have something to add after hitting submit.

Click "Threads" and you will be presented with a list of your recent comments. Anything that is still editable will have an "edit" link. I am not sure how long this link is available but its definitely longer than an hour.


As someone that spent seven years in industry as an engineer and is now three years into a PhD, I lament the failure of industry to adopt old crypto research that has clear immediate application.

I like your post because I think it is the duty of academics to communicate their results for maximum positive effect and to represent real world impact accurately, but I also have to ask what can industry do to expose itself to excellent research which is sitting unused?


Just like it's not fair for me to conflate all of academia together in my post, it's difficult to lump all of industry together. Places like Google have a pretty good track record of leveraging the latest innovations from academia when it makes sense to do so. Not all companies work this way. You need people who are aware of the research, and willing to put the extra effort in to make it practical.


I think you are focusing on the wrong problem. The real problem of academia is not what people research; the huge elephant in the room is the "publish or die" dogma.

This dogma is one of the worst things happening to research.


I agree that "publish or perish" is not a good model for research. What are alternative models that are potentially viable?


It is an absurd model for evaluating people that solves only one problem: how to make evaluation easy for the administrators and soft on researchers. We should rather find and implement a model that leads to the best research results possible. I think one such model would be this: give tax money only to people whose last results have proven to be beneficial to society (to people outside their interest group, not only colleagues in the same field). Impose a reasonable set of practical goals and supervision (a roadmap) on people who want their work financed, and periodically re-evaluate. Leave basic, speculative, and other research of unclear significance to be financed privately, by patrons, with no tax-based financial support.


I really think you have it backwards in wanting basic and speculative research to lose public financing. How would you propose "proving that results are beneficial to society"? And how is this different from the current funding proposal process? Committees that fund proposals certainly look at the success of prior research by the group.

I don't think anyone would disagree with you when you say "We should rather find and implement a model that leads to best research results possible." Viable alternative models are what I am looking for.


A researcher with practical results can provide evidence of the usefulness of his results to society and discuss his work and results with his money-guarding superiors, who are not researchers themselves. The superiors take some time to confer and decide whether, and how much, he gets.

Basic research cannot be so easily evaluated; often it is decades, sometimes centuries, before a practical use is found, if ever. Basic research, by its definition, has unclear significance for anything except the researcher's curiosity and intellectual fulfillment. The results, in the form of a paper, are often too remote from the needs of society and often so intellectually involved that nobody except a few people competent in the field can judge whether the work is even meaningful, let alone judge its usefulness.

The current evaluation process for basic research is that grant committees don't even try to evaluate the usefulness of the research requestors have done. They use superficial indicators like existing publication score, academic rank and history of the requestors, relation of the work to other high-profile trending topics, association with popular research groups and political considerations.

That's why I think people who want to do basic research should not be funded by their colleagues in field directly from tax money. It's too opaque and ridden with corruption.


Your point of view honestly makes me sad. It's the kind of industry-centric thinking that is killing culture.

Under this premise, investing in the arts is a waste of money, when you could invest in UX studies. Investing in the humanities is a waste of money, when you could invest in data analysis. Hell, why even have academia at all? Wouldn't it be better if education was managed and provided directly by the industry?

This fetish for optimisation only ends up hurting the sense of humanity in all of us. Not everything has to be done with the purpose of being disruptive, or in a frenzy for efficiency. Some people are just goddamn curious about bees and will study honey bees (as useless as it may sound) because understanding nature is a goddamn delight in its own right, even if it doesn't yield the next billion-dollar idea.

That innate delight and curiosity is where science is born. Let's appreciate that. It applies to STEM too. Maybe some academics don't care about your problems or in fact solving any problems at all, and decided to be in academia not for the "potentially-groundbreaking research" but just the thrill of finding new stuff and uncovering the beauty in the universe, don't you think?


I'm not sure if you read the original article or not, but I very clearly point out the benefit of long-range research in addition to things more directly relevant to industry. So I don't agree with the premise of your criticism.


I agree with you completely about "bad industry-focused research". It serves no end. My question is: is it just a reflection of the mediocrity and gaming/dishonesty that are everywhere, including academia?

In this case, people try to take shortcuts towards their goal of increasing the number of publications and grant approvals. There is a reward in the system for this kind of behaviour. You, being in a reviewer/jury position, unfortunately do not have the luxury of a filter.

Is there a way to catch this early, by looking at past trends?

A slight detour: this is one of my rationales for spending time reviewing papers for journals/conferences. On average, only 10 to 20% of the papers I review really stand out or appeal to me, which is correlated with the acceptance rate of a top journal/conference.


I worked in industry research for a long time and almost every intern or new hire came in with laughably wrong assumptions, but it wasn't due to mediocrity, gaming, or dishonesty. They just didn't have anywhere to learn from. Their assumptions were copied from earlier "best paper" winners in the field, whose assumptions were copied from a random previous paper, whose assumptions were probably mostly speculation.


How do you think about pure mathematics research then? Or fundamental physics? Or art/literature/history's place in publicly funded universities? There's so much more to all of this than pushing forward in industry.


Hmm, I wouldn't write off the value of "bad industry-focused" research too quickly.

"Industry focused" doesn't necessarily mean it's looking to push the needle of industry much yet. Often the first paper academics and students do in an area will really be about trying to take an industry problem into the lab, to see if they can prod it and discover what's interesting about it. Not yet about trying to push their research out into industry. Little experiments are how we get to look at new problems for the first time when we're not close enough to buy you a beer. And little papers are how we get to start discussing what we're doing with our peers at other universities. In a sense, where start-ups have Minimum Viable Products, PhD students and academics have Smallest Publishable Activities to start exploring a new area ;)

Sabbaticals and internships are a lovely idea, but it's increasingly hard to get universities to give leave to do that. (Backfilling an academic's teaching can be surprisingly hard to arrange, and then there's the academic's service roles for the university, their PhD, masters, and honours students, etc) So I expect most academics will always have many more research interests than sabbaticals / placements.

Ok, those are the constraints - on to the value:

Most universities do focus fairly heavily on industries in their area. (eg, agriculture in regional areas). But it's not a good idea for all their research to focus on that. If we teach in an area, we want to be at least somewhat research-active in it, as that's how we keep teaching from stagnating. And in lots of countries, students find it too expensive to move far from home, so they study at a uni they're in the catchment for. So unless we want to shrink the future pool of employees to children who grew up in academic-beer-buying-reach of a company in that industry, we really do need some research happening at universities outside beer-buying distance. Which means you'll get a bunch of "bad industry research" papers as they first explore the area.

You might not want a new multi-hop algorithm. But a graduating masters student who's designed their own multi-hop algorithm with different characteristics, and undergrads having had some exposure to the problem...

And sure, the interaction design student might only have developed their app for one mobile platform, not several, and only tested it with a dozen people not a million. But really that's because we figure it's probably sensible to find out if it was even worth writing for one platform, before we spend so much of that government-funded time on writing it to work across several...

There's quite a few things industry, such as Google, could do to help if you're keen to make the match between research and industry closer. One is to disseminate your problems, not just your libraries.

If Google emailed "here's a good small problem and project we'd be interested in" (a different problem to each university), I doubt there is a CS department in the world that would refuse to offer it as a project for their honours students. (Which means the faculty who supervise the students would be looking at the problem with interest too.) The academics would just need to know it's a unique problem for them -- that they're not being asked to compete with an unknown number of other honours students and academics around the world on the same problem (which really would be inefficient as well as unfair on the students).


I don't think it's a fake concession. There is academic value in far-out research that isn't relevant to industry, just like there's value in math research and in metaphysics.

The point the article tries to make is that if you're trying to research something relevant to industry, it's probably not nearly as valuable academically as something more exotic would be, so you'd better make a good effort to make it practically valuable to compensate.

Nobody is that much of a condescending prick to the person they're trying to persuade.

In context the line didn't appear condescending at all as I read it. A little sarcastic, that's all. The author made it clear that he doesn't think academics need to produce something that's actually used by consumers. And the author was/is an academic. If you'll allow me to speculate, it seems that you don't have the highest opinion of academics and so it seems possible that you might further think that in order to be valuable they need to produce a product that is economically valuable. If you read the "I know, crazy, right?" line with that attitude, I can see how it might appear much more condescending than it actually is.

This is not a "we need to talk" conversation this is a "I need to talk at you, so I can show off to other people."

This is a non sequitur. "This article isn't trying to persuade academics, therefore it is trying to promote the author's social standing at academics' expense". The author identified a problem and offered a potential solution, does that mean he also has to persuade? Your statement presents a false dichotomy, that he has to either persuade academics or shame academics to promote himself. I say that he doesn't make an attempt to persuade beyond identifying problem and solution because he doesn't want to bore the reader with rhetoric, nothing more nefarious.


The author is on the program committee for some academic conferences, so he is not just an observer, but a participant. It is also common for academic computer science papers to use industry applications as motivation for their research - and, similar to the author, I often find some of these motivations misguided.

(I work in an industry research lab, write code for a real product, and publish academic computer science papers.)


Look, this blog post contains good advice for academics. You seem to be overreacting quite severely. The truth is that the utopia where anyone can work on interesting problems is long gone or rather never really existed (http://amapress.gen.cam.ac.uk/?p=1537). Academia produces a massive oversupply of academics, and there isn't the capacity to fund them all. Funding bodies in general are talking more and more about 'impact' as a deliverable. The original post is just trying to say don't ruin your chances to get funding with faulty thinking about industry applications.

You can argue, quite rightly perhaps, that focusing on impact/industry applications is a bad idea, and that we will miss out on important epochal discoveries, but that is a problem with society, not the author of the blog post.


Matt Welsh used to be a tenured systems professor at Harvard. He probably knows more about academia than 99.9% of people in industry.


I think he is largely correct in identifying the flaws of 'academic research', but he does not spend enough time discussing the whys.

Academics are in an insanely competitive environment, where what is rewarded is bringing in grants/high impact publications. There are a very select few academics that are so brilliant and have such sterling reputations they can afford to not play this game (like Matt Welsh's former advisor) but most young researchers don't have this luxury.

For example, Peter Higgs, the Nobel prize winner who postulated the existence of his namesake Boson, flat out said that: "I wouldn't be productive enough for today's academic system" (http://www.theguardian.com/science/2013/dec/06/peter-higgs-b...). He spent several years of quiet research without publishing anything to develop his theory - a young professor doing the same now is unthinkable. The most highly successful young scientists I know now are incredibly career driven and optimize ruthlessly for the kind of output that tenure committees are looking for.

Basically, if you want researchers to incorporate best practices (tests, version control, well commented code, etc) and to actually attempt ambitious longterm research programmes, make sure that's what you reward, and remember you cannot just reward success. By definition, something ambitious has a high possibility of failure - if failing means that your career is destroyed, then people won't do it.


This is very important. Writers like Matt might do well to factor in the "academic" constraints much like he encourages them to do for industry. I ran into this in a discussion with Anti on prior art for Rump Kernels. Learned a bit in that discussion but his troubles getting approval stood out:

https://news.ycombinator.com/item?id=10141736

I had heard about BS in academia where only paper output, grants, citations, and so on count. That Anti had to fight to get them to care about his work being implemented speaks volumes about how this works out in practice. Had he only cared about academic success, he could've dropped some light technical details and graphs in the paper and been done with it while the idea collected dust. Many benefited from him fighting the tide on it to produce good papers and an implementation.

I'm not in academic circles but I bet many face the same battle. With the pressure, it might be impossible for them to produce the desired output on their own within their constraints. Or so difficult many give up. Perhaps we should encourage them to have one good line of research they string out over years for quality and ensuring delivery while doing lots of nice papers in between to keep institutions happy. Think that might work?


I agree with your points, in my case it was this hypercompetitiveness at the expense of meaningful contribution that drove me away from an academic career, post-PhD.


Me too, and I've been lucky enough to have incredible research opportunities in finance and insurance, which all go unregistered, and which have today brought me to Holland to eat this cheese plate.


Academic research that unknowingly (or sometimes even knowingly) duplicates secret industry work is much more valuable than this discussion indicates. Sure, it's not valuable _to Google_ for someone to publish things they already know. But everyone else benefits. If people at Google want their research to stop being duplicated, they should publish it.

Of course, if your goal is that Google adopt your new system in their data centers, then you need to know what they already do. But the problem with that model of research is the initial goal, not the way it's currently executed.


I'm not so worried about duplication by academics -- that does not happen often -- but rather about academic research that's just wrong: makes bad assumptions, uses a flawed methodology, fails to address the general case.


If industry could provide better platforms (up to date, not hand me downs or neutered versions of their real product) then I think you might see better forms of this research.

For example, I'd love to improve search relevance, but w/o having access to Google's search engine to build on, it's pretty hard. That's my suggestion. :-)


While I agree in general that opening up opportunities for academic-industry collaboration is good, I don't think it's practical for academics to work on problems at true industry scale. Academics don't have access to the resources, personnel, or funding required to do that kind of work. An academic lab can do many things of relevance to industry -- but not everything.

Google recently open sourced its TensorFlow platform specifically to enable researchers (and others) to build upon and improve it -- trying to avoid the problem with MapReduce (where a bunch of clones came out that were, at least initially, inferior to the original).


It would be really nice if they would go ahead and release the Google version of MapReduce now that they've learned their lesson. It's not too late for everyone to learn from the original, and it's no longer a competitive advantage now that anyone can run a Hadoop job on AWS on demand.


I think I agree with samth - it's not the academia, but the industry who needs to open up. Especially Google has a reputation of being very secretive.


You and samth seem to be missing the point: academic research is often built using methods that will ensure no real-world success while aiming to achieve real-world success. Factoring in real-world constraints, best practices, and existing work-arounds will let academics achieve better results on average. Baseline of practicality goes up.

And again, these are all academic projects that aim at being practical. The point is supported by a number of academics who incorporate real-world information and constraints into their work to produce deliverables that advance the state of the art and are useful. Examples that come to mind are: the Haskell/OCaml/Racket languages, the CompCert C compiler, Microsoft's SLAM for drivers, SWIFT auto-partitioning for web apps, the Sector/Sphere filesystem, the old SASD storage project (principles became EMC), old Beowulf clusters, ABC HW synthesis, the RISC-V work, and so on. So many examples of academics keeping their heads in the real world instead of the clouds, and making a name for themselves with awesome stuff that has immediate and long-lasting benefits. I'm not Matt, but I'm guessing he'd rather see more examples like this than, say, a TCP/IP improvement that breaks compatibility with all existing Tier 1-3 stacks and whose goal is to improve the overall Web experience. Yes, there are people working on those. ;)


I don't think the RISC-V work is a good example. It suffers from some of the problems that mdwelsh is worried about.

It's aimed at a real world problem but their solution is not good.

A couple of days ago, someone asked where the verification infrastructure was on https://news.ycombinator.com/item?id=10831601 . So I took another look around and found it was pretty much unchanged from when I looked last time. There is almost nothing there. It is not up to industry standards, to put it lightly.

It's not just the verification aspect that is weak either. On the design side, they only have docs on the ISA. For SoC work, you are essentially given no docs. Then, in another slap in the face, the alternative is to look for code to read, but the code is in Scala. Basically only helping those who went to Berkeley or something.

It is something that seems relevant but if you were to try using it most engineers would have a pretty hard time.


From what I recall, the RISC-V instruction set was created by looking at existing RISC instructions, industry demands, and so on. The result was a pretty good baseline that was unencumbered by patents or I.P. restrictions. From there, simulators and reference hardware emerged. Unlike many toys, the Rocket CPU was designed and prototyped with a reasonable flow on 45nm and 28nm. Many others followed through with variants for embedded and server applications, with prior MIPS and SPARC work showing security mods will be next.

Them not having every industrial tool available doesn't change the fact that the research, from ISA design to tools developed, was quite practical and with high potential for adoption in industry. An industry that rejects almost everything out of academia if we're talking replacing x86 or ARM. Some support for my hypothesis comes from the fact that all kinds of academics are building on it and major industry players just committed support.

Is it ideal? No. I usually recommend Gaisler's SPARC work, Oracle/Fujitsu/IBM for high-end, Cavium's Octeons for RISC + accelerators, and some others as more ideal. Yet it was a smart start that could easily grow into those, and some components have already been made. It's also progressing faster on that front than anything else.


The flow is not good IMO.

They haven't followed engineering practices which is one of the issues mdwelsh was talking about.

If they've synthesized to 45nm and 28nm, where's all their synthesis stuff - constraints etc.?

They have no back end stuff, very little docs, almost no tests, almost no verification infrastructure.


Hmm. I'm clearly not an ASIC guy, so I appreciate the tip on this. News to me. I'll try to look into it.

Any link you have where people mention these and any other issues?


Maybe I was a bit harsh with the "almost no tests". They have some tests.

Someone named fmarch asked on https://news.ycombinator.com/item?id=10831601 about verification against the ISA model.

It can possibly be done via a torture tester, apparently (https://github.com/ucb-bar/riscv-torture), but taking a quick look I don't think it handles loops, interrupts, floating-point instructions, etc.
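For anyone following along, the core loop of that style of torture testing is easy to sketch: generate random instruction sequences, run them on both a golden ISA model and the design under test, and diff the architectural state afterwards. This is a toy sketch in Python with a made-up three-op ISA, not riscv-torture's actual (Scala) API; the gap being discussed above is in what the generator covers (loops, interrupts, FP) and in driving the real RTL, not in the loop itself.

    import random

    OPS = ["add", "sub", "xor"]  # toy 8-register ISA standing in for an RV subset

    def gold_exec(prog, regs):
        # Reference ("golden") ISA model: the agreed-upon semantics.
        for op, rd, rs1, rs2 in prog:
            a, b = regs[rs1], regs[rs2]
            if op == "add": regs[rd] = (a + b) & 0xFFFFFFFF
            if op == "sub": regs[rd] = (a - b) & 0xFFFFFFFF
            if op == "xor": regs[rd] = a ^ b
        return regs

    def dut_exec(prog, regs):
        # Stand-in for the design under test; a real flow would drive an RTL sim here.
        return gold_exec(prog, list(regs))

    def random_program(n):
        return [(random.choice(OPS), random.randrange(8),
                 random.randrange(8), random.randrange(8)) for _ in range(n)]

    def torture(iterations=1000, length=50):
        for i in range(iterations):
            prog = random_program(length)
            init = [random.randrange(1 << 32) for _ in range(8)]
            if gold_exec(prog, list(init)) != dut_exec(prog, init):
                print("state mismatch on iteration", i)
                return prog  # report (and ideally shrink) the failing sequence
        print("no mismatches found")

    torture()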


There didn't seem to be a lot in there but I don't know Scala. I wish it was scripted in Lua or something with the Scala doing execution and analysis. Make it easier for others to follow.

Doesn't seem nearly as thorough as what I've read in ASIC papers on verification. They did (co-simulation?), equivalence, gate-level testing, all kinds of stuff. Plus, you did it for a living so I take your word there. I do hope they have some other stuff somewhere if they're doing tapeouts at 28nm. Hard to imagine unless they just really trust the synthesis and formal verification tools.

The flow is here:

http://www.cs.berkeley.edu/~yunsup/papers/riscv-esscirc2014....

Are those tools and techniques good enough to get first pass if the Chisel output was good enough to start with? Would it work in normal cases until it hits corner cases or has physical failures?


Interesting paper. It sounds good until you look for the actual work. With a possibly limited amount of testing, you can't be sure of anything. In verification, you can never just trust the tools. With no code coverage numbers, how do I know how thorough the existing tests are? The tests themselves have no docs.

The torture test page said it still needed support for floating point instructions. That kinda says, they did no torture tests of floating point instructions. I wouldn't be happy with that. Same goes for loops. Etc.

You have to think about physical failures as well: the paper mentions various RAMs in the 45 nm processor. You should have BIST for those, and Design-for-Test modules. Otherwise you have no way to test for defects.
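To make the BIST point concrete: a memory BIST engine usually marches a fixed read/write pattern over every RAM so manufacturing defects (stuck-at cells, coupling faults) show up as read mismatches. Here's a rough software model of the standard March C- algorithm in Python, purely as an illustration of what such an engine runs, not anything from their flow:

    def march_c_minus(mem):
        # mem: mutable list of 0/1 cells (one bit column of a RAM).
        # Returns the addresses where a read disagreed with the expected value.
        n, fails = len(mem), []

        def element(addrs, expect, write):
            # One march element: for each cell, read/check, then optionally write.
            for a in addrs:
                if expect is not None and mem[a] != expect:
                    fails.append(a)
                if write is not None:
                    mem[a] = write

        up, down = range(n), range(n - 1, -1, -1)
        element(up,   None, 0)     # (w0)      initialize all cells to 0
        element(up,   0,    1)     # up(r0,w1)
        element(up,   1,    0)     # up(r1,w0)
        element(down, 0,    1)     # down(r0,w1)
        element(down, 1,    0)     # down(r1,w0)
        element(up,   0,    None)  # (r0)      final check
        return fails

    class StuckAt1(list):
        # Fault model: writes to cell 5 always leave it at 1 (stuck-at-1).
        def __setitem__(self, i, v):
            super().__setitem__(i, 1 if i == 5 else v)

    print(march_c_minus(StuckAt1([0] * 16)))  # reports address 5 on several elements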


Yeah, that all sounds familiar from my research. Especially floating point given some famous recalls. Disturbing if it's missing. I'll try to remember to get in contact with them. Overdue on doing that anyway.


I'm glad you like Racket, but it's really not the case that what we need is to spend time in a Google product group to make Racket better.


Nobody said you did. He suggested several possibilities. One was working in industry to understand real-world development, deployment, or support needs. Another suggestion is considering real-world issues. That's main one as the other merely supports it.

An example would be putting Racket to use on industrial scale projects with groups of programmers from different backgrounds. These would discover any pain points of language/tooling plus opportunities for improvement. Doesn't have to be Google: just practical, diverse, and outside Racket's normal sphere.

The reason I used Racket as an example is that they already do some of that at least in their forums. Maybe some commercial sector but I lack data on that. They've evolved to be constantly more useful for both academic and practical stuff through such feedback.

If you doubt that, or think they're purely academic in a bubble, then feel free to share why. You may have data I don't have, but I've seen them adapt based on external feedback and usefulness in specific cases. Not a Racket user or insider, though.


I'm one of the core Racket developers, so I also think we're doing pretty well. :)

But what you're suggesting requires persuading a large group of developers to adopt a new language -- if you have a recipe for doing that, lots of people including me would love to learn it.


I figured you might be. ;)

"But what you're suggesting requires persuading a large group of developers to adopt a new language -- if you have a recipe for doing that, lots of people including me would love to learn it."

What I'm suggesting is a group of people interested in trying something and reporting the results try something and report the results. You don't have to convince anyone of anything as you're responsible for you, not them. :)

All you'd have to do is make sure the tutorials/guides, tooling, core libraries, and distribution are in order. Optionally a little bit of evangelism for awareness. Random person on a forum or HN: "Hey, I'd like to try to write some business code or some service in a new language. What should I use?" Drop a Racket link, a guide, and something on macros & live updates (if Racket has them). I remember loving those when I played with LISP back in the day. I wouldn't expect any more from the Racket team.

Now, I'll stop dropping Racket references in these tangents if you're saying you all sat around in Plato's Cave with no exposure to real programming past its shadows in academic papers, and just came up with everything in Racket on your own. It just seems like there are some feedback loops in there from useful projects that caused improvements that make it more useful in practice. If you say no, I'll concede I'm wrong, given you're part of the core team. Then I'll be mystified at its evolution.


We certainly put effort into documentation, distribution, libraries, tooling, etc, and there are many Racket users who will bring up Racket unprompted. It turns out language adoption is hard, though.

And far be it from me to encourage you to stop mentioning Racket! But I think fewer of the academic projects you mentioned than you think were developed by people based on industry needs. Instead, we Racketeers are all software developers, and we make Racket the language we want to program in. The most significant Racket application at the beginning (and maybe still) is DrRacket, the IDE. Developing that has led to everything from FFI improvements to contract systems, just as an example. I expect the same to be true for many other real working systems developed by academics.


" But I think fewer of the academic projects you mentioned than you think were developed by people based on industry needs."

So, I issue a retraction that's the opposite of my prior claims: Racket's features, libraries, and tooling were all developed by academics or the Racket community without feedback or justification from real-world projects (e.g. web servers) or use in industry. Purely the Racket community working within the Racket community on day-to-day, academic, or leisurely needs.

I'll revisit the others on the list to see which of them might be the same.

"And far be it from me to encourage you to stop mentioning Racket!"

I wouldn't anyway. You all have earned mention with the right mix of attributes in the project. :)

"Instead, we Racketeers are all software developers, and we make Racket the language we want to program in. "

That makes sense. Scratching an itch as the old FOSS motto goes. I did the same thing with a 4GL a long time ago. I understand the motivation. Staying with such a project for many years is an angle I apparently haven't caught up to. Regrettably. ;)

"The most significant Racket application at the beginning (and maybe still) is DrRacket, the IDE. Developing that has led to everything from FFI improvements to contract systems,"

That makes sense. It looks like a very complex program. It would stretch the language into programming-in-the-large and robustness territory by itself.

"I expect the same to be true for many other real working systems developed by academics."

I'll keep it in mind. Thanks for your time and information on the Racket project.


You have read the hair shirt paper, right?


The what?


"Wearing the hair shirt: A retrospective on Haskell" by Simon Peyton Jones. (And it's not actually a paper, but was an invited talk.)

http://research.microsoft.com/en-us/um/people/simonpj/papers...


Third time I've seen his name recently. First, a definitive guide on FP compilers. Another, I think, touring them. Then this. Is this guy supposed to be one of the grandmasters of FP or something? ;)


Yes, actually.


Just read the paper. Has valuable info for academics or developers wanting to take Haskell (or ML) to the next level. Thanks for the link. :)


See my reply above - I'm all for industry opening up where possible, and a lot of stuff gets open sourced these days (hell, didn't Facebook even open source its data center designs?). Opening up technology doesn't necessarily mean academics will focus on the right problems, though.


Google patents a lot of cool stuff, actually. But the ideas tend to not get noticed in academia; when was the last time you saw a patent in a bibliography?


Academia patents tons of stuff. They just put that in patent applications and technology portfolios instead of bibliographies.


The point is that patents don't appear to contribute to the sum total knowledge that academics build on top of. They're by and large intellectual dead ends, if profitable ones.


Oh, I mostly agree. Some organizations will only fund you if they get IP out of it. There's a lot of significant tech in that area. But it's mostly an after-the-fact make-money thing. Or just bullshit altogether.


Of course, there's tons of academic research that does all of those things. And I of course agree that too much academic software is "works on my grad student's machine"--an enormous amount of my time on Racket is spent on real-world works-in-practice issues. But this doesn't seem like it's a particular issue of industry-relevant systems work, just Sturgeon's Law.

Also, failure to address the general case is not so bad--it just means that the next part of the general case has to be addressed by the next researcher.

Finally, I think the real issue is academics who have an idea and cloak it in pseudo-relevance to industry to sell it. A program analysis framework isn't suddenly industry-relevant now that it's applied to JavaScript, and we should just be OK with not chasing the latest industry fad.


Today, in academia, it's considered risky to do research in computer vision, machine learning, or speech processing, for example, because it's likely that you will get "out-Googled". Google probably has an entire team of 20 working on what your one graduate student is doing. They'll have petabytes of real data to test against, hundreds of thousands of computers to run their jobs on, and decades of institutional experience. Your graduate student has a macbook air, six months of experience from an internship at Microsoft, and a BS in computer science. If you're lucky. They're going to lose. They should just go to work at Google.

Over time, fields of study become industrialized. There was a time when doing research in computer vision, machine learning, and speech processing was risky because the field was new, difficult to enter, and the prospects for commercialization were slim. That time has passed. Those 20 people working at Google are the people that helped that time pass. One could argue that the place for this work is now in industry - the motivations are all right and the resources and data are aligned to carry the work forward at a rapid pace.

This happens in other fields. For example, there's some word on the street that DARPA is going to stop funding so much basic research into applied robotics. Industry, they say, has got this covered. You can argue that they're right. The commercial sector is starting to get real thirsty for robots. Amazon talks about automated drone delivery. Everyone talks about self driving cars. The military wants to buy automated planes as a purchase, not as a research project. The time for basic research, it seems, is over.

As far as I can tell, this happened with systems about fifteen years ago, so the academic activity you see in systems is what is left over after all of the researchers that could do things moved into applying their research in industry. You no longer need to have weird hair and be buried in a basement to think about 20 computers talking to each other in parallel - you can go work at any technology company and think about two million computers talking to each other in parallel, and get paid two orders of magnitude more money. So the people doing systems research in academia are the people that cannot take their systems research into industry. If they could get internships, they would, and then they would get jobs. They haven't.


Such nonsense. I know plenty of people doing great systems research that just doesn't align with the goals of current technology companies. Just look at the proceedings of the top systems conferences and there are plenty of good papers and ideas out there.


> Industry is far more collaborative and credit is widely shared.

This couldn't be farther from the truth. Your ideas are generally credited to the company, which in turn credits them to the CEO or some other higher-up. As for collaboration, it's only more collaborative within a given company, and not always even then. Between companies, the environment is outright hostile to collaboration, almost by definition.


Something I've seen at every large tech company I've worked at (which includes the author's company) is that some people do the work on something cool, and the next step is for them to prepare a slide deck so that some big name can give a talk about it at a conference. That doesn't always happen, but it's common.

Depending on the managers and team leads involved, that kind of thing can also happen when promotions come around. At every place I've worked, a common complaint is that the TL for a project got promoted despite not doing much, simply because they were the TL. Of course that's not supposed to happen, but it happens all the time.

The post seems to compare the worst case in academia vs. the best case in industry. You could just as easily flip things around and make industry sound bad.


In case it wasn't clear in the original comment, the big name is usually someone who was only tangentially involved in the project, if at all. Sometimes it's the head of the org the project took place in. They may have approved the budget for the project, but it's rare that the big name did any of the work.

This is like when Tim Bray mentioned that Amazon is great because he hasn't experienced the same problems that have gotten a lot of press lately. Of course he's treated well! He's Tim Bray!

Matt Welsh is exactly the kind of big name that isn't going to lose credit on something. Of course he gets credit! He's Matt Welsh!


It's often not even collaborative within the company, but only within a specific fiefdom within the company.


> I know this sounds like a waste of time because you probably won't get a paper out of it

It's unfortunate that a "lessons learned" paper summarizing a sabbatical in industry doing customer-facing work would not be publishable. Surely it's far more useful to other academics than most papers. It'd definitely be more broadly relevant.

My wife is a professor in a practical field and I'm always sad to hear what counts as a "good" paper or a publishable article. The big journals in these fields drive the notion of what is and isn't legitimate research. That's the point where what constitutes career-advancing "academic output" has to be changed. But I'm not enough of an insider to have any idea of how to go about doing that.


The issue is that when doing a sabbatical or internship at a company, it's often not possible to write a paper -- either because there's no time, or because the company doesn't want to publish the work (which may be confidential). I wouldn't go to a company expecting to be able to publish about the project.


Confidentiality is overrated outside of narrow areas of core tech. A few years ago, Facebook hired a massive fleet of Google experts, replicated a lot of the tech, and open sourced it. But the reason they cause Google competitive trouble is that Google didn't understand the social product and didn't have a culture for building one, not trade-secret tech problems.


But if a company wanted to make sabbaticals or science-focused internships more interesting for people on academic career tracks, it could probably ease its confidentiality policies and design such internships so that said academics can publish, right?

You have mentioned elsewhere that you are looking into how Google can help the academic world focus more on the right industrial questions, and you personally recommend such sabbaticals/internships -- so this seems like a natural step.


It's a good idea but fairly challenging in practice. The amount of information you need to reveal in a scientific publication may make many companies uncomfortable. My view is that academics should not just be focused on getting another paper on their CV -- there is value in having the industry experience even if no papers come out of it.


I'm not in academia per se, but (in Germany) just getting to do my (equivalent of a) Master's thesis at a company (still with a supervisor at the university) was hard enough -- because sometimes they just seem to be searching for people to do work based on their own topic suggestions, and if you want to do something that's a little off the beaten track... Let's just say there were no real arguments, mostly just "we don't like companies that are not university spin-offs, and oh, shock, that company might reuse the thesis to advertise its exploits instead of granting the kudos to the university."

Maybe that's not the general case, but as someone coming from the tech sector to get a degree (and not the other way around), this all sounds too familiar :)


People often misunderstand what it means to get a paper published. Nobody publishes a paper to "disseminate their work." You can do that perfectly well by making a blog post or posting on arXiv. A published paper is a badge recognizing your contribution to the scientific community. If the lessons I learned from my summer exploits at FooCorp do not constitute a significant scientific discovery, then they're not publishable, and justly so.


I'm sad that journal reputation so often trumps the practicality of articles that aren't exactly "breakthroughs" but are nonetheless extremely interesting. One could argue that blog reading should form an important part of being well-read in the literature, but again there's the concern over quality of research and insights that leads us to journals in the first place.


More and more researchers are also blogging. I think we're seeing a move in this direction.


I agree with some of this, but I wonder...

> It drives me insane to see papers that claim that some problem is "unsolved" when most of the industry players have already solved it, but they didn't happen to write an NSDI or SIGCOMM paper about it.

I've seen many examples of industry "solutions" that aren't documented, aren't published, and aren't even validated. There's a place for papers like these. I'm not quite your typical CS researcher (I do applied math and software for medical imaging), so YMMV, but I think this criticism is too harsh.


That's a fair point. The issue is that many of these papers don't seem to acknowledge that industry has (unpublished) solutions, and are somewhat naive as a result.


Another way to look at it: should the open/academic community stop doing crypto research because NSA mathematicians are (secretly) way ahead of them?

"Open" is a different league from proprietary. It doesn't matter that they are behind proprietary. but it matters when they are working on the wrong problems.


It's unfortunate that, in science, if it isn't written down, it doesn't exist.

Having been bitten by the 'but we already know how to do that' comments, I find this particular aspect of industry to be very irritating. There are 3 possibilities:

The unpublished solution is brilliant.

The solution fits for very limited constraints applicable only to that situation.

The solution is a half-assed hack that only looks like it works.

In only one of those situations is the 'naive' comment valid.


Fair. This happens less in my field, but a lot of that could be because, for medical applications, people are far less willing to certify without published validation.


If you're doing industry-relevant research and you're in academia, leave. Your work can be supported by corporate profits because it is, in essence, for corporate profit. Get a job in industry, make more money, and make room for academics who want to do honest-to-god academic work on theory or fundamental research. Or who want to do research relevant to improving society, not improving profit margins.

There aren't that many professor jobs out there. It's unbelievably greedy to be taking one up to do industry's dirty work.

You can always take an afternoon off a semester here and there to adjunct and teach an SE class or give a guest lecture.


Oh, my toosh, no. I don't agree with everything Matt said, but on this you're totally off. I try pretty hard to do research that is (a) potentially industry-relevant (key word: potentially); and that (b) industry won't do, for various reasons. Thus far, it's seemed to work pretty well. Using my favorite example of the week, take some of our work on cuckoo hashing -- it's contributed quite a bit to the literature on how to implement fast cuckoo hashes, contributed a new and fairly intriguing data structure for approximate set membership, and produced a design that I know to be in use at two Very Large Companies(tm).

The companies wouldn't have done this work, at least outside of their research labs, because the solution's theory is too far from the need of any one problem. But the result -- a more memory-efficient hash table design -- turns out to be broadly useful.
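
(For readers who haven't run into the data structure: below is a minimal, textbook-style sketch of the cuckoo-hashing idea. Every key has two candidate buckets, so a lookup probes at most two slots, and an insert that finds both buckets full evicts the current occupant into its alternate bucket. This is purely illustrative toy code, not the memory-efficient design referenced above; the published designs typically pack several entries per bucket and store short fingerprints rather than full keys.)

  # Toy cuckoo hash table: two candidate buckets per key, so lookups probe
  # at most two slots; inserts evict ("cuckoo") the current occupant into
  # its alternate bucket when both candidates are full.
  # Illustrative sketch only -- not the design from any particular paper.

  class CuckooHashTable:
      def __init__(self, capacity=16):
          self.capacity = capacity
          self.slots = [None] * capacity      # each slot is a (key, value) pair or None
          self.max_displacements = 32         # give up and grow after this many evictions

      def _buckets(self, key):
          h = hash(key)
          return h % self.capacity, (h // self.capacity) % self.capacity

      def get(self, key):
          for idx in self._buckets(key):      # at most two probes, ever
              entry = self.slots[idx]
              if entry is not None and entry[0] == key:
                  return entry[1]
          raise KeyError(key)

      def put(self, key, value):
          for idx in self._buckets(key):      # update in place if already present
              if self.slots[idx] is not None and self.slots[idx][0] == key:
                  self.slots[idx] = (key, value)
                  return
          entry = (key, value)
          idx = self._buckets(key)[0]
          for _ in range(self.max_displacements):
              if self.slots[idx] is None:
                  self.slots[idx] = entry
                  return
              # Bucket occupied: evict the tenant and try its other bucket next.
              self.slots[idx], entry = entry, self.slots[idx]
              b1, b2 = self._buckets(entry[0])
              idx = b2 if idx == b1 else b1
          self._grow()                        # too many displacements: rehash into a bigger table
          self.put(*entry)

      def _grow(self):
          old = [e for e in self.slots if e is not None]
          self.capacity *= 2
          self.slots = [None] * self.capacity
          for k, v in old:
              self.put(k, v)

  t = CuckooHashTable()
  for i in range(100):
      t.put("key%d" % i, i)
  assert t.get("key42") == 42

The two-bucket invariant is what makes the structure attractive: lookups are worst-case constant time, and the bucketized variants stay usable at high load factors.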

And yes, I do consider this to be "industry-relevant" research. I'm not going to solve their problems for them -- but there can be great synergies between industry and academia for having broad impact through adoption.

(full disclosure: I'm an academic on sabbatical at Google for the year. It's likely I'm a little biased in my belief that both have value. But this isn't a bias unique to me; systems as a general area is close to industry, and most of my colleagues rotate in and out of industry periodically via sabbaticals or startups.)


I don't agree with this at all. The partnership between industry and academia is long-standing and has proven to be extremely valuable -- much of the Internet came about because of it.


At this point it's not a 'partnership' -- industry has co-opted and taken over academia to a startling and troubling extent. Universities are expected to pump out software engineers trained in industry best practice, not thinkers and theorists, and woe betide any school that doesn't toe the line. Research is pushed towards corporate interests as requested in this article here and people just give up. As public research is defunded corporate money has to fill in, which forces academics to make some noises about "applications" or "industry" when they write their papers in hopes of more funding. In any paper, no matter how theoretical, you can make it sound like it has "applications". If someone published the halting problem today, the first half of the abstract would be "as industry deals with more and more complex programs and terabytes of data that are distributed as services in the cloud, we need to understand if there are limits to what we can compute. We present an argument that there are limits, and describe some practical applications".

Many capitulate altogether, trying to do industry work from their academic position. It is these last people who should get out.

The creation of the internet worked wonderfully. It was invented by academics based on government grants when it was basic research, and then refined and turned into something practical and workable and, most importantly, profitable, by industry. Exactly as it's supposed to be.


What stood out for me:

My PhD advisor never seemed to care particularly about publishing papers; rather, he wanted to move the needle for the field, and he did (multiple times).

Racking up publications is fine, but if you want to have impact on the real world, there's a lot more you can do.


Clickbaity title aside, this is sound advice for academics who wish to be relevant to industry from someone who has experience in both camps.

Other academics, for example those doing "stuff going way beyond where industry is focused today" as the author explicitly states, can safely ignore it.


Re collaboration. I work in a government research lab which prides itself on being a collaborative environment. The result is that we publish 10-author papers in which one author is doing all of the work and the other 9 are cheering from the sidelines. I don't think this is particular to my lab -- the typical scenario is that 90% of the work on any given project is done by 10% of the people. So when people praise their work environment for being collaborative, I'm sceptical. I'd much rather be in a situation where everyone gets the credit they deserve for the work they have actually done.


"Second: don't get hung up on who invents what. Coming from academia, I was trained to fiercely defend my intellectual territory, pissing all over anything that seemed remotely close to my area of interest. Industry is far more collaborative and credit is widely shared."

In my experience, this is only true until there is money to be made. Or more specifically, that industry was more than willing to share credit, but ownership was theirs.


> Coming from academia, I was trained to fiercely defend my intellectual territory, pissing all over anything that seemed remotely close to my area of interest.

Anyone who has ever been through a conference/journal submission process knows this pain. You can usually tell from the comments which of the reviewers is working in your field and wants to shut you out.


I think industry research is mostly driven by constraints: the company's architecture, its business use cases, and how much it is willing to spend $$$ on research that adds value to its products or services. Academia, on the other hand, thinks beyond the box, doing research that gives upcoming industries clues about where the problems might be and how they could be solved, eventually helping industry grow by validating those ideas and letting them be turned into "products" and "services".

Therefore, I don't think academics should stop doing what they do (i.e. wander around) and adopt a laser focus on industry's product-based research.


Matt, I was intrigued by your throwaway "not another multi hop routing protocol" comment. As far as I can tell, the field is very slow-moving. The state of the art, Babel, is at least 5 years old and is an incremental improvement on protocols that are at least 20 years old. Some very promising research was done into DHT-based routing with Scalable Source Routing, but this work is now from more than 10 years ago, and interest seems to have dropped off completely.

Are there a bunch of protocols that I don't know about?

Are you maybe referring also to centralized path finding algorithms? This would explain the comment.


Nobody needs multihop routing protocols. Show me one instance in which they have been useful, despite 20+ years of academic work in the area.


> "Coming from academia, I was trained to fiercely defend my intellectual territory, pissing all over anything that seemed remotely close to my area of interest."

Apparently the author was unable to break that habit.


Program committees and the conferences they serve may be part of the problem, and hence part of the solution as well. Instead of picking the best N papers so as to fill out a conference schedule, pick the good papers and shorten the conference schedule if the number of good papers is <N. And then raise the standards to meet your expectations of reproducibility, code reviews, unit tests, etc. If a big conference like ISCA were to be shortened by one day by omitting the least worthy papers, you'd see much better work arriving the next year.


Who gives a damn if academic research is relevant to industry? Almost anything that could possibly be relevant to industry is highly uninteresting.

Imagine being someone who thinks that Capital could decide what is a good problem to work on...


This is so completely wrong. The most exciting work happening in systems, networking, programming languages, crypto, computer architecture, mobile, and many other subfields of computer science is highly relevant to industry and very interesting academically.


I do pure type theory, semantics, proof theory & intuitionistic mathematics. Very little of this will find a home in industry (at least, not for several decades). Industry has historically been incredibly resistant to 100% of the things I'm interested in, and I don't blame them!

But it's not just the hard & expensive stuff that they are resistant to. Even the "easy" stuff (like adopting a programming language designed by the professionals instead of the amateurs) they won't do.

I build interactive proof assistants. But I'm not pushing their applicability to industry, and I don't expect them to be relevant to industry (for a long time at least). Why? Because it's too expensive. Formal verification in type theory MAKES NO ECONOMIC SENSE; ask anyone who's actually ever done any industrial verification, and you will find out what tools they are using, and it has nothing at all to do with the area of research I'm involved in. This is because there are inherent trade-offs in every technique, and industrial use-cases tend to prefer a certain set of trade-offs, and I prefer a different one.

But it's a fascinating topic, and something that I'm preparing to devote the next several years of my life to. And I can safely say that a meteorite will more likely destroy Manhattan than will any of my computer science research be of widespread relevance to industry.

So, no, I totally disagree with everything you have said.


> I still serve on program committees and review articles for journals and the like.

Judging academia from your experience on program committees is like judging the entertainment industry from watching Britain's Got Talent.



Translation: Universities should continue on the path to becoming the research arm of industry. Academics should alter what they do and how they think, in order to better suit industry. Academic research does not have value or merit of its own, outside of its usefulness to industry. Impact on the world happens only in an industrial context.


Actually reading the article, explicitly none of those things.

Alternatively: if you are doing work that attempts to have industry relevance, you should have some idea of which problems are actually relevant to industry. In particular, just because you think something is an interesting and challenging problem that simply must be affecting industry players does not mean it actually is. It may have been solved already, or you may have made some poor assumptions in conceiving the problem which, if corrected, make the problem disappear entirely (perhaps replaced by a different one that would've been a more valuable research target).

If you're trying to do forward-looking work and invent things that aren't even a twinkle in industry's eye yet, that's absolutely fine too, and the author explicitly calls out that the only important thing there is to recognize when your work isn't likely to be applicable in the near term. Nowhere does Matt state that this makes that sort of research less valuable, and honestly, in many cases academia is the only place it can reasonably happen, given funding incentives.


It's already happened with CS education for undergrads, why not have industry dominate the research arm of CS departments as well? If we don't bend over backwards pandering to industry and completely restructure academic thought to pander to profit-driven corporations then how will our bright grad students get jobs at Google? God forbid if we don't obey every command of the software industry one day someone might say they're a CS major at a party and have someone ask "CS? Why? What are you going to do with that?"


#insert standard weekly hackernews attack academia article response here


I think we should talk about the constraints rather than relevancy.


I know you Silicon Valley people think that farmers are a bunch of hicks and an easy target to be disrupted.

No f-ing way.

Cornell University has many good departments, but when I look at its agriculture and vet schools, they are beyond anybody else.

Ag schools do research which is relevant to the technological and business problems of their industry. They do plenty of work on genetic engineering and chemistry and work closely with the likes of Monsanto. They are doing a lot for big ag. They also do research to help small farmers beat pests without pesticides, produce and market (delicious!) Halal meat, and even help householders save money and have a better lawn. "Organic" and "Alternative" innovations diffuse into the mainstream. Pesticides are expensive to buy and to apply; if there is a cultural tweak that's cheaper, they'll adopt it in a flash. When corn prices got high, Cornell encouraged dairy farmers to plant cabbages and other crops as alternative forage.

They are always beta-testing new crops in our area; Cornell and UNH are finding plant varieties that perform well in the cold Northeast climate, expanding wine production, and commercializing new fruits such as the pawpaw.

Their research is relevant, and it is also communicated directly to the public and to industry. Cornell Agricultural Extension has an office in every county of the state that you can walk up to or call to get questions answered, go to an event, etc. They work with trade publications and local government.

And it's not just New York: they do research on tropical agriculture and run a program that gives anyone in poor countries who needs it access to the agricultural literature.

I would point to that as being a much more real "technology transfer" than the people who are concerned about copyrights and patents.


> I know you Silicon Valley people think that farmers are a bunch of hicks and an easy target to be disrupted. No f-ing way.

Please don't make unsubstantive, divisive generalizations like this in HN comments. They make the threads worse, including your otherwise fine comment.


Honestly, it's a joke. It drives me nuts, though, to see HN people not get that better rural broadband would mean they'd make more money.


Stop. There are no "HN people". There are people who post to HN, but the opinions are for the most part surprisingly varied. You are shooting yourself in the foot by making over-generalized comments like this.


An ag researcher usually needs to go out into the surrounding community to get good data or info or otherwise complete his research. They often get their hands dirty on real farms, so naturally they'll be more rooted in the real problems of the industry. I grew up on a cherry orchard near UC Davis, and some group of random students would show up every other year looking for help.

This is all much different from a software researcher who can just work from his/her lab (or even bedroom) at all times. It's easy to lose sight of the greater industry when you don't need its (direct) help to complete your research.


I'm not based in Silicon Valley, and I went to Cornell for my undergrad degree.


I'm having trouble figuring out how this relates to the article.


Agricultural engineering has a significantly better interrelationship between academia and industry. In other words, ag engineering closes the loop on the R-to-D stage of R&D: developing practical solutions leads to more research questions, forming a tighter virtuous cycle. That makes it a better example than the academia-industry relationship the original article implies for tech.


It's a model for emulation.


Cornell? It's pronounced "colonel" and it's the highest rank in the military.


Just to clarify the downvotes:

http://www.cornell.edu/


It was a joke that nobody found funny.

https://www.youtube.com/watch?v=eSxlfLTZu0k


When you're telling other people what they should do, you're already lost.

I also don't feel academia is obliged to any particular promise of delivery, but that's kind of independent.


That's not really what the article is doing, though...

Instead it's saying, roughly: much of the work coming out of academia attempts to solve problems applicable to industry, but industry is not actually suffering from the problems being solved (either because the underlying assumptions are wrong and industry is plagued by a different problem, or because the problem has already been adequately solved by existing work).


Some quotes from the article:

  "My first piece of advice: do a sabbatical or internship in industry."

  "you have to work on a real product team"

  "hold yourself to a higher standard"

  "keep an open mind"
Kind of painful to reread it, actually.


All of those quotes are conditioned on the premise that you're doing research which purports to be relevant to industry. In that context I agree with Matt.

> Now, I don't think all academic research has to be relevant to industry. In some sense, the best research (albeit the riskiest and often hardest to fund) is stuff going way beyond where industry is focused today. Many academics kind of kid themselves about how forward-thinking their work is, though. Working on biomolecular computation? That's far out. Working on building a faster version of MapReduce? Not so much. I'd argue most academics work on the latter kind of problem -- and that's fine! -- but don't pretend you're immune from industry relevance just because you're in a university.

This paragraph makes this point abundantly clear: there's plenty of research that is focussed on expanding the field and is discovering problems that will exist once industry catches up. Fantastic. It's risky to try and find the future, but someone has to do it, and I think (based on this and his other writing that I've encountered) Matt would agree that time spent in industry is of dubious utility to those folks[0].

For research that is attempting to tackle extant problems encountered by industry, though, the researcher would be well served by ensuring that the problem they're attempting to solve is actually extant and that the assumptions they're making in trying to solve it don't simply sound reasonable but actually reasonably embody assumptions underlying real (aka industry) deployments. Otherwise they may get a lovely paper which solves a problem that nobody actually has or presents a solution that is impractical because it assumes things which are, in real deployments, incorrect.

[0]: On one hand, it might give them a better feel for in which direction a tractable future lies, but by the same token it might prevent them from exploring potentially fruitful avenues by constraining their thinking.


With all due respect, munin's point upstream about commoditized research areas doesn't apply to quantum information yet, Prof. Wehner, but if I were you I'd be worried about Google poaching talent, even from Singapore.



