
(I'm the author of the original blog post.) Ad hominem attacks aside, I do think it's a big deal if a bunch of academics are spending time working on the wrong things, if for no better reason than maximizing efficiency: Don't forget that most academic research is funded by the government. Since I also happen to help decide which research proposals Google funds, I also care that academics are well-aligned with the problems we care about. Clearly we also need to invest in long term bets. But there is a big difference between doing long-term, potentially-groundbreaking research and bad industry-focused research.



>it's a big deal if a bunch of academics are spending time working on the wrong things

At the risk of sounding a little prickly with my comment - isn't it a little presumptuous to write off a certain amount of research as "wrong"? There are plenty of examples of research that didn't have a clear purpose leading to breakthroughs - safety glass, microwave ovens, and Teflon all spring to mind.

Also, focusing on efficiency doesn't really align with academia's purpose, which (to my mind) has more to do with fundamental research. This article about Xerox PARC springs to mind:

http://www.fastcodesign.com/3046437/5-steps-to-recreate-xero...


That's a fair point. I'm trying to draw a distinction between "speculative" research (which might pan out long term) and "industry-focused" research (which tries to solve problems we have today). My concern is not with speculative research -- that's great -- but rather flawed industry-focused research: making incorrect assumptions, failing to deal with the general case, not considering real-world constraints.


> but rather flawed industry-focused research: making incorrect assumptions, failing to deal with the general case, not considering real-world constraints.

I find this point rather condescending. What you term "incorrect assumptions" are often necessary simplifications that reveal the core of the problem at hand. The point is to try and understand what makes problems hard and to develop new ways of solving them; not to deliver to companies like Google ready-made solutions that cater to their every operational need. Don't like that? Too bad. Fund your own research.


No, I mean incorrect assumptions. It doesn't matter if we're talking research in industry or academia; doing research based on flawed (not simplifying) assumptions is bad science.


If you want to make a concrete point about bad science, then do it. At the moment, however, you are not making that point. Your argument is that "the assumptions" (whatever those are; you provide no specifics) made by researchers don't match up with industry expectations; a wholly different point, for which I have little sympathy.

In my opinion the scientific authors doing the work are in the best position to judge what assumptions are and are not appropriate for their work. If the science is bad we expect the community to pick up on that via peer review or in a subsequent publication.


I was originally going to list examples of bad industry-focused science in the original post, but decided against it, since I didn't want to offend anyone. Your username is "gradstudent", suggesting you have read a few papers. My bet is that you've read papers where you scratch your head and say, "is that really how things work?" I read lots and lots of those papers - usually they don't end up getting published.


>My bet is that you've read papers where you scratch your head and say, "is that really how things work?" I read lots and lots of those papers

That sort of contextualizes your post as a long-winded statement of "My job is hard".


Honestly, if academics didn't choose their own research, private industry wouldn't have to hire R&D. This sounds more like a ploy to save costs by having academics build for industry on government money than to hire people to further a company's interests.


> making incorrect assumptions

If it were easy to know when you're making incorrect assumptions, nobody would make them.


Well, the whole point of my post is to give some tips on how not to.


I understood that as more of a lament that a lot of work is being duplicated because academics aren't aware of prior work in the industry. But then again, how are you supposed to know that someone has a solution unless they tell the world about it in a publication?

> It drives me insane to see papers that claim that some problem is "unsolved" when most of the industry players have already solved it, but they didn't happen to write an NSDI or SIGCOMM paper about it.


There are more ways to "publish" work than in academic publications. I'm not sure to what extent it is unrealistic to expect academics to keep up with what is going on at industry conferences, or on GitHub, or by talking to people at user groups, or wherever, but the reality will always be that academic publications are not the only, or the most up-to-date, source of information.


Agreed, blog posts and open source projects can be just as helpful as journal-published articles. I didn't intend publications to mean only academic journals.


The problem is that many academics are not aware of anything outside the academic journals, and thus work on "unsolved" problems that already have well known solutions in industry.


I didn't think that's what you intended, but in the quote you included, the author is specifically lamenting the lack of knowledge of things published outside academic journals, not the lack of knowledge of things that aren't published anywhere.


Matt is talking about academics who publish papers showing that System X is Y% better than System Z, under completely nonsensical conditions. They aren't advancing the state of the art, they are mapping the terrain of the compost heap, but don't realize it.


Is he? Maybe he should have said that.


I also care that academics are well-aligned with the problems we care about.

That is not the point of academia, and indeed not why academics are offered tenure at their institutions. Please consider Galileo and Copernicus and the Church in the 16th and 17th centuries. Please consider history.

Google as a business may care about research being aligned with its business, or what 'society' or certain segments of society care about; Google really should not care what academia and academics want to work on. That is the point of, for better or worse, academia: independent research by some of our smartest people.

If Google, and industry, care about what gets research, they should fund it. They then can choose. Please leave academia and academics to be just that, academic.

Without government funding, support, and indeed the wider academic 'ecosystem', my father would never have been able to be a historian of the Scottish Enlightenment and produce the seminal book on Adam Smith. And if you don't think that the Scottish Enlightenment is important to our modern understanding of the Universe, then I suppose you have to rethink the contributions of these notable academics, scientists, engineers, and philosophers: David Hume, Maclaurin, Taylor, James Watt, Telford, Napier--off the top of my head.

My point is that my father's book may not itself be a major contribution to a current revolutionary idea, but it will likely be part of some future realisation about economics, since Adam Smith is a major cornerstone of our current understanding of economic thought.


I'm sorry, but I think this perspective is fairly naive and ignores the reality of how applied science and engineering work in universities today. You're talking about 17th century scientists, but the reality is that in the middle of the 20th century there was a tremendous shift to applied sciences -- computer science being one of those fields -- with the goal of producing useful innovations.

The whole point of my blog post is this: Most academics are trying to do work that is relevant to industry, but many of them are going about it the wrong way. Nobody is saying you have to work on industry-relevant research, but if you're going to try, at least do it right.


>Most academics are trying to do work that is relevant to industry,

That is certainly not what most academics think.


Bullshit.


In other words, academic research is a punt in the dark. Much of it will not yield all that much, but some of it will.

It is kind of similar to what YC and VCs do: give a little bit of funding to a large number of businesses, in the hope that a (very) small number will pay off big.

Academia is a small investment in a large number of ideas, with the hope of big pay-offs from a very small number. The technology behind the Web was invented by Tim Berners-Lee while at CERN in the early nineties. You might not think that funding high-energy and particle physics would be worthwhile, and yet something completely different and revolutionary for all of humanity came out of it.

We just do not know where these revolutionary ideas come from, but what we do know is that they tend to come from some of the very brightest people.


It's not true that they come from the brightest people; I'd say that's very much a bias.

Much like how YC often says that the best startup founders aren't necessarily the smartest people in the room. Occasionally, being the very brightest can even be counter-productive, because in the end the people who are less smart and realize it will strive to work harder.

And that's what research really is: really hard, laborious work. Not this fantasy of someone bright sitting in a chair thinking up discoveries. That's why research is hard.


> It's not true that they come from the brightest people, I'd say that's very much a bias.

[Citation needed]

There are such things as talent, insight, and experience, but intelligence makes everything easier, from learning new fields and tools to evaluating and formulating new ideas. Ceteris paribus, more intelligence is better.


That's a bit reductionist about something that is inherently non-reductionist: how do you define a good problem worth solving?

I believe you're making a logical leap from having all the tools one could ever dream of to delivering scientific impact.

Here's a citation: "scientific impact is a decelerating function of grant funding size".

http://journals.plos.org/plosone/article?id=10.1371/journal....


I would further add that Google, and similar technological enterprises, are some of the richest organisations of all time. I'm not sure about the comparison to the Spanish Empire; however, Google and 'industry' can well afford to pay for their own research. If you have not been to Seville and other major European imperial cities (Rome, Paris, London, St Petersburg, Berlin, etc), then I recommend that you do a bit of travelling in Europe and witness the legacy of the dangers of what happens when money and power have no checks.


You can edit your posts if you realize you have something to add after hitting submit.

Click "Threads" and you will be presented with a list of your recent comments. Anything that is still editable will have an "edit" link. I am not sure how long this link is available but its definitely longer than an hour.


As someone who spent seven years in industry as an engineer and is now three years into a PhD, I lament the failure of industry to adopt old crypto research that has clear immediate application.

I like your post because I think it is the duty of academics to communicate their results for maximum positive effect and to represent real-world impact accurately, but I also have to ask: what can industry do to expose itself to excellent research that is sitting unused?


Just like it's not fair for me to conflate all of academia together in my post, it's difficult to lump all of industry together. Places like Google have a pretty good track record of leveraging the latest innovations from academia when it makes sense to do so. Not all companies work this way. You need people who are aware of the research, and willing to put the extra effort in to make it practical.


I think you are focusing on the wrong problem. The real problem in academia is not what people research; the huge elephant in the room is the "publish or perish" dogma.

This dogma is one of the worst things happening to research.


I agree that "publish or perish" is not a good model for research. What are alternative models that are potentially viable?


It is an absurd model for evaluating people, and it solves only one problem: how to make evaluation easy for administrators and soft on researchers. We should instead find and implement a model that leads to the best research results possible. I think one such model would be this: give tax money only to people whose last work has proven to be beneficial to society (to people outside their interest group, not only colleagues in the same field). Impose a reasonable set of practical goals and supervision (a roadmap) on people who want their work financed, and periodically re-evaluate. Leave basic, speculative, and other research of unclear significance to be financed privately, by patrons, with no tax-based financial support.


I really think you have it backwards in proposing that basic and speculative research should lose public financing. How would you propose "proving that results are beneficial to society"? And how is this different from the current funding proposal process? Committees that fund proposals certainly look at the success of the group's prior research.

I don't think anyone would disagree with you when you say "We should rather find and implement a model that leads to best research results possible." Viable alternative models are what I am looking for.


A researcher with practical results can provide evidence of the usefulness of those results to society and discuss his work and results with his money-guarding superiors, who are not researchers themselves. The superiors take some time to confer and decide whether, and how much, he gets.

Basic research cannot be so easily evaluated; often it is decades, sometimes centuries, before a practical use is found, if ever. Basic research, by its definition, has unclear significance for anything except the researcher's curiosity and intellectual fulfillment. The results, in the form of a paper, are often too remote from the needs of society, and often so intellectually involved that nobody except a few people competent in the field can judge whether the work is even meaningful, far from judging its usefulness.

The current evaluation process for basic research is that grant committees don't even try to evaluate the usefulness of the research the requestors have done. They use superficial indicators like the requestors' existing publication score, academic rank, and history; the relation of the work to other high-profile trending topics; association with popular research groups; and political considerations.

That's why I think people who want to do basic research should not be funded directly from tax money by their colleagues in the field. It's too opaque and ridden with corruption.


Your point of view honestly makes me sad. It's the kind of industry-centric thinking that is killing culture.

Under this premise, investing in the arts is a waste of money when you could invest in UX studies. Investing in the humanities is a waste of money when you could invest in data analysis. Hell, why even have academia at all? Wouldn't it be better if education were managed and provided directly by industry?

This fetish for optimisation only ends up hurting the sense of humanity in all of us. Not everything has to be done with the purpose of being disruptive, or in a frenzy for efficiency. Some people are just goddamn curious about bees and will study honey bees (as useless as it may sound) because understanding nature is a goddamn delight in its own right, even if it doesn't yield the next billion-dollar idea.

That innate delight and curiosity is where science is born. Let's appreciate that. It applies to STEM too. Maybe some academics don't care about your problems, or in fact about solving any problems at all, and decided to be in academia not for the "potentially-groundbreaking research" but just for the thrill of finding new stuff and uncovering the beauty in the universe, don't you think?


I'm not sure if you read the original article or not, but I very clearly point out the benefit of long-range research in addition to things more directly relevant to industry. So I don't agree with the premise of your criticism.


I agree with you completely about "bad industry-focused research". It serves no end. My question is: is it just a reflection of mediocrity and gaming/dishonesty being everywhere, including academia?

In this case, people try to take shortcuts towards their goal of increasing their number of publications and grant approvals. There is a reward in the system for this kind of behaviour. You, being in a reviewer/jury position, unfortunately do not have the luxury of a filter.

Is there a way to catch this early, by looking at past trends?

A slight detour: this is one of my rationales for spending time reviewing papers for journals/conferences. On average, only 10 to 20% of the papers I review really stand out or appeal to me, which is correlated with the acceptance rate of a top journal/conference.


I worked in industry research for a long time and almost every intern or new hire came in with laughably wrong assumptions, but it wasn't due to mediocrity, gaming, or dishonesty. They just didn't have anywhere to learn from. Their assumptions were copied from earlier "best paper" winners in the field, whose assumptions were copied from a random previous paper, whose assumptions were probably mostly speculation.


How do you think about pure mathematics research then? Or fundamental physics? Or art/literature/history's place in publicly funded universities? There's so much more to all of this than pushing forward in industry.


Hmm, I wouldn't write off the value of "bad industry-focused" research too quickly.

"Industry focused" doesn't necessarily mean it's looking to push the needle of industry much yet. Often the first paper academics and students do in an area will really be about trying to take an industry problem into the lab, to see if they can prod it and discover what's interesting about it. Not yet about trying to push their research out into industry. Little experiments are how we get to look at new problems for the first time when we're not close enough to buy you a beer. And little papers are how we get to start discussing what we're doing with our peers at other universities. In a sense, where start-ups have Minimum Viable Products, PhD students and academics have Smallest Publishable Activities to start exploring a new area ;)

Sabbaticals and internships are a lovely idea, but it's increasingly hard to get universities to grant leave for them. (Backfilling an academic's teaching can be surprisingly hard to arrange, and then there are the academic's service roles for the university, their PhD, masters, and honours students, etc.) So I expect most academics will always have many more research interests than sabbaticals / placements.

Ok, those are the constraints - on to the value:

Most universities do focus fairly heavily on industries in their area (e.g., agriculture in regional areas). But it's not a good idea for all their research to focus on that. If we teach in an area, we want to be at least somewhat research-active in it, as that's how we keep teaching from stagnating. And in lots of countries, students find it too expensive to move far from home, so they study at a uni they're in the catchment for. So unless we want to shrink the future pool of employees to children who grew up within academic-beer-buying reach of a company in that industry, we really do need some research happening at universities outside beer-buying distance. Which means you'll get a bunch of "bad industry research" papers as they first explore the area.

You might not want a new multi-hop algorithm. But a graduating masters student who's designed their own multi-hop algorithm with different characteristics, and undergrads having had some exposure to the problem...

And sure, the interaction design student might only have developed their app for one mobile platform, not several, and only tested it with a dozen people not a million. But really that's because we figure it's probably sensible to find out if it was even worth writing for one platform, before we spend so much of that government-funded time on writing it to work across several...

There are quite a few things industry, such as Google, could do to help if you're keen to make the match between research and industry closer. One is to disseminate your problems, not just your libraries.

If Google emailed "here's a good small problem and project we'd be interested in" (a different problem to each university), I doubt there is a CS department in the world that would refuse to offer it as a project for their honours students. (Which means the faculty who supervise the students would be looking at the problem with interest too.) The academics would just need to know it's a unique problem for them -- that they're not being asked to compete with an unknown number of other honours students and academics around the world on the same problem (which really would be inefficient as well as unfair on the students).



