
Now, I don't think all academic research has to be relevant to industry.

I'm getting kind of sick of this tactic, where this fake concession is made before going on about how academia isn't designed well enough to deliver to industry.

If authors of articles like this didn't care about the efficiency of academia at delivering them research, what's the big deal if there are a bunch of people somewhere running around in circles? Hey, at least the CS academics occasionally produce something useful to industry, which is more than I can say about other groups running in circles. Sure, it could be about burning through money, or a real desire to improve the state of CS academia. But if that were the case I'd expect these articles to have categorically different discussion and calls to action. Or - at the very least - to address goals of academia other than performing research for industry.

I've never been an academic, and never want to be, but even I gave up reading after the "(I know, crazy, right?)" line. At that point, I knew for 100% sure that the target audience of this article is not academics. Nobody is that much of a condescending prick to the person they're trying to persuade. This is not a "we need to talk" conversation; this is an "I need to talk at you, so I can show off to other people."




I'm getting kind of sick of this tactic, where this fake concession is made before going on about how academia isn't designed well enough to deliver to industry.

This +1, er +1,000,000.

I would firmly state that academia by definition is not supposed to 'deliver to industry', and not even to society. The whole point of academia is to give the very smartest people (whatever that means) the freedom to explore ideas, so that at some point humanity may benefit from their ideas, experiments, and research.

My point is that it could take any number of generations before the current generation of research (which may look like quackery to the establishment) turns out to be instrumental to our understanding of the universe. Take Galileo Galilei (and Copernicus), for instance. Their ideas (and research) did not sit well with the establishment of the Church (which was pretty much the central authority of everything at the time), but their ideas are now central to our understanding of our Solar system.

edit: formatting.


There is an attitude of entitlement here.

Students spend their money and time to learn something, frequently get worked like racehorses, and nobody cares whether what they learn is transferable after the academic system no longer needs them.

If you are not going to be relevant to society, the only thing I can do is vote for somebody who is going to cut your research budget.


Some would say that we should divide universities up into more separate institutions for Arts, Philosophy, Fine Arts, Natural Sciences, Applied Science, Math, Social Sciences, Business, Law, Medicine (or is that a Science?) and others, but the point of universities is to be places of knowledge, and to help foster cross-pollination between the disciplines. The point of independent research is to be unfettered by government, business, and indeed wider society as a whole.

Government, business, and society should not be fettering independent research, just the same as they should not be interfering with law courts. The point of independent research is for independent researchers to have the freedom to follow their research even when it is not popular with the local wood pulp mill pumping dioxins into the local river, pollution that is going to cause cancer. This research may not be popular with many in the community because it will impact their jobs. This research may not be popular with the government and elected officials because it will affect their tax receipts and donations. This research may not be popular with you because it is going to put your family out of jobs, but it is important research, so we can ultimately make informed decisions.

That is the point of research, information, and knowledge--so we can make informed and reasoned decisions.

I just cannot believe the attitude and entitlement of some who think that, just because they pay tax, everything that is not obviously useful to themselves is useless.


Honestly, that makes more sense, which is why PRIVATE industries should create schools in order to foster education relevant to them.

Having organized education that is relevant to the real world is an excellent idea. However, reforming schools is not the appropriate approach. We live in a capitalist society; if they don't like what's around, they should compete. I'm confident people are more than willing to pay 20k a year for a guaranteed job. Code boot camps charge that much for inadequate preparation and people are still clamoring to get in there.

Edit: I agree with you, I'm just so passionate right now. I need to relax.


This is also very dangerous.

Having schools that don't teach for the sake of fostering knowledge and critical thinking but rather just train you to serve in a specific industry/company would be the final blow to higher education in the US.


Why? Vocational schools exist and always have. They are not supposed to replace universities, but complement them.


It is the consumer's choice. I personally believe that conventional universities enable creativity and freedom that a private institute would likely avoid. These aspects are paramount to popular culture and forward thinking. However, if I want to get a job out of school, I'd prefer a guarantee.


That's a tremendously short-sighted view to take. It's very hard to judge relevance to society over short time frames.


Do you benefit from the ideas of Copernicus and Galileo Galilei?


Just as others would vote to continue it.

Relevance is a very broad term, with industry a subset.


>Relevance is a very broad term, with industry a subset.

That's very insightful. It's easy to lose sight of things and be reduced to a particular worldview. The Higgs boson doesn't have to have commercialization potential for us to justify spending a few billion and a few decades discovering it. :) It's an end in itself.


That's how I see it. Pure research is necessary and possible.


Cutting the research budget only makes academia more cutthroat and exploitative. We need to fundamentally get rid of the pyramidal academic job structure.


I think you and Matt are talking about different kinds of research. He's specifically talking about incremental research in computer systems. And he's 100% right that most of such work has zero impact, both over the short term and long term. I didn't read this essay as a condemnation of research in general, especially not of what he describes as "visionary, out-of-the-box, push-the-envelope research." It's talking about "research" that is misguided and is not doing anything meaningful to advance knowledge.


This is exactly what your grandparent (and parent to some extent) comments are arguing against.

There aren't 2 different kinds of research, kind A that is never gonna go anywhere and kind B that is visionary. The vast majority of the advancement of knowledge has been what academia has always valued: probably never gonna go anywhere, but might push the envelope by an inch or a millimeter. The whole reason you have to research a thing is you don't know whether it's gonna go anywhere.

Which isn't how we think in industry, not even in industry R&D. We have to know it'll probably go somewhere, or at least have a small chance of going really far. That's the only way to get a profit in expectation.

But work that will probably go nowhere, or that even if it does go somewhere won't get very far, like most of scientific progress? The point of systems like tenure is to foster that, since industry can't.


I don't think you understand the point being made about different kinds of research. You are correct, it is not "kind that will never go anywhere" and "kind that is visionary".

But I do think we can separate "kind that is incremental improvement on existing systems" and "kind that is visionary change of existing systems". This is particularly true in computer systems research. It's common to see a paper where people tweak a small part of an existing system; I consider that incremental research, and it is necessary. But the value of such incremental research is dependent on the assumptions made by the researchers. Often, those assumptions are informed by impressions of what is important to the wider field of computing, and the author is saying those impressions are often misguided.


> I would firmly state that academia by definition is not supposed to 'deliver to industry', and not even to society.

That, and also, academia is several thousand years older than industry.


If I recall correctly, Maxwell was once asked by a minor noble-person what value his research would lend to industry.


"Mostly I'm involved in mobile, wireless, and systems, and in all three of these subfields I see plenty of academic work that tries to solve problems relevant to industry, but often gets it badly wrong."

If you want to work on problems irrelevant to industry, go nuts.

If you want to work on problems relevant to the industry, it helps to double-check that your problems are relevant to the industry.

The desire to work on problems relevant to industry is coming from academia, not the author.


Since he doesn't really give specifics, it's hard to tell, but I wouldn't necessarily read most academic papers that mention applications (especially "potential" applications) as having an actual desire to work on short-term industry-relevant problems. Yes, papers often have a vague gesture towards, "breakthroughs in [graph algorithm X] have potential uses in [networking problem]", but this doesn't necessarily indicate a deep desire to work on problems relevant to industry. Rather, it more often indicates a deep desire to work on graph algorithms, and external pressure to throw in a mention about potential applications in some hot area. So I wouldn't read too much into it.


Indeed! The question is: are we ready to drop the requirement of listing applications everywhere?

The department where I am studying is big on graph theory and other theoretical CS concepts, and I always feel a bit uneasy when we write or read about applications (on a grant proposal or in a conference paper, usually) and yet none of the authors really cares about applications. They usually care because "it was an interesting problem that some people studied in the past and we can do it better than them".

Yet, as far as I know, nobody can really write that sentence unless it is such a fundamental problem that its usefulness or importance goes without saying. So we are now in a situation where everyone (theoretical CS, applied theoretical CS, and practical CS) pretends their work has actual applications, and I feel Matt is calling them out on it.


(I'm the author of the original blog post.) Ad hominem attacks aside, I do think it's a big deal if a bunch of academics are spending time working on the wrong things, if for no better reason than maximizing efficiency: Don't forget that most academic research is funded by the government. Since I also happen to help decide which research proposals Google funds, I also care that academics are well-aligned with the problems we care about. Clearly we also need to invest in long term bets. But there is a big difference between doing long-term, potentially-groundbreaking research and bad industry-focused research.


>it's a big deal if a bunch of academics are spending time working on the wrong things

At the risk of sounding a little prickly with my comment - isn't it a little presumptuous to write off a certain amount of research as "wrong"? There are plenty of examples of research that didn't have a clear purpose leading to breakthroughs - safety glass, microwave ovens, and Teflon all spring to mind.

Also, focusing on efficiency doesn't really align with academia's purpose, which (to my mind) has more to do with fundamental research. This article about Xerox PARC springs to mind:

http://www.fastcodesign.com/3046437/5-steps-to-recreate-xero...


That's a fair point. I'm trying to draw a distinction between "speculative" research (which might pan out long term) and "industry-focused" research (which tries to solve problems we have today). My concern is not with speculative research -- that's great -- but rather flawed industry-focused research: making incorrect assumptions, failing to deal with the general case, not considering real-world constraints.


> but rather flawed industry-focused research: making incorrect assumptions, failing to deal with the general case, not considering real-world constraints.

I find this point rather condescending. What you term "incorrect assumptions" are often necessary simplifications that reveal the core of the problem at hand. The point is to try and understand what makes problems hard and to develop new ways of solving them; not to deliver to companies like Google ready-made solutions that cater to their every operational need. Don't like that? Too bad. Fund your own research.


No, I mean incorrect assumptions. It doesn't matter if we're talking research in industry or academia; doing research based on flawed (not simplifying) assumptions is bad science.


If you want to make a concrete point about bad science then do it. At the moment however you are not making that point. Your argument is that "the assumptions" (whatever those are; you provide no specifics) made by researchers don't match up with industry expectations; a wholly different point for which I have little sympathy.

In my opinion the scientific authors doing the work are in the best position to judge what assumptions are and are not appropriate for their work. If the science is bad we expect the community to pick up on that via peer review or in a subsequent publication.


I was originally going to list examples of bad industry-focused science in the original post, but decided against it, since I didn't want to offend anyone. Your username is "gradstudent", suggesting you have read a few papers. My bet is that you've read papers where you scratch your head and say, "is that really how things work?" I read lots and lots of those papers - usually they don't end up getting published.


>My bet is that you've read papers where you scratch your head and say, "is that really how things work?" I read lots and lots of those papers

That sort of contextualizes your post as a long-winded statement of "My job is hard".


Honestly, if academics didn't choose their own research, private industry wouldn't have to hire R&D. This sounds more like a ploy to save costs by having academics build for industry on government money than to hire people to further a company's interests.


> making incorrect assumptions

If it were easy to know when you're making incorrect assumptions, nobody would.


Well, the whole point of my post is to give some tips on how not to.


I understood that as more of a lament that a lot of work is being duplicated because academics aren't aware of prior work in the industry. But then again, how are you supposed to know that someone has a solution unless they tell the world about it in a publication?

> It drives me insane to see papers that claim that some problem is "unsolved" when most of the industry players have already solved it, but they didn't happen to write an NSDI or SIGCOMM paper about it.


There are more ways to "publish" work than in academic publications. I'm not sure to what extent it is unrealistic to expect academics to be keeping up with what is going on at industry conferences or on github or by talking to people at user groups or wherever, but the reality will always be that academic publications are not the only, or the most up-to-date, source of information.


Agreed, blog posts and open source projects can be just as helpful as journal-published articles. I didn't intend publications to mean only academic journals.


The problem is that many academics are not aware of anything outside the academic journals, and thus work on "unsolved" problems that already have well known solutions in industry.


I didn't think that's what you intended, but in the quote you included, the author is specifically lamenting the lack of knowledge of things published outside academic journals, not the lack of knowledge of things that aren't published anywhere.


Matt is talking about academics who publish papers showing that System X is Y% better than System Z, under completely nonsensical conditions. They aren't advancing the state of the art; they are mapping the terrain of the compost heap, but don't realize it.


Is he? Maybe he should have said that.


I also care that academics are well-aligned with the problems we care about.

That is not the point of academia, and indeed not why academics are offered tenure with their institutions. Please consider Galileo and Copernicus and the Church of the 16th and 17th centuries. Please consider history.

Google as a business may care about research being aligned with its business, or what 'society' or certain segments of society care about; Google really should not care what academia and academics want to work on. That is the point of, for better or worse, academia: independent research by some of our smartest people.

If Google, and industry, care about what gets researched, they should fund it. They can then choose. Please leave academia and academics to be just that, academic.

Without government funding, support, and indeed the wider academic 'ecosystem', my father would never have been able to be a historian of the Scottish Enlightenment, or to produce the seminal book on Adam Smith. And if you don't think that the Scottish Enlightenment is important to our modern understanding of the Universe, then I suppose you have to rethink the contributions of these notable academics, scientists, engineers, and philosophers: David Hume, McLaren, Taylor, James Watt, Telford, Napier--off the top of my head.

My point is that my father's book may not itself be a major contribution to a current revolutionary idea, but it will likely be part of some future realisation about economics, since Adam Smith is a major cornerstone of our current understanding of economic thought.


I'm sorry, but I think this perspective is fairly naive and ignores the reality of how applied science and engineering work in universities today. You're talking about 17th century scientists, but the reality is that in the middle of the 20th century there was a tremendous shift to applied sciences -- computer science being one of those fields -- with the goal of producing useful innovations.

The whole point of my blog post is this: Most academics are trying to do work that is relevant to industry, but many of them are going about it the wrong way. Nobody is saying you have to work on industry-relevant research, but if you're going to try, at least do it right.


>Most academics are trying to do work that is relevant to industry,

That is certainly not what most academics think.


Bullshit.


In other words, academic research is a punt in the dark. Much of it will not yield all that much, but some of it will.

It is kind of similar to what YC and VCs do: give a little bit of funding to a large number of businesses, and hope a (very) small number will pay off big.

Academia is a small investment in a large number of ideas, with the hope of big pay-offs from a very small number. The technology behind the Web was invented by Tim Berners-Lee while at CERN in the early nineties. You might not think that funding high energy and particle physics would be worthwhile, yet something completely different and revolutionary for all of humanity came out of it.

We just do not know where these revolutionary ideas come from, but what we do know is that they tend to come from some of the very brightest people.


It's not true that they come from the brightest people; I'd say that's very much a bias.

Much like how YC often says that the best startup founders aren't necessarily the smartest people in the room. Occasionally, being the very brightest can even be counter-productive, because in the end the people who are less smart and realize it will strive to work harder.

And that's what research really is, really hard laborious work. Not this fantasy of someone bright sitting in a chair thinking up discoveries. That's why research is hard.


> That's not true that they come from the brightest people, I'd say that's very much a bias.

[Citation needed]

There are such things as talent, insight, and experience, but intelligence makes everything easier, from learning new fields and tools to evaluating and formulating new ideas. Ceteris paribus, more intelligence is better.


That's a bit reductionist about something that is inherently non-reductionist: how do you define a good problem worth solving?

I believe you presented a logical leap between having all the tools one could ever dream of <-> delivering scientific impact.

Here's a citation: "scientific impact is a decelerating function of grant funding size".

http://journals.plos.org/plosone/article?id=10.1371/journal....


I would further add that Google and similar technological enterprises are some of the richest organisations of all time. I'm not sure about the comparison to the Spanish Empire; however, Google and 'Industry' can well afford to pay for their own research. If you have not been to Seville, and other major European imperial cities, Rome, Paris, London, St Petersburg, Berlin, etc., then I recommend that you do a bit of travelling in Europe and witness the legacy of the dangers of what happens when money and power have no checks.


You can edit your posts if you realize you have something to add after hitting submit.

Click "Threads" and you will be presented with a list of your recent comments. Anything that is still editable will have an "edit" link. I am not sure how long this link is available but its definitely longer than an hour.


As someone who spent seven years in industry as an engineer and is now three years into a PhD, I lament the failure of industry to adopt old crypto research that has clear immediate application.

I like your post because I think it is the duty of academics to communicate their results for maximum positive effect and to represent real world impact accurately, but I also have to ask what can industry do to expose itself to excellent research which is sitting unused?


Just like it's not fair for me to conflate all of academia together in my post, it's difficult to lump all of industry together. Places like Google have a pretty good track record of leveraging the latest innovations from academia when it makes sense to do so. Not all companies work this way. You need people who are aware of the research, and willing to put the extra effort in to make it practical.


I think you are focusing on the wrong problem. The real problem of academics is not what people research; the huge elephant in the room is the "publish or perish" dogma.

This dogma is one of the worst things happening to research.


I agree that "publish or perish" is not a good model for research. What are alternative models that are potentially viable?


It is an absurd model for evaluating people, and it solves only one problem: how to make evaluation easy for the administrators and soft on researchers. We should rather find and implement a model that leads to the best research results possible. I think one such model would be this: give tax money only to people whose most recent work has proven to be beneficial to society (to people outside their interest group, not only colleagues in the same field). Impose a reasonable set of practical goals and supervision (a roadmap) on people who want their work financed, and periodically re-evaluate. Leave basic, speculative, and other research of unclear significance to be financed privately, by patrons, with no tax-based financial support.


I really think you have it backwards in having basic and speculative research lose public financing. How would you propose "proving that results are beneficial to society"? And how is this different from the current funding proposal process? Committees that fund proposals certainly look at the success of prior research by the group.

I don't think anyone would disagree with you when you say "We should rather find and implement a model that leads to best research results possible." Viable alternative models are what I am looking for.


A researcher with practical results can provide evidence of the usefulness of his results to society and discuss his work and results with his money-guarding superiors, who are not researchers themselves. The superiors take some time to confer and decide whether, and how much, he gets.

Basic research cannot be so easily evaluated; often it is decades, sometimes centuries, before a practical use is found, if ever. Basic research, by its definition, has unclear significance for anything except the researcher's curiosity and intellectual fulfillment. The results, in the form of a paper, are often too remote from the needs of society and often so intellectually involved that nobody except a few people competent in the field can judge whether the work is even meaningful, let alone judge its usefulness.

The current evaluation process for basic research is that grant committees don't even try to evaluate the usefulness of the research the requestors have done. They use superficial indicators like existing publication score, academic rank and history of the requestors, relation of the work to other high-profile trending topics, association with popular research groups, and political considerations.

That's why I think people who want to do basic research should not be funded by their colleagues in field directly from tax money. It's too opaque and ridden with corruption.


Your point of view honestly makes me sad. It's the kind of industry-centric thinking that is killing culture.

Under this premise, investing in the arts is a waste of money, when you could invest in UX studies. Investing in the humanities is a waste of money, when you could invest in data analysis. Hell, why even have academia at all? Wouldn't it be better if education was managed and provided directly by the industry?

This fetish for optimisation only ends up hurting the sense of humanity in all of us. Not everything has to be done with the purpose of being disruptive, or in a frenzy for efficiency. Some people are just goddamn curious about bees and will study honey bees (as useless as it may sound) because understanding nature is a goddamn delight in its own right, even if it doesn't yield the next billion-dollar idea.

That innate delight and curiosity is where science is born. Let's appreciate that. It applies to STEM too. Maybe some academics don't care about your problems or in fact solving any problems at all, and decided to be in academia not for the "potentially-groundbreaking research" but just the thrill of finding new stuff and uncovering the beauty in the universe, don't you think?


I'm not sure if you read the original article or not, but I very clearly point out the benefit of long-range research in addition to things more directly relevant to industry. So I don't agree with the premise of your criticism.


I agree with you completely about "bad industry-focused research". It serves no end. My question is: is it just a reflection of the mediocrity and gaming/dishonesty that are everywhere, including academia?

In this case, people are trying to take shortcuts towards their goal of increasing the number of publications and grant approvals. There is reward in the system for this kind of behaviour. You, being in a reviewer/jury position, unfortunately do not have the luxury of a filter.

Is there a way to catch this early, by looking at past trends?

A slight detour: this is one of my rationales for spending time reviewing papers for journals/conferences. On average, only 10 to 20% of the papers I review really stand out or appeal to me, which correlates with the acceptance rate of a top journal/conference.


I worked in industry research for a long time and almost every intern or new hire came in with laughably wrong assumptions, but it wasn't due to mediocrity, gaming, or dishonesty. They just didn't have anywhere to learn from. Their assumptions were copied from earlier "best paper" winners in the field, whose assumptions were copied from a random previous paper, whose assumptions were probably mostly speculation.


How do you think about pure mathematics research then? Or fundamental physics? Or art/literature/history's place in publicly funded universities? There's so much more to all of this than pushing forward in industry.


Hmm, I wouldn't write off the value of "bad industry-focused" research too quickly.

"Industry focused" doesn't necessarily mean it's looking to push the needle of industry much yet. Often the first paper academics and students do in an area will really be about trying to take an industry problem into the lab, to see if they can prod it and discover what's interesting about it. Not yet about trying to push their research out into industry. Little experiments are how we get to look at new problems for the first time when we're not close enough to buy you a beer. And little papers are how we get to start discussing what we're doing with our peers at other universities. In a sense, where start-ups have Minimum Viable Products, PhD students and academics have Smallest Publishable Activities to start exploring a new area ;)

Sabbaticals and internships are a lovely idea, but it's increasingly hard to get universities to give leave to do that. (Backfilling an academic's teaching can be surprisingly hard to arrange, and then there's the academic's service roles for the university, their PhD, masters, and honours students, etc) So I expect most academics will always have many more research interests than sabbaticals / placements.

Ok, those are the constraints - on to the value:

Most universities do focus fairly heavily on industries in their area. (eg, agriculture in regional areas). But it's not a good idea for all their research to focus on that. If we teach in an area, we want to be at least somewhat research-active in it, as that's how we keep teaching from stagnating. And in lots of countries, students find it too expensive to move far from home, so they study at a uni they're in the catchment for. So unless we want to shrink the future pool of employees to children who grew up in academic-beer-buying-reach of a company in that industry, we really do need some research happening at universities outside beer-buying distance. Which means you'll get a bunch of "bad industry research" papers as they first explore the area.

You might not want a new multi-hop algorithm. But a graduating masters student who's designed their own multi-hop algorithm with different characteristics, and undergrads having had some exposure to the problem...

And sure, the interaction design student might only have developed their app for one mobile platform, not several, and only tested it with a dozen people not a million. But really that's because we figure it's probably sensible to find out if it was even worth writing for one platform, before we spend so much of that government-funded time on writing it to work across several...

There's quite a few things industry, such as Google, could do to help if you're keen to make the match between research and industry closer. One is to disseminate your problems, not just your libraries.

If Google emailed "here's a good small problem and project we'd be interested in" (a different problem to each university), I doubt there is a CS department in the world that would refuse to offer it as a project for their honours students. (Which means the faculty who supervise the students would be looking at the problem with interest too.) The academics would just need to know it's a unique problem for them -- that they're not being asked to compete with an unknown number of other honours students and academics around the world on the same problem (which really would be inefficient as well as unfair on the students).


I don't think it's a fake concession. There is academic value in far-out research that isn't relevant to industry, just like there's value in math research and in metaphysics.

The point the article tries to make is that if you're trying to research something relevant to industry, it's probably not nearly as valuable academically as something more exotic would be, so you'd better make a good effort to make it practically valuable to compensate.

Nobody is that much of a condescending prick to the person they're trying to persuade.

In context, the line didn't appear condescending at all as I read it. A little sarcastic, that's all. The author made it clear that he doesn't think academics need to produce something that's actually used by consumers. And the author was/is an academic. If you'll allow me to speculate, it seems that you don't have the highest opinion of academics, so it's possible that you might further think that in order to be valuable they need to produce something that is economically valuable. If you read the "I know, crazy, right?" line with that attitude, I can see how it might appear much more condescending than it actually is.

This is not a "we need to talk" conversation this is a "I need to talk at you, so I can show off to other people."

This is a non sequitur: "This article isn't trying to persuade academics, therefore it is trying to promote the author's social standing at academics' expense." The author identified a problem and offered a potential solution; does that mean he also has to persuade? Your statement presents a false dichotomy: that he has to either persuade academics or shame academics to promote himself. I say that he doesn't make an attempt to persuade beyond identifying the problem and solution because he doesn't want to bore the reader with rhetoric, nothing more nefarious.


The author is on the program committee for some academic conferences, so he is not just an observer, but a participant. It is also common for academic computer science papers to use industry applications as motivation for their research - and, similar to the author, I often find some of these motivations misguided.

(I work in an industry research lab, write code for a real product, and publish academic computer science papers.)


Look, this blog post contains good advice for academics. You seem to be overreacting quite severely. The truth is that the utopia where anyone can work on interesting problems is long gone or rather never really existed (http://amapress.gen.cam.ac.uk/?p=1537). Academia produces a massive oversupply of academics, and there isn't the capacity to fund them all. Funding bodies in general are talking more and more about 'impact' as a deliverable. The original post is just trying to say don't ruin your chances to get funding with faulty thinking about industry applications.

You can argue, quite rightly perhaps, that focusing on impact/industry applications is a bad idea, and that we will miss out on important epochal discoveries, but that is a problem with society, not the author of the blog post.


Matt Welsh used to be a tenured systems professor at Harvard. He probably knows more about academia than 99.9% of people in industry.



