Corporate labs do fund very long-term research. Google was interested in and working on AI more or less from its early years (anyone remember Google Sets?), and it's still funding fundamental AI research more than 20 years later. It has also been funding self-driving cars for more than 10 years and still does.

The reason it appears rare is that funding research on the assumption it might be useful more than 20 years from now is extremely wasteful. Companies don't work so far ahead because the risk of going down a dead end for half a lifetime is very, very high. In academia that doesn't matter, because people are rewarded merely for researching novel things; in the real world, people are rewarded for doing things that are useful.

It matters. For every DNA you can cite, others can cite dead-end branches that despite decades of research have gone nowhere and probably never will. In CS, how many programmers are using Haskell every day? It's been in development for 35 years, yet virtually nobody uses it; the new languages that gain traction (Rust, Swift, Go, Kotlin, etc.) invariably come from corporate R&D labs, and outside of a few bits of useful syntax they borrow little from academic research languages. Even Haskellers have now admitted that laziness was a dead end; new FP languages like Idris don't have it. Practically the entire field of PL research was swallowed up by FP and continues to be dominated by that paradigm (e.g. dependent types), despite the vast majority of PL users being uninterested in them.

Computational epidemiology. It's been researched for more than 20 years, yet the models are always wrong. There are private-sector epi models, but unsurprisingly little work is done on them, because there's obviously a missing piece, and building huge, insanely complex simulations (e.g. the 15,000-LOC monster Ferguson produced) is obviously a dead end.

String theory. How much time has been sunk into that? Not a single testable prediction.

And in biology: one biotech firm found that 9 in 10 papers don't replicate. Papers that professional labs can't replicate aren't "useful in 20 years"; they're "not useful today and never will be". People had been selectively breeding for centuries; agritech firms would eventually have figured out the structure of DNA if Watson and Crick hadn't.

Meanwhile, we tend to take for granted all the long-term R&D the private sector does, because it's much better at finding useful outcomes quickly. It doesn't need to wait 20 years to get products out of its research, and that's a good thing!




> Practically the entire field of PL research was swallowed up by FP and continues to be dominated by that paradigm (e.g. dependent types), despite the vast majority of PL users being uninterested in them.

I couldn't disagree with this more. I'm biased because I really like PL research, but when I look at modern languages like Rust, Haskell's shadow is plain to see: ADTs, immutability, and parametric polymorphism, for instance.
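To make that concrete, here's a minimal Haskell sketch of a parametric ADT with exhaustive pattern matching (the names Opt and getOr are made up for illustration):

    -- A parametric algebraic data type: one definition, any payload type.
    data Opt a = None | Some a deriving Show

    -- Exhaustive pattern matching over the variants.
    getOr :: a -> Opt a -> a
    getOr d None     = d
    getOr _ (Some x) = x

    main :: IO ()
    main = print (getOr 0 (Some 42), getOr (0 :: Int) None)

Rust's enum Opt<T> { None, Some(T) } plus match is close to a transliteration of this, and its immutable-by-default let bindings are the same story.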


I used to agree.

These days I think there's a lot of wishful thinking along these lines, as if nobody would have noticed without Haskell that re-using lots of global variables leads to frequent bugs. C++ got the const keyword in 1985, two years before Haskell was even born. Templates were proposed in 1986. And would nobody have developed the notion of first-class functions without academic PL research? Given that even C has function pointers, that's hard to argue.

Rust is hardly related to Haskell; if there's a shadow, it's a very faint one. Rust's primary research idea is adding linear types to an imperative type system. There's no laziness, it's not pure, and the syntax is obviously C-based, not ML/Haskell-based. The similarities to C++ are much stronger than the similarities to Haskell.
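For anyone who hasn't met the term, here's a rough sketch of what a linear type is, written in Haskell only because GHC 9.0's LinearTypes extension gives it a concrete notation (Rust's ownership enforces the related affine discipline, each value moved at most once, inside an imperative, C-flavoured language):

    {-# LANGUAGE LinearTypes #-}

    -- The linear arrow %1 -> promises the argument is consumed exactly once.
    swapL :: (a, b) %1 -> (b, a)
    swapL (x, y) = (y, x)    -- fine: x and y are each used exactly once

    -- dup :: a %1 -> (a, a)
    -- dup x = (x, x)        -- rejected by the type checker: x consumed twice

    main :: IO ()
    main = print (swapL (1 :: Int, "moved"))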

When I look at the huge quantity of taxpayer money sunk into this line of programming languages, and how little impact it's had, I can't really support it. Academia/Haskell supporters like to lay claim to ideas and argue they "came" from academic PL research, but when you look into the histories and timelines, that's just clearly not true. The ideas were either already in development long ago, or they were trivial and easily thought of.

Meanwhile, like I said, laziness is now a dead end. I remember one of my CS lecturers, who worked on Haskell-related dependent-types research when I was an undergrad; he sang the praises of laziness and how much better it made everything. They don't think that anymore, and new son-of-Haskell languages don't have it. That entire line of PL theory was born, lived, and died entirely within the taxpayer-funded public sector.
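For what it's worth, the trade-off being argued about fits in a few lines of ordinary Haskell: laziness buys you elegant infinite definitions, but it's also behind the classic space leaks that soured people on it:

    import Data.List (foldl')

    -- The upside: clean definitions over infinite structures.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    -- The downside: without optimisation, lazy foldl piles up millions of
    -- unevaluated thunks before forcing any of them (a classic space leak)...
    leaky :: Integer
    leaky = foldl (+) 0 [1 .. 10 ^ 7]

    -- ...which the strict foldl' avoids by forcing each step as it goes.
    strict :: Integer
    strict = foldl' (+) 0 [1 .. 10 ^ 7]

    main :: IO ()
    main = print (take 10 fibs) >> print strict

Strict by default, with laziness opt-in, is exactly the choice Idris made.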



