
Instead of making an obvious ad hominem attack (yes :D, this is one of the times it is genuine!), can you address what the other commenters are pointing out?

I have degrees in Physics, EE and a PhD in CS. I munched axiomatic set theory and infinite ordinals during the course of my PhD. I dabbled in theoretical machine learning for 3 years. See, I can play the credentials game too.

But does that address the fact that these methods are not used in industry? AI is full of charlatans and broken promises. Sadly, listing "deep learning" alongside Deep Blue and Watson only makes it seem all the more charlatany.




I am not playing the credentials game, quite the contrary: I remember trying to understand DBNs (deep belief networks) at university and failing miserably because of the complexity of the subject. I am also not defending deep learning in any way, nor taking any stance on the subject myself. I just think that in a place like HN you should not criticize a technology without practical experience or technical arguments, and of course "not used in industry" is not a technical argument. With Google just yesterday hiring George Hinton, who led most of the deep learning research, and with Jeff Dean already working on it there [1], it is overall a rather weak one anyway.

[1] http://research.google.com/archive/large_deep_networks_nips2...


>of course "not used in industry" is not a technical argument. With Google just yesterday hiring George Hinton

Again, that's me you're quoting. And yes, 'not used much in data science' is a valid argument that it is not one of the biggest breakthroughs in data science.

And if you want to discuss the topic (while blanket-criticizing people for not knowing what they're talking about), at least get the name of the father of deep networks right: it's Geoff Hinton, not George.


1. Google hiring X is not the same as X's model being scientifically valid.

2. Model X for intelligence being highly technical or cool in some mathematical way is not scientific validation.

3. Google hiring X is not the same as X's model being successful in the industry. I have long switched over to DuckDuckGo for technical queries.

Anyway, what AI people should always address first is point number 2.

AI has always jumped from one cool thing to the next without answering whether that cool thing has any scientific basis.

Don't bring on another AI winter ;)

It is always cool to see excitement over research in AI! (As long as it does not drown out other competing approaches that might bear fruit in the long run.)


>1. Google hiring X is not the same as X's model being scientifically valid.

That's exactly what I am complaining about.

>2. Model X for intelligence being highly technical or cool in some mathematical way is not scientific validation.

I never said that; it just narrows the number of people who can comment on it with merit.

>3. Google hiring X is not the same as X's model being successful in the industry.

It was just a side-comment.


The downvote is amusing. Care to engage in an interesting conversation rather than resorting to drive-by downvoting? I guess I touched a sensitive point with the Google fanboys here. Sigh.


Nobody has ever convinced me that humans are intelligent. For every specific example of human intelligence put forward, it eventually turns out that machines do it better.

How is one to scientifically validate against something that can't even be defined?


>Nobody has ever convinced me that humans are intelligent. For every specific example of human intelligence put forward, it eventually turns out that machines do it better.

First of all, that is completely wrong, even for simple things like image recognition (try building a face recognizer that works under all possible conditions).

But more starkly consider the following question:

Is Geoff Hinton a machine?



