
The state of machine learning research these days seems pretty good. Essentially all research is published on ArXiv and there is a lot of code released too (though there could certainly be more).

I think openness has been a big contributor to the recent explosion in popularity and success of machine learning. When talking to academics about openness, machine learning would be a great field to hold up as an example.




I'd say the opposite, as a member of a group at my university that reviews ML papers. First off, right now there seems to be a drive to explain many phenomena in ML, in particular why neural networks are good at what they do. A large body of them reaches a point of basically "they are good at modeling functions that they are good at modeling". The other type of paper you see is researchers drinking the group-theory kool-aid and trying to explain everything through that. At one point we got 4 papers from 4 different groups that tried to do exactly that. All of them are flawed, either in their mathematics or assumptions (that will most likely never be true, like assumptions of linearity and your data set being on a manifold). Speaking of math, many papers try to use very high-level mathematics (functional analysis with homotopy theory) essentially to hide their errors, as nobody bothers to verify it.


>First off, right now there seems to be a drive to explain many phenomena in ML, in particular why neural networks are good at what they do. A large body of them reaches a point of basically "they are good at modeling functions that they are good at modeling".

Since this is closely related to my current research, yes, ML research is kind of crappy at this right now, and can scarcely even be considered to be trying to actually explain why certain methods work. Every ML paper or thesis I read nowadays just seems to discard any notion of doing good theory in favor of beefing up their empirical evaluation section and throwing deep convnets at everything.

I'd drone on more, but that would be telling you what's in my research, and it's not done yet!


I'm talking about the openness of the papers and code, which is the subject of the article. You are talking about something different.


> All of them are flawed, either in their mathematics or assumptions (that will most likely never be true, like assumptions of linearity and your data set being on a manifold).

Although I can see global linearity being unlikely in most cases, why is local linearity unlikely?
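For what it's worth, here is a toy numpy sketch (my own illustration, not from the papers being discussed) of the usual intuition behind local linearity: points on a circle, a 1-D manifold embedded in 2-D, are far from linear globally but almost perfectly linear in a small neighborhood, as measured by how much local variance the first principal direction captures.

```python
import numpy as np

# Points on a unit circle: a 1-D manifold embedded in 2-D.
theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
points = np.column_stack([np.cos(theta), np.sin(theta)])

def local_linearity(points, center_idx, k):
    """Fraction of variance among the k nearest neighbors of a point
    that is explained by the best-fit line (first principal direction)."""
    center = points[center_idx]
    dists = np.linalg.norm(points - center, axis=1)
    nbrs = points[np.argsort(dists)[:k]]
    centered = nbrs - nbrs.mean(axis=0)
    # Singular values measure spread along each principal direction.
    s = np.linalg.svd(centered, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

# Globally the circle is nowhere near linear ...
print(local_linearity(points, 0, len(points)))  # ~0.5
# ... but a small patch of it is almost perfectly linear.
print(local_linearity(points, 0, 10))           # ~1.0
```

Of course this says nothing about whether real data sets actually lie near a low-dimensional manifold, which is the assumption the parent comment is objecting to.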


Ever read a systems performance paper? A really fun game is "spot the bullshit."




