Hacker News

I am not an expert on speech recognition, but I am somewhat involved with machine translation. For MT, it is largely architectural and systems issues that limit us, first in trying different models and algorithms, but also in general.

How do you know ideas are "good" enough to be publishable unless you do plenty of experiments involving billion- (or trillion-) word corpora? I have a hard time imagining that research in other fields doesn't require validation.




I'm not saying that the papers that get published aren't good or valid. "Good enough to publish a paper" is indeed good.

It's just that once the paper is published, it becomes a cul-de-sac: a nice little city with no roads leading in or out. Other researchers can only use the result by reproducing the idea by hand (or, at best, through crufty Matlab code).

Yes, I'm sure the papers I've scanned involved considerable work and data (I worked in computer vision). But that work is often, if not generally, unavailable to the reader of the paper.

The point is that in creating a working system, Google has to do far more than extend academic research, even when that research involved good ideas that had been tested thoroughly in isolation.



