
Computer Vision, for one, is making massive advances.

I just spent three weeks (for a class project) implementing a new algorithm that finds the minimum cut of a directed planar graph in O(n lg n) time. The algorithm is actually quite elegant:

http://www-cvpr.iai.uni-bonn.de/pub/pub/schmidt_et_al_cvpr09...

This came out of a Ph.D. thesis written in 2008, and was applied to some computer vision problems in the paper I linked above. This isn't a minor speedup or optimization... it yields asymptotically faster results.
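For anyone who wants a feel for the problem before reading the paper: below is a quick, generic s-t min-cut via Edmonds-Karp max-flow, the textbook O(VE^2) approach. To be clear, this is not the planar algorithm from the paper -- it's just the general-purpose baseline, so the "asymptotically faster" claim has something concrete to stand against.

    # Generic s-t min-cut via Edmonds-Karp max-flow (textbook O(V*E^2)) --
    # NOT the O(n lg n) planar-graph algorithm from the paper above,
    # just the general baseline that specialized planar algorithms beat.
    from collections import deque

    def min_cut(n, edges, s, t):
        # edges: list of (u, v, capacity) for a directed graph on nodes 0..n-1
        cap = [[0] * n for _ in range(n)]
        for u, v, c in edges:
            cap[u][v] += c
        flow = [[0] * n for _ in range(n)]

        def bfs_augmenting_path():
            parent = [-1] * n
            parent[s] = s
            q = deque([s])
            while q:
                u = q.popleft()
                for v in range(n):
                    if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                        parent[v] = u
                        if v == t:
                            return parent
                        q.append(v)
            return None

        while True:
            parent = bfs_augmenting_path()
            if parent is None:
                break
            # find the bottleneck capacity along the path, then push flow
            bottleneck, v = float('inf'), t
            while v != s:
                u = parent[v]
                bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
                v = u
            v = t
            while v != s:
                u = parent[v]
                flow[u][v] += bottleneck
                flow[v][u] -= bottleneck
                v = u

        # nodes still reachable from s in the residual graph form one side of the cut
        seen, q = {s}, deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if v not in seen and cap[u][v] - flow[u][v] > 0:
                    seen.add(v)
                    q.append(v)
        return [(u, v) for u, v, _ in edges if u in seen and v not in seen]

    # tiny example: the edges that separate node 0 from node 3
    print(min_cut(4, [(0, 1, 3), (0, 2, 2), (1, 3, 2), (2, 3, 3)], 0, 3))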

My vision professor is fairly young, and recently did his own Ph.D. work on Shape From Shading: the problem of recovering 3D shape from a single image (no stereo or video). His solution used Loopy Belief Propagation and some clever probability priors to achieve solutions that were orders of magnitude better than previous work. In fact, his solution is so good that rendering the estimated 3D shape is identical (to the naked eye) to the original image, although the underlying shape can still differ, since multiple shapes can produce the same appearance under given lighting conditions and viewing angles.
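If you haven't seen Loopy Belief Propagation before, here is a toy sketch of the message-passing idea on a binary grid MRF (simple image denoising). This is emphatically not his shape-from-shading model -- that involves surface normals, lighting priors, and so on -- it just shows mechanically what "loopy BP with probability priors" means.

    # Toy loopy belief propagation on a binary grid MRF (image denoising),
    # only to illustrate the message-passing machinery; shape-from-shading
    # uses a much richer model than this.
    import numpy as np

    def denoise_loopy_bp(noisy, smoothness=2.0, fidelity=2.0, iters=10):
        # noisy: 2D float array in [0, 1]; we infer a binary label per pixel
        H, W = noisy.shape
        # unary potentials: exp(-cost) of assigning label 0 / label 1 to a pixel
        unary = np.stack([np.exp(-fidelity * noisy),
                          np.exp(-fidelity * (1.0 - noisy))], axis=-1)
        # Potts pairwise potential: neighboring pixels prefer the same label
        pairwise = np.exp(-smoothness * (1.0 - np.eye(2)))
        # msgs[d][i, j, :] = message arriving at pixel (i, j) from its neighbor
        # in direction d ('down' = from the pixel above it, etc.)
        msgs = {d: np.ones((H, W, 2)) for d in ('down', 'up', 'right', 'left')}

        for _ in range(iters):
            # product of unary and incoming messages at the *sender*, excluding
            # the message that came from the pixel we are about to send to
            pre = {
                'down':  unary * msgs['down'] * msgs['right'] * msgs['left'],
                'up':    unary * msgs['up']   * msgs['right'] * msgs['left'],
                'right': unary * msgs['down'] * msgs['up']    * msgs['right'],
                'left':  unary * msgs['down'] * msgs['up']    * msgs['left'],
            }
            shifts = {'down': (1, 0), 'up': (-1, 0), 'right': (1, 1), 'left': (-1, 1)}
            new = {}
            for d, (step, axis) in shifts.items():
                m = np.einsum('ijk,kl->ijl', np.roll(pre[d], step, axis=axis), pairwise)
                # border pixels have no sender in this direction
                if d == 'down':  m[0, :, :] = 1.0
                if d == 'up':    m[-1, :, :] = 1.0
                if d == 'right': m[:, 0, :] = 1.0
                if d == 'left':  m[:, -1, :] = 1.0
                new[d] = m / m.sum(axis=-1, keepdims=True)  # normalize for stability
            msgs = new

        belief = unary * msgs['down'] * msgs['up'] * msgs['right'] * msgs['left']
        return belief.argmax(axis=-1)

    # tiny demo: a noisy square, cleaned up by the smoothness prior
    rng = np.random.default_rng(0)
    truth = np.zeros((20, 20)); truth[5:15, 5:15] = 1.0
    print(denoise_loopy_bp(np.clip(truth + 0.4 * rng.standard_normal(truth.shape), 0, 1)))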

There has also been a ton of interesting progress over the last two decades in making functional languages practical in terms of speed (and hence useful). My advisor did his Ph.D. in this area.

The current state of operating systems is not representative of CS as a whole. In fact, I'd argue that OS research at this point has less to do with computation than with human-computer interaction, which seems to require more research about humans than about computers.




That sounds like clever engineering to me, not science.



