
To be fair, SWE patterns are insightful.

But here's where the problem lies: the proposal in the article is representative of how accustomed the community has become to this nice side effect of the modularity of neural networks, which gives the impression that a clean, linear decomposition of contributions is always even conceivable.

There are cases where this modularity isn't possible (or isn't optimal even when it is). The poster child, I would say, is probabilistic programming. It is easy to imagine building general-purpose inference algorithms in probabilistic programming; in practice, however, inference more often than not relies heavily on context.

A case in point is Gaussian processes (GPs). In theory, it is easy to conceive of a chain of ideas connecting the threads of research within this tiny field. In practice, unfortunately, inference with GPs turns out to be far more efficient when we exploit structural assumptions tied to specific modeling choices, and the general-purpose, few-line code changes would be terribly inefficient. To an untrained eye the code looks vastly different, even though it composes similar-sounding (or similar-looking) building blocks.
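
To make the contrast concrete, here is a minimal sketch. Everything in it (the RBF kernel, the evenly spaced grid, the noise level) is an illustrative assumption of mine, not something from the article. Both snippets compute the same GP weights alpha = (K + noise*I)^-1 y, yet the generic dense route and the structure-exploiting route share almost no code:

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve, solve_toeplitz

    def rbf(x1, x2, lengthscale=1.0):
        # squared-exponential kernel k(x, x') = exp(-(x - x')^2 / (2 * l^2))
        return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / lengthscale ** 2)

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 500)   # evenly spaced inputs: the structural assumption
    y = np.sin(x) + 0.1 * rng.standard_normal(x.size)
    noise = 1e-2

    # Generic route: form the dense kernel matrix and do an O(n^3) Cholesky solve.
    # Works for any kernel and any input locations.
    K = rbf(x, x) + noise * np.eye(x.size)
    alpha_dense = cho_solve(cho_factor(K), y)

    # Structure-exploiting route: a stationary kernel on an evenly spaced grid
    # makes K Toeplitz, so its first column determines the whole matrix and a
    # Levinson-style solve runs in roughly O(n^2) without ever forming K.
    first_col = rbf(x[:1], x)[0]
    first_col[0] += noise
    alpha_toeplitz = solve_toeplitz(first_col, y)

    # Same weights, through code that looks nothing alike.
    print(np.allclose(alpha_dense, alpha_toeplitz, atol=1e-6))

And the Toeplitz shortcut evaporates the moment you move off the regular grid or pick a non-stationary kernel, which is exactly the context-dependence I mean: the building blocks are the same on paper, but the efficient code is welded to the modeling choices.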



