
A neural network can also be configured to unconditionally return a fixed value, with no learning feedback at all, and so could many arbitrarily complex arrangements that do massive amounts of work only to discard it and return a constant. That's why I don't think lower bounds on capability are very informative. An upper bound on what an approach can do is more useful. For example: no matter how vast a lookup table is, it will never return a different value for the same input, regardless of the prior sequence.
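To make the lookup-table point concrete, here is a minimal sketch (the names `table_respond` and `StatefulResponder` are illustrative, not from any real system): a table is a pure function of its input, so identical queries always get identical answers, while anything with internal state can answer the same input differently depending on what came before.

```python
# A lookup table has no internal state: the same key always maps to
# the same value, regardless of the sequence of prior queries.
lookup = {"a": 1, "b": 2}

def table_respond(key):
    return lookup[key]

# A stateful system, by contrast, can answer the same input
# differently depending on its history.
class StatefulResponder:
    def __init__(self):
        self.history = []

    def respond(self, key):
        self.history.append(key)
        # The answer depends on the prior sequence, not just `key`.
        return len(self.history)

# The table gives identical answers for identical inputs.
assert table_respond("a") == table_respond("a")

# The stateful responder does not.
s = StatefulResponder()
assert s.respond("a") != s.respond("a")
```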


