It's not a transmission line though, so SNR doesn't apply in the same way. It's more like CMOS, where the signal is refreshed at each gate. Each stage of an ANN applies some weights and an activation. You can think of each layer's input as a true value plus some noise; as long as that feature stays within some bounds, it still represents the same "thought vector".

It may require some architecture changes to make training feasible, but it's far from a nonstarter.

And that is only considering backprop learning. The brain does not use backprop, and has way higher noise levels.
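
The bounded-noise intuition is easy to poke at numerically. Here's a minimal sketch (the network, sizes, and noise levels are made up for illustration, not from the thread): inject Gaussian noise after each stage of a tiny fixed-weight ReLU net, standing in for analog compute error, and count how often the argmax output flips.

    import numpy as np

    rng = np.random.default_rng(0)

    # A tiny fixed-weight ReLU network; the weights are random stand-ins,
    # scaled so activations are O(1) and the sigma values below are meaningful.
    W1 = rng.standard_normal((64, 32)) / np.sqrt(64)
    W2 = rng.standard_normal((32, 10)) / np.sqrt(32)

    def forward(x, sigma=0.0):
        # Inject Gaussian noise after each stage; the nonlinearity
        # "refreshes" the signal the way a CMOS gate would.
        h = np.maximum(x @ W1 + sigma * rng.standard_normal(32), 0.0)
        return h @ W2 + sigma * rng.standard_normal(10)

    x = rng.standard_normal(64)
    clean = np.argmax(forward(x))
    for sigma in (0.01, 0.1, 1.0):
        flips = sum(np.argmax(forward(x, sigma)) != clean for _ in range(100))
        print(f"sigma={sigma}: decision flipped in {flips}/100 trials")

With these stand-in weights, small sigma almost never changes the decision; once the noise is comparable to the activations themselves, it does. That crossover is the "bounds" in question.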

I think the parent was referring to the same noise that you are (compute precision, not transmission) and was suggesting that it may not easily stay within bounds, since some kinds of repeated calculations lose more precision at every step.

Maybe it's application dependent: NNs and other matrix-heavy domains may tolerate low precision much more easily than scientific simulations. It certainly wouldn't surprise me if these "LightOPS" processors work well in a narrow range of applications but won't improve or speed up just anything that needs a matrix multiply.
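
A minimal sketch of that accumulation effect (the matrix size and step count are arbitrary, not from the thread): iterate the same matmul in float16 and float64 and watch the gap grow. An orthogonal matrix keeps the true iterate at unit norm, so the gap between the two is pure rounding error.

    import numpy as np

    rng = np.random.default_rng(0)

    # An orthogonal matrix preserves vector norms in exact arithmetic,
    # so any drift between the two iterates below is rounding error.
    Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))
    Q16 = Q.astype(np.float16)

    x64 = rng.standard_normal(64)
    x64 /= np.linalg.norm(x64)
    x16 = x64.astype(np.float16)

    for step in range(1, 101):
        x64 = Q @ x64      # float64 reference
        x16 = Q16 @ x16    # float16: rounding error compounds each step
        if step % 25 == 0:
            err = np.linalg.norm(x16.astype(np.float64) - x64)
            print(f"step {step:3d}: error {err:.1e}")

One matmul at float16 is fine; a hundred of them back to back is a different story, which is roughly the concern for deep pipelines on low-precision hardware.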
