>especially in convolutional/correlation neural networks, which often make use of the convolution theorem to do the convolution

Is this true? With the learned filters being so much smaller than the input imagery/signals, and with "striding" operations and different boundary conditions being wrapped into these algorithms, an FFT-based approach doesn't seem like a natural fit.
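
For concreteness, the identity the quoted comment leans on (circular convolution in the spatial domain = pointwise multiplication in the frequency domain) can be checked in a few lines of NumPy. This is my own illustrative sketch assuming a 1D signal, circular boundary conditions, and no striding, not how any deep learning framework actually implements its conv layers:

  import numpy as np

  rng = np.random.default_rng(0)
  signal = rng.standard_normal(1024)   # e.g. one row of an input image
  kernel = rng.standard_normal(3)      # a small learned filter (3 taps)
  n = len(signal)

  # Direct circular convolution: O(n * k) work for a k-tap kernel.
  direct = np.zeros(n)
  for shift, w in enumerate(kernel):
      direct += w * np.roll(signal, shift)

  # Convolution theorem: zero-pad the kernel to length n, multiply the
  # spectra pointwise, transform back. O(n log n) regardless of kernel size.
  kernel_padded = np.zeros(n)
  kernel_padded[:kernel.size] = kernel
  via_fft = np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel_padded)).real

  print(np.allclose(direct, via_fft))  # True

The two cost estimates in the comments are the point of the question above: with a 3-tap kernel the direct sum is typically already cheaper than FFTs over the whole signal, and stride or non-circular padding would need extra handling on top of the frequency-domain product.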



