>especially in convolutional/correlation neural networks, which often make use of the convolution theorem to do the convolution
Is this true? With the learned filters being so much smaller than the input imagery/signals, and with "striding" operations and different boundary conditions being wrapped into these algorithms, it doesn't seem like a natural fit.
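For context on what the convolution theorem actually buys you, here is a minimal numpy sketch (my own illustration, not from any particular framework): pointwise multiplication in the frequency domain equals *circular* convolution in the spatial domain, and a small filter has to be zero-padded out to the signal length before taking its FFT — which hints at why striding and non-periodic boundary handling don't fall out of this route for free.

```python
import numpy as np

# 1-D signal and a small 3-tap filter, as in the CNN setting being discussed.
signal = np.random.default_rng(0).standard_normal(64)
kernel = np.array([0.25, 0.5, 0.25])

# Direct circular convolution for reference:
# (signal * kernel)[i] = sum_j kernel[j] * signal[(i - j) mod N]
n = len(signal)
direct = np.zeros(n)
for i in range(n):
    for j, k in enumerate(kernel):
        direct[i] += k * signal[(i - j) % n]

# Convolution-theorem route: zero-pad the kernel to length N,
# multiply the spectra pointwise, and invert the FFT.
fft_result = np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel, n=n)).real

print(np.allclose(direct, fft_result))  # the two routes agree
```

Note that the FFT route as written gives periodic ("wrap-around") boundaries and a stride of 1; getting "valid"/"same" padding or stride > 1 means padding, cropping, or subsampling on top, which is part of why the fit is debatable for small learned filters.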