While it's all very posh to scoff and say "well, obviously!", the author has explained it, demonstrated it, and documented it. It's not about proving something new, it's about sharing something neat with the world. It's also well written and has a bit of fun in the telling.
I'm reminded of xkcd's "Ten Thousand" principle[0]: (paraphrased) "For each thing that 'everybody knows', every day there are, on average, 10,000 people in the US learning it for the first time." Good writing makes the experience better for those learning it for the first time today.

[0] https://xkcd.com/1053/
But it explains the wrong thing. It's a fundamentally ridiculous framing. A basic neural network layer is a linear transformation + constant offset + activation function. Linear transformations are the elementary building block. And the Fourier transform is a quintessential example of a linear transformation. But the article doesn't actually discuss that at all. If you don't understand this, how will you ever deal with more complicated NN architectures?
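To spell that out: the DFT is just multiplication by a fixed matrix, so "the FFT as a layer" reduces to a linear layer with one particular hard-coded weight matrix. A minimal numpy sketch (the matrix F and the check against np.fft.fft are my illustration, not anything from the article):

    import numpy as np

    N = 8
    n = np.arange(N)
    # DFT matrix: F[j, k] = exp(-2*pi*i * j*k / N)
    F = np.exp(-2j * np.pi * np.outer(n, n) / N)

    x = np.random.randn(N)
    y = F @ x  # the "layer": fixed weights F, no bias, no activation

    assert np.allclose(y, np.fft.fft(x))

Any linear transform whatsoever passes this test once you're allowed to pick the weight matrix, which is exactly the point.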
The problem isn't an "everybody knows" kind of problem. People (myself included) are rolling our eyes because it's the most Rube Goldbergian way to say it.
It's analogous to saying that subtraction is a neural network. Addition and negation are core elements of modern state-of-the-art neural networks, and you can express subtraction as a combination of those two modern neural network techniques. Therefore subtraction is a neural network.
The line in the article about the formula y = A x is illustrative:
> This [y = A x] should look familiar, because it is a neural network layer with no activation function and no bias.
Imagine that as:
> This [y = a + b] should look familiar, because it is a neural network layer with no activation function and no bias and no multiplication. Meanwhile, this [y = -x] should also look familiar, because it is a neural network layer with no activation function, no bias, no addition, and only negation.
You'd expect that kind of explanation of subtraction if someone had never learned about addition and negation outside of neural networks, but you'd roll your eyes and say it's just such a bizarre way to talk about addition, negation, subtraction, and neural networks.
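To make the eye-roll concrete, here is subtraction dressed up the same way (my own toy example, not from the article): a single linear layer with the fixed weights [1, -1], no bias, no activation.

    import numpy as np

    # Subtraction as a "neural network layer": weights [1, -1],
    # no bias, no activation function.
    W = np.array([[1.0, -1.0]])

    a, b = 5.0, 3.0
    y = W @ np.array([a, b])

    assert np.allclose(y, a - b)

True in exactly the same technically-correct, explanatorily-useless sense as the article's claim.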