I don't think it's entirely inaccurate to call neural networks "rewriting their own code". For a long time people experimented with algorithms that could actually rewrite programs written in human programming languages, like genetic programming. It was mostly a failure, because most mutations to most programs simply break them.
Neural networks fix this problem by being a sort of 'continuous programming language', where any small change to the net results in a small change to the output. Because of that, we have algorithms that can find good changes to make much more easily than with code. Neural nets are theoretically Turing complete under some circumstances, and even when they aren't fully recurrent, as in this case, they can still learn very sophisticated functions. Theoretically a neural net can learn anything a fixed number of logic gates can compute, which is still pretty powerful.
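To make the 'continuous' point concrete, here's a toy sketch in plain NumPy (the network, data, and step size are all made up for illustration): nudging the weights only nudges the output, and that smoothness is exactly what lets gradient descent find good changes instead of blindly mutating source text.

    import numpy as np

    rng = np.random.default_rng(0)

    def net(w, x):
        # Tiny one-hidden-layer net; w is a flat vector of 12 parameters.
        W1, b1, W2 = w[:6].reshape(2, 3), w[6:9], w[9:12].reshape(3, 1)
        return np.tanh(x @ W1 + b1) @ W2

    x = rng.normal(size=(8, 2))    # toy inputs
    y = x[:, :1] * x[:, 1:]        # toy targets
    w = rng.normal(size=12) * 0.5

    def loss(w):
        return float(np.mean((net(w, x) - y) ** 2))

    # A small change to the weights gives a small change to the output,
    # unlike flipping a token in source code, which usually breaks it.
    eps = 1e-4
    w2 = w + eps * rng.normal(size=12)
    print("output shift:", np.abs(net(w2, x) - net(w, x)).max())

    # That smoothness is what optimization exploits: estimate the
    # gradient by finite differences and step downhill.
    grad = np.array([(loss(w + eps * e) - loss(w - eps * e)) / (2 * eps)
                     for e in np.eye(12)])
    print("loss before:", loss(w), "after one step:", loss(w - 0.1 * grad))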
I don't think the parent comment meant by computers that can "re-write their own code" that the programs themselves are self-modifying. But the computer itself is self-modifying: one program can change another on that computer, so the computer has modified itself.
In both the examples you cite (genetic algorithms and neural networks), the optimization process is what modifies the parameters. The network doesn't modify its own parameters.
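To make that separation concrete, here's a minimal sketch (everything in it is illustrative, not anyone's actual code): the network is a pure function of its weights and never writes to them; the rewriting happens in a separate optimizer loop.

    import numpy as np

    def forward(weights, x):
        # The network only reads its weights; it never writes them.
        return np.tanh(x @ weights)

    def sgd_step(weights, grad, lr=0.1):
        # The modification happens here, in the optimizer, not in forward().
        return weights - lr * grad

    weights = np.zeros((2, 1))
    x = np.array([[1.0, -1.0]])
    target = np.array([[0.5]])

    for _ in range(100):
        pred = forward(weights, x)
        # Hand-derived gradient of squared error through tanh.
        grad = x.T @ ((pred - target) * (1 - pred ** 2))
        weights = sgd_step(weights, grad)

    print(forward(weights, x))  # approaches the target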
It's like calling pencil and paper self-modifying because somebody could use a pencil to write down plans to build a better pencil.
As I said, the parent comment didn't say the network was self-modifying. The exact quote was "this is about teaching computers to re-write their own code", which is roughly correct. If you see the weights of a neural network as a sort of code, then the computer is rewriting its own code.
And if you see the universe as "a sort of code", the network is rewiring the universe!
How's that for a bombastic editorial title?
(To be clear, I'm 100% agreeing with gcr here. Parameter optimization is a tremendously powerful technique, and quite possibly the "true currency" of our universe, but there are a few abstraction layers missing between that and a claim of "self-modifying AI".)
The universe isn't code, and only gcr used the term "self modifying". There is absolutely nothing bombastic implied by the statement that a program can rewrite code.