I don’t see how the last sentence (the brain must model the environment) follows from the admission on the previous page that a regulator can skip the model by taking on unnecessary complexity. It seems there is a built-in assumption that the brain has no unnecessary complexity, if I am following correctly. I wouldn’t be so sure about that! (Although, I should add, the idea that the brain models its environment sounds intuitively true beyond question... I’m just trying to follow the arguments put forth in the paper itself.)
The fact that PID loops work for so many systems is a consequence of the dynamics of those systems: the integral of velocity is position, for example, and the derivative of velocity is acceleration. A vast array of phenomena can be modeled as second-order linear ODEs.
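To make that concrete, here's a minimal sketch of a PID loop driving exactly such a plant: a unit mass with damping, i.e. a second-order linear ODE. The gains, damping constant, and time step are invented for illustration, not tuned for any real system:

```python
# Minimal sketch: a textbook PID loop regulating a second-order linear
# plant (unit mass with damping). All constants are illustrative.

dt = 0.01                      # time step [s]
kp, ki, kd = 8.0, 2.0, 3.0     # illustrative PID gains
setpoint = 1.0                 # target position

x, v = 0.0, 0.0                # plant state: position, velocity
integral, prev_err = 0.0, setpoint - x

for _ in range(2000):
    err = setpoint - x
    integral += err * dt              # running integral of error
    deriv = (err - prev_err) / dt     # finite-difference derivative
    u = kp * err + ki * integral + kd * deriv
    prev_err = err

    # Second-order plant: x'' = u - c * x' (damping c = 0.5),
    # integrated with explicit Euler for simplicity.
    a = u - 0.5 * v
    v += a * dt
    x += v * dt

print(f"final position: {x:.3f} (setpoint {setpoint})")
```

Note that the controller never holds an explicit model of the plant; the gains just happen to stabilize this family of dynamics, which is the point above.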
As a bad physicist ("physics is the art of approximation", as one of my professors put it during my studies): indeed, you only have to model the relevant aspect(s) of the system, and the resulting model may be completely unrecognizable as the system itself.
I guess it's somewhat more true in programming, because if you have only ten lines, can you even call it modeling?
That said, I like to say that the best kind of code teaches you something new about the problem it's solving: some invariant, some widely usable simplification, etc. Still not the same as modeling the system as such, though.
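For what it's worth, the "ten lines" point above can be made concrete with a bang-bang thermostat: the only "model" it carries is the sign convention that the heater raises temperature. The room dynamics below are invented for the demo:

```python
# A bang-bang thermostat that regulates temperature knowing nothing
# about the room beyond "heater on => temperature rises".
# Room constants are invented for the demo.

def thermostat(temp, target):
    return temp < target        # the entire "model": heat when below target

temp, target, outside = 15.0, 21.0, 5.0
for _ in range(5000):
    heating = thermostat(temp, target)
    # Invented room dynamics: Newtonian cooling plus heater input.
    temp += 0.01 * ((outside - temp) * 0.1 + (2.0 if heating else 0.0))

print(f"temperature: {temp:.2f} (target {target})")
```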
Yes. That sentence is not true in general unless the context is specified.
Either the authors made a rudimentary error or they specified the context. If you read on, they define the problem and discuss error-controlled and cause-controlled regulation.
I tacitly assumed that "model-free" really meant "the model has been marginalized out", i.e. integrated over rather than treated as a function to be estimated/learned through regression. Is that not the case?
I don't think anyone doubts that being able to regulate a Turing machine would require a system that is Turing complete. But this seems very far from saying that regulating any system requires a complete (isomorphic) model of it. Or did you mean something else?
A surprisingly large number of things that we normally don't think of as computers are Turing-complete. It's a very low bar of complexity. And regulating requires modeling, which requires simulation. That's roughly what Turing's halting result says: there is no general way to know whether a program halts short of running it.
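For readers who haven't seen it, the diagonal argument behind that result sketches nicely in Python. `halts` here is the hypothetical decider the argument rules out, not a real function:

```python
# Sketch of Turing's diagonal argument. `halts` is a hypothetical
# decider (it does not and cannot exist); the point is the contradiction.

def halts(program, argument):
    """Hypothetical: returns True iff program(argument) halts."""
    raise NotImplementedError  # no such total decider can exist

def diagonal(program):
    # Do the opposite of whatever `halts` predicts about program(program).
    if halts(program, program):
        while True:            # predicted to halt -> loop forever
            pass
    # predicted to loop -> halt immediately

# Feeding `diagonal` to itself: diagonal(diagonal) halts iff
# halts(diagonal, diagonal) returns False, contradicting the decider.
```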
A surprisingly large number of things can be modeled as Hamiltonian systems -- in fact, all things -- but this does not imply that Liouville's theorem can be usefully applied universally (or even frequently). The reason is that the subset of variables we actually have access to and care about is not Hamiltonian.
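A quick numerical illustration of that point (all parameters invented): track the area of a small triangle of initial conditions in the (q, p) plane. For the frictionless oscillator, which is Hamiltonian, Liouville says the area is conserved; for the damped oscillator we'd actually observe, it shrinks:

```python
# Phase-space area of a small triangle of initial conditions.
# Frictionless oscillator (Hamiltonian): area conserved, per Liouville.
# Damped oscillator (what we observe): area contracts. Parameters invented.

def evolve(points, damping, dt=0.01, steps=5000):
    out = list(points)
    for _ in range(steps):
        # Symplectic Euler for H = (p^2 + q^2)/2, plus optional damping:
        # p_new = p - dt*(q + damping*p), q_new = q + dt*p_new.
        out = [(q + dt * (p - dt * (q + damping * p)),
                p - dt * (q + damping * p)) for q, p in out]
    return out

def area(pts):
    (q0, p0), (q1, p1), (q2, p2) = pts
    return abs((q1 - q0) * (p2 - p0) - (q2 - q0) * (p1 - p0)) / 2

triangle = [(1.0, 0.0), (1.01, 0.0), (1.0, 0.01)]
print("initial area:", area(triangle))
print("Hamiltonian :", area(evolve(triangle, damping=0.0)))  # ~conserved
print("damped      :", area(evolve(triangle, damping=0.2)))  # shrinks
```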
Likewise, observing that some microscopic piece of a system has operations that can be mapped onto a Turing machine does not mean that the output of that Turing machine controls the variables we care about.
Additionally, we prove constraints on the outputs of particular software (executed on Turing machines) all the time. Noting that some piece of a system is isomorphic to a Turing machine does not mean it will actually be fed arbitrary instructions.
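As a trivial illustration of that last point: here's a controller output whose range we can prove by inspecting one line, even though the host language is Turing-complete. The function and gain are made up for the example:

```python
# Trivial provable output constraint: whatever the input, the actuator
# command is clamped to [0.0, 1.0]. The language is Turing-complete, but
# this particular program's output range follows from the final line.

def actuator_command(error: float, gain: float = 4.0) -> float:
    raw = gain * error
    return min(1.0, max(0.0, raw))   # invariant: 0.0 <= result <= 1.0

assert 0.0 <= actuator_command(1e9) <= 1.0
assert 0.0 <= actuator_command(-1e9) <= 1.0
```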