Imagine you have a 2-input NAND gate that, for some reason, is implemented with 1000 transistors (perhaps for redundancy in case one of the transistors gets hit by a cosmic ray, or perhaps for other reasons). The gate still behaves the same, so for all an external observer can measure, it is a NAND gate, which is a simple device. Internal complexity does not always mean external (observable) complexity.
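A toy sketch of what I mean, in Python (the function names and the triple-vote scheme are invented purely for illustration, not how a real gate is built): one plain NAND and one "redundant" NAND that masks an internal fault by majority vote, yet both present the same truth table to the outside.

    def nand_simple(a: int, b: int) -> int:
        return 0 if (a and b) else 1

    def nand_redundant(a: int, b: int) -> int:
        # Pretend each vote comes from an independent transistor-level copy;
        # a majority vote would mask a single internal fault (say, a cosmic-ray flip).
        votes = [0 if (a and b) else 1 for _ in range(3)]
        return 1 if sum(votes) >= 2 else 0

    # To an observer probing only inputs and outputs, the two are indistinguishable.
    for a in (0, 1):
        for b in (0, 1):
            assert nand_simple(a, b) == nand_redundant(a, b)

The internal difference only shows up if you open the box (or measure power, area, timing, etc.), not at the logical interface.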
This is exactly where complexity hides. Simplicity of models relies on abstractions, which in the real world are invariably leaky. The complexity of making a robust NAND gate is very much observable at some level, and only goes away once you ignore the messy details. The more we look, the more this seems to hold for pretty much everything in our observable universe, from galaxies to quarks. The more you dig, the more worms you find. There are thousands of sub-fields of molecular biology that try to understand how a single cell actually works, and we are still nowhere near done. Of course we will always ignore what we can to make workable human models that we can actually reason about.
I would argue that it does matter for the brain. The many variations on the many different receptor types give neural circuits a great deal of adaptability to a great number of edge cases. But it also means there is a lot of room for maladaptation, as with some presentations of mental and non-mental illnesses. Neural circuits can "remember" firing patterns through some of these varying adaptations, and not all circuit memories have the same function or the same effect.
The parent comment about varying transistor combinations was not quite correct in my opinion, as these variations in receptor makeup DO change how the neuron and its circuits respond to stimuli.
This makes sense to me. It's like we're peering into a portion of the main logic in a function with one frozen global state, ignoring the idea that there are a zillion global variables that can alter that logic.
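Roughly this, as a toy Python sketch (the variable and function names are made up just to make the analogy concrete): the part we study looks well-behaved while the background globals are frozen, but the same code path gives a different answer once they drift.

    receptor_gain = 1.0      # one of the "zillion" globals we usually freeze
    modulator_level = 0.0    # another; in the real system these shift constantly

    def circuit_response(stimulus: float) -> float:
        # The bit of "main logic" we peer into and treat as the whole story.
        return receptor_gain * stimulus + modulator_level

    print(circuit_response(1.0))   # 1.0 with the frozen defaults

    # Same logic, different background state, different behaviour.
    receptor_gain = 0.3
    modulator_level = 0.5
    print(circuit_response(1.0))   # 0.8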
Needless complexity has costs associated with building it and maintaining/running it, so I'd expect that in the majority of cases it would be selected against strongly enough to disappear over time. Which implies the majority of complex systems are complex for a reason: if a cheaper, less complicated equivalent were equally good, it would win out.
Biological matter can't exactly opt out of being made of jiggly proteins immersed in water. And nerves can't opt out of the million things a cell needs to do to maintain itself. That's the kind of thing that adds immense complexity whether it's useful or not.
> Internal complexity does not always mean external (observable) complexity.
Yet you mention observable reasons at the beginning, before abstracting it right past spherical cows on frictionless planes to a purely mathematical concept.
Especially with attacks like row hammer, one could argue that redundancy, or the lack thereof, has a significant observable impact on how modern systems behave.