Perhaps a poor choice of phrase. My feeling is that moving complexity out of methods, even if some of it reappears in the interrelationships between methods, results in a net gain in understandability. Perhaps "asymmetric" would have been a better word?
It codifies the relationships between the parts, but it also adds ceremony (the call, the method definition, the arguments). Whether that's a net gain depends on the case. The key is probably whether the extracted piece feels like a conceptually distinct component - a theory or abstraction you can build on. That's probably true even if it isn't reused (though it typically coincides with reuse).
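To make the trade-off concrete, here's a toy sketch (the names and data shape are invented for illustration): the extracted version names the abstraction, at the cost of a definition, a call, and arguments.

```python
# Inline: the logic sits in place; the reader parses it where it's used.
def report_inline(orders):
    total = 0
    for o in orders:
        total += o["price"] * o["qty"]  # the complexity lives right here
    return f"Total: {total}"

# Extracted: the relationship is codified as a named component,
# at the cost of ceremony (definition, call, arguments).
def order_total(orders):
    return sum(o["price"] * o["qty"] for o in orders)

def report_extracted(orders):
    return f"Total: {order_total(orders)}"
```

Whether `order_total` earns its keep depends on whether it's a concept you'd build on, not just a paragraph break.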
There was some "non-linearity": the interactions between parts grow (roughly) with the square of the number of parts - 4 parts have 16 directed interactions. Combinatorial explosion is a more accurate measure. If you can separate the parts into modules that interact only with the caller, then the complexity grows linearly. But if many methods genuinely need to interact, then it's as if you wrapped all the cables up in one tie and forced them all through the "main" method - not actually an improvement. However, it's rare for everything to interact with everything, and even then it may be clearer to codify the interactions somehow.
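The counting argument can be sketched in a few lines (assumption: self-interactions are counted, which is what makes 4 parts give 16 rather than 12):

```python
def fully_connected(n):
    # every part may interact with every part (including itself):
    # n * n directed interactions
    return n * n

def modular(n):
    # each part interacts only with the caller: one edge each way,
    # so the count grows linearly with n
    return 2 * n

for n in (2, 4, 8, 16):
    print(n, fully_connected(n), modular(n))
```

At 4 parts the gap is 16 vs 8; at 16 parts it's 256 vs 32 - the quadratic term is what makes unmodularized code stop fitting in your head.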
I also wanted to say that while proving correctness is important, the organization that is best for proving is not always the one that is best for human clarity (though sometimes they coincide). It depends on your goal. Proving absolute correctness isn't important for most programs (all software has bugs; bugfixes are routine), so sacrificing human clarity for provability is usually not the optimal trade-off.