If only there were some scientific discipline that studied complexity and proved that problems possess some essential, minimal complexity below which no implementation can go...
If anyone proved such a result, I'm sure they would be awarded a prize of some kind.
Well, there are many complexities. People here are talking about the programming complexity (of which one measure is cyclomatic complexity) of a textual program vs. the time/space complexity of the same program. There is also the weakly related concept of the Kolmogorov complexity of a string.
It's obvious that the time/space complexity of a program can stay fixed while you arbitrarily raise its programming complexity: just keep adding branches that are rarely taken, which won't affect the asymptotic time complexity at all.
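A minimal sketch of that point (hypothetical Python; the function names and the particular "rare" values are invented for illustration):

```python
def total_simple(xs):
    # One loop, low cyclomatic complexity, O(n) time.
    s = 0
    for x in xs:
        s += x
    return s

def total_branchy(xs):
    # Behaviorally identical and still O(n), but every extra
    # rarely-taken branch raises the cyclomatic complexity
    # without changing the asymptotics.
    s = 0
    for x in xs:
        if x == 123456789:       # rarely taken
            s += x
        elif x == -123456789:    # rarely taken
            s += x
        else:
            s += x
    return s
```

Both functions compute the same sum in linear time; only the branch count, and hence the cyclomatic complexity, differs.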
And sure, you can talk about the Kolmogorov complexity of a string of computer code, but the minimal string representation of that code is unlikely to be one a programmer would describe as simple. Even minimizing the program text while preserving its behavior is usually undesirable.
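For instance (a toy Python illustration; both definitions are invented for the example):

```python
# A readable implementation...
def mean(values):
    """Arithmetic mean of a non-empty sequence."""
    return sum(values) / len(values)

# ...and a behaviorally equivalent, shorter string. Closer to a
# minimal description of the same behavior, yet no programmer
# would call it "simpler".
m=lambda v:sum(v)/len(v)
```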
Pick any of those. Given a problem, there is a minimal complexity to the programs that solve it. You can always raise the complexity, but not reduce it.
I'm confused by your response to ninjapenguin[0]. Were you agreeing with him, disagreeing, or something else? I interpreted your response as disagreement. Your original "If only x..." style of phrasing is not easy to understand, and I'm having a hard time placing your subsequent comments in the context of that original response.
I was disagreeing with the statement that "Comparing [complexity] to something as fundamental as energy is pure bollocks." There is, in fact, a discipline that studies complexity -- computer science, and, in particular, the field of complexity theory. Complexity was, in fact, found to be fundamental and "irreducible". Two people, Hartmanis and Stearns, did, in fact, discover that through a comparison to physics [1] in 1965, and for that discovery -- which led to the creation of complexity theory -- they won the Turing Award in 1993.
Not really. You can describe the complexity of "code understanding" as the computational complexity required to answer a question about the code -- say, the time it takes to find the answer, or the length of the proof; both are common computational complexity measures. The proof that bubble sort sorts is, indeed, shorter than the corresponding proof for Timsort. You could even talk about the simplest sorting algorithm as the one with the shortest correctness proof. There is, indeed, a minimal proof complexity for sorting, just as with any other kind of computational complexity.
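A rough sketch of why bubble sort's correctness argument is short (hypothetical Python; the comments summarize the standard loop-invariant argument, not a formal proof):

```python
def bubble_sort(xs):
    """Sort xs in place and return it.

    The correctness argument is a few lines: after pass i, the
    last i elements are the i largest, in sorted order (a simple
    loop invariant), so after n passes the whole list is sorted.
    A proof for Timsort must additionally reason about run
    detection, merging, and its stack invariants.
    """
    n = len(xs)
    for i in range(n):
        for j in range(n - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs
```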
No, they're completely different uses of the term.
A lot of real-world CRUD code, for instance, does nothing of computational interest whatsoever, but it isn't simple, because it has to cope with real-world business-logic complexity. (And possibly a lot of complexity beyond the necessary complexity, due to poor design, changing goals slowly messing up the code, etc.)
> You can describe the complexity of "code understanding" as the computational complexity required to answer a question about it, say the time it takes to find it or the length of the proof
I don't follow. How are proofs relevant? A programmer having to wade through poorly-named functions and a lack of documentation is not well modelled by computational complexity theory.
> You could even talk about the simplest sorting algorithm as the one with the shortest proof.
Who'd care? That bears little relation to either the complexity theoretic properties of the algorithm/problem, or to complexity in the informal sense.
They are one way of measuring how hard or easy it is to know something: I define "simple" as "easy to answer questions about", and "complex" as the opposite.