There is a distinction between necessary complexity and "busy beaver" artificial complexity. And it turns out that there's a pretty compelling mapping between energy, information, and fundamental complexity: there's a reason both thermodynamics and information theory use a concept of 'entropy'.
That said, there's no denying that there's probably a lot of unnecessary complexity in most software.
I'm sorry, but without some citations this is still handwavy bollocks.
Energy does have a conservation law. Entropy does not. Nor does information, complexity, etc.
I majored in physics; I'm familiar with these terms.
The grandparent post gave no indication they were talking about essential complexity when they made their bold assertion that it's conserved like energy.
I'm also not aware of any formal or precise way of measuring "essential complexity". Not in the way we do for energy or entropy.
If only there were some scientific discipline that studied complexity and proved that problems possess some essential, minimal, complexity below which no implementation can go...
If anyone proved such a result, I'm sure they would be awarded a prize of some kind.
Well, there are many complexities. People here are talking about the Programming Complexity (of which one measure is Cyclomatic Complexity) of a textual program vs. the Time/Space complexity of the same program. There is the weakly-related concept of the Kolmogorov complexity of a string.
It's obvious that the time/space complexity of a program can stay fixed while you arbitrarily raise its programming complexity, for example by adding branches that are rarely taken; that won't affect the asymptotic time complexity at all.
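A minimal sketch of that point (function names are mine, invented for illustration): both versions below scan the list once, so both are O(n) time and O(1) space, but the second has a much higher cyclomatic complexity because of branches that change nothing.

```python
def total(xs):
    # Straight-line version: a single loop, one decision point.
    acc = 0
    for x in xs:
        acc += x
    return acc

def total_padded(xs):
    # Same O(n) behaviour, padded with rarely-taken branches that
    # inflate the cyclomatic complexity for no benefit.
    acc = 0
    for x in xs:
        if x == 0:
            acc += 0          # pointless special case
        elif x < 0:
            acc += x          # identical to the general case
        elif x > 10**9:
            acc += x          # "big number" path, same behaviour again
        else:
            acc += x
    return acc
```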
And sure, you can talk about the Kolmogorov complexity of a string of computer code, but the minimal string representation of that code is unlikely to be one a programmer would describe as simple. Even minifying a program to the shortest string with the same behaviour is usually undesirable.
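For instance (a sketch of my own, not from the thread), a golfed FizzBuzz is close to minimal as a string, yet harder to answer questions about than the longer version:

```python
# Near-minimal as a string, in the Kolmogorov sense:
for i in range(1, 101): print("Fizz" * (i % 3 < 1) + "Buzz" * (i % 5 < 1) or i)

# Longer as a string, but what most programmers would call "simple":
for i in range(1, 101):
    if i % 15 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)
```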
Pick any of those. Given a problem, there is a minimal complexity to the programs that solve it. You can always raise the complexity, but not reduce it.
I'm confused by your response to ninjapenguin[0]. Were you agreeing with him or disagreeing or something else? I interpreted your response to be disagreement. I feel that your original style of phrasing "If only x etc. etc." is not easy to understand. I'm having a hard time placing your subsequent comments in context of that original response.
I was disagreeing with the statement that "Comparing [complexity] to something as fundamental as energy is pure bollocks." There is, in fact, a discipline that studies complexity -- computer science, and, in particular the field of complexity theory. Complexity was, in fact, found to be fundamental and "irreducible". Two people, Hartmanis and Stearns, did, in fact, discover that through a comparison to physics [1] in 1965, and for that discovery -- that led to the creation of complexity theory -- they won the Turing Award in 1993.
Not really. You can describe the complexity of "code understanding" as the computational complexity required to answer a question about the code, say, the time it takes to find the answer or the length of the proof -- both are common computational complexity measures. The proof that bubble sort sorts is, indeed, shorter than the one for Timsort. You could even talk about the simplest sorting algorithm as the one with the shortest correctness proof. There is, indeed, a minimal proof complexity for sorting, just as with any other kind of computational complexity.
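To make that concrete, here's a sketch (my example, not the commenter's) of why bubble sort's correctness argument is short: the whole proof fits in a loop invariant.

```python
def bubble_sort(xs):
    """In-place bubble sort. The correctness argument is two lines:
    after pass k, the last k positions hold the k largest elements in
    sorted order (the invariant); after n passes the list is sorted.
    Timsort's run detection, merging, and galloping need a far longer
    proof, even though Timsort is much faster in practice."""
    n = len(xs)
    for k in range(n):
        # Invariant: xs[n-k:] holds the k largest elements, sorted.
        for i in range(n - k - 1):
            if xs[i] > xs[i + 1]:
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs
```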
No, they're completely different uses of the term.
A lot of real-world CRUD code, for instance, does nothing of computational interest whatsoever, but it isn't simple, because it has to cope with real-world business-logic complexity. (And possibly a lot of complexity beyond the necessary complexity, due to poor design, changing goals slowly messing up the code, etc.)
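Something like this hypothetical handler (names and rules invented for illustration): computationally it's a few comparisons and an assignment, but every branch is business logic you can't delete.

```python
from dataclasses import dataclass

@dataclass
class Order:
    status: str
    archived: bool = False

@dataclass
class User:
    can_refund: bool = False

def update_order(order: Order, user: User, new_status: str) -> None:
    # No interesting algorithm here, just irreducible business rules.
    if order.archived:
        raise ValueError("archived orders are read-only")
    if new_status == "cancelled" and order.status == "shipped":
        raise ValueError("shipped orders need a return, not a cancellation")
    if new_status == "refunded" and not user.can_refund:
        raise PermissionError("only billing staff may issue refunds")
    order.status = new_status
```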
> You can describe the complexity of "code understanding" as the computational complexity required to answer a question about it, say the time it takes to find it or the length of the proof
I don't follow. How are proofs relevant? A programmer having to wade through poorly-named functions and a lack of documentation is not well modelled by computational complexity theory.
> You could even talk about the simplest sorting algorithm as the one with the shortest proof.
Who'd care? That bears little relation to either the complexity theoretic properties of the algorithm/problem, or to complexity in the informal sense.
Proofs are one way of measuring how hard or easy it is to know something, and I define "simple" as easy to answer questions about, and "complex" as the opposite.
Introduce enterprise into any system and complexity goes through the roof ASAP: suddenly it's not only your machine but a TEAM of people, history, dashboards with metrics, logs, automation, security, etc.
This one doesn't even touch most of that, only the dev side and CI.
I think the analogy to energy is pretty good. Like the FizzBuzz thing, you can also build a machine that has a lot of energy but doesn't accomplish anything useful.
The conservation of energy, specifically, is a misleading analogy. If you want an energy-related analogy, I think you will find a better one in entropy (especially given its close relationship to information.)
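The connection is concrete: Shannon entropy measures information content in the same functional form as thermodynamic (Gibbs) entropy. A small sketch:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per symbol under the string's empirical symbol distribution:
    H = sum(p * log2(1/p)), the same functional form as Gibbs entropy."""
    n = len(s)
    return sum(c / n * math.log2(n / c) for c in Counter(s).values())

print(shannon_entropy("aaaaaaaa"))  # 0.0 -- fully predictable, no information
print(shannon_entropy("abcdefgh"))  # 3.0 -- maximal for 8 distinct symbols
```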
Here's an amusing and simple example of manufactured complexity: https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...