Why software ends up complex (alexgaynor.net)
188 points by kiyanwang on Dec 13, 2020 | 168 comments



This is one vector for complexity, to be sure. Saying "no" to a feature that is unnecessary, foists a lot of complexity on a system, or has a low power to weight ratio is one of the best skills a senior developer can develop.

One of my favorite real world examples is method overloading in Java. It's not a particularly useful feature (especially given alternative, less complicated features like default parameter values), interacts poorly with other language features (e.g. varargs) and ends up making all sorts of things far more complex than necessary: bytecode method invocation now needs to encode the entire type signature of a method, method resolution during compilation requires complex scoring, etc. The JVM language I worked on probably had about 10% of its total complexity dedicated to dealing with this "feature" of dubious value to end users.
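
For a concrete sense of the "scoring" involved, here is a rough sketch (class and method names are my own) of two resolution rules that routinely surprise people, and of why each call site's bytecode has to carry the full type signature it resolved against:

    class OverloadDemo {
        static void f(long x)    { System.out.println("f(long)"); }
        static void f(Integer x) { System.out.println("f(Integer)"); }

        static void g(Object x)  { System.out.println("g(Object)"); }
        static void g(String x)  { System.out.println("g(String)"); }

        public static void main(String[] args) {
            f(1);    // prints "f(long)": widening int -> long wins over autoboxing int -> Integer
            g(null); // prints "g(String)": the most specific applicable overload is chosen
            // In the class file, each call records the exact descriptor it resolved to,
            // e.g. f:(J)V versus f:(Ljava/lang/Integer;)V, not just the method name.
        }
    }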

Another vector, more dangerous for senior developers, is thinking that abstraction will necessarily work when dealing with complexity. I have seen OSGi projects achieve negative complexity savings, while chewing up decades of senior man-years, for example.


Well, what I mostly experienced in my years in the field is that developers, whether senior or not, feel obliged to create abstract solutions.

Somehow people feel that if they don't build a generic solution for the problem at hand, they have failed.

In reality the opposite is often true: when people try to make a generic solution, they fail to make something simple, quick and easy for others to understand. Let alone the idea that abstraction will make the system flexible and easier to change in the future: they don't know the future, and there always comes a plot twist which does not fit into the "perfect architecture". So I agree with the idea that abstraction is not always the best response to a complex system. Sometimes copy, paste and change is the better approach.


Kevlin Henney makes interesting points on this. We often assume that abstractions make things harder to understand, in exchange for making the architecture more flexible. When inserting an abstraction, it is supposed to do both, if at all possible. Abstracting should not only help the architecture, but also the understanding of the code itself. If it doesn't do the latter, then you should immediately question whether it is necessary, or if a better abstraction exists.

The take-away I took from it is that as developers, we love to solve problems using technical solutions. Sometimes, the real problem is one of narration. As we evolve our languages, better technical abstractions become available. But that's not going to prevent 'enterprisey' code from making things look and sound more difficult. Just look at any other field where abstractions aren't limited by technicalities: the same overcomplicated mess forms. Bad narrators narrate poorly, even when they are not limited.


I think we forget that while “engineering” is about maximizing the gain for a given investment of resources, it can be stated another way as building the least amount of bridge that (reliably) satisfies the requirements.

Abstraction can be used to evoke category theoretical things, but more often it’s used to avoid making a decision. It’s fear based. It’s overbuilding the bridge to avoid problems we don’t know or understand. And that is not Engineering.

I find sometimes that it helps me to think of it as a fighter or martial artist might. It is only necessary to not be where the strike is, when the strike happens. Anything more might take you farther from your goal. Cut off your options.

Or a horticulturist: here is a list of things this plant requires, and they cannot all happen at once, so we will do these two now, and assuming plans don’t change, we will do the rest next year. But plans always change, and sometimes for the better.

In Chess, in Go, in Jazz, in coaching gymnastics, even in gun safety, there are things that could happen. You have to be in the right spot in case they do, but you hope they don’t. And if they don’t, you still did the right thing. Just enough, but not too much.

What is hard for outsiders to see is all the preparation for actions that never happen. We talk about mindfulness as if it hasn’t been there all along, in every skilled trade, taking up the space that looks like waste. People learn about the preparation and follow through, instead of waiting. Waiting doesn’t look impressive.


Your analogy with Chess and Go is flawed though. In these games you try to predict the best opponent move responding to yours, the worst case basically, and then try to find the best response you have to it, and so on, until you cannot spend any more time on that line or reach your horizon. You are not "hoping things do not happen". If you did that, you would be a bad chess or go player. You make sure things do not happen.

So that analogy works against your point.


I disagree. Especially in teaching games, and everyone writing complex software is still learning.

In Go there are patterns that are probably safe, but that safety only comes if you know the counters. In a handicapped game, it’s not at all uncommon for white to probe live or mostly live groups to see if the student knows their sequences. You see the same in games between beginners.

Professional players don’t do this to each other. They can and will “come sideways” at a problem (aji) if it can still be turned into a different one, but they don’t probe when the outcome is clear. In a tournament it inflicts an opportunity cost on the eventual winner, and it is considered rude or petty. They concede when hope is lost.

They still invested the energy, but now it comes mostly from rote memorization.


And how does that contradict my point to always expect the best opponent move and think about the best thing to do in return, instead of simply hoping the worst will not happen? I think you are actually even supporting my point here.


I think the thing that comes with ~~seniority~~ experience is being better able to predict where abstraction is likely to be valuable by becoming: more familiar with and able to recognize common classes of problems; better able to seek/utilize domain knowledge to match (and anticipate) domain problems with engineering problem classes.

I’m self taught so the former has been more challenging than it might be if I’d gone through a rigorous CS program, but I’ve benefited from learning among peers who had that talent in spades. The latter talent is one I find unfortunately lacking in many engineers regardless of their experience.

I’m also coming from a perspective where I started frontend and moved full stack til I was basically backend, but I never lost touch with my instinct to put user intent front and center. When designing a system, it’s been indispensable for anticipating abstraction opportunities.

I’m not saying it’s a perfect recipe, I certainly get astronaut credits from time to time, but more often than not I have a good instinct for “this should be generalized” vs “this should be domain specific and direct” because I make a point to know where the domain has common patterns and I make a point to go learn the fundamentals if I haven’t already.


I agree that premature abstraction is bad. Except when using a mature off-the-shelf tool, e.g. Keycloak. Sometimes if you know that you need to implement a standard and are not willing to put in the effort for an in-house solution, that level of complexity just comes with the territory, and you can choose to only use a subset of the mature tool's functionality.

I also have a lot of experience starting with very lo-fi and manual scripting prototypes to validate user needs and run a process like release management or db admin, which would then need to be wrapped in some light abstractions to hide some of the messy details to share with non-maintainers.

Problem is, I've noticed that more junior developers tend to look at a complex prototype that hits all the use cases and see it as being complicated. Then they go shopping for some shiny toy that can only support a fraction of the necessary cases, and then I have to spend an inordinate amount of time explaining why it's not sufficient and that all the past work should be leveraged with a little bit of abstraction if they don't like the number of steps in the prototype.

So, not-generic can also end up failing from a team-dynamics perspective. Unless everyone can understand the complexity, somebody is going to come along and massively oversimplify the problem, which is a siren song. Cue the tech debt and rewrite circle of life.


Sure, over-abstraction is a problem. And sometimes duplication is better than dependency hell. But other times more abstraction is better.

In truth it's an optimisation problem, where both under- and over-abstracting, or choosing the wrong abstractions, lead to less optimal outcomes.

To get more optimal outcomes it helps to know what your optimisation targets are: less code, faster compilation, lower maintenance costs, performance, ease of code review, adapting quickly to market demands, passing legally required risk evaluations, or any number of others.

So understand your target, and choose your abstractions with your eyes open.

I’ve dealt with copy paste hell and inheritance hell. Better is the middle way.


I would like to be able to upvote this answer 10 times.

I often remember that old joke:

When asked to pass you the salt, 1% of developers will actually give it to you, 70% will build a machine to pass you a small object (with a XML configuration file to request the salt), and the rest will build a machine to generate machines that can pass any small object from voice command - the latter being bootstrapped by passing itself to other machines.

Also makes me remember the old saying

- junior programmers find complex solutions to simple problems

- senior programmers find simple solutions to simple problems, and complex solutions to complex problems

- great programmers find simple solutions to complex problems

To refocus on the original question, I often find the following misconceptions/traps in even senior programmers architecture:

1) a complex problem can be solved with a declarative form of the problem + a solving engine (i.e. a framework approach). People think that complexity can be hidden in the engine, while the simple declarative DSL/configuration that the user will input will keep things apparently simple.

End result:

The system becomes opaque to the user, who has no way to understand how things work.

The abstraction quickly leaks in the worst possible way: the configuration file soon requires 100 obscure parameters, and the DSL becomes a Turing-complete language.

2) We have to plan for future use cases, and abstract general concepts in the implementation.

End result:

The abstraction cost is not worth it. You are dealing with a complex implementation for no reason since the potential future use cases of the system are not implemented yet.

3) We should factor out as much code as possible to avoid duplication.

End result:

Overly factored code is very hard to read and follow. There is a sane threshold that should not be reached in the amount of factorization. Otherwise the system becomes so spaghetti that understanding a small part requires untangling dozens and dozens of 3 lines functions.

---

When I have to argue about these topics with other developers, I often make them remember the worst codebase they had to work on.

Most of the time, if you work on a codebase that is _too_ simplistic and you need to add a new feature to it, it's a breeze.

The hard part is when you have an already complex system and you need to make a new feature fit in there.

I'd rather work on a codebase that's too simple rather than too complex.


I like what you are saying here! My observations below.

1) When, gradually, most of your implementation is happening in a DSL / graph-based system, all of your best tools for debugging and optimizing become useless.

2) So often I've seen people make an 'engine' before they make anything that uses the engine, and in practice the design suffers from needless complexity and is difficult to use because of practical matters not considered or foreseen during the engine's creation. Usually much work has been spent tackling problems that are never encountered but that add needless complexity and difficulty in debugging. Please design with debugging having an equal seat at the table!

3) Overly factored code is almost indistinguishable from assembly language. - Galloway


> Another vector, more dangerous for senior developers, is thinking that abstraction will necessarily work when dealing with complexity.

I'm pretty good at fighting off features that add too much complexity, but the abstraction trap has gotten me more than once. Usually, a moderate amount of abstraction works great. I've even done well with some really clever abstractions.

Abstraction can be seductive, because it can have a big payoff in reducing complexity. So it's often hard to draw the line, particularly when working in a language with a type of abstraction I've not worked much with before.

Often the danger point comes when you understand how to use an abstraction competently, but you don't yet have the experience needed to be an expert at it.


Yes, but remember Sanchez's Law of Abstraction[0]: abstraction doesn't actually remove complexity, it just puts off having to deal with it.

This may be a price worth paying: transformations that actually reduce complexity are much easier to perform on an abstraction of a program than on the whole mess with all its gory details. It's just something to keep in mind.

[0] https://news.ycombinator.com/item?id=22601623


You're echoing my comment: abstraction can be very useful, but you have to take care.

Also, it's pointless to claim that transformations on an abstraction can reduce complexity, but abstractions themselves can not. Abstractions are required for that reduction in complexity.


As a Java end user I'm really glad that method overloading exists. The two largest libraries I ever built would have been huge messes without overloading. But I take your point that method overloading might be a net negative for the Java platform as a whole.


Yes, java would be a mess without overloading (particularly for telescoping args), but that's because it doesn't include other, simpler features that address the same problems. Namely:

- default parameter values

- union types

- named arguments

I would also throw in list and map literals, to do away with something like varargs.

All of these are much simpler, implementation-wise, than method overloading. None would require anywhere near the compiler or bytecode-level support that method overloading does. It just has a very low power-to-weight ratio, when other language features are considered. And, unfortunately, it makes implementing all those other features (useful on their own) extremely difficult.
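
To make the telescoping point concrete, here is a sketch of the overload chains Java pushes you toward in place of default parameter values (the class and the defaults are made up):

    class RetryPolicy {
        final int maxAttempts;
        final long delayMillis;
        final boolean jitter;

        // Every optional parameter needs one more overload that fills in a
        // default and delegates; a single signature with defaults would do.
        RetryPolicy()                { this(3); }
        RetryPolicy(int maxAttempts) { this(maxAttempts, 1_000L); }
        RetryPolicy(int maxAttempts, long delayMillis) { this(maxAttempts, delayMillis, true); }
        RetryPolicy(int maxAttempts, long delayMillis, boolean jitter) {
            this.maxAttempts = maxAttempts;
            this.delayMillis = delayMillis;
            this.jitter = jitter;
        }
    }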


I don't really see a significant difference between overloaded methods and a single method with a sum type (where, in the general case, the sum is over the parameter-list tuple types of the overloaded methods). One can interpret the former as syntactic sugar for the latter.
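
A rough sketch of that framing in modern Java (21+), with a sealed interface playing the role of the sum over parameter-list types (names are made up):

    sealed interface Lookup {
        record ById(long id) implements Lookup {}
        record ByName(String name) implements Lookup {}
    }

    class Repository {
        // One method over the sum type, instead of find(long) / find(String) overloads.
        String find(Lookup key) {
            return switch (key) {
                case Lookup.ById k   -> "row with id " + k.id();
                case Lookup.ByName k -> "row named " + k.name();
            };
        }
    }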


Static vs dynamic dispatch, to start with.

You need default values (and ideally named parameters) to really make them equivalent, but yes, both address a similar set of developer needs.

The java implementation, where it bleeds into bytecode and method scoring in insane ways, is particularly unfortunate.


But it’s not actually syntactic sugar in java. There is a whole concept of type-signature which is part of method look up.

Without overloading lookup would just be via name.


On the other hand, some languages get by completely without method overloading and use, for example, default parameter values instead.

I do not think that method overloading is necessarily required to avoid a mess.


Overloading is very valuable, especially in conjunction with generic code.

But unfortunately, it is way overused. It takes a long time to develop good judgement. I'm still working on it.


> This is one vector for complexity, to be sure. Saying "no" to a feature that is unnecessary, foists a lot of complexity on a system, or has a low power to weight ratio is one of the best skills a senior developer can develop.

I don't consider myself to be an exceptional developer, but this alone has launched my career much faster than it would have if I were purely technically competent. Ultimately, this is a sense of business understanding. The more senior/ranking you are at a company, the more important it is for you to have this tuned in well.

It can be really, really hard to say no at first, but over time the people asking you to build things adapt. Features become smaller, use cases become stronger, and teams generally operate happier. It's much better to build one really strong feature and fill in the gaps with small enhancements than it is to build everything. Eventually, you might build "everything", but you certainly don't need it now. If your product can't exist without "everything", you don't have a strong enough business proposition.

----

Note: No, doesn't mean literally "I'm/we're not building this". It can mean two things:

* Forcing a priority. This is the easiest way to say no, and people won't even notice it. Force a priority for your next sprint. Build a bunch of stuff in a sprint. Force a priority for another sprint. Almost inevitably, new features will be prioritized over the unimportant leftovers. On a 9 month project, I have a 3 month backlog of things that simply became less of a priority. We may build them, but there's a good chance nobody is missing them. Even if we build half of them, that still puts my team 1.5 months ahead. For a full year, that's almost like getting 2 additional months of build time.

* Suggesting an easier alternative. Designers have good hearts and intentions, but don't always know how technically difficult something will be. I'm very aggressive about proposing 80/20 features - aka, we can accomplish almost this in a much cheaper way. Do this on 1 to 3 features a sprint and suddenly, you're churning out noticeably more value.


> I have seen OSGi projects achieve negative complexity savings, while chewing up decades of senior man-years, for example.

I'm not surprised; that and a lot of the Java "culture" in general seems to revolve around creating solutions to problems which are either self-inflicted or don't actually exist in practice, ultimately being oriented towards extracting the most (personal) profit for a given task. In other words: why make simple solutions when more complex ones will allow developers to spend more time and thus be paid more for the solution? When questioned, they can always point to an answer laced with popular buzzwords like "maintainability", "reusability", "extensibility", etc.


I always found it surprising that Java implemented method overloading, but not operator overloading for arithmetic/logical operators. It's such a useful feature for a lot of types and really cleans up code, and the only real reason it's hard to do is because it relies on method overloading. But once you have that, why not just sugar "a + b" into "a.__add(b)" (or whatever).

You don't have to go all C++ insane with it and allow overloading of everything, but just being able to do arithmetic with non-primitive types would be very nice.
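
For instance, a small sketch of the ergonomic gap with BigDecimal (the variable names are made up); the sugared form could desugar to exactly these method calls:

    import java.math.BigDecimal;

    class MoneyDemo {
        public static void main(String[] args) {
            BigDecimal price    = new BigDecimal("19.99");
            BigDecimal quantity = new BigDecimal("3");
            BigDecimal shipping = new BigDecimal("4.50");

            // What Java requires today: every operation spelled out as a method call.
            BigDecimal total = price.multiply(quantity).add(shipping);
            System.out.println(total); // 64.47

            // With the sugar described above, "price * quantity + shipping"
            // would just compile down to the same calls.
        }
    }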


Operator overloading is deliberately more limited in D, with an eye towards discouraging its use for anything other than arithmetic.

A couple of operator overloading abominations in C++ are iostreams, and a regex DSL (where operators are re-purposed to kinda sorta look like regex expressions).


Yeah.

One in-between option I have kicked around w/ people is offering interfaces that allow you to implement operator overloading (e.g. Numeric). Then you wouldn't have one-off or cutesy operator overloading, but would rather need to meet some minimum for the feature. (Or at least feel bad that you have a lot of UnsupportedOperationExceptions)

Java had/has a ton of potential, but they kept/keep adding features that make no sense to me, making the language much more complex without some obvious day-to-day stuff like list literals, map literals, map access syntax, etc.

Oh well.


C++ did set a bad example with operator overloading.

    cout << "Hello World";
What the heck is that? Shift cout by "Hello World"?

It looks like they did that just because they can.


> It looks like they did that just because they can.

The thing is, people thought it was a good idea at the time, and had good reasons. It took maybe a decade before experience showed otherwise. Even today it takes a while to convince newer programmers that it is indeed a bad idea, and even then they don't really believe it.

An even more perniciously bad language feature is macros. I'm pretty sure that once I step down as D's BDFL or am forcibly removed D will get macros :-/


Does this actually confuse anyone? There are plenty of problems with iostreams, but I have never encountered an issue caused by the use of << for streams.


Optionals help, but what's really missing is union types for arguments.


And named arguments.

All three features, useful on their own, could be added to java at maybe 10% of the complexity of method overloading. With method overloading, they are all exponentially more complicated.

It's crazy how many places method overloading ends up rearing its ugly head if you are dealing with the JVM.


I wish all the args always just came as one structured object with some convenient syntax for accessing destructured parts. That object can have named parts as well as unions, optional, collections, and combinations of them.


What is method scoring? I’ve never heard that term in VM/compilers and my Google-fu is failing me.


I imagine it refers to the comparison of argument types to find the best match among overloaded methods with the same arity. e.g. when the machine has got "void foo(java.lang.Object)" and "void foo(java.lang.Number)" to choose from.
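
For instance, a minimal sketch of that comparison (names made up), where the compiler has to pick the most specific applicable overload:

    class ScoringDemo {
        static void foo(Object o) { System.out.println("foo(Object)"); }
        static void foo(Number n) { System.out.println("foo(Number)"); }

        public static void main(String[] args) {
            foo(42);          // "foo(Number)": int boxes to Integer, and Number beats Object as more specific
            foo("hello");     // "foo(Object)": only the Object overload applies to a String
            foo((Object) 42); // "foo(Object)": the cast steers resolution to the less specific overload
        }
    }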


Yep, exactly.

Gets to be lots of fun when you throw autoboxing and, especially, generics, into the mix.


Your example is funny. Default parameters are far more complicated than method overloading.

That said, I fully agree on the OSGi point. Makes me worried about a lot of the new features I hear are on the way. :(


> Default parameters are far more complicated than method overloading.

I've implemented both, I disagree. Unless you are talking about default arguments in the presence of method overloading, which is insane, and which I have also implemented.


The important thing with default parameters, if you have separate compilation, is that they are fixed in the callee and not in the caller. Otherwise, when the default value is changed, previously compiled callers will use a different value than declared by the callee. Compiling the default value into the caller is a problem in C++, and (IIRC) in Swift. In other words, (positional) default parameters should act like syntactic sugar for equivalent method overloads. And then actual overloads provide more flexibility for the callee implementation because it doesn’t have to represent the default case as a special in-band value.
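
A sketch of that equivalence in Java terms (names made up): the default lives in the body of the shorter overload, i.e. in the callee, so recompiling the callee changes the default for every already-compiled caller, which is exactly what caller-side substitution loses:

    class Greeter {
        // Equivalent of a hypothetical `greet(String name, String greeting = "Hello")`.
        String greet(String name) {
            return greet(name, "Hello"); // default chosen here, inside the callee
        }

        String greet(String name, String greeting) {
            return greeting + ", " + name;
        }
    }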


I can see that as an implementation detail, but it need not be one. There isn't a reason that a runtime couldn't use a flag value for absent args and let the callee code set the value up on their side.

I mean, the java runtime has all sorts of crazy stuff going on w/ a security manager, to almost zero benefit. Seems easy enough to make something like this work.


I'm talking about in use, mainly. Python, as an example, really screws it up by not giving you a way to know if you have the default or a value that equals it. Then, the more default values you pile in, the more this fun facet piles up.

So, if you skip that part of it, and you can while still getting far, it is easier. With it, though, it is stupidly complicated for little benefit.


Common Lisp has a pretty simple fix for this: when you declare a parameter with a default value, you can also declare a variable that will be true iff the parameter was actually passed.

The function definition looks like this:

  (defun foo (&optional (a 1 a-passed))
    (if a-passed
        (print a)
        (print "a not passed")))

  > (foo 10)
  10
  > (foo 1)
  1
  > (foo)
  a not passed
This is still relatively easy to implement, and very easy to use in my opinion. Of course, combining this with named arguments is even better, and that is supported as well (just replace &optional with &key, and then specify the name when calling the function - (foo :a 1)).


Why would you need to know if a value is the default or a value that equals the default? I can't think of a reason for that, as I've never faced this problem in my career.


It is niche, but it does come up. Usually with boolean parameters, but also on migrations. Makes it possible to change defaults in a controlled way rather easily, since you can very easily log all of the places you used the default.


Nah, you should be using another value if you care about that. For example in C# a nullable bool would be a fine choice (three values: true, false, null), or alternatively Option types.


This feels like classic blaming the user. :)

You can also have better behavior between multiple defaults if you know whether someone gave a value or not.

Basically, it is another tool compared to overloading. It should be no surprise that there can be preference on how to use between them.


one example is `namedtuple`'s `._replace`:

  >>> x = Foo(a=1, b=2, c=3)
  >>> x._replace(a=1000, c=3000)
  Foo(a=1000, b=2, c=3000)
in this case, you really need to know if the user passed a replacement or not.


> Python, as an example, really screws it up by not giving you a way to know if you have the default or a value that equals it.

It does: set a unique `object()` instance as the default. It will only compare equal to itself, but it's even clearer to read if you check for identity with `is`. You'll need to define it outside of the function definition.

It's not the most concise, but it works, and it's an established pattern used in many high quality open source projects.


Thanks. This is kind of reinforcing my "not actually simple to use" point, though. :)


Taking on the responsibility of pushing back hard on poorly conceived new features is one of the important hidden skills of being an effective software developer. Programmers who just do whatever they get told like marching ants end up shooting an organization in the foot long term.

You have to develop an opinion of the thing you're building/maintaining and what it should be able to do and not do. You can't count on project managers to do that.

The trick to doing this effectively is to find out the problem the feature is actually trying to solve and providing a better solution.

Usually the request is from end users of software and they have identified the problem (we need to do xyz) and prescribed a solution (put the ability to do xyz in a modal on this page.) But if you can look to what other software has done, do a UX review and find a way to add a feature in that solves their problem in a way that makes sense in the larger context of the software, they won't have a problem with it since it solves their problem and the codebase will take less of a hit.

Unfortunately, it's a lot easier to just add the modal without complaint.


> Programmers who just do whatever they get told like marching ants end up shooting an organization in the foot long term.

This. This is why you want senior devs too. You want people who can stand up to poorly conceived features. The most important job is to say “no”, or “how about X instead?”. I get furious when I see senior colleagues defend horrible design decisions with “it was what was specified”. Your job as a developer is to take the spec from the product owner and suggest one that actually fits the system in terms of maintainability, performance etc. Blindly implementing a specification is a horrible idea.


I'd like to add that it also depends on the culture of the team.

In team A, I challenged many ideas coming from the designer and the product owner. I was also pushing back on technical decisions coming from the CTO. They always listened: I would change their minds a couple of times, sometimes I realized I was wrong. Only a few times could we not resolve the issue, but in the end I felt that they heard me and considered my point of view.

In team B, I started out with the same mindset and was trying to challenge the validity of their product decisions. It was superficially acknowledged, but 98% of the time my input was basically ignored, and I felt like a party-pooper for pointing out contradictions and mistakes in their thinking. After months of trying to be listened to, I realized I was there to be a coding monkey; they didn't want my input, either on product or on technical problems. I learned to just nod and smile and cheer them on in their bad decisions; they felt great because they felt validated. It was also better for my happiness short term, as it's not a great feeling to feel that I'm bumming them out.

Long term, I started looking for new positions, and since then quit already. I still feel it's a shame as the "idea" had great potential.


I'm in the same situation as you were. Currently I'm the only senior engineer, as the others left shortly after I joined. They're filling their positions with juniors, because juniors are "enthusiastic".

In January I'm moving on to the "long term" and they will be left with a bunch of juniors. Good luck with that in the long run.


What advice would you give to someone who already feels like they're on a Team B after 3/4 months?


"In the beginning was the word". Language shapes reality. As software engineers, the second we accept that 'product owner' is a legitimate title, that second we lost agency to push back on poorly conceived features. Say it loud and clear: you also have a stake in the product.


> You can't count on project managers to do that.

This is one of my pet peeves when it comes to software development. I _really_ think that software development project managers ought to be able to spot the difference between a good architectural decision and a bad architectural decision, a good design decision and a bad design decision, a well implemented function and a badly implemented function. It sinks my heart, as a software development professional, having to work for project managers who, in many cases, would be hard pressed to explain what a byte is. It's just so wrong.

It's like working for a newspaper editor who does not know how to read or write. It does not mean that you cannot produce a newspaper, but it depends upon the workers stepping in and making all the strategic technical decisions behind the project manager's back. As an engineer you can live with it for some time, but eventually it ends up feeling fake, like a masquerade.

I'm much more in favor of hands on leadership types like Microsoft's Dave Cutler, with the technical skills to actually lead, and not just superficially 'manage'.


I'm not sure I would ever "work for" a project manager. Work with, sure, but not for.


> Taking on the responsibility of pushing back hard on poorly conceived new features is one of the important hidden skills of being an effective software developer.

example from my current project: 1. Inherit half finished large software with lots of features. 2. It contains bugs and is impossible to effectively maintain/develop for the allocated manpower. 3. Management still wants all features. 4. Be brave and don't work on anything except essentials until they're sorted out. Lie to management if you have to e.g. that you found serious bugs that must be fixed (which is kind of true but they wouldn't understand)


I've also seen the opposite. Leads who push the minimal features to the point that IMO the product would fail.

I don't know what good examples would be. maybe a word processor without support for bold and italics. Maybe a code compiler with no error messages. Maybe an email client with no threading.

Does a word processor need every feature of Word? No. But does it need more than notepad? Yes!

Basically you get one chance to make a good first impression. If the customers and press label your product a certain way, it will take a ton of effort to overcome the inertia of the anchor you've given them.


It's also faster to just add the modal. When you are asked to do xyz ASAP because it has been sold to a customer and should have been deployed a week ago, you don't feel the need to do a UX review.


Missing a deployment by a week (or more accurately, deploying on that long of a timeframe), and only doing UX reviews when you "feel the need" both speak to larger organizational problems that probably aren't going to get solved by having a senior dev push back on a poorly thought-out feature.


That's true.


What you describe is a lack of a designer/architect in the loop. Devs are supposed to implement what is requested, as requested. Designers and architects are supposed to figure out what to request of the devs, based on the customers' needs. And this indeed entails figuring out the customers' actual problem, rather than parroting their solutions (which they are almost always unqualified to design).


Your job description of devs is very limiting. Software developers should be close to the customer problem, work to understand it, and develop for it. When you silo them away this way and expect them to be order takers you add bloat to the team and inefficiencies.


If you're working with a small team on some customer-facing SaaS, maybe. But let's be honest, there are a lot of software developers out there who either don't have the inclination or skill to push back against their boss's boss's boss, or to question why a particular abstraction is requested over another. They've got to work too, and for the most part a senior or architect is just going to give them tasks and wait for them to be completed.

I've also noticed a distinct lack of "work[ing] to understand [the customer problem]" from the "I'm going to be a dev for 7.5 hours a day at work and not touch a computer or think about programming for a single moment more" subset of programmers.


I mostly run large teams who have handled both internal and consumer facing services. The software devs who don't have an inclination to understand the customer problem generally have not been highly effective and are not people I would use as role models or for setting expectations to others. I'll add to this that I do not believe there is a place for an "architect" role that is separate from a developer role. Good developers learn to think about architecture and do so with larger impact as they become more senior. "architects" who don't code tend to be "architecture astronauts" who do not help you ship value fast, at quality, to your customers.



>there are a lot of software developers out there who either don't have the inclination or skill to push back against their boss's boss's boss, or question why a particular abstraction is request over another.

I definitely agree with you. There are a lot of people who for various reasons (inherently lazy, not engaged with their current role, etc...) would not do this.

My problem is that I think this is a learned behavior in a lot of environments. A lot of places try to treat developers as code monkeys, where it's actually the brilliant designers and business analysts making the decisions and it's just the responsibility of these programmers to execute their vision.

This compartmentalization doesn't work well and it leads to a lot of really bad software. Here's my theory as to why.

If you look at all the various jobs involved in creating a piece of software: Design (UX/Screen Design), testing, project management, business analyst/requirements gathering, programmer. There's only one that's actually required to produce a functional piece of software: the programmer.

And as much as people want to pretend like this isn't the case, the programmer often has to do a little bit of every one of those other roles to effectively do their job. In this comment thread, we've been discussing how they have to put on their project management hat and help shape requirements, but it's deeper than that.

After all, what programmer hands off a feature for testing without themselves testing that it worked? UX might give you a wireframe, but there's almost always edge cases not accounted for. As for project management, if you're lucky you have a good one who actually makes the project easier but more often you will have a bad one who adds no value and only forwards emails to you and bugs you about timelines, which effectively means you're managing the work yourself.

And this isn't to say that the people in those other roles are necessarily bad at their job - but there's a context you get as the person actually building the thing that you can't get in these silo'd, standalone roles. You see issues that other people can't and you have an ownership over the end result that they don't. The programmer is the most important piece of it all. They tie it all together and the buck really stops with them. Programmers who don't take ownership ultimately lead to worse software.

If companies understood this on a more fundamental level and tried to select more for it in hiring than whiteboarding questions, they'd probably be more successful.

The parent comment tried to make a distinction between "architect" and "dev" - but this should start at the Jr. levels. Teach them at the beginning of their career.

To be clear, I do think those other roles are important and valid. I just think it's very very rare for a team to be staffed with people who are good in all of those areas and when someone is deficient, it basically falls on the programmer to make it up.


Within a problem space, there are two kinds of complexity: inherent complexity, and accidental complexity. This article is about accidental complexity.

There is, as far as I can tell, an enormous amount of accidental complexity in software. Far more than there is inherent complexity. From my personal experience, this largely arises when no time has been taken to fully understand the problem space, and the first potential solution is the one used.

In that case, the solution will be discovered to be partially deficient in some manner, and more code will simply be tacked on to address the newfound shortcomings. I'm not referring here to later expansion of the feature set or addressing corner cases, either. I'm referring to code that was not constructed to appropriately model the desired behavior, and thus instances of branching logic must be embedded within the system all over the place, or perhaps some class hierarchy is injected and reflection is used in an attempt to make the poor design decisions function.

I don't think adding features makes software more complex, unless those features are somehow non-systemic; that is, there is no way to add them into the existing representation of available behaviors. Perhaps an example would be a set of workflows a user can navigate, and adding a new workflow simply entails the construction of that workflow and making it available via the addition to some list. That would be a systemic feature. On the other hand if the entirety of the behaviors embedded within the workflow were instead presented as commands or buttons or various options that needed to be scattered throughout the application, that would be a non-systemic addition, and introduce accidental complexity.
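
As a rough sketch of the systemic case (all names hypothetical): adding a workflow is one new entry in a registry, rather than new branches scattered through the application:

    import java.util.List;

    record Workflow(String name, Runnable steps) {}

    class WorkflowRegistry {
        static final List<Workflow> ALL = List.of(
            new Workflow("signup",   () -> System.out.println("run signup steps")),
            new Workflow("checkout", () -> System.out.println("run checkout steps")),
            // The new feature is one added line; nothing else in the application changes.
            new Workflow("refund",   () -> System.out.println("run refund steps"))
        );
    }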


One things I've noticed about building software is that the most appropriate contours of the problem space often only become clear with hindsight.

Even if you start off with the best intentions about not putting in too many features it won't always help.

This is why the second mover can also have an advantage in some areas. If they recognize the appropriate contours they can avoid the crufty features and more directly and effectively tackle the main problem.


For those interested in accidental vs inherent complexity, “Out of the Tar Pit” is a very easy to read paper that explores those definitions: https://github.com/papers-we-love/papers-we-love/blob/master...


While there is accidental complexity, we can not measure what is or isn’t accidental. So I think the statement that the majority of complexity is accidental is completely made up. I also think it’s wrong.

The majority of complexity in software is unavoidable. Accidental complexity just makes it even worse.


I feel that a lot of people misunderstand "complexity" vs "complicated". There's nothing wrong with complex. It's the nature of life that things are complex. Complicated though is almost always a negative. Complex code is fine, it's probably solving a real problem. Complicated code is not, it's just hard to work with.


Rich Hickey had a great talk on this that introduced me to the word "complecting" (https://github.com/matthiasn/talk-transcripts/blob/master/Hi...)

Relatedly, I have a simple new maths for counting complexity, so that you can compare two "complex" solutions and pick the less "complicated" one: https://github.com/treenotation/research/blob/master/papers/...

SVG form: https://treenotation.org/demos/complexityCanBeCounted.svg


My experience is that "complicated" vs. "complex," as you define them, changes depending on who is looking at the code.

If someone has a philosophical aversion to something like abstraction, then they will label it "complicated," but I use abstraction, all the time, to insert low-cost pivot points in a design. I just did it this morning, as I was developing a design to aggregate search results from multiple servers. My abstraction will afford myself, or other future developers, the ability to add more data sources in the future, in a low-risk fashion.

I also design frameworks in "layers," that often implement "philosophical realms," as opposed to practical ones. Think OSI layers.

That can mean that adding a new command to the REST API, for example, may require that I implement the actual leverage in a very low-level layer, then add support in subsequent layers to pass the command through.

That can add complexity, and quality problems. When I do something like that, I need to test carefully. The reason that I do that, is so, at an indeterminate point in the future, I can "swap out" entire layers, and replace them with newer tech. If I don't think that will be important, then I may want to rethink my design.

That is the philosophy behind the OSI layers. They allow drastically different (and interchangeable) implementation at each layer, with clear interface points, above and below.


To be clear: I'm responding to your approach to frameworks, not your first example of search result aggregation. I also want to emphasise I'm posting this out of genuine interest, not contrarianism or antagonism.

Is there overlap between your philosophical layers and practical utility? The kinds of things that have been required to change in my career so far were base assumptions in the business domain, which no amount of premature abstraction could have prepared me for.

I've never witnessed a need to "swap out" an entire layer. Have you? In what scenario did you need to swap out... what exactly? Did these philosophical abstractions turn out to be the correct ones when a need did arise? Did they make the transition to the new reality easier? Does the transition cost outweigh the slower development cost incurred from the abstractions' overhead?

I keep seeing people claim your approach is a good one, and I'm genuinely curious if there is any evidence backing it up. I'll gladly take anecdata.


> I've never witnessed a need to "swap out" an entire layer. Have you?

Many times, but the code assets (and the examples) are not ones that I can share. Feel free to dismiss it, if you like. I won't lose sleep over it.

What I can share, is that I have written frameworks in technology that is either deliberately "low-tech," or that is something that I know will probably not age particularly well, so I make it easy to replace. One example that I can give, is the BAOBAB server, that I wrote a couple of years ago[0]. It has four layers, and the ANDISOL layer[1] is the one that acts as a programmatic façade to the SQL-based database layers, below. It is explicitly designed to be the "swap out" point for the system.

In reality, I would suspect that this would only be the first part of a swap that would eventually replace the BASALT layer[2], but it would allow a graduated, incremental replacement strategy (I have done these a number of times. They are not for the faint of heart).

For example, when I talked about adding a command to the API, I just added one a few days ago, where I added a call to return a fast list of user IDs and display names[7]. I needed this for a "look-ahead" text entry autofill, in a social media app that I'm building on top of BAOBAB.

I could have easily added this to BASALT, and that would have been that. It would have been fast, quite safe, and wouldn't have required the elaborate nested submodule synchronization that is required when I tweak BADGER[3].

The problem is, that the command was a low-level SQL call, at its heart, and those belong in BADGER or CHAMELEON[4]. They certainly should never be exposed above ANDISOL.

So I added it to BADGER, then implemented a "pass-through" in CHAMELEON, and in ANDISOL (what is exposed to BASALT). Since BADGER is nested inside of CHAMELEON, which is nested inside of COBRA[5], which is nested inside of ANDISOL, which is nested inside of BASALT, I had to bump each of their versions, and update the submodule (I despise Git submodules) chain for each.

This is a fairly trivial example, but I take things like modularity, and domains fairly seriously. Also, it's a good way to keep security[6] fairly high.

This may not have answered your question, but it might help you to see how I work.

[0] https://riftvalleysoftware.com/work/open-source-projects/#ba...

[1] https://open-source-docs.riftvalleysoftware.com/docs/baobab/...

[2] https://open-source-docs.riftvalleysoftware.com/docs/baobab/...

[3] https://open-source-docs.riftvalleysoftware.com/docs/baobab/...

[4] https://open-source-docs.riftvalleysoftware.com/docs/baobab/...

[5] https://open-source-docs.riftvalleysoftware.com/docs/baobab/...

[6] https://riftvalleysoftware.com/BAOBAB/PDFs/Security.pdf (Opens a PDF File)

[7] https://github.com/RiftValleySoftware/baobab/commit/d8316a1b...


I should mention that this also allows me to do things like hire a programmer that is better than I am to refactor portions of the system.

I am not an expert at server tech. There's a good chance that we'll be hiring contractors to work on the server, once we get a bit more spending dosh.


And quite often the philosophy doesn't line up with reality. The OSI layers have little relation to how networking actually works and it would be next to impossible to replace some of those layers.


TCP/IP is actually a simpler model, with the layers mashed-up differently.

Seems to have worked pretty good, so far.

I'm very much a practicum-oriented guy. Theory and academic purity are great, but it's important to ship, which is what I have been doing for thirty-plus years.

I have just found that it is important to do things like establish domains for things like layers, modules, objects, protocols, whatever, and then stick to it.

If my domain is erroneous, then I need to look at that. If not, then I need to stick with it. I tend to plan for the long game. I've written APIs and frameworks that last decades, and that requires planning for the unknown; always a challenge.


This is a good point. I once worked on a system that checked projects met various legal standards and rules before allowing changes to be saved. This system was complex because the rules were complex, the only way to make it simpler would have been to convince the government to make the rules simpler.


I agree with your premise but I don't think you're using the correct terms here. "Complex" and "complicated" are synonyms as far as I can tell.

What you're describing sounds like "essential complexity" vs. "accidental complexity." See "No Silver Bullet."

Sorry, this is pedantic, but using the incorrect terms adds accidental complexity to a topic that is already essentially complex. ;)


> "Complex" and "complicated" are synonyms as far as I can tell.

No. Complex is the opposite of simple. Complicated is the opposite of easy.

Simple and easy aren’t synonyms - see the talk by Rich Hickey linked in a sibling comment.


I was going off the Webster definitions of these words, which do not make this distinction in their antonyms. They both seem to be the opposite of simple. The opposite of easy would be "hard" or "difficult".

I understand there is no definitive source for English, but what is your source on that? Is it Rich Hickey? Because I don't think he has the authority to change the definitions of common English words.


No. Hard / difficult is the opposite of easy. See the talk by Rich Hickey linked in a sibling comment.


I agree with your comment. However, a good tool for controlling complexity is deciding what your system is going to do. As I said in a sibling comment, consider method overloading in java: this is a real world feature, not uncommon in other languages. There are arguments for and against it (I am against it.)

The implementation of it may be amazing code, but nonetheless it makes the Java compiler and runtime far more complicated than they would be if the feature were omitted.

So, again, I agree with you, but I also agree with the article's point that choosing features carefully is an important tool for controlling complexity.


Out of the tarpit anyone?


"Complexity is a crutch" - I'm told Neil Gayman said this. Not 100% sure but I totally agree.


I'd say that the only reason software seems too complex, rather than as complex as it needs to be, is that every programmer thinks he can rewrite it in a simpler way, but when he's done, it's as complex as that which he has rewritten.

I've seen it happen so many times, and I've done it. It's the very same principle that leads to almost every construction project running behind schedule — a man simply underestimates the complexity of nigh every task he endeavors to complete.


I see your points, and I see the merit of "rewrite syndrome", and lean strongly towards automated-test backed refactoring, and all in all I disagree with your thesis.

Sometimes, software patches and new features get tacked on and tacked on and the system loses all semblance of cohesion or integrity. Thinking of the system as a whole, iterating with the confidence brought by tests of some sort, one can begin to detangle all the unnecessary intermixing and duplicate work and begin to make the system sensible.


I completely disagree. Certainly, in a standard software organization, the chances are pretty good that a rewritten version will be just as broken as the old one, but in a new and different way.

but I've taken several projects in the 100s of k-lines and translated them into projects with equivalent functionality and spanning between 1-2 decimal orders of magnitude less source code.

that's not an argument for rewriting in all circumstances - I just think at least half of most mature software is just 'junk dna' - useless boilerplate, unused paths, poor abstractions, etc


Depends. In my experience, I've never regretted a rewrite and always ended up with a better and simpler code.

It can be very frustrating to modify low quality and ugly code so I feel much better after a rewrite.


Depends. If you can re-evaluate the requirements when you start your rewrite, you can likely consolidate a lot of the features.

If the requirements stay exactly the same, then yeah, there’s no point.


Well, when Wayland started they went in on the assumption that they could cast away quite a bit that “no one was using” or that “wasn't necessary”.

And then they had to include more of what they cast away because they underestimated the number of consumers of things they personally weren't using.

libinput originally did not have a way to disable or configure pointer acceleration, I believe because the developer thought there was no reason to ever turn it off. He was not a gamer and was largely ignorant of how essential being able to disable it is for the level of accuracy required for video games.


Wayland is a display protocol, and the base of that protocol is deliberately minimalistic - it's there so different programs understanding the protocol can draw their rectangular windows onto the screen, and that's it.

But the Wayland developers did think of additional requirements; they felt that inventing something ad hoc that may or may not be implemented downstream would be a bad idea (it is), so they created a standard way of adding extensions to the base protocol - extensions created in cooperation with the major desktop environments, the actual implementers of said protocols.

And as of now, Wayland is pretty feature-compatible with X, in an actually extendable way (much closer to the UNIX philosophy, if that's your thing).


Nowadays Wayland-compatible DEs seem to support everything the average user needs, while supporting quite a few modern features that were apparently too difficult to support in X11, like monitors with different DPI scales.


That you think that suggests you have the same lack of appreciation for what users who aren't you need as the developers who went into it had.

At this point Wine is still not even considering a Wayland port until some severe changes happen: it cannot even begin to map the WinAPI onto Wayland the way it could onto X11, because too many features it needs are missing, features the developers didn't consider necessary but which Windows simply exposes for many things.

https://bugs.winehq.org/show_bug.cgi?id=42284#c1

It's from 2017, but it details some of the features they would need that Wayland does not intend to support.


The wine situation is kind of unique. Programs should not be able to force their locations, the wayland devs are correct here. The unique issue is trying to map windows apis to wayland. The wine devs are pushing to have wayland implement all of the mistakes in the windows API while the wayland devs want the best possible solution without being chained to the windows api.

I'm also not convinced there is _nothing_ the wine devs could do to resolve this. They already suggested some ways to deal with the lack of APIs, but they refuse to implement them more on principle than for practical reasons.

In the end, if only Wine is using Xwayland, that's not a particularly bad position to be in, and Wayland seems to work well for native Linux applications.


> The wine situation is kind of unique. Programs should not be able to force their locations, the wayland devs are correct here.

Absolutely not. This is needed for many things, and that you think it can simply be stripped shows your lack of appreciation of the needs of many other users and software.

How will you without this for instance make:

- a notification daemon that creates a notification bubble at some part of the screen

- a toolbar of some sort that sits at a specific place on the screen and is summoned on command

- a collapsible terminal application that summons itself on a certain global hotkey and otherwise lays hidden

- any such application that summons itself

- a program that rearranges other windows for any particular purpose, such as tiling them in a specific way that the user wants

All these things obviously exist as of this moment and are used by many users, and they can't be made to work on Wayland because the developers were "correct", according to you, in their assumption that no one would ever need this.

> I'm also not convinced there is _nothing_ the wine devs could do to resolve this. They already suggested some ways to deal with the lack of APIs but they refuse to implement them more on principal rather than practical issues.

Your conviction would be wrong; I've been in the same situation and talked to the Wayland developers and it is they who refuse to implement various features that are highly requested based on principle.

> In the end, if only Wine is using XWayland, that's not a particularly bad position to be in, and Wayland seems to work well for native Linux applications.

Yes, if it were only Wine, perhaps, but it's not only Wine — and that you think it's only Wine shows either that you've never researched the mountain of issues or that you're willfully ignoring them. These concerns are immediately encountered when researching the issue: the developers of many an application are not pleased that their application fundamentally can't work on Wayland, because Wayland chose not to include basic functionality that every other display protocol has.

These were not “mistakes” — the reason everything from Windows, to X11, to Quartz has these features is because users need them for what they wish to do.

And this is only about programs choosing the positions of their own windows. A simple, trivial other thing: every time a new Hearthstone expansion comes out, I buy about 80 new packs with in-game currency, and the game insists that I sit through the animations of opening them, which bores me. I refuse, and instead simply minimize the Hearthstone window and, even though it doesn't have focus, rapidly send it endless spacebar key events with a trivial xdotool command, which then does the whole thing in about 15 minutes without needing my attention. Such a simple thing cannot be done in Wayland at the moment and most likely never will be, because “users should not be able to simulate key events” as far as the Wayland developers are concerned, even though this is obviously a function with many use cases.
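For concreteness, the trick being described looks roughly like the following when wrapped in Python (the window title, repeat count, and delay are illustrative assumptions, and whether a given game accepts synthetic X11 events depends on the game):

    import subprocess
    import time

    # Find the target window by title under X11; the window id stays valid even
    # while the window is minimized or unfocused. "Hearthstone" is illustrative.
    win_id = subprocess.check_output(
        ["xdotool", "search", "--name", "Hearthstone"], text=True
    ).split()[0]

    # Send spacebar presses to that specific window without touching focus.
    for _ in range(5000):
        subprocess.run(["xdotool", "key", "--window", win_id, "space"], check=True)
        time.sleep(0.1)

None of this has a Wayland equivalent today, which is exactly the complaint.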

“Users should not be able to ...” was the mantra of the Wayland developers when they started, and in many cases they have since relented, because they realized they had severely underestimated how many things users actually need that for.


>a notification daemon that creates a notification bubble at some part of the screen

The DE comes with one; you just send your notification to it. Apps creating their own notification bubbles is an anti-feature and should be prevented if possible: they don't show in the notification box or on the lock screen, and they ignore your do-not-disturb setting.

>a toolbar of some sorts that sits at a specific place on the screen and is summoned on command

Put the toolbar inside the app window or make it a new window and let the user decide where it goes. Apps being able to draw over the screen should probably be provided as a root feature as it is pretty dangerous left exposed.

>a collapsible terminal application that summons itself on a certain global hotkey and otherwise lays hidden

I just checked, and GNOME allows launching programs via shortcuts. An app should not be able to passively sit in the background collecting keystrokes in order to launch itself; that is a massive security risk.

>a program that rearranges other windows for any particular purpose such as tiling them in a specific way that the user wants

The window manager implements Wayland and is free to arrange windows however it wants, as it is a trusted component.

Basically all of the stuff in your comment is achievable via the DE/WM. It's a good thing that programs no longer have free rein to do whatever they want, passively record the user's keyboard and screen, and draw over anything.


> The DE comes with one; you just send your notification to it. Apps creating their own notification bubbles is an anti-feature and should be prevented if possible: they don't show in the notification box or on the lock screen, and they ignore your do-not-disturb setting.

And what if you don't like the one the compositor comes with and want to use a different one?

Pantheon, XFCE, Mate and many other systems by design run their notification daemon as a separate process that can be disabled so that the user may choose what he wishes; there are also many standalone notification daemons to fill this gap.

> Put the toolbar inside the app window or make it a new window and let the user decide where it goes. Apps being able to draw over the screen should probably be provided as a root feature as it is pretty dangerous left exposed.

I mean a global toolbar at the bottom or top of the screen.

> I just checked, and GNOME allows launching programs via shortcuts. An app should not be able to passively sit in the background collecting keystrokes in order to launch itself; that is a massive security risk.

It's a “security risk” of a far lesser magnitude than “an application can read and write all data owned by your user”.

It's silly to remove features because processes that run with your user's rights have the ability to fuck you up. Might we remove the rm utility now because it's a security risk that it can unlink files?

> The window manager implements Wayland and is free to arrange windows however it wants, as it is a trusted component.

And what if it does not arrange them as the user wishes, and the user wants to use something else to do it?

This is why X11 eventually evolved the EWMH standard to allow the window manager and a third program to coöperate without conflict on this.
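To make that cooperation concrete: under X11, a third-party tool can ask whatever window manager is running, via EWMH hints, to move or resize a window it does not own. A minimal sketch, assuming wmctrl is installed and some window whose title contains "Firefox" exists (the title and geometry are illustrative):

    import subprocess

    # Ask the running window manager (through EWMH) to reposition a window
    # owned by another program: -r matches by title, -e is gravity,x,y,w,h.
    subprocess.run(
        ["wmctrl", "-r", "Firefox", "-e", "0,100,100,1280,720"],
        check=True,
    )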

> Basically all of the stuff in your comment is achievable via the DE/WM. It's a good thing that programs no longer have free rein to do whatever they want, passively record the user's keyboard and screen, and draw over anything.

Everything is achievable if it be built into the compositor, including fully fledged video games and web browsers.

The reason we have such a thing as software is that the compositor won't have all the things we want, and if it did, for every single user, it would be bloated beyond belief.

The Wayland solution to missing features is “the compositor should do it”, but no compositor has all these features; that is why software exists, and that is why one often writes one's own software.


Or see my previous answer: write a custom extension for your compositor (which in most cases already exists and is cross-DE) and add the additional feature. Notification daemons like this already exist on sway (and every other compositor that uses wlroots, a compositor backend).


And then they only work on wlroots rather than on every Wayland compositor, and we're back at the original problem: wlroots has had to reimplement the parts of X11 that were deemed safe to remove but were later found to be used widely enough to be required.

The difference is that on X11, since it's part of the standard, it will work everywhere rather than only with one window manager.


That doesn't work in reality because for any large, complex piece of software it's impossible to rediscover all of the requirements. There are always hidden requirements which were never properly documented but somehow ended up as code in the legacy system.


The biggest underlying reason why software ends up complex is that the real-world domain in which the software operates is also complex.


That, together with the fact that nobody gets promoted and nobody collects revenue by simplifying or refactoring code.


People do get promoted and paid for simplifying and refactoring code. Heck, I got promoted and am currently being paid to do just that. It often involves adding value in other ways simultaneously, but what you said is just false.


I guess it's dangerous to use words like "nobody" or "everyone" or "always" or "never" on the overly pedantic Hacker News. I would hope that the sentiment of my point came across either way. If not, let me clarify: it's generally preferred to spend engineering effort on building new features and optimizing code to improve performance, etc., over e.g. rewriting code from scratch to simplify its maintenance. I would venture to guess that an overwhelming amount of engineering effort is spent on effectively adding complexity to code.


Whoever is able to demonstrate the value of a sequence of work to the business will get their work prioritized. Product’s job is to shape and predict the value of adding new features. Customer complaints about slow reports help Customer Support justify performance optimizations.

If engineers want the business to pay them for simplifying maintenance, they can identify a collection of support tickets that have a compelling root cause, and propose refactoring that system to eliminate the bugs and maintenance issues.

What I usually see is us engineers calling it “tech debt” and complaining we never take time for it, which is about as effective as the bookkeeper complaining about the paperwork that debt payments create for him. Businesses love debt. It’s a useful financial tool. The business doesn’t get the issue.


Funny that I'm nobody again. I work for a big German retailer. A big part of my work involves simplifying and refactoring code so that we have lower maintenance costs and can move faster as an organisational unit. This was also a big factor in my last promotion.

We also increased profit by lowering our runtime costs through some of those optimizations.


If I were a betting man, I would bet that the main reason you're allowed to do that is that you lowered runtime costs with some of those optimizations, not that cleaner code lets you move faster.


Not the parent, but I am really surprised by that mindset.

A large part of my career has been telling my boss "we should invest 5h of cleanup time here to make sure that in the future we won't have those 5h worth of obstacles here before building a new feature on top of it".

In industries where requirements change on a weekly or monthly basis, agility is worth a lot.


I completely agree. My bet would be the bosses usually don't. So in a lot of places you need another "excuse" (or as another poster mentioned, spend the time saved to invest more into maintainability but cover it up).


You can trade on previous achievements to spend time improving your codebase (unless you have a micromanager that tracks every hour).


Not directly. But simplifying and refactoring can help you understand the code better than doing routine maintenance does. This helps you solve bugs and write new features faster and with more stability, and also helps you give better input during meetings.

So, indirectly, yes: you can get promoted and collect revenue by simplifying and refactoring.


I don't know if I agree with that.

Most of the complexity and bugs I see in software are not because of the problem domain, but rather because of over-abstraction, under-abstraction and abstraction leaks, and also because of limitations and complexities introduced by the programming model or environment.

(unless you consider that "supporting five operating systems and the language must be X" is part of "essential complexity")

Of course, the more complex your domain is, the bigger the program. But the non-essential complexity that exists due to the bureaucracy of languages/libraries/frameworks is a much bigger factor in adding complexity, bugs, and lines of code. Some examples:

- Manual allocation and deallocation of memory is a good example of something that we might think of as essential, since it's intertwined with our domain code, but it turns out to be unnecessary (even though the replacement has downsides). The billion-dollar mistake (nulls) is another one.

- Supporting multiple environments/browsers/platforms. Competition is good, but the cost is steep for third-party developers. Using multiplatform frameworks partially solves this, but also has drawbacks: performance, limitations, bugs, leaky abstractions. If you need to be closer to the metal, different OSs have different threading models or API styles; sometimes they don't even expose the main loop to you. You need to work around those limitations.

- In most environments we still don't have a nice way of handling async operations without leaking the abstraction. The current solution is adding "isLoading" data everywhere (or creating some abstraction around both the buttons and the fetching mechanism). Concurrent Mode in React is probably the best thing we have so far. (A small sketch of this appears at the end of this comment.)

- Most modern JavaScript toolchains need multiple parsing steps: in the transpiler (to convert to ES5), in the bundler (to detect dependencies), in the linter, in the prettifier, and in tests. Compatibility between them is not guaranteed, and you might get conflicts which have to be resolved by finding a middle ground, which sometimes takes more time than writing features.

- Dogmatism is another issue. I remember in one workplace years ago there was an "ORM only" rule, and most of us would work out the SQL and then convert it to Rails ActiveRecord (or worse: Arel). In the end it was a complete waste of time and the results were impossible to maintain.

- I also think that the old Peter Norvig quote that "design patterns are missing language features" still stands. Go has proven that it's possible to have "dumb, simple code", but in other languages our best practices involve adding non-essential complexity to products.

The only exception to that in my experience is SQL: if a query is too big, it's not due to some bureaucracy of the language but rather due to the complexity of the domain itself.
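As a small illustration of the async/"isLoading" point above, in Python rather than a frontend framework (all names are made up): the loading flag exists only because of the async plumbing, yet it leaks into every component that fetches anything.

    import asyncio

    class SearchBox:
        """A widget whose domain logic is just 'show results for a query'."""

        def __init__(self, fetch):
            self.fetch = fetch
            self.results = []
            self.is_loading = False  # accidental state: exists only because fetching is async

        async def query(self, text: str) -> None:
            self.is_loading = True   # every fetching widget repeats this dance...
            try:
                self.results = await self.fetch(text)
            finally:
                self.is_loading = False  # ...and must remember to undo it on every path

    async def fake_fetch(text):
        await asyncio.sleep(0.1)  # stand-in for a network call
        return [f"result for {text}"]

    box = SearchBox(fake_fetch)
    asyncio.run(box.query("wayland"))
    print(box.results)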


For me, what you're describing is more mess than complexity. I see your point though, but I'd also say that some of the things you describe here are a direct result of the real-world problem domain being complex and changing over time.

For example, wrong abstractions. As long as the engineers writing the software are competent (if they aren't, that's a totally different story), they'll try to choose the right level of abstraction for the current understanding of the problem. Years down the road, a lot of their choices may turn out to be mistakes, very often because the understanding of the problem changed, or the problem itself changed, and something that was a nice and clean solution isn't one anymore. If the problem isn't fully fixed and 100% understood, you'll never be able to make all the right, future-proof decisions about abstractions.


Yes, it is "mess". But it's also additional complexity that is non-essential to the problem. It's what Fred Brooks called accidental complexity in his No Silver Bullet essay. It is important for us to acknowledge and differentiate between the two kinds of complexity, because we in the software industry can and should aim at reducing the accidental/non-essential type.

About abstraction: even when it's good, it still depends. For example, you can use the Visitor pattern to solve a collision-detection problem, but if the language had multiple dispatch you would have less complexity. So it is definitely non-essential complexity.
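A hedged sketch of that trade-off (the class and function names are made up): the double dispatch a Visitor provides can be emulated with a dispatch table keyed on types, and with built-in multiple dispatch neither the table nor the Visitor boilerplate would exist at all.

    class Asteroid: pass
    class Ship: pass

    def asteroid_hits_ship(a, s):      return "ship destroyed"
    def ship_hits_ship(s1, s2):        return "ships bounce"
    def asteroid_hits_asteroid(a, b):  return "asteroids shatter"

    # This machinery is accidental complexity: it exists only because the
    # language dispatches on a single receiver type.
    COLLISIONS = {
        (Asteroid, Ship): asteroid_hits_ship,
        (Ship, Ship): ship_hits_ship,
        (Asteroid, Asteroid): asteroid_hits_asteroid,
    }

    def collide(x, y):
        handler = COLLISIONS.get((type(x), type(y)))
        if handler:
            return handler(x, y)
        return COLLISIONS[(type(y), type(x))](y, x)  # symmetric case, arguments swapped

    print(collide(Ship(), Asteroid()))  # "ship destroyed"

The collision handlers themselves are the essential part; everything around them is the price of simulating a language feature.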


But multiple dispatch could also be argued as non-essential complexity... I would much prefer a visitor pattern for the abstraction benefits, for example.


But that's not the point; that's not an example to be picked apart. The point is that if there's a less complex way to solve the problem, then the more complex way is still introducing non-essential (aka accidental) complexity. Even if the simpler solution is ugly, the more complex one is still introducing non-essential complexity. If we claim that every single thing we do with the code is part of the "real world problem", then we're throwing our hands up in the air and giving up on the problem of software complexity. It's a naive and simple justification that's very comfortable for us, but it doesn't amount to much. I really recommend reading the No Silver Bullet paper and The Mythical Man-Month; Fred Brooks explains these things very well.


Here's my approach. I think many feature requests fall under the X/Y problem.

- view a new feature request as a new user capability

- extend the model that the software implements, to encompass that capability - regardless of how the feature was implemented in the requester's head.

- extend the software to match the new model. This may require refactoring, as the model may have had to undergo shifts to encompass the new capability.

For example:

I have a car. I model the car as four wheels, an engine, a chassis, and a lever. The engine drives the wheels, the wheels support the chassis, the chassis contains the engine. A lever in the chassis sets the engine in motion. It's a simple model and is capable of 1. sitting still and 2. moving forwards and backwards. These are all the capabilities we've needed so far.

A user requests a new feature where the wheels are instead mecanum wheels (https://en.wikipedia.org/wiki/Mecanum_wheel).

The default industry response is to either implement the change as requested or reject it. I propose that the correct move instead is to ask the user WHY they want mecanum wheels. They reveal that they want the car to move in two dimensions rather than one. From that understanding you can extend the model of the car to encompass the feature: you may add the mecanum wheels and a mechanism to control them, you may add a steering wheel and rack-and-pinion, or you may do something completely different, depending entirely on how and why the user wants 2D movement (established through further questioning, i.e. the "5 whys"). But you are working to the capability, not the feature. By extending the model, you can then change the software to match this new model.
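A toy sketch of "extend the model to the capability, not the feature" (every name here is hypothetical, not from the example above): the model gains a planar-movement capability, and mecanum wheels become just one way to provide it.

    from abc import ABC, abstractmethod

    class Drivetrain(ABC):
        """The model now encodes the capability the user actually asked for: planar movement."""
        @abstractmethod
        def move(self, forward: float, sideways: float) -> None: ...

    class ClassicDrive(Drivetrain):
        def move(self, forward: float, sideways: float) -> None:
            if sideways:
                raise ValueError("this drivetrain only moves forward/backward")
            print(f"rolling {forward:+} units")

    class MecanumDrive(Drivetrain):
        def move(self, forward: float, sideways: float) -> None:
            print(f"translating forward {forward:+}, sideways {sideways:+}")

    class Car:
        # Chassis, engine, and lever are unchanged; only the drivetrain part of
        # the model grew to cover the new capability.
        def __init__(self, drivetrain: Drivetrain) -> None:
            self.drivetrain = drivetrain

    Car(MecanumDrive()).drivetrain.move(1.0, 0.5)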

I think as software engineers we have a tendency to forget the model and focus only on the code. A request for mecanum wheels becomes a question of how to change the software to encompass that feature. But we must always remember the existence of the model, and the user's relationship to it.


In my humble opinion, a lot of projects go quietly bad when they experience some new requirement whose architectural impact gets underestimated by project management. At such times, senior devs have either already moved on or have their eye off the ball, so new features get incorporated without the necessary architectural support. These inflection points can themselves introduce complexity, but they often become the gateway for all sorts of subsequent small things that explode in size. In short, don't miss architecture moments.


The other thing that happens, and really dooms a project, is when the senior people leave. Eventually you end up with a team that doesn't really understand the code.

This leads them to just tack on features while changing as little as possible. This will grow into something truly unmaintainable, virtually guaranteeing no competent work will be done on the project again.


In this regard, software is quite similar to law - more so than to other STEM disciplines like mechanical or civil engineering or math.

A body of law is adapted and extended through many years, by various groups having different priorities, and rarely does someone dive in and refactor it to be simpler (while achieving the same "business goals").

In software, at least we have an option of creating automated tests to verify correctness or internal consistency, and using stricter languages to avoid ambiguities.

In light of this, we (as an industry) are actually doing quite well!


Yeah, I would argue that the essential work is usually similar to law (you just encode the rules to meet some objective). There's lots of added complexity in practice, though (due to the distributed nature of systems, limited resources, or just compatibility).


My favorite point about how software ends up complex is that our industry somehow thinks we are unique in this.

Everything winds up complex. Just look at how many ingredients go into simple store-bought cookies. Look into the entire supply chain around your flour that lets your homemade cookies be simple.


Unless you count obesity and diabetes, the public is not noticeably inconvenienced by the cookie supply chain, because it pretty much just works. So it's hard to find examples of Cookie Technology Failure Modes with serious consequences.

There's a huge difference between "hard to use" and "prone to failure."

Software is often hard to use, but it's also extremely prone to not working.

This is not true of airliners [1], cars, large buildings, large ships, and commodity rocket launch platforms - as well as global supply chains of all kinds.

[1] Unless there's software involved.


When the pandemic lockdown started some of the supply chain abstractions leaked, and the US public was noticeably inconvenienced by lack of toilet paper, cleaning supplies, and (in some cases) meat. It turns out the supply chains are actually rather fragile with many failure points.


This is beside my point. I only picked cookies because I'm making some. :)

Look into the supply chain for lumber to your house. Concrete. Electricity. Literally anything. Life is complicated.

Software just fools us by letting us more readily have a blank slate sometimes. Though, even then, the complexity of the tool chain to support such easy "hello world" programs is insane.

If you want to know frustration, try using any of the simple tools you can buy at a dollar store for a time. Simple can openers that will fall apart in no time. Tableware that will bend and likely go unusable in months.


> This means that supporters can always point to concrete benefits to their specific use cases, while detractors claim far more abstract drawbacks.

I don’t agree with this. If you have a salesperson/accountant who grew into being CEO and is just a bean counter who doesn’t (want to) understand the real costs of training new developers on increased complexity in a system or being able to maintain a complex system over decades, this could possibly be true. But in that case, your engineering manager isn’t a real engineering manager either, and is just another bean counter in disguise.

I get that senior management may not always (want to) understand the nuances in building, maintaining and supporting a complex system, but it’s not abstract. It costs real money that all these types can feel the pinch of.

The real reason why software is complex or can get complex is because the underlying domain and its requirements and constraints are complex, combined with layers of complexity added on the technical side to enable certain things (perhaps easy configurability, scalability, reliability, etc.). There are self-inflicted wounds too, where complexity is prematurely added. But that’s not the full story in all cases.


Software can be viewed as a function.

A function is designed to be called in a certain way for certain output. If the required use of the function changes, the function's signature will need to support more arguments, and its behavior will increase in complexity accordingly.

To avoid that, it is important to understand that a changed use case calls for what is effectively another function (possibly more than one). It may not be immediately feasible to rewrite and switch all callers over, due to limited resources and lack of control over the aforementioned callers, but conceptually it is another beast now—and the existing implementation should move in that direction or be put on a deprecation schedule, rather than keep widening its input and output spaces in a futile attempt to be everything at once.

Software is very similar in this regard, except it is much more tempting (and often significantly easier) to “append” features rather than rethink the fundamentals as time goes by and users with different needs get on board.
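A toy illustration of the difference (function names and flags are hypothetical): the first version keeps widening one signature so it can serve every caller that ever existed, while the second acknowledges that the changed use case is really another function with its own lifecycle.

    # Widening: one signature tries to be everything at once, and every new flag
    # multiplies the states the body has to handle.
    def export_report(data, fmt="csv", legacy_layout=False,
                      include_totals=True, for_audit=False, locale=None):
        ...

    # Splitting: the new use case gets its own function, and the old one stays
    # narrow and can be deprecated on its own schedule.
    def export_csv_report(data, include_totals=True):
        ...

    def export_audit_report(data, locale):
        ...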

This is why, I believe, rigorously assessing[0] the scope and the intended audience of a piece of software at early stages (and constantly reassessing them afterwards) can go a long way against an unchecked rise in complexity over time. As can prudently abstracting architecture pieces away into separate, self-contained, focused pieces, which can be recombined in new ways when context inevitably calls for what essentially is a different piece of software—instead of having to rewrite everything or shoehorn in new features to support use cases that were not originally envisioned (but have to be supported for business reasons).

[0] By having frank discussions and asking “why” many, many times.


I don't think most software development should end up complex since most software development in the world is redundant.

The problem is that we don't have the right high-level abstractions, low-level robustness, and groups focusing on this problem to build software faster. It is a matter of time; there will be a silver bullet for most software needs.

There is also some conflict of interest: imagine if Microsoft gave you frameworks to build/integrate large software projects with a few "parameters". There would be fewer developers and customers for them.


Making software that does everything by tweaking 'a few parameters' is orders of magnitude more complex than software that does one thing without them.

Since I’m now generally part of several year software projects, we should be fine for the next few centuries or so.


It's interesting that you use the term Silver Bullet. I assume you are aware of the famous essay 'No Silver Bullet': https://en.wikipedia.org/wiki/No_Silver_Bullet


Yes, it was intended. I think with enough redundancy there are silver bullets in certain areas.


The far greater source of complexity is the extra complexity that results from features being analyzed and implemented sequentially. Each one may have been economical in isolation, but the whole is not. This is why a rewrite often seems so attractive: you can analyze the entire known scope and use a smaller set of mechanisms to do the same thing, often allowing additional changes to fit more easily. Painting yourself into a corner is either short-sightedness (technical/vertical, voluntary/oblivious), a tech-debt choice, or something that didn't turn out as well as expected despite best efforts, for technical reasons or because of unexpected changes. It also comes from trying too hard and generalizing too early. So there are many reasons, and pragmatically balancing them is given far too little weight.


On the one hand, it's really hard to program well. I think anyone who can solve a sudoku puzzle can learn to program, but only people with freakish skill or determination can learn to program really well.

On the other hand, those folks who can program really well are often what I call "complexity junkies": programming is their sudoku; it's fun and exciting. It helps that you can get paid well to do it.

So you get things like Haskell and Rust.


There are complex things and there are complicated things. You want to make complex things easier to understand by refining abstractions, and eliminate complicated things altogether. Constant refactoring and the taboo word - intelligence - are the key here. It doesn't mean you have to reshuffle everything every week. It means spending maybe 5%-20% of your time on small things to improve the code based on current knowledge. This has to be pushed by developers, because the semi-agile methodologies most companies use have built-in forces that move those tasks out of sprints.


I had the impression the main problem was developers' lack of cognitive flexibility.

Some people just do things like they always have. They don't bother to understand what people before did and then bend everything to their will.


Another twist on this is that some people are passive-aggressively lazy. They may understand that what they are being asked to do has longer-term negative consequences, but they don't want to invest the effort in trying to change any minds. They just do what they are asked, and when things get complicated they can say, "I did what you asked for; you didn't ask me to simplify anything."

This is seen more in rigidly hierarchical organizations where debate and ideas from the lower ranks tend to be quashed. Think of the boss who says some variant of "you're paid to do what I ask, not to talk about it".


This downplays the difficulty of "bothering" to understand things. Most developers work on applications of insane complexity that they couldn't ever hope to understand. Almost every change is done in some amount of ignorance of the surroundings.


I agree wholeheartedly on Brooks' "conceptual integrity" point. I really enjoyed his book The Design of Design.

It takes effort to maintain cohesion and sound architecture, but it pays off for future development.


Building features on top of other features is often zero cost. Code becomes a many-layered cake consumed by the end user. In the web development stack, the simplest of features, like text on a screen, is the achievement of decades of technological progress. Text may be localized, shaped with HarfBuzz, run through libicu's BIDI algorithm, encoded with a nontrivial encoding, wrapped in a markup language, nested inside multiple layers of network headers and corresponding metadata, sent over the wire as a series of 0s and 1s, and then painstakingly unpacked in reverse order.

This is clearly complicated and clearly works. Many different actors operating quasi-independently. You can imagine the difficulty when one actor in a time crunch tries to design a similarly complicated cake stitched together with parts homemade, parts open sourced and parts paid for.


I think too many HNers here have an incomplete picture of what makes something complex. There is genius in simplicity. Complexity can be best understood (for me) as simultaneous interactions of many simple things under the same roof, which we then consider to be in totality an object of reality.

Those simple things can differ in an infinite variety of ways, from which complexity can be derived. Personally, when I've jumped into a codebase that is messy, cluttered, and unorganized, it's immediately noticeable that the original project developers had a shallow and/or narrow strategy for how they wanted to design their system.

Start with simple and useful mechanisms, which are the building blocks for whatever problem you are trying to solve. Complexity and abstraction can then be extrapolated from a simple yet brilliant foundation. I don't know how that isn't common sense.


1) You can't see software, which combines with 2) a company with bad sales but great engineering is a dead company, which results in 3) decisions trickling down from sales while salesmen are blind to the software. Proposed solution: programmers should learn sales. (So it's diversity at the root of all problems!)


Proposed solution: make it so that you CAN "see software". This is what I work on, but things like Sublime's Minimap are also huge trends in this direction.


Could work, but we'll need to condense a 50k-LOC CRUD SPA into a flowchart without loss of essence (that's what I work on). What is your angle? I checked out treenotation.org; is there more? A flowchart is a graph.


It doesn't work with current languages. The shape of the AST needs to be the same as the shape of the source. Once you've done that, though, the program and the visualization are isomorphic. Check out the toHTMLcube demo in the Sandbox, the language designer demo, or:

https://v20.ohayo.computer/?filename=ohayo.ohayo-source-code...

That’s all older stuff, but hints at some of the ideas.


As a programmer currently learning sales by necessity, I agree wholeheartedly.


Alex Gaynor! I've missed this guy. I remember him back from his work on Django.

http://pyfound.blogspot.com/2019/02/the-steady-leader-of-pyt...


This is what project maintainers or code owners are for. It's the maintainer's responsibility to reduce complexity, and because it's hard to quantify the maintenance cost of additional features, the maintainer should have the final say.


I have never heard of code ownership being a positive thing. Isn't this more an issue of encouraging internal knowledge sharing and confidence in refactoring?


Because there is no physical, three-dimensional limit.


What about other kinds of backpressure on complexity earlier in the cycle? For instance, when you are just scoping out a feature (before implementation), you could estimate the net lines that need to be added. If it seems too high, people can push back before all the work has been done. And if the estimate starts to look wrong, you can go back, re-scope the project, and either simplify the feature or tell everyone it's going to be much more complex than planned.

The key would be to recognize exploding complexity before all the work is done.

Has anyone tried something like this? Would it work?


In my experience, all of the complexity comes from feature requests that follow the initial development of some code. I develop code for our internal automated data analysis systems. My boss doesn't care how it works; it just has to do exactly what he wants, even if it destroys my abstractions. Suddenly, you have lots of if/elif/else cases and a creeping suspicion that your logic now has holes in it.


I wonder whether software ends up being complex, because the building blocks used by most programmers are simply insufficient to model complex systems.

Those building blocks are procedures, subroutines, control statements and modules/sub-programs.

Just because these can (eventually) be used to model complexity in code that increasingly approaches a big ball of mud, doesn’t mean they are the ultimate abstractions.

We need to invent and use better abstractions.


I like this essay, but I've also had success with "reject all feature requests" and ended up with happy users, not no users.


Software (like the universe) is entropy over time.


Let me expand on this thermodynamic metaphor. Open systems, which may exchange matter and/or energy with their environment, can maintain or even decrease their own entropy. This is how living organisms exist without violating the second law of thermodynamics.

If you see a software system as a living organism, then the natural entropy increase (new feature requests?) may be contained if enough energy (developer time?) is invested. Of course this kind of investment may contravene some other commercial software law, like being profitable.


Complexity is a fact of the world we live in. The human body is extremely complex, but all organs and systems in the human body work in harmony and with astounding efficiency.

We need to stop fighting complexity in software and embrace it instead. The goal shouldn't be to pursue simplicity at the cost of ignoring complexity of the domain being modelled, but to acknowledge that we're modelling a complex system, and design it such that subsystems work together in harmony.

This article in particular touches on an important point which resonates with me. It's what I call "displacement-oriented design". I view a code base as a puzzle that grows in size over time; you can't grow a puzzle just by adding to it and leaving all other pieces intact. As you expand the size of the puzzle, you need to "displace" some of the existing pieces, reshape them, and fit them with the new pieces.


I agree with some of your points and disagree with others.

The human body is exceptionally awesome, but also so very fragile. I think we can do better with well-thought-out design, and a part of that that I embrace is the value of simplicity.

My thoughts can be best said with a quote (especially the part about a complexity budget):

> I used to tolerate and expect complexity. Working on Go the past 10 years has changed my perspective, though. I now value simplicity above almost all else and tolerate complexity only when it's well isolated, well documented, well tested, and necessary to make things simpler overall at other layers for most people. For example, the Go runtime is relatively complex internally but it permits simple APIs and programming models for users who then don't need to worry about memory management, thread management, blocking, the color of their functions, etc. A small number of people need to understand the runtime's complexity, but millions of people can read & write simple Go code as a result. More importantly, Go users then have that much more complexity budget to work with to build their actual application. I would've never built Perkeep had I needed to fight both its internal complexity and the complexity imposed on me by other contender languages/environments at the time.

>

> All that is to say, simplicity is not only refreshing, but it also enables. Go made me feel productive in a way I hadn't felt in many years where everything just felt like it was getting more complex. Ever since finding Go, I've been regularly hunting for other technologies that provide simplicity as a feature.

https://bradfitz.com/2020/01/30/joining-tailscale


I agree on the distinction between internal vs. external complexity; this particularly resonates with me: "the Go runtime is relatively complex internally but it permits simple APIs and programming models for users...". I probably should've made it clear that the external interface of a system (whether it's a UI or an API) should be simple and abstract away the internal complexity.

To go back to the human body analogy, while the internals of the human body are complex, the external interface is extremely simple: you eat to get energy; you sleep to get rest; you use your reproductive organs to reproduce; etc. You don't have to learn how the internal systems work to operate your body.


While true, this is also incredibly reductive. All of the preaching about making software simpler leads to nothing.

Software, at the current scale of things we want to build, is inherently and unavoidably complex.


In my experience software complexity arises from handling errors and handling edge cases. Adding additional features means there are more errors to handle and that many more edge cases to consider.


Is this missing half the article? The page just stops on mobile and there’s no footer, nothing.

I count 4 paragraphs in total.


I believe that you're seeing the whole article; it is short.


Why does software end up complex? Growing requirements and poor decisions.


Software ends up complex since it attempts to model the real world, which is oh so complex.



