What I've mostly experienced in my years in the field is that developers, whether senior or not, feel obliged to create abstract solutions.
Somehow people feel that if they don't produce a generic solution to the problem at hand, they have failed.
In reality the opposite is often true: when people try to make a generic solution, they fail to make something simple, quick, and easy for others to understand. Never mind the idea that abstraction will make the system flexible and easier to change in the future. They don't know the future, and there's always a plot twist that doesn't fit the "perfect architecture". So I agree with the idea that abstraction is not always the best response to a complex system. Sometimes copy, paste, and change is the better approach.
Kevlin Henney makes interesting points on this. We often assume that abstractions make things harder to understand, at the benefit of making the architecture more flexible. When inserting an abstraction, it is supposed to do both, if at all possible. Abstracting should not only help the architecture, but also the understanding of the code itself. If it doesn't do the latter, then you should immediately question whether it is necessary, or if a better abstraction exists.
The take-away I took from it is that as developers, we love to solve problems using technical solutions. Sometimes, the real problem is one of narration. As we evolve our languages, better technical abstractions become available. But that's not going to prevent 'enterprisey' code from making things look and sound more difficult. Just look at any other field where abstractions aren't limited by technicalities: the same overcomplicated mess forms. Bad narrators narrate poorly, even when they are not limited.
I think we forget that while “engineering” is about maximizing the gain for a given investment of resources, it can be stated another way as building the least amount of bridge that (reliably) satisfies the requirements.
Abstraction can be used to evoke category theoretical things, but more often it’s used to avoid making a decision. It’s fear based. It’s overbuilding the bridge to avoid problems we don’t know or understand. And that is not Engineering.
I find sometimes that it helps me to think of it as a fighter or martial artist might. It is only necessary to not be where the strike is, when the strike happens. Anything more might take you farther from your goal. Cut off your options.
Or a horticulturist: here is a list of things this plant requires, and they cannot all happen at once, so we will do these two now, and assuming plans don’t change, we will do the rest next year. But plans always change, and sometimes for the better.
In Chess, in Go, in Jazz, in coaching gymnastics, even in gun safety, there are things that could happen. You have to be in the right spot in case they do, but you hope they don’t. And if they don’t, you still did the right thing. Just enough, but not too much.
What is hard for outsiders to see is all the preparation for actions that never happen. We talk about mindfulness as if it hasn’t been there all along, in every skilled trade, taking up the space that looks like waste. People learn about the preparation and follow through, instead of waiting. Waiting doesn’t look impressive.
Your analogy with Chess and Go is flawed though. In these games you try to predict the best opponent move responding to yours, the worst case basically, and then try to find the best response you have to it, and so on, until you cannot spend any more time on that line or reach your horizon. You are not "hoping things do not happen". If you did that, you would be a bad chess or go player. You make sure things do not happen.
I disagree. Especially in teaching games, and everyone writing complex software is still learning.
In Go there are patterns that are probably safe, but that safety only comes if you know the counters. In a handicapped game, it’s not at all uncommon for white to probe live or mostly live groups to see if the student knows their sequences. You see the same in games between beginners.
Professional players don’t do this to each other. They can and will “come sideways” at a problem (aji) if it can still be turned into a different one, but they don’t probe when the outcome is clear. In a tournament it inflicts an opportunity cost on the eventual winner, and it is considered rude or petty. They concede when hope is lost.
They still invested the energy, but now it comes mostly from rote memorization.
And how does that contradict my point to always expect the best opponent move and think about the best thing to do in return, instead of simply hoping the worst will not happen? I think you are actually even supporting my point here.
I think the thing that comes with ~~seniority~~ experience is being better able to predict where abstraction is likely to be valuable: you become more familiar with common classes of problems and better able to recognize them, and better able to seek out and apply domain knowledge to match (and anticipate) domain problems with those engineering problem classes.
I’m self taught so the former has been more challenging than it might be if I’d gone through a rigorous CS program, but I’ve benefited from learning among peers who had that talent in spades. The latter talent is one I find unfortunately lacking in many engineers regardless of their experience.
I’m also coming from a perspective where I started frontend and moved full stack until I was basically backend, but I never lost touch with my instinct to put user intent front and center. When designing a system, that instinct has been indispensable for anticipating abstraction opportunities.
I’m not saying it’s a perfect recipe, and I certainly earn my architecture-astronaut credentials from time to time, but more often than not I have a good instinct for “this should be generalized” vs “this should be domain specific and direct”, because I make a point of knowing where the domain has common patterns and of going to learn the fundamentals if I haven’t already.
I agree that premature abstraction is bad. Except when using a mature off-the-shelf tool, e.g. Keycloak. Sometimes if you know that you need to implement a standard and are not willing to put in the effort for an in-house solution, that level of complexity just comes with the territory, and you can choose to only use a subset of the mature tool's functionality.
I also have a lot of experience starting with very lo-fi and manual scripting prototypes to validate user needs and run a process like release management or db admin, which would then need to be wrapped in some light abstractions to hide some of the messy details to share with non-maintainers.
Problem is, I've noticed that more junior developers tend to look at a complex prototype that hits all the user cases and see it as being complicated. Then they go shopping for some shiny toy that supports only a fraction of the necessary cases, and I have to spend an inordinate amount of time explaining why it's not sufficient, and that all the past work could be leveraged with a little bit of abstraction if they don't like the number of steps in the prototype.
So, not-generic can also end up failing from a team-dynamics perspective. Unless everyone can understand the complexity, somebody is going to come along and massively oversimplify the problem, which is a siren song. Cue the tech-debt-and-rewrite circle of life.
Sure, over-abstraction is a problem. And sometimes duplication is better than dependency hell.
But other times more abstraction is better.
In truth it's an optimisation problem, where under-abstracting, over-abstracting, or choosing the wrong abstractions all lead to less optimal outcomes.
To get more optimal outcomes it helps to know what your optimisation targets are: less code, faster compilation, lower maintenance costs, performance, ease of code review, adapting quickly to market demands, passing legally required risk evaluations, or any number of others.
So understand your target, and choose your abstractions with your eyes open.
I’ve dealt with copy paste hell and inheritance hell. Better is the middle way.
I would like to be able to upvote this answer 10 times.
I often remember that old joke:
When asked to pass you the salt, 1% of developers will actually give it to you, 70% will build a machine to pass you a small object (with an XML configuration file to request the salt), and the rest will build a machine to generate machines that can pass any small object from voice command - the latter being bootstrapped by passing itself to other machines.
It also reminds me of the old saying:
- junior programmers find complex solutions to simple problems
- senior programmers find simple solutions to simple problems, and complex solutions to complex problems
- great programmers find simple solutions to complex problems
To refocus on the original question, I often find the following misconceptions/traps even in senior programmers' architectures:
1) a complex problem can be solved with a declarative form of the problem + a solving engine (i.e. a framework approach). People think that complexity can be hidden in the engine, while the simple declarative DSL/configuration that the user will input will keep things apparently simple.
End result:
The system becomes opaque for the user, who has no way to understand how things work.
The abstraction quickly leaks in the worst possible way: the configuration file soon requires 100 obscure parameters, and the DSL becomes a Turing-complete language.
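To make the leak concrete, here is a minimal sketch of how this happens. All names (`apply_rules`, `RULES`, the discount domain) are invented for illustration, not taken from any real system:

```python
# Version 1: the declarative config looks clean and "simple".
RULES = [
    {"field": "total", "op": ">", "value": 100, "discount": 0.10},
]

def apply_rules(order, rules):
    # A tiny "engine" interpreting the declarative rules.
    discount = 0.0
    for rule in rules:
        if rule["op"] == ">" and order[rule["field"]] > rule["value"]:
            discount = max(discount, rule["discount"])
    return discount

# Version 2: the first real-world request ("10% off, but only for
# returning customers, except in December") forces the config language
# to grow combinators, negation, and eventually expressions -- the
# engine is now an interpreter for an ad-hoc language:
RULES_V2 = [
    {"all": [
        {"field": "total", "op": ">", "value": 100},
        {"field": "returning", "op": "==", "value": True},
        {"not": {"field": "month", "op": "==", "value": 12}},
    ], "discount": 0.10},
]

# ...while plain code would have expressed the same rule in three
# readable lines, debuggable with ordinary tools:
def discount(order):
    if order["total"] > 100 and order["returning"] and order["month"] != 12:
        return 0.10
    return 0.0
```

The point of the sketch: `apply_rules` cannot even evaluate `RULES_V2` without growing into a recursive expression evaluator, which is exactly the Turing-completeness creep described above.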
2) We have to plan for future use cases, and abstract general concepts in the implementation.
End result:
The abstraction cost is not worth it. You are dealing with a complex implementation for no reason, since the potential future use cases of the system still haven't materialized.
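A hedged sketch of this trap (the storage example and all names are hypothetical): an abstract hierarchy built for backends that never arrive, next to the direct function that does the one real job:

```python
from abc import ABC, abstractmethod

# "We might need S3, GCS, or FTP someday" -- so a hierarchy appears:
class StorageBackend(ABC):
    @abstractmethod
    def save(self, key: str, data: bytes) -> None: ...

class LocalStorage(StorageBackend):
    # Only this backend is ever implemented, yet every caller
    # now goes through the interface and its indirection.
    def __init__(self):
        self._files = {}

    def save(self, key: str, data: bytes) -> None:
        self._files[key] = data

# The direct version: does the one job the system actually has today,
# and can still be abstracted later, when a second backend is real.
def save_local(files: dict, key: str, data: bytes) -> None:
    files[key] = data
```

The design note here is that the interface costs nothing to introduce *later*, once a second concrete implementation exists to shape it, whereas guessing its shape up front often guesses wrong.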
3) We should factor out as much code as possible to avoid duplication.
End result:
Overly factored code is very hard to read and follow. There is a sane threshold of factoring that should not be exceeded; otherwise the system becomes such spaghetti that understanding a small part requires untangling dozens and dozens of three-line functions.
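As a small illustrative sketch (hypothetical code, not from the thread), here is the same check written as a cascade of tiny helpers versus one readable function:

```python
# Over-factored: reading the rule means chasing four call sites.
def _strip(s): return s.strip()
def _lower(s): return s.lower()
def _nonempty(s): return len(s) > 0
def _has_at(s): return "@" in s

def is_valid_email_factored(s):
    s = _lower(_strip(s))
    return _nonempty(s) and _has_at(s)

# Inline: the whole rule fits in one glance, and there is no
# duplication being saved -- each helper had exactly one caller.
def is_valid_email(s):
    s = s.strip().lower()
    return len(s) > 0 and "@" in s
```

Factoring pays off when a helper has multiple callers or a meaningful name that hides real complexity; one-caller, three-line wrappers only add hops.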
---
When I have to argue about these topics with other developers, I often ask them to recall the worst codebase they've had to work on.
Most of the time, if you work on a codebase that is _too_ simplistic and you need to add a new feature to it, it's a breeze.
The hard part is when you have an already complex system and you need to make a new feature fit in there.
I'd rather work on a codebase that's too simple than one that's too complex.
I like what you are saying here! My observations below.
1) When gradually most of your implementation is happening in a DSL / graph based system all of your best tools for debugging and optimizing are useless.
2) So often I've seen people build an 'engine' before building anything that uses it. In practice the design suffers from needless complexity and is difficult to use, because of practical matters not considered or foreseen during the engine's creation. Much of the work goes into tackling problems that are never encountered, while adding needless complexity and difficulty in debugging. Please design with debugging having an equal seat at the table!
3) Overly factored code is almost indistinguishable from assembly language. - Galloway