"Simple is often erroneously mistaken for easy. 'Easy' means 'to be at hand', 'to be approachable'. 'Simple' is the opposite of 'complex' which means 'being intertwined', 'being tied together'" - https://www.infoq.com/presentations/Simple-Made-Easy/
I had a math teacher in primary school who used to shout with an exaggerated accent, "simple is not the same as easy!" She really wanted to drill the idea into our heads that just because you know exactly how to do something, doesn't mean that it will be quick or easy to accomplish.
Like, for a schoolchild, long division. The rules are simple, but given big enough numbers you'll probably mess up at least once. And then the same thing turns out to be true with algebra, geometry, derivation/integration, and on. It's not a bad mantra.
"It is straightforward to show that..." means that you could probably do it with your current knowledge, but it will take 6 dense pages, four false starts and about a week of focused work.
I used to joke that when a solution was known to exist the problem was "trivial"; when a solution was not known to exist it was "nontrivial". A problem that's bloody well impossible is "decidedly nontrivial".
'You' being personified here, rather than the general you.
Straightforward tends to suggest we don't have to have a bunch of meetings about it, because the right person either has the knowledge or we know precisely where to get it.
GUIs are easy for the specific things the programmers made easy, and potentially impossible for everything else. The moment you want something the developers didn't put in the GUI, there's no recourse other than writing your own tool.
Command lines are harder to begin with, but modern command lines give you a gentler ramp up to writing your own tools.
In all your examples, the complexity is hidden in the underlying technology, which I think makes them less than ideal. Sewing with a sewing machine is usually both less complex and simpler than sewing by hand. If you count the complexity of the hardware and the operating system and compiler, nothing in development is simple.
For me the dichotomy is better illustrated by this: I need to create a new class that, with a few exceptions, does exactly what an existing class already does.
The easy way is to copy the existing class and make the small necessary changes in the copy. The simple way would be to refactor and put all the differences in delegates.
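A rough Go sketch of the two routes, assuming a hypothetical Report type (none of these names come from the thread):

```go
package report

// The "easy" way: copy the existing type, tweak the copy, and live with two
// near-identical implementations that must be kept in sync by hand.

// The "simple" way: keep one implementation and move the only varying part
// behind an interface (the "delegate").
type TotalFormatter interface {
	FormatTotal(cents int) string
}

type Report struct {
	LineCents []int
	Formatter TotalFormatter // the few exceptions live here
}

func (r Report) Render() string {
	total := 0
	for _, c := range r.LineCents {
		total += c
	}
	return "Total: " + r.Formatter.FormatTotal(total)
}
```

The second route costs more up front (you have to find the seam and name it), which is exactly the easy/simple split being discussed.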
Did you mean "easier"? Because complex and simpler are antonyms, so it seems kind of redundant to use both words.
> the complexity is hidden in the underlying technology
The complexity is there. Maybe not all get involved with it, but it's still there.
> Sewing with a sewing machine is usually [simpler] than sewing by hand
The technology is more complex. The operation is maybe on par, though I would think it's also more complex. I may be biased in that I've hand-stitched many times and I find it super-simple, but I'm still a bit intimidated at the prospect of learning the basic use of a sewing machine. For very basic hand-stitching, you just put the thread through the needle, and the needle through the clothes in some pattern. That's it. For the sewing machine, I guess you have to lead the thread through some parts of the machinery, select some stuff through the knobs, etc. I think there certainly is a need to know a bit on the construction and workings of the sewing machine to be able to fix issues that arise.
> If you count the complexity of the hardware and the operating system and compiler, nothing in development is simple.
Complex and simple are relative terms, after all. If you refer to the last example of CLI vs GUI, they both involve the OS and compiler, etc. so that cancels out and we can refer to one as simpler or more complex than the other just based on the differences. Now, if you compare software development to making a sandwich, then sure, nothing in software development is as simple as making a sandwich.
> The easy way is to copy the existing class and make the small necessary changes in the copy. The simple way would be to refactor and put all the differences in delegates.
I agree with that, and it also aligns with the examples I gave. The complexity is mainly in how the thing is constructed. Duplicated code adds complexity to how the program is constructed. When you want to make a change to the common code, you have to make the change twice, maybe with a few differences. That makes development of the program more complex as well.
It's the same as a sewing machine, or a stick blender with chopper attachment. Their construction and maybe operation is more complex than their counterparts.
I have yet to appreciate Rich Hickey's now-famous "Simple Made Easy". While I agree with his points, I don't understand the significance of it. Simpler is easier than complex, right? Even the title says "simple made easy". What is the fuss about emphasizing "Simple is erroneously mistaken for easy"? They are not the same, but they are intimately related. Or is this an emphasis on relative vs absolute -- that relatively simple can still be relatively not easy?
I don't think I misunderstood Rich Hickey, and I don't think I disagree. But I don't understand why people quote the opening sentence and find it so significant. To me, it's just clickbait.
My takeaway was that if we conflate the two, we tend to use familiar (easy) tools to solve our problems, but that learning a new tool (hard) could result in a simpler solution.
E.g., passing something to a legacy program in a language I'm unfamiliar with from a program I wrote in a familiar language is easier than implementing my solution in the legacy language, but it's not simpler.
The 'relative vs absolute' seems like a heuristic to distinguish the two. Writing a solution in a different language is easier to me, but I can tell on an absolute level that there are more failure points to that approach.
Nice explanation. Python is a great example of this IMHO. It is a real struggle to get the Python programmers on my team to use any other language than Python.
Why? Because it's easy for them. But the solutions they create with it are highly suboptimal. They could be far more robust and expressed much more concisely and directly in other languages with more powerful type systems and better support for eg: functional concepts.
But they actually really think that because Python is easy for them, that it's "simple". It's not: it's incredibly complex.
Haha, I was thinking of that as I wrote it. My first language was C++ back in the day, then I dabbled in various languages for a while, and finally really dove into Python because there was a project I couldn't figure out how to write any other way. If I had to work with one of the languages I learned earlier, my first instinct would now be to write the solution in Python and pass it to the legacy program. Perfect example of what the speaker is warning of.
Thanks. I think I understand the background much better now. When we think easy, we always take the "my" and "now" perspective. When we think simple, we often take the holistic point of view. Thus the need for differentiation.
Well, no. Complexity has an obvious price but simplicity does too. You have to work for simplicity, even fight for it. Think of code; it just somehow becomes more complex. You have to work to pare it back to what's needed.
I can't think of better examples ATM (and you deserve some), but no, simplicity does not come easy.
A nice phrase I came across: "elegance is refusal".
Until you find a good example, I challenge your understanding :)
Similar to my response to another comment, I suspect there is a switching of subjects. It starts with a problem, and the subject is a solution to that problem. A simpler solution is easier to understand and manage. A more complex solution is more difficult. Is there a counterexample?
Try not to switch out the subject here. For example, one may propose to use a library to solve the problem by calling `library.solve`. And then one may argue that the simplicity of the code is actually more difficult to manage, as one needs to troubleshoot all the details/bugs/interfaces of the library. We should recognize that the library itself is not the same as the solution. The solution includes calling the library and all its details/bugs/interfaces/packaging/updating/synchronizing etc. And these elements intertwine to create the complexity. So the solution using the library is not necessarily simple. It is difficult exactly because of the complexity.
As you can tell, I am essentially expressing the same opinion as Rich Hickey, which is `simple-made-easy`. And it is very far away from the clickbait opening statement of "simple is often erroneously mistaken for easy". A more accurate sentence would probably be "simple is often erroneously judged from a partial view".
EDIT: To clarify, I am not saying a solution using a library is more complex. It depends. With a library, the solution is layered and delegated. The entire solution is more complex and more difficult to understand -- if one is to understand every byte of it. However, the layering means not all of the complexity needs to be understood for practical reasons. So with proper layering and a good judgement of practicality, the part of the complexity that you practically need to manage may well be simpler (and easier) by using a library, or not. It depends.
I don't deny your right to challenge, but right now I can't give an example. I've just gone through months of my posts looking for one particular post that might clarify, but I can't find it. Not being able to search your own comments is frustrating. I'll have a muse overnight.
Simple is easier than complex the same way that exercise is easier than chronic obesity. If you have the discipline to do the obvious that's great, but it takes willpower to create or do the simple thing. Oftentimes it's easier or more expedient to do the lazier easy thing in the moment, but you pay for it down the road. For example: I notice I'm doing the same calculation twice on the front and back end of my application. The "simple" thing to do would typically be to extract that logic to one place so that you don't end up having to modify it in two/five/twelve places down the road. But I'm already halfway through writing it, and the simplification will involve some non-trivial refactoring, so I take the easy route and write the same logic twice. It's easy for now, but will be complex when I have to change it down the road.
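A minimal Go sketch of that trade-off, with made-up names; the point is only that the duplicated rule drifts while the shared one cannot:

```go
package pricing

// Before: the same discount rule written twice, once in the HTTP handler and
// once in the report code, e.g.
//   discounted := price - price*10/100
//   discounted := price * 90 / 100   // "the same" rule, already diverging

// After: one shared definition that both the front and back end call.
func ApplyDiscount(priceCents, percent int) int {
	return priceCents - priceCents*percent/100
}
```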
Modules are "simpler" than vectors because they have fewer axioms, but they are also much harder to understand. For example, not all modules have a basis, which can make them much harder to work with.
Good luck explaining "simpler" with modules and vectors :).
Simple is defined as not intertwined. To understand an axiom is to understand how it intertwines with other axioms to prove certain results. So fewer axioms necessarily result in more intertwining, i.e. more complexity. I think here we are switching subjects: from the axioms themselves to the results we want to prove. If we focus on the simplicity of proving the results, the simplicity of the axioms is irrelevant.
The main reason modules are interesting is not as a generalisation of vector spaces, but because they are helpful in studying rings. Kernels of ring homomorphisms are ideals, which in general are not subrings, but they are modules - and of course every ring is a module over itself. So to study a ring R it pays off to instead study R-modules, since working with them is... you guessed it! Simpler.
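For readers without the algebra background, the standard facts this relies on can be written down compactly:

```latex
% For a ring homomorphism \varphi : R \to S, the kernel is an ideal of R
% (generally not a subring, since it rarely contains 1), and it is an
% R-module under the action r \cdot x := rx. In particular, R itself is
% always a module over itself.
\[
\ker\varphi = \{\, r \in R \mid \varphi(r) = 0 \,\}, \qquad
r \cdot x := rx \quad (r \in R,\ x \in \ker\varphi).
\]
```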
The way I see it, when there's already a lot of complexity inherent to the domain (eg, software design), it's nearly always much easier to add to the complexity than to find a way to reduce it.
The problem here is not that "simple is not easy", it is rather "picking partial and sacrificing/neglecting whole". Since one is only part of a team and a part of the whole design/develop/use circle, the "whole" problem is not (necessarily) "my" problem, therefore it is easy to pick a simple and easy solution from "my" perspective. The "my" and "whole" can also be swapped with "now" and "future". "now" is here but "future" is uncertain.
That's where "local complexity : global simplicity" tradeoffs come into play; well-defined boundaries (coherent interfaces) are key to striking the right balance.
"Now:future"?
Yeah, YAGNI (You Ain't Gonna Need It) and STTCPW (Simplest Thing That Could Possibly Work) are good rules of thumb.
Finally, as for "not my problem"?
IMHO (and IME, 21yrs in the industry), that's a dangerously myopic stance. Those who make the effort to expand their perspective beyond the scope of their immediate tasks and responsibilities are those whose skills, powers, value and influence show commensurate growth. By all means, be a good team player and do your (current) job to the best of your abilities, which includes efficiency and ergonomics and awareness of available shortcuts. But if you do this for too long, be aware of the compounding effects, not only on the larger system's technical debt, but also the limits this may be placing on your career.
If you haven't seen this talk; watching it will make you a 10x better programmer. This is what I take for my definition of complex and it applies broadly in a very practical manner.
What rhetoric? Are you confusing this with "the 10x programmer" meme?
Claims of becoming a 10x better programmer aren't claims about making one a 10x programmer. The former is about relative self-improvement and motivationally hyperbolic; the latter is about relative comparison to others, is often used negatively to belittle, and is detrimentally hyperbolic.
I would defensively be more hyperbolic and use a different number, just because 10x is tainted by stupid ideas in programming. But your intent was pretty clear to anyone paying attention... that's just a high bar sometimes.
If respect is measured by an integer, going from level 2 to 20 is great. But if you have no respect, then gaining 10 times as much still leaves you at none.
If you are disrespected DON'T WATCH THE VIDEO unless you want to be disrespected more by a factor of 10!
However if you go from writing spaghetti code to something more structured (i.e. loosely coupled, however that is expressed in your language) then your teammates will hate you less.
The speaker is Clojure creator Rich Hickey, but the talk is about a mental model for thinking about complexity.
Inherent complexity involves tradeoffs.
Incidental complexity you can fix for free.
"And because we can only juggle so many balls, you have to make a decision. How many of those balls do you want to be incidental complexity and how many do you want to be problem complexity?"
The article is about the former. I bet the latter dominates day-to-day line-of-business coding.
Simplicity is often a matter of perspective, a function of a certain perception of a complex subject and the set of expectations that go with this perception. There is no absolute in analysis and in modelling synthetic propositions from the atoms used by the particular analysis.
(E.g., we may analyse and model an action in terms of verb-noun or of noun-verb, with major differences in what may be perceived as "simple" in the respective model.)
Referring to the above example of verb-noun vs noun-verb grammar: take, for example, the infinitive verb form. With the former (verb-noun) it's just the verb devoid of any context, simplicity at its purest, which is also why and how it's listed in a lexicon. Looking at this from the noun-verb perspective, you have to construct a hypothetical minimal viable object, which will also be – as you want to keep things simple – the object every other object inherits from, the greatest common denominator of any object that may appear in your system. With this, you've arrived at the most crucial architectural questions of your system, its reach and its purpose.

While it's still about simple things, neither the task nor the definitions derived from the process will be simple at all. Nor is there a universally accepted simple answer, as the plurality of object-oriented approaches testifies. The question is on an entirely different scale and level for the two approaches. On the other hand, a verb-noun approach faces something similar for anything involving relations, which are already well defined in an object-oriented approach. And once you've arrived at these simple fundamentals of simplicity in your system, what is simple or not will depend on the implicit contracts included in these definitions and how well they stand the test of time and varying use and purpose.
Later in the talk, he draws a distinction between inherent complexity (the focus of the article) and incidental complexity (which you can fix without tradeoffs). Tradeoffs can be critically important, but the latter kind of complexity probably dominates my day-to-day life. I find this oddly encouraging, in a free-lunch sort of way.
"And because we can only juggle so many balls, you have to make a decision. How many of those balls do you want to be incidental complexity and how many do you want to be problem complexity?"
I wouldn't say simple is the opposite of complex, though? Especially when talking about software systems, or systems in general. What I am thinking is that some complex systems can be made of very simple components.
The best example is our complex brain being made of simpler components working together. Maybe the opposite of complex is chaotic? I don't know...
Complex systems can indeed be made of simple components; complexity, however, is a measure of interconnectedness. The key concept is that we can only hold a finite amount of complexity in our heads at any one time, and so if we can minimise that we can be more efficient and effective.
The analogy is a lego castle vs a wool castle. A lego brick is very simple and contained, and from this you can build wonderful structures; in addition, if you wish to change out a portion it is easy to do, because changing one part of the system (i.e. implementation) doesn't affect the rest so long as the contract between components is maintained.
Contrasting: should you pull on a thread in a wool castle it will affect other parts of the castle. A lot of software is like this, which makes it very hard to reason about.
"Interconnectedness" is also a measure of resistence to hierarchical decomposition (or factoring ax+bx -> (a+b)x); irreducable complexity.
One technique is redefining the problem, to smaller or bigger:
Work on only part of a problem, a subset, leaving something out. e.g. git's data model does not represent renames as first-class constructs, enabling it to be disproportionately simpler.
Expand the problem, a superset, to integrate lower or higher level or associated parts that aren't usually included. Previously scattered commonalities may then appear, enabling decomposition.
And the Lego analogy works particularly nicely considering just how much effort, precision and design work needs to go into making the blocks simple [0]. This is a nice analogy for how keeping software components simple and making them interface cleanly is a difficult task.
The quote of "If I had more time, I would have written a shorter letter" has been quoted so often that it is arguable who said it first, but it is definitely at least a 300+ year old concept.
Simplicity takes effort. It takes time. And you often cannot write simple software in the first version because you are designing it as you go. Simplicity comes from maturity both in the product design and the development staff, and is a long term goal.
It is not something that you achieve by critiquing individual PRs, but by insightful long-term thought into how to solve your problem with your code, and many refactors over time.
It is wise to strive for simplicity in all code you write, but wiser to have the patience to let it develop over the lifecycle of your product.
"Making things simple is hard work." - Rich Hickey
When I think of the simple vs complex bifurcation I think of fractals, a "simple" set of rules expressed in a complex way. The complexity arises from their expression/expansion iterated over time, but the rules that govern that expression are economical and compact. Think of an acorn: the encoding of a tree is all there. Its self-similarity, recursively expressed, creates the complex output of a tree. But nature always prefers the most economical solution because in the physical universe, resources are limited.
Agreed. I learned the most working with a system I built from scratch and managed for 4 and a half years. You can't blame other people when you have built it yourself. You see what areas of code constantly needed updating, and where you could have improved the design. With any luck you will be given enough time to refactor it and improve it.
> "If I had more time, I would have written a shorter letter"
I don't like seeing this quote in the context of programming.
Sure, they're both about writing, but in different settings. A person who writes a letter has one goal: for the intent of the letter to be understood by the recipient. However, a programmer writing code has two goals: not only for the intent of the code to be understood by future programmers, but also to make the code easy to change in response to unknown requirements without causing any regressions.
The article is a bit all-over-the-place, but I do agree with it in general, because I've seen far more bugs in production caused by changing code than by adding it. Therefore, I would rather use a language with advanced features I can use to make my code harder to change in the wrong way. I think that if you phrase it like that, it makes it more acceptable to "avoid simplicity".
I get what you mean, but I tend to think of them as similar in a different respect: that the concepts being expressed are clear. If the reason code is organized a certain way is clear, along with its effect, then there would be no motivation to stray from it. The problem is that often the organization is the result of achieving multiple effects rather than behavioral factoring.
>> "If I had more time, I would have written a shorter letter"
>I don't like seeing this quote in the context of programming.
Maybe you're overthinking it?
Just yesterday I delivered a temporary fix with copy/pasted code inside because I didn't have the time to write the 'correct' code.
This is such a well written comment. Often what starts as 'simple' turns into spaghetti complex over time due to the lack of foresight.
It's a hard job and requires a lot of experience to create systems that are simple to understand and to ensure the integrity of the system isn't lost when new things are added on top of them. Foresight into what might realistically be added in future is what seniority enables.
> For example, dynamically typed languages like Python (or Ruby, Lisp, etc.) are easy and pleasant to write. They avoid the difficulty of convincing a compiler that you’ve written your code correctly, but they’re harder for a new team member to comprehend: you’re trading fluent writing for more laborious reading.
I feel like this misses the point of high-level languages. In my experience, higher-level code is easier to read and write. For one thing, there's simply a lot less of it.
Lisp isn't more productive just because it's easier to "convince a compiler that you've written your code correctly". After all, if you haven't written it correctly, the user will notice right away, too!
> Short functions, methods, and classes are quick to read, but must either be built of many lower-level functions or themselves comprise parts of larger compositions in order to accomplish anything. You’re trading away locality in favor of concision.
That's how languages work. You used the word "concision" rather than spelling out what it means in tiny words, and count on us readers to know it, or look it up. That's usually a win, especially if we have a standard set of words.
Again, this is where higher-level languages win big. You can encode higher-level concepts. I can write MAP or FILTER and people know what they mean. Or if my language doesn't have those, I can write them.
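For instance, if the language at hand lacks them, a few lines of generic Go (1.18+) recreate that vocabulary; the names here are just illustrative:

```go
package seq

// Map applies f to every element and returns the results.
func Map[T, U any](xs []T, f func(T) U) []U {
	out := make([]U, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

// Filter keeps only the elements for which keep returns true.
func Filter[T any](xs []T, keep func(T) bool) []T {
	out := make([]T, 0, len(xs))
	for _, x := range xs {
		if keep(x) {
			out = append(out, x)
		}
	}
	return out
}
```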
> I feel like this misses the point of high-level languages. In my experience, higher-level code is easier to read and write. For one thing, there's simply a lot less of it.
High level code shifts the burden of understanding the type system from the author and the compiler to the reader.
Yes, there is more text to read in a strongly, strictly typed language, but that extra syntax conveys a lot of important information quickly and cheaply, and this is information that the reader will need to know.
The difference is immense. In the latter case, I know exactly what this function wants me to provide, what contracts the arguments must honor, and I know something about what the function intends to do with those arguments; it's void, so there are probably side-effects.
In the first case, I have to figure that out by reading the body of the method. The signature could tell me all of this, but it doesn't.
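Since the two signatures being compared aren't shown here, a loose Go analogue of the same contrast (all names invented) might look like:

```go
package demo

type LogEntry struct {
	User       string
	DurationMS int
}

type Report struct {
	Count   int
	TotalMS int
}

// Tells the reader almost nothing: what's in the map, what comes back,
// can it fail? You have to read the body to find out.
func Process(entry map[string]any) any { return nil }

// The signature alone states the contract: inputs, outputs, and the fact
// that it can fail.
func Summarize(entries []LogEntry) (Report, error) {
	var r Report
	for _, e := range entries {
		r.Count++
		r.TotalMS += e.DurationMS
	}
	return r, nil
}
```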
> After all, if you haven't written it correctly, the user will notice right away, too!
My users are not paid well enough to debug my code for me.
In a higher-level language, I tend to have a "cat" which is generic-by-default. In a functional language, functions tend to have no side effects. What exactly is the purpose of specialization here, or mutation?
Arguments like this against Lisp tend to invent examples like this which sound reasonable in a Java program but don't exactly make sense in Lisp.
Yes, static type systems can be handy if your program is based on the concept of defining new types and mutating them through new specialized methods. None of the programs I write are like that. That's not the only way to write a program.
> If we care about human understanding of the contract, types are no better than names.
That depends on your familiarity with the code though. Yes if you are deeply familiar with the code you probably know what data type an "entry" is. But come to it cold and trying to reverse engineer code like that in the middle of a complex system to understand it is frightening (at least, in my experience).
I personally find functional harder to debug and haven't got past that bottleneck yet. Yes, maybe I'm "doing it wrong", but I can't find "right". Somebody else once told me, "functional makes it easier to express what you want but imperative (traditional programming) makes it easier to figure out what's actually happening". That rings very, very true for me. In terms of productivity, it looks like the second is trumping the first. In another forum, the "solution" was "just don't make bugs". Doh! Even if true for an individual, it may not scale to a bigger team.
Do you mean the first question literally? My salary has never been well correlated with my productivity.
Figuratively, in the sense of "if Lisp allows you to be so concise, why aren't your programs small?", I can report that in every case where I ported a program to or from Lisp, the Lisp version was significantly shorter than the non-Lisp implementation. So I'd say, "it is".
As for whether I'm just an amazing programmer, I don't think so. (I don't think I've ever been accused of that!) I know a company here that uses Clojure and most new hires have to learn it on the job. From what I've heard, nobody has a problem with it.
I find C (and imperative languages in general) harder to write and debug. Am I "doing it wrong"? I've found C hard to work with since around 1990 when I first tried to learn it.
Re: "My salary has never been well correlated with my productivity."
I'm thinking more from the owner's perspective. If they hired a bunch of Lisp programmers, then they'd produce more software with fewer programmers (if the productivity claims were correct). They'd then get rich and expand, and other companies would take notice and copy.
Re: "I know a company here that uses Clojure and most new hires have to learn it on the job. From what I've heard, nobody has a problem with it."
There seem to be niches where it works out well, but it has stayed niche since the invention of Lisp. After 50+ years of not catching on in the mainstream, you have to wonder about real-world market fit.
Re: "I find C (and imperative languages in general) harder to write and debug."
C is a relatively low-level language, mostly designed for hardware speed instead of programmer speed. Thus, it's not the best specimen of imperative languages to compare with.
But in general such seems to be subjective: people give widely varying anecdotes. Those who like Lisp dialects gravitate toward them and those who don't go elsewhere.
One of the reasons I hover in the space of semi-functional languages like Groovy and Scala. Because as much as I love concisely expressing a piece of complex logic as a composition of reduces, folds, maps etc., being able to easily and naturally break out of that to execute a piece of imperative code in the middle of it is still regularly a life saver to me and often an order of magnitude clearer than the pure functional solution to a reader.
A lot of the time it seems programmers are just avoiding using SQL to do query-centric processing (slicing and dicing lists). Aside from performance problems, another downside is that one reinvents query language idioms in each programming language. It increases the learning curve and is anti-standardization.
Wrapping up important concepts under good names is clearly a good idea. But on the other end of the spectrum, factoring out a single line from 2 methods and giving it a bad name does far more harm than good. Probably 90% of DRY I come across is this bad kind, I would guess because refactoring for better semantic clarity takes much more effort than just factoring out the common stuff under a different name. I think this downside doesn't get nearly enough airtime as it should.
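A made-up illustration of that bad kind of extraction, next to one that names a real concept:

```go
package dry

// Two methods each happened to contain `total += qty * price`, and the
// shared line got pulled out under a name that explains nothing. Callers
// are now coupled through a helper whose name carries no meaning, so they
// will change together even when their reasons to change differ.
func calc(a, b int) int { return a * b }

// An extraction only pays off when there is a real concept worth naming.
func LineItemCostCents(quantity, unitPriceCents int) int {
	return quantity * unitPriceCents
}
```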
There was a wise thing along the lines of "to build a simpler system, start with more powerful (and complex, by necessity) building blocks"... I think it was Alan Kay who said this or something very similar.
Worst thing is that "simple" languages like Go force you to mix up "business logic" with "plumbing/infra logic" with "error handling logic" if you try to be idiomatic (there's a sketch of what I mean after the next paragraph)... I know that for security-crucial code you might want to see all error paths next to a piece of code, but for the majority of other code I want error handling tucked away in a different system, and also the business logic factored out into some other very generic one that doesn't care about the actual implementation of infrastructure.
Sure, if infrastructure IS your very product like Go's use case probably is, then it probably makes sense to not have features for such separation.
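A minimal Go sketch of the interleaving described above; the banking domain and helper functions are invented stand-ins, not real APIs:

```go
package bank

import (
	"database/sql"
	"errors"
	"fmt"
)

var ErrInsufficientFunds = errors.New("insufficient funds")

// Hypothetical helpers standing in for real queries.
func currentBalance(tx *sql.Tx, account string) (int, error) { return 0, nil }
func move(tx *sql.Tx, from, to string, cents int) error      { return nil }

// One function ends up carrying business rules, plumbing, and error
// handling in the same vertical slice.
func TransferFunds(db *sql.DB, from, to string, cents int) error {
	tx, err := db.Begin() // plumbing
	if err != nil {
		return fmt.Errorf("begin tx: %w", err) // error handling
	}
	defer tx.Rollback() // plumbing

	balance, err := currentBalance(tx, from) // plumbing
	if err != nil {
		return fmt.Errorf("read balance: %w", err) // error handling
	}
	if balance < cents { // the actual business rule
		return ErrInsufficientFunds
	}
	if err := move(tx, from, to, cents); err != nil { // plumbing
		return fmt.Errorf("move funds: %w", err) // error handling
	}
	return tx.Commit() // plumbing
}
```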
I experienced a good example of this recently when trying to decide between Bitwig and an upgrade of Ableton Live Lite to Suite.
Suite comes with a full-featured audio programming environment called Max for Live. It has some high level devices for common functions, but you can go very low level and make just about anything. The building blocks are all very simple and basic: oscillators, math functions, routing, etc.
Bitwig's Grid takes a different approach with a small number of powerful building blocks. The range of possibilities is smaller, but you can be more productive if the thing you want to make is composable from the available devices.
Funny enough, one of my favorite talks argues the opposite. "Constraints Liberate, Liberties Constrain." I won't attempt to summarize but it's well worth a watch.
>Worst thing is that "simple" languages like Go force you to mix up "business logic" with "plumbing/infra logic" with "error handling logic" if you try to be idiomatic...
How so? Can you not write modular code in just about any language? I'm not saying programming languages don't matter, but in terms of separating business logic from plumbing I don't see huge differences.
I also don't think you can separate errors from the rest of your code on the same level of abstraction. Business logic errors such as "account overdrawn" are business logic. Plumbing errors such as a broken network connection are plumbing.
The only errors that can and should be kept separate are bugs that need to be fixed.
> error handling tucked away in different system
I'm not sure that can ever be done. I agree Go could use some syntactic sugar around error handling, but I think even modern software is incredibly flaky and Go is a step in the right direction.
It's tiring to think about all the failure cases but it might lead to a world where the ATM gives me money instead of a Windows BSOD.
I once worked on a system where the senior engineer on the project had a dogmatic definition of simple.
It was an MVC web application. He insisted that having any form of flow control or logic in views was too complex - it was simpler to regard all logic as application logic and put it in the controller. This meant that instead of one controller being able to drive multiple views, something as straightforward as rendering a table a row at a time required a dedicated controller action.
That engineer's definition of simplicity produced an explosion of complexity in the codebase. That was when I came to understand that hand drills may be simple, but power drills are very useful.
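For what it's worth, a hedged sketch of that trade-off using Go's standard html/template, with invented data: letting the view range over rows means one handler can feed many templates, while banning all view logic pushes the iteration into per-view controller code.

```go
package web

import (
	"html/template"
	"os"
)

type Row struct {
	Name string
	Qty  int
}

// The iteration lives in the view; the handler only supplies data.
var table = template.Must(template.New("table").Parse(`<table>
{{range .Rows}}  <tr><td>{{.Name}}</td><td>{{.Qty}}</td></tr>
{{end}}</table>
`))

func RenderExample() error {
	data := struct{ Rows []Row }{Rows: []Row{{"widgets", 3}, {"gadgets", 1}}}
	return table.Execute(os.Stdout, data)
}
```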
Yip, I've also encountered "architects" who obsess on certain things, and make a mess in the process, at least something that is a "mess" to the majority of coders.
One is left with a dilemma of complain harder or leave. Unfortunately, leaving is probably the only way out. Often such people have "connections" and are thus not correctable. They have outlier minds and want a system that models their outlier brain even if multiple developers complain. Bail out.
The article refers to "the much-abused YAGNI" and links to another blog post that goes into detail. Apparently some people take YAGNI to mean never anticipate anything. Is that your experience? I've always taken it as a tie-breaker when you're unsure whether to anticipate or not, and I've had the impression that others take it the same way.
Overall I like this article and the YAGNI one it links to.
It is easy to look back and say, "I should have made this modular," or "I should have isolated that external dependency." It is easier to see how another level of indirection would have saved you work after the fact, but maybe YAGNI was the right call at the time.
That's the trap that Magic and poker players call results-oriented thinking: You got burned once by making the best decision available given the information you had, and you give that experience a disproportionate weight, like the player that lost big with a pair of kings in his hand, and undervalues a pair of kings after that.
When you look back, you're looking back from the particular world where X became a requirement. If instead, Y had become a requirement, you would be looking back wishing you had structured your code in some completely different way. YAGNI then applies to the extent that you can't predict which of X and Y it will be in advance.
I take YAGNI to mean not developing abstractions until you have enough use cases to back them up.
If you don't have a reason now to make an abstraction, don't do it yet. Wait until you have at least 3 use cases where you're repeating yourself before you come up with an abstraction that collapses them. You'll know better what the right abstraction will be once you have those use cases... any abstraction you come up with now is much more likely to be wrong. And a wrong abstraction is much worse than repeating yourself.
Of course, if this is the umpteenth time you've done the same CRUD app that wraps a relational database, go ahead and use the abstractions you know worked before, because in that case, you've already seen the use cases before and you have a better idea of what abstractions will work.
Yes, I worked in teams where "YAGNI" would come up all the time, preventing any kind of actual code design/architecture.
Every time YAGNI is mentioned, it should be put against code quality (maintainability, readability, testability, etc) because they are often opposite/fighting concepts: code quality, in the end, is all about making (not yet known) changes easily, which means you have to anticipate different possible futures.
The sweet spot between these two principles is not easy to find. Thinking and discussing about this on projects is a good start.
> is all about making not yet known changes easily, which means you have to anticipate different possible futures.
The absurdity of this statement should immediately convince you that YAGNI is fundamentally correct. Anticipating the future is literally science fiction. More often, it is superstition; developers are very prone to FUDing themselves.
The thing is, demands create products and products create new demands. Even on small personal hobby projects this feedback loop can happen and sometimes lead to a full rewrite.
Moreover, code quality is far from being "all about making changes easily" [1]
> The sweet spot between these two principles is not easy to find. Thinking and discussing about this on projects is a good start.
There's nothing to discuss about it at the coding/design level. Anticipating a demand, generalizing it is done at a level above, or by someone else who has a better understanding of the market or the customer. Know your place. What you have is a document that tells you what needs to be done and a lot of freedom as for how to implement it.
The only case where anticipation can play a role is when you have several equal implementation options. There you can choose the one that seems more resilient to imaginary changes in the demands.
I think the confusion around simplicity is in what terms a thing is simple.
For example, 2 pieces of text can be simple in 2 different ways:
The first text is long but uses a small vocabulary. So it is "simple" in the sense that you need to learn only a few words to read it.
The second text is short but uses a richer vocabulary to describe things with fewer words (syntactic sugar). It is "simple" in the sense that you only need a few words to describe stuff, but more "complicated" because you need to learn more words.
"Neat" code can be "simple" to maintain. "Messy" code can be "simple" to implement.
What you are describing here is approaching usability. And when it comes to usability, simplicity doesn't matter on its own; there are four other things to always keep in mind together with simplicity: consistency, flexibility, universality and familiarity. And then it's about understanding what users need your piece of text for, and applying those concepts to come up with the best balance that makes users think and learn as little as possible throughout the whole lifecycle of use.
Take, for example, familiarity. If you use a richer vocabulary that your users are already familiar with, they certainly wouldn't need to learn more words, and the piece with the richer vocabulary would be easier for them to go through than the longer piece with the poorer vocabulary.
Simple doesn't mean more usable, but it's a pretty straightforward concept to understand, there is no confusion around it.
>"Neat" code can be "simple" to maintain. "Messy" code can be "simple" to implement.
I was thinking something similar recently.
Good code should take the next developer less time / effort to understand it than it took the original developer to write it (assuming similar levels of ability).
And as I mentioned in another comment, I learned the most by building a system from scratch and maintaining it for 4 and a half years. If I couldn't understand a bit of code that I had written more or less straight away, then it was usually time to refactor it.
Go's parsimony is about cognitive load. I've programmed for 30 years and am an expert in complex languages like C++. I find that when I write Go my mind spends far more time thinking about the problem than the language, which I like.
The language is modern and powerful enough to spare the programmer from some of the worst drudgery (unlike C) and is much more inherently safe, but it's not a jungle gym of complicated constructs and difficult quasi-mathematical concepts that (while interesting) detract from the task at hand.
> The complexity of software is an essential property, not an accidental one. Hence, descriptions of a software entity that abstract away its complexity often abstract away its essence.
- Fred Brooks, No Silver Bullet, 1986
> Complexity is the business we are in, and complexity is what limits us.
"Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away."
By simplicity, we mean frugality.
If you can implement something without adding a ton of dependencies, abstraction layers, and design patterns, it will be easier to understand and debug.
One analogy I like for software problems is knotted string. You approach the problem, loosen some things up, try some approaches, and sometimes you get lucky and shake things loose.
Other times, you have to cut the string and you lose fidelity in your solution.
Other times, you can only move the problem around so it looks simple to the user; there's still a knot there, but it may be hidden in the back end.
This isn't the only analogy, and it's incomplete, but I still like it.
1) Concise vs. Verbose
2) Straightforward vs. Confusing
3) Flexible vs. Inflexible
4) Efficient vs. Inefficient
The important thing about these 4 is _they get along horribly_. The most concise solution is often confusing, inflexible, and/or inefficient in many respects. I should also note that straightforward-confusing is the hardest one to measure objectively.
The two perspectives that I often find at opposite poles when proposing simplicity is that of the developer vs. that of the user. Generally what's simple for the user drives less simple solutions for the developer and vice versa. It's not always possible, but when it is, I strive to find the solution that ends up being dead simple for both.
I'm in the opposite camp from you — I like generics, lack of null, and syntax highlighting — but that's fine! I don't think this is something we have to 'agree' or 'disagree' with. It simply means that you're within Go's target market. If you want to use a simple language, and the software you're writing works just fine without these features in your code, then of course it's OK to jettison them and stick with the simple language. I need those features because of the software I write, so I need to stick with a more feature-ful language.
Acting like there needs to be some 'one true way' when we're writing completely different pieces of software is how pointless internet arguments get started.
The problem is not all of us get to choose the language we work in and sometimes someone picks a language for all the wrong reasons and then you're stuck with it because a rewrite is costly and usually not an option.
You're right, and that is unfortunate. But "people sometimes make the wrong decisions" is a much tougher nut to crack!
I agree that we should have these debates — but on a "is this language acceptable for my use case" level, not a "is this language acceptable for any use cases" level.
This. So much. I would really hope that folks could accept that different languages have different objectives, and that this is good. The problem I've seen is that there's always a feature creep by folks that like more expressiveness; "This is a solved problem and not an issue at all! You must make it a part of the language!". So there's some kind of inherent pursuit of conformity given enough adoption.
That’s true, but there are a lot of beautiful and featureful programming languages out there, and possibly only Go in the “simplicity above features” camp. I disagree with the move towards Go 2 by introducing generics and whatnot. If I really want those features, I know where to get them.
> In my experience, most harsh criticism comes from those who either barely used Go if at all or tried to build something Go wasn't aimed for.
How do you know the harsh critics of Go you read barely used Go at all? What things did harsh critics try to build with Go that it wasn't aimed for, and/or what is the most frequent wrong thing those critics try to build with Go?
I think it's worth pointing out that the current objections to Generics in Go have nothing to do with them being too complicated for programmers. The summary is [1]:
> The generic dilemma is this: do you want slow programmers, slow compilers and bloated binaries, or slow execution times?
Here "slow programmers" means, "do you want a language without generics". Both quick compilation and quick execution are first-order goals of Go, so people are loathe to give them up.
You mean who are we optimizing for. For the guy who has to understand and maintain this code 2 years from now. (This guy could in fact be you yourself).
That's why I like Go. And C, now that I think about it...
I agree with the first part -- which is why I don't like low-level languages like C.
I can't count the number of times I've encountered old C code that uses pointers and bytes and arrays, and I discover a bug, and on further inspection I can't even figure out what they were trying to do. It's all just pointers and bytes and arrays. The language is too low-level to encode intent, except at the lowest level, so you're putting all your trust in comments.
I agree with the author that the simplicity of learning the language is not the only type of simplicity. There are other types of simplicity, like how consistent the language is. For instance, I've felt for some time that Python has become too complex. It started out quite simple, and there are now like 3-4 different ways to write a method signature.
Steve Jobs created many things and also destroyed many. One of them is the openness of platforms, and another is what I'd call delightful complexity. Simplicity is not always a joy, and a lot of overzealous wanna-be-Jobses seem to forget that. I do want all those buttons and choices in my music app that let me be expressive and opinionated about my taste. And I definitely want generics!
A bigger issue is that many times simplicity for you is not simplicity for others. This often comes back to haunt you in software design. You might think that complex design is not good for you, and you may strive to simplify internals even though it might make life awkward for users of your creation. I feel a lot of such decisions have been made in designing Go. Overzealous pursuit of simplicity has led its designers to keep the compiler less complex, internally beautiful and a work of art, but at the expense of external usability for its users. This article captures the sentiment pretty well.
I don’t think it’s a good idea to introduce complexity to the definition of simplicity and I think this is what the author is trying to do.
It’s completely backwards. The whole point of calling for simplicity is to make you think about whether there is a simpler way to express yourself that still solves the problem at hand. This might be a hard task, but it’s not a complex one.
I think this piece risks being interpreted as saying that simplicity is not important.
The fact that simplicity is hard to achieve will be obvious to anyone who has ever truly tried to achieve it.
Personally I would worry far more about people thinking simplicity is a fool's errand and so not trying hard enough to achieve it, rather than people thinking it might be easier than it is.
TFA seems to build upon confusion of how "simplicity" is defined.
It's easier to compare it against "complexity" which is the sum of parts; e.g. a complex machine, so "simplicity" must be attributed to an atomic design of "the smallest part which cannot be broken any further"; i.e. think of a simple machine like a lever.
I think Ousterhout’s pragmatic definition of complexity is the best one when discussing actual code bases: complexity is anything that makes the system hard to change, taking into account how often the relevant part needs to change.
Yeah, with this definition, complexity is something you notice as a working team, not something you can directly measure with syntactic criteria (of course syntactic criteria may correlate with the complexity).
I’ve actually been writing a post about just this concept. When I think simple, I think solving the problem in a way that adds value and doesn’t have unnecessary overhead. I think it’s important to highlight the differences between simple, complex, and complicated (like from the Zen of Python). The basic idea is that if possible you want a simple program, but most problems aren’t that simple. So the solution needs to be complex, with lots of parts. Those parts can and should be simple though. Complicated means adding things that don’t add any value. Maybe it’s using containers on a CRUD app that doesn’t need it because it looks good on a resume.
I'm usually in favor of the good old KISS principle, but I've experienced a few situations where "simple" ready-made solutions were chosen on the basis of a few demos showing "how simple it is to use this or that feature" over writing in-house scripts in a general-purpose language like Python. In all of those unrelated situations there was trouble later, when it transpired that it is indeed simple to use features A, B and C exactly as the original authors intended, but the moment you want to tweak something, manipulating the whole thing is terribly inefficient and/or complex. In the end, tweaking those simple ready-made solutions required more work than reimplementing the whole thing from scratch would have taken in the first place...
I would define simplicity as "the approach that minimizes cognitive load while meeting all other objectives"
I would hope that people with the same objectives would agree on simplicity for that domain.
Examples:
Need a lot of machine control, direct memory control (page alignment, cache alignment, shared memory, NUMA, etc.), direct interaction with system calls and hardware, and security is not important in the domain (e.g. a private HPC cluster behind a firewall).
C is probably the simplest
Need to quickly prototype something medium sized that involves non-trivial data structures or needs to interact with an http server. And you don't care about performance.
Something like Python is probably simplest
Etc etc etc
By contrast, try doing the former with Python, Java, etc. and see how much additional cognitive load you have to add. Ever try the latter with C? No fun.
> I would define simplicity as "the approach that minimizes cognitive load while meeting all other objectives"
That's one dimension of simplicity, but it doesn't take into account some of the other benefits of simplicity, such as, eg: robustness. Another definition would be "the approach that minimizes the total number of possible interactions" (akin to "moving parts" in a physical machine). In that sense, the fewer total possible interactions, the fewer places something can break. There might be MORE cognitive load to a human in direct sense, but the system is far more constrained to only do the "right thing".
I sometimes think of the generalization of it being "the approach that minimises the total state space of the system, including the brain of the human trying to understand it".
I like to take an information theoretic, model-problem fit stance on simplicity.
It's useful to reason about it in terms of overfitting vs underfitting, over-abstraction vs under-abstraction, high resolution fit vs low resolution fit, and in terms of cross entropy between the knowledge embedded into a software system and the information relevant to solve the corner cases of the problems (https://medium.com/@b.essiambre/product-market-crossfit-c09b...).
Well tuned knowledge is not an easy thing.
My take on simplicity is how strong the abstraction layers are. Using a web framework to show "hello world" versus making a video card driver and make it show "hello world" on the screen.
A world class athlete makes something hard look easy. So just because the code looks simple, doesn't mean it is easy.
Remember when you first learned how to code, or learned some new concept, and now you can't remember what was so difficult about it?
Just one line of code might touch several computers, hardware components, with several software layers. Everything has to go right, all the way down to atomic level. Nothing is simple. It just appears to be simple through strong abstraction layers.
Recognizing simplicity requires familiarity with complexity.
- - - -
FWIW, Category Theory provides a mathematical formalism for determining when your system is "as simple as possible, but no simpler".
- - - -
> “average programmers” have difficulty understanding complex languages
That's not pejorative IMO, that's practically tautological: If "average programmers" have difficulty understanding a language, isn't that an indicator that it might be too complex? Anyhow, trawling through e.g. Stack Exchange provides adequate evidence, IMO. Or a thread about Haskell with all the people complaining about how hard it is to understand FP, etc.
There are several key "rules of thumb" to keep in mind, and they are often contradictory. Knowing how to weigh them well against each other takes experience in both the domain and IT in general.
Another rule of thumb to add is "don't obsess on a single rule of thumb; they are all important."
In my experience "keep it approachable" often overrides "less code" and DRY. Maintenance staff changes and you probably cannot control the quality of the next maintainer such that if your code requires an abstraction guru to figure out, you may put your org in a difficult spot.
>There are several key "rules of thumb" to keep in mind, and they are often contradictory.
Rules of thumb in software engineering are bullshit masquerading as wisdom. Don't keep them in mind. They will hog your cognitive bandwidth which you could use to actually reason about problems.
Real engineering involves balancing tradeoffs. To balance them, you need to understand what they are. Sloganistic "rules of thumb" do not help with that at all.
For example: YAGNI. "You ain't gonna need it." It could be used to advocate for deferring a decision for which you don't have enough info, or it could be used to advocate for ignoring a critical design flaw that's guaranteed to bite you in the ass later on.
Good engineers don't spew out slogans. They explain their reasoning.
Re: "For example: YAGNI. "Your ain't gonna need it." It could be used to advocate for deferring a decision for which you don't have enough info..."
Another rule of thumb is "know your domain".
Re: "Good engineers don't spew out slogans. They explain their reasoning."
That is true, but it's also good to have summary reminders. People won't remember most 2 hour lectures unless refreshed over time, and rules of thumb are one way to do this without re-attending the same 2 hour lecture over and over. In the ideal world they'd re-attend the full lectures (or close to) to get a refresh, but that's not the way most humans do things.
"There are two ways to design a system: make it simple enough to be obviously right, or make it complex enough not to be obviously wrong" -- Charles Hoare
Simplicity cannot be a goal in itself. Is the dashboard of a F35 worse than the dashboard of a car -- because it's less simple? It's the same thing with programs and programming languages. Computers are universal devices, they can be F35s or just basic cars. The important thing is 'fit for purpose', which is more elusive and not so amenable to simplistic ideological statements.
I can't remember the author, but there was a discussion on usenet about comm. protocols years ago and someone said "it's not finished when there's nothing left to add, it's finished when there's nothing left to take away". To me, that's what simple means, and I think it also equally applies to "fit for purpose". A perfect fit for purpose is an implementation with nothing left to take away. I feel like when someone sees a perfect fit for purpose they would also think "this is really simple", whether it be an F-35 dashboard, nuclear reactor control room, or the controls on a lawn mower.
edit: after thinking about this for a few min. there are larger than normal holes in my point, heh. However, I really like OP's point.
I took the point to be that there isn't always a clear "simpler" way to implement something, and that when someone is hesitant to accept what to you is the obvious simple solution, listen to their concerns.
Simplicity is objectively measurable. Here is a practical suggestion which, though directly applied to mechanical engineering, is readily adaptable to application design.
One way to think about simplicity is to accomplish the greatest number of business objectives with the fewest number of abstractions. As others have said, that isn't easy. It takes increased effort to achieve simple.
The metric chosen here is still arbitrary; why would the number of interfaces be more important than the number of interconnections between interfaces? It is still subjective, and hence simplicity is still subjective too.
The problem is that no matter which metric you choose, I can pick another one that leads me to opposite outcomes in a manner that you cannot argue with. The importance of your objective metric will still be subjective, based on how important I think each metric is, making the entire thing subjective.
There is a lot of noise in the article (like mentioning Rob Pike's take on syntax highlighting), but mostly it seems to be a "I like Rust and Haskell, and don't like Go" type of articles. Could've been a single tweet.
The attempt to explain the dangers of simplicity is lost in vague sentences like "Rust has a different concept of simplicity -> Rust authors wanted to protect from memory errors" or TBU (True But Useless) phrases like "every choice has a cost"/"everything is a tradeoff".
> mostly it seems to be a "I like Rust and Haskell, and don't like Go" type of articles
The mentions of Rust, Haskell, and Go are limited to a single section of the article, so I'm not sure what you're talking about. Most of the article seems to stay abstract with concrete cases brought as examples.
I might be biased, as I had to read every Go rant back then when writing a "How to complain about Go" article, and this is clearly the same narrative.
The author did a good job of hiding it, but if you look closer you'll see indirect mentions of Go throughout the whole article:
- "“What do you mean, you think Go’s error handling is bad? It’s simple!”"
- How can we articulate a position more powerful than “syntax highlighting is juvenile?” <- refers to Rob Pike's opinion about syntax highlighting
- "Go’s error handling has become a meme,"
- "The creators of Go have said, sometimes in pejorative terms,1 that “average programmers” have difficulty understanding complex languages,"
- "It’s inflammatory, but the article “Why Go’s design is a disservice to intelligent programmers” is worth reading just for the quotes from Rob Pike."
The link about Go with quotes by Rob Pike makes me think that maybe Abc.xyz made Go like they did because they wanted to be able to have some sort of AI generate code eventually.