Do the simplest thing that can possibly work (2004) (twasink.net)
226 points by TacoSteemers on Feb 28, 2023 | 90 comments



This write-up is too light to provide any real insight. In particular, how do you assess simplicity?

From an example I'm currently working through on a hobby project... do I use an RS-485 transceiver with a custom line code, or do I use a 10base-T1 PHY? Ethernet, especially single-pair Ethernet, is undoubtedly more /complex/, with echo cancellation, a complicated line code, etc.; but if I use the PHY, then /I own/ that complexity.

(For pure software folks, the equivalent question is internal implementation vs external dependencies. Do you implement a 1D barcode yourself, or do you import a third-party dependency for QR code reading?)

The problem is that answering this depends not on some objective notion of simplicity, but on a realistic assessment of /where time will go/ during the development process. If development time dominates, then exporting complexity is a win. But if testing time dominates, in-house simplicity wins, since exported complexity still needs to be fully understood during testing.

And which of these cases dominates is very much a project-by-project and team-by-team decision.


Actually Rich Hickey's talk (https://www.youtube.com/watch?v=LKtk3HCgTa8) is amazing because he addresses your case by going back to the original meaning of the words:

"Complex comes from the Latin complecti, which means 'to entwine around, to embrace'."

Simplicity requires layering, so in your examples the main requirement for simplicity is how intertwined your hobby project is with the transceiver code or the Ethernet code.

As long as the abstraction layer works well for you without getting too much into the details of the implementation, it's a simple solution.

Of course there's no clear answer to whether you should do things yourself or use a third party, but if the third party works perfectly for your use case without significant tradeoffs in your system, of course it's better to use it.


> As long as the abstraction layer works well for you without getting too much into the details of the implementation, it's a simple solution.

But this is where the engineering intuition has to come in. "As long as you will not end up spending more time debugging the system than implementing it" is an equivalent statement -- and that requires prediction of the future. If I'm going to spend hours staring at signals on a 'scope to debug the system, I'd way rather they be RS-485 than 10base-T1, for reasons of simplicity -- but I don't know, today, if I will or not.

Layering works /great/ during implementation. Layering is a strong impediment to understanding during testing and debugging. Debugging a system efficiently requires being able to bridge between layers of the system and see where your assumptions break down. And once you're going between those layers, you're exposed to the complexity within them.

So: simplicity in implementation, or simplicity in debugging?


Then comes the engineering maxim that you can only componentize things that have standard features and quality.

Software engineering draws the short straw, because there's a strong force stopping standardization and pushing components toward a single implementation. It then becomes a judgement of trust, not of requirements satisfaction.


I like to use SMBC's take[1] on the "Watchmaker Analogy" - complexity comes from, in order:

(1) number of things interacting

(2) complexity of interaction

(3) complexity of thing

So simplicity is then an inversion of that. You can "maximize simplicity" by:

(1) minimizing the number of things

(2) minimizing the complexity of interaction

(3) minimizing the complexity of each thing

This ends up reinventing many of the things you find elsewhere (think SOLID, the same-level-of-abstraction principle, etc.), although I also generally find it's the first one - the most important one - that gets fucked up the most (one example: "type explosions", when you end up with a bazillion slightly different types).
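
(To make "type explosion" concrete, here's a hypothetical sketch in Python - invented for illustration, not from the comic - of many near-identical types versus one parameterized type:)

    # A "type explosion": three near-identical types that every consumer
    # now has to know about...
    class UserEmail:
        def __init__(self, address: str): self.address = address

    class AdminEmail:
        def __init__(self, address: str): self.address = address

    class BillingEmail:
        def __init__(self, address: str): self.address = address

    # ...versus one parameterized type, which keeps the number of
    # interacting things down (principle 1 above):
    from dataclasses import dataclass

    @dataclass
    class Email:
        address: str
        role: str  # e.g. "user", "admin", "billing"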

Also, on a broader level, there really do seem to be two kinds of systems: Engineered systems, which (notionally) attempt to minimize those things, and "evolved" systems, which somewhat maximize them - both economies and ecologies have (1) many different interacting things, (2) with complex interactions, and (3) which are themselves complex.

You're right that it's an intuitive sense, but I do think the right advice and perspectives can give you a leg up on learning and applying that sense.

[1] https://www.smbc-comics.com/?id=2344


What's interesting in Rich Hickey's video is that he talks about prioritizing minimizing what each thing does over minimizing the number of things (which you can ignore anyway).

Having more things doesn't make systems more complex in itself if they can be combined differently as requirements change.


I agree and disagree! That talk is a favorite - and it's why I say "number of interacting things".

If we're weaving together three strands (basic braid), that's fine - we've got three interactions. If we take that braid and two more and weave them together, IMHO we're only adding three more interactions (now we're at 6), but if we take all nine original strands and weave them all together, we're up to, what... at least 72 "interactions" (each of the 9 has an interaction with 8 others), and that's before asking if any of the "interactions" themselves become "interacting things" (and then we get a combinatorial explosion).

If instead we take those nine, and, say, braid three together for a bit, then swap one strand out for another, braid for a bit, repeat until we've gone through all nine - each strand is interacting with, hmm... 4 others? (two, then a new one, then a second new one) So then that's "36".

It's not really a precise measurement, but I do find it useful question both when investigating a system, and when designing one: "how many things are interacting, and how can I reduce that?" (systemic complexity), followed by "how can I simplify the interactions themselves?" (abstraction leakage), followed by "how can I simplify the things?" (cleaning up well-encapsulated code).

A practical example: If I want to create a test factory for an object, how many other related objects must I create for that first one to exist in a valid state?

A practical application: I can get away with complexity in well-encapsulated code, because it's easy to come back to and fix; I won't have to modify anything "outside". But I can't get away with complexity between things, because then in order to come back and fix it, I have to deal with chunks of the entire system.
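
(A hypothetical sketch of that test-factory measure in Python - the Invoice/Customer/Address models are invented for illustration:)

    from dataclasses import dataclass

    @dataclass
    class Address:
        street: str

    @dataclass
    class Customer:
        name: str
        address: Address    # a Customer can't exist without an Address

    @dataclass
    class Invoice:
        customer: Customer  # an Invoice can't exist without a Customer
        total: float

    def make_invoice(total: float = 0.0) -> Invoice:
        # Two transitive objects must be built for one valid Invoice --
        # a rough count of how many "things" are interacting.
        return Invoice(Customer("Test", Address("1 Test St")), total)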


> If we take that braid and two more and weave them together, IMHO we're only adding three more interactions (now we're at 6), but if we take all nine original strands and weave them all together, we're up to, what... at least 72 "interactions" (each of the 9 has an interaction with 8 others), and that's before asking if any of the "interactions" themselves become "interacting things" (and then we get a combinatorial explosion).

You're totally right about that.

But the huge mistake I made just recently was to create a very simple interface that hides lots of different features behind a few elegant flags. Although it's a super tiny interface that's easy to understand, the interactions became very complex.

Instead of using my library, people started to create another one that does just one thing, and they can't take advantage of my hard work even if they wanted to.

Had I created 10 totally independent components that use the same basic data structures (with a bigger total API surface), people could have used just the 2-3 that they need in their own system, and would have been able to understand (and even report / fix / debug) the interactions.

And actually everybody wants something a bit different, and nobody really wants all 9 features.

This experience is what resonates with me right now when listening to the video.


Yup yup! It's like Asimov's Three Laws; you want to end up with a balance between the principles, and the "more important" ones just get higher weighting. It's totally possible (and common, I'd say) for the "weight" of the third one (complexity of the things) to reach the point where it's better to shift the complexity onto the "number of things".

(actually, on that note, a piece of my life philosophy is to have "opposing principles", since it's only through forces in opposition that balance is possible).


easy =/= simple

While I get Rich's epistemological framing -- composing with coherent, independent units -- "embracing" certainly does not ipso facto imply 'complex'. As a matter of fact, that line of thinking smells like a tautology.

Let's assume that if X is complex in one embodiment (say, as software), its analog will also be complex in the mapped domain. The most common occurrence of this is when we describe a system. As it happens, our brains are much, much better at assessing language constructs than material constructs. Simply describing system X will go a long way toward gauging its complexity. A comparative description will make it crystal clear.

p.s.

Fully embracing simplicity:

https://architizer.com/blog/inspiration/industry/japanese-ar...

Description: Traditional Japanese joinery is made entirely without the use of metal fasteners or adhesives, relying on compression forces and friction of interleaving pieces.


The question is about simplicity of the core implementation.

Easy debugging is a different goal, and you can have an infinity of such additional goals, but a solution obviously can't be equally simple at everything at the same time (because of conflicting responsibilities).


From my experience, most complexity doesn't come from adding stuff (where intuition is the only thing you have, and this rule doesn't help), but from removing/refactoring stuff, or from failing to do so.

A recent well-known example is Elon Musk removing a lot of services at Twitter that had been built up over the years. Every addition probably improved the system's functionality, but the more complex a codebase gets, the harder it is to change its separate pieces (by definition of complex).

I believe buying Twitter was a big business mistake on his part (especially as Tesla is getting competitors, like BYD, which is growing by 100% a year), but removing services in itself probably makes the code more manageable for a smaller team.


Seeing as how the service is way more buggy and unreliable since then...

If I compared it to a monkey with a wrench in a server room, I'd be doing the monkey a disservice.


> As long as the abstraction layer works well for you without getting too much into the details of the implementation, it's a simple solution.

You can have a very large number of layers, and understanding the inner workings and interconnections of all of them becomes very hard.

I highly doubt you can equate a "set of superb interfaces" with simplicity.


You're totally right. Just watch the video; I can't compete with Rich Hickey. I just rewatched it and would probably modify what I wrote, but the main point is: it's better not to write a summarizing article or comment when the video is so great, so I won't try to write something smarter (I'll rather try to apply the things he said in the video).


I always point to this talk when engineering debates around simplicity & complexity come up. To me the key point is that "simplicity" and "easy" aren't synonyms.

Many people, when they say "do the simplest thing", really mean "do the easiest thing". That's fine if that's what you want, but if you find yourself talking past someone else who means "do the simplest thing", that's why.


Simple is something that either can't be reduced further without changing the output (ideal, simplest), or it is very hard to do so (real world).


But this ignores, as in my example, who pays for the complexity.

I want a bagel. Is it simplest for me to start tilling the land and looking for wild wheat relatives to breed, or to drive my incredibly complex car built in centuries of industrialization to the corner store and buy (using money, one of the most complex concepts we've developed!) a bagel, bring it home in a plastic (!!!) bag, and stick it in the toaster?

If I should, during my lifetime, succeed in completing a bagel with the former, I have reasonable confidence it can't be reduced further without changing the output.

But I disagree that it's the simplest way /for me/ to get breakfast.


I don't think the person you're replying to would consider "till the land" as fitting what they're describing as "simple".


It's a requirement for making a bagel… the question is whether I do it, or someone else. Part of the irreducible complexity of bagelness is the production process of wheat.


I'm not sure how you arrived at "every single step involved must be considered" from "can't be reduced further".

If a step available to you is "buy a bagel", that's all you have to consider.


At the risk of beating an analogy to death... this is exactly the mental model that gets us leftpad. Outsourcing of complexity is /not/ elimination of complexity. I may go my whole life without having to debug the lower-level steps in the process that put a bagel on my table, and if that's the case then treating dependencies as zero-cost makes sense -- but also, I may not be so lucky. When the supply chain collapses, I have to go without my bagel, or dig deeper. Either of which may be fine, depending on my requirements -- but the exported complexity is now present and impacting my experience.


There's nothing wrong with leftpad as a library concept. The problem was that it got hacked, not that it does something useless that everyone should write on their own.

leftpad was incorporated into ECMAScript in 2017 for a reason.

> Outsourcing of complexity is /not/ elimination of complexity.

This is technically true but functionally false. You're literally arguing against the concept of abstraction layers, while writing text into a form built on top of hundreds of them.

There are two thoughts I have regarding this:

1) Leaky Abstractions seems like what you're trying to point out, and accurately so [0] but,

2) you can still rely on even leaky abstractions. You must, in fact, to function in this world. In some ways the quote, "In preparing for battle I have always found that plans are useless but planning is indispensable" [1] applies to abstraction as well.

[0] https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...

[1] https://quoteinvestigator.com/2017/11/18/planning/


> You're literally arguing against the concept of abstraction layers, while writing text into a form built on top of hundreds of them.

Sorry, I see how what I said could have been interpreted this way. That's /definitely/ not my intent, and I'd like to clarify.

I am arguing that abstraction layers have cost, and (independently) that third-party dependencies have cost. (Abstraction layers are a great tool for isolating third-party dependencies, so they tend to go together, but they're independent things here.)

Also, lack of abstraction layers has cost, and building everything in house has cost.

These costs need to be traded. In some cases, leaning on external dependencies is the right choice. In other cases, it's not.

If complexity (opposite of simplicity) under an abstraction layer is neglected (estimated at zero), using simplicity as a guideline for making engineering decisions will lead you to the wrong decision some of the time. Similarly, if complexity under an abstraction layer is treated as just as non-simple as complexity above the abstraction layer, using simplicity as a guideline will lead you to the wrong decision some of the time.

Therefore, "the simplest thing that can possibly work" is too naïve a metric to be used for making decisions (as opposed to just for justifying decisions you already want to make). It takes a more nuanced discussion of types of simplicity, and whether complexity is being eliminated or just hidden, and if it's hidden how likely it is to stay hidden, to make this rule useful.

For the record: I buy my bagels from a store. When the roads are closed due to snow, I don't have a bagel. That's the right decision for me, unsurprisingly, for this problem.

Finally, I'll argue that there /was/ a problem with leftpad as a library concept. There is an inherent minimum complexity to a dynamically linked external dependency. The reduction in complexity of the implementation must be at least as large as this cost. One can argue about where the line lies (and it depends on the maturity of your ecosystem, etc.), but I'd take the stance that leftpad is too simple to implement directly for pushing it to an external dependency to ever be the right choice.
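
(For scale, the entire capability in question, sketched in Python; JavaScript later standardized the same thing as String.prototype.padStart in ES2017:)

    def left_pad(s: str, width: int, fill: str = " ") -> str:
        # Pad s on the left with fill (assumed to be a single character)
        # until the result is at least width long.
        if len(s) >= width:
            return s
        return fill * (width - len(s)) + s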


If your point is, "abstractions don't have zero cost" point taken. But if your point is, "abstractions have meaningful cost", I'd rephrase that as, "The better an abstraction is, the closer to zero its cost becomes."

Can we agree on that?


Absolutely.

And in particular, I'd like to rephrase my original statement of:

> Outsourcing of complexity is /not/ elimination of complexity.

to

> Outsourcing of complexity /reduces/ complexity, but not all the way to zero.


Woo, agreement! And a good point, too!


Irrelevant point of order: leftpad didn't get hacked; the owner took it down to prove a point and broke every package that depended on it, since npm at the time allowed authors to completely remove their packages.


It is a good way to consider the extremes of the problem-space, which often is a good way to come at a problem.

But a more practical, analogous situation might be: should I buy a nice, warm bagel at the local bagel shop, or should I buy one at a store and toast it myself? In that trade-off I can take for granted that I'm getting a bagel, but the delivery mechanism, the quality, the integration options, and the cost are things I need to consider.

The decision will depend upon your requirements. If you are organizing an event, maybe you get some bulk catering from the bagel shop. If you want to use your aunt's bespoke berry jam, maybe you use the store-bought bagel so you can easily use your home spreads.

Identifying the optimal simplicity can be a hard problem, but that shouldn't preclude narrowing down choices with some rough heuristics so that you don't need to investigate the combinatorial explosion of all possibilities, or rethink the system dependencies back to: "first we have a big bang."


You are not producing a bagel. You are obtaining it. There is a big difference. Once nobody produces a bagel, your method is meaningless.


> The problem is that answering this depends not on some objective notion of simplicity, but on a realistic assessment of /where time will go/ during the development process.

I don't think anyone mentions time as a proxy for simplicity. At least, the article certainly doesn't. You're right that the author doesn't objectively define simplicity, but I don't think anyone can. What is simple tends to be different for different people/teams, based on the skills, tools, etc., available.

I know what's simple to me. I know it may not be simple to you. I know what's simple for a team in my org and I know it may not be simple for another team in another org. But, I do know what skills someone in my position and in my org is expected to have, and I know what tools are available to us, so I can make some real assertions here about what is "simple". Worry beyond that, and you get bogged down in unknown unknowns.


I agree, the article is entirely focused on semantics. In the real world, outside of research and education, no one ever attempts to make something more complicated than it needs to be. A software project consists of thousands of different problems with solutions that must be mutually compatible through a web of compromises. You have known hard requirements, known soft requirements, known future requirements, unknown future requirements, and you're searching for the simplest possible solutions for each of them that result in something like a "minimum net complexity." The problem of "over-engineering" comes when a solution that optimized the simplicity for one concern becomes incompatible with another concern. It's inevitable in any system where requirements are subject to change over time.


Does simplicity equal time? In my mind it doesn't. As for your example: I'm a software person, and bringing in external dependencies feels like adding layers of complexity. Simplicity is minimalist; if I need an external dependency, I generally try to extract the actual part I need, understand it, and have it in my own code to streamline what I need. From my view, external dependencies are the epitome of complexity.


Comparative simplicity requires you to accurately imagine the entire lifecycle of each alternative.

This is a lot of work. And your prediction can end up wrong anyway (by your mistake or by the world changing).

How are we then to make choices? Perhaps just, if one solution seems clearly simpler (to you), then choose that. If one looks unnecessarily complex, don't choose that.

Simpl-est derails us perfectionist programmers. So maybe "Do the simpler thing that can possibly work"

"You don't need to know a man's weight to know that he's fat" - Benjamin Graham.

EDIT "Could possibly work" also implies a lack of foreknowledge as to its actual simplicity, or whether it will function correctly... or at all.


I like to distinguish complexity from complication: the former requires cleverness to understand more of; the latter time and effort.

In the simple case of a solo project, as much complexity as you understand is fine; in a team you obviously need some notion of a threshold, not that you could quantitatively define it. Complexity isn't necessarily a problem - complication, on the other hand, is always bad: it just makes things hard to reason about, though it may be necessary if the only alternative is adding unacceptable complexity.

The problem with discussing 'simplicity' is that it's an antonym for both complexity and complicatedness.


For me, the simplest thing would be using a serial interface instead!


RS-485 is a serial interface, at the physical layer.

8N1 as a line code introduces all sorts of other issues, assuming you're passing messages instead of byte streams over it. In particular, how do you do packetization? How do you synchronize? So many "serial interfaces" have implicit timers (corresponding to the interpacket gap in Ethernet) used for syncing, or play horrible games with embedded CRCs… there's a huge amount of hidden complexity here, especially if you do it implicitly without understanding the dependencies.

By the time you've solved reliable packetization over 8N1, you're going to have something that looks a lot more like an Ethernet-level-complexity line code.


Why reinvent the wheel? Just look at some older protocol, like SLIP.


SLIP uses byte stuffing to reserve its end-of-frame sequence, which leads to data-dependent packet transmission times, which is not acceptable in my application.


Is this a big deal? Say, if character 0 is reserved, you can encode everything in base 255 and transmit the encoded bytes shifted by 1. (Or, for a simpler encoding, transfer an appropriately encoded bitmask of which characters are 0, then a copy of that data where 0 is replaced by anything else.)

Edit: this HN comment by KMag suggests a much simpler encoding https://news.ycombinator.com/item?id=12550584 (you'd need to process your packets in 254 byte chunks)

> replace first null with 255. Every later null, replace with the index of the previous null. Make the final byte the index of the last null (or 255 if no nulls were replaced). In this way, you've replaced the nulls with a linked list of the locations where nulls used to be. To invert the transformation, just start at the final byte and walk the linked list backward until you hit a 255.

Looks like this is https://en.wikipedia.org/wiki/Consistent_Overhead_Byte_Stuff...
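
(For the curious, a minimal COBS encoder/decoder sketch in Python, following the description on that Wikipedia page - a sketch, not production code; it may emit one redundant trailing code byte after a full 254-byte block, which decodes identically:)

    def cobs_encode(data: bytes) -> bytes:
        # Each zero byte becomes a "distance to the next zero" code byte;
        # a code of 0xFF means 254 non-zero bytes with no zero after them.
        out = bytearray([0])        # placeholder for the first code byte
        code_idx, code = 0, 1
        for byte in data:
            if byte == 0:
                out[code_idx] = code
                code_idx = len(out)
                out.append(0)       # placeholder for the next code byte
                code = 1
            else:
                out.append(byte)
                code += 1
                if code == 0xFF:    # 254 non-zero bytes: start a new block
                    out[code_idx] = code
                    code_idx = len(out)
                    out.append(0)
                    code = 1
        out[code_idx] = code
        return bytes(out)

    def cobs_decode(data: bytes) -> bytes:
        out = bytearray()
        i = 0
        while i < len(data):
            code = data[i]
            out += data[i + 1:i + code]
            i += code
            if code < 0xFF and i < len(data):
                out.append(0)       # implicit zero between blocks
        return bytes(out)

    assert cobs_decode(cobs_encode(b"\x11\x22\x00\x33")) == b"\x11\x22\x00\x33"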


Yeah, COBS works. In my case, I can even go simpler, since messages are fixed size. But:

1) This is now part of the line code. And "uart + slip but modified" starts losing some of the "simplest thing" charm of "just do what everyone else does."

2) Looking at this without reference to previous work, it sure seems unlikely to be the simplest thing. Magic numbers everywhere -- 8N1 uses 8 bit bytes to support ~5% clock skew, which isn't reflective of the application; COBS forces sub-packets at 255-ish byte intervals, which doesn't match any inherent concept, etc. It can work, but does it make sense in isolation?


Sounds like you have an answer to your question, so I don't see the problem.

Yeah, it'll be a project-by-project and team-by-team decision, and that's as it should be.


That said, can't barcodes be replaced with something easy to generate and read, like braille?


It’s too light to generate real insight because he took his own advice lol


Obligatory Simple Made Easy link:

https://www.youtube.com/watch?v=SxdOUGdseq4

Simple is a matter of intuition, and that can't be transmitted to others easily, or with a single class or book.

At one particular job we got punished by the business for calling things 'easy' when what we meant was that we understood the problem and all of the steps were (mostly) known. Our boss coached the hell out of us to say 'straightforward' when we meant 'understood', instead of using 'easy' as an antonym for 'quagmire' or 'scary'.


Agreed. But I also think that "simple to implement," "simple to debug," and "simple to test" are different metrics -- and that one has to choose which one to optimize for. This is independent from assessment of "simple" varying with intuition -- "simple" alone isn't a coherent concept.


That's part of the section in Programming Perl that sticks in my memory.

From my copy...

> Efficiency

> ...

> Note that optimizing for time may sometimes cost you in space or programmer efficiency (indicated by conflicting hints below). Them's the breaks. If programming was easy, they wouldn't need something as complicated as a human being to do it, now would they?

> ...

> Programmer Efficiency

> The half-perfect program that you can run today is better than the fully perfect and pure program that you can run next month. Deal with some temporary ugliness. Some of these are the antithesis of our advice so far.

    • Use defaults.
    • Use funky shortcut command-line switches like -a, -n, -p, -s, and -i.
    • Use for to mean foreach.
    • Run system commands with backticks.
    ...
    • Use whatever you think of first.
    • Get someone else to do the work for you by programming half an implementation and putting it on Github.

> Maintainer Efficiency

> Code that you (or your friends) are going to use and work on for a long time into the future deserves more attention. Substitute some short-term gains for much better long-term benefits.

    • Don’t use defaults.
    • Use foreach to mean foreach.
    ...


I've been dealing with a batch processing task that's written in NodeJS (partly because it was the tool at hand, partly because it does offline a process that can be done online so it's reusing code), and global interpreter locks are definitely introducing some new nuances to my already fairly broad knowledge of performance and concurrency. Broad not in the sense that I am a machine whisperer, but that I include human factors into this and that explodes the surface area of the problem, but also explains quite a lot of failure modes.

In threaded code it's not uncommon to analyze a piece of data and fire off background tasks the moment you encounter them. But if your workload is a DAG instead of a tree, you don't know if the task you fired is needed once, twice, or for every single node. So now you introduce a cache (and if you're a special idiot, you call it Dynamic Programming which it is fucking not) and deal with all of the complexities of that fun problem.

But it turns out in a GIL environment, you're making a lot less forward progress on the overall problem than you think you are because now you're context switching back and forth between two, three, five tasks with separate code and data hotspots, on the same CPU rather than running each on separate cores. It's like the worst implementation of coroutines.

If instead you scan the data and accumulate all the work to be done, and then run those tasks, and then scan the new data and accumulate the next bit of work to be done, you don't lose that much CPU or wall clock time in single threaded async code. What you get in the bargain though is a decomposition of the overall problem that makes it easy to spot improvements such as deduping tasks, dealing with backpressure, adding cache that's more orthogonal, and perhaps most importantly of all, debugging this giant pile of code.
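
(The shape of that "scan, accumulate, then run" idea, sketched in Python asyncio rather than NodeJS - process_node and the DAG structure are invented for illustration:)

    import asyncio

    async def run_dag(roots, process_node):
        seen = set()
        level = list(roots)
        while level:
            # Phase 1: scan and accumulate work, deduplicating shared DAG
            # nodes instead of firing tasks the moment we see them.
            pending = [n for n in level if n not in seen]
            seen.update(pending)
            # Phase 2: run the accumulated batch, then collect the next
            # level of nodes to process from the results.
            results = await asyncio.gather(*(process_node(n) for n in pending))
            level = [child for children in results for child in children]

One nice side effect: deduping and backpressure both live in one obvious place, instead of being smeared across a cache.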

So I've been going around making code faster by making it slower, removing most of the 'clever' and sprinkling a little crypto-cleverness (when the clever thing elicits an 'of course' response) / wisdom on top.


> Programming Perl

That book is one of the most underrated and overlooked works on the philosophy of programming I've ever read. It's ostensibly about best practices in programming Perl (which some people consider a complex language), but in reality this is a very deep book about the best practices for programming in any language.

Note the above excerpt is pretty much universally applicable no matter what the language. Much of the book is written at that level.

https://www.oreilly.com/library/view/programming-perl-4th/97...


I could say a similar thing about Practical Parallel Rendering. Officially it's a book about raytracing CGI in a cluster, but the first half of the book explains queuing theory and concurrency concerns in tremendous detail. It's a thin book to begin with, and you've more than gotten your money's worth if you read the first half and give up when they start talking about trigonometry.


The rules of Chess aren't that hard. The rules of Go are even easier. You can literally spend your whole life unpacking the implications of the rules of either of those games.

Ultimately both are 'too simple', resulting in a combinatorial explosion of states, and at least a quadratic expansion of consequences.

We often write software to deal with consequences of something else. It's possible and not that uncommon for the new consequences to be every bit or more onerous than the originals. I call this role a 'sin eater' because you're just transferring suffering from one individual to another and it sounds cooler and more HR appropriate than 'whipping boy'.


And, to add a bit more nuance, simplicity can also depend on the stage a project is at... It may be really simple to implement core functionality to demonstrate an idea, but developing on that code can add a lot of complexity later. For example, adding security late in a project is almost always much more difficult than adding a small amount up front. Even the simple to implement metric can be a difficult judgement call.


I haven't been able to distill it to first principles yet, but I do have a practice of writing code in such a way that it invites the next step.

I suspect that at first I did this in an attempt to hack my own sense of motivation, like putting the books you need to return next to the front door. But it turned out to be quite handy for seducing junior developers (and sometimes senior developers) into finishing an idea that you started.

They are so proud that they've thought of something you didn't think of, rather than something you were looking for a maintainer/free cycles for.


I am taking my own advice and re-watching this presentation. I'm being surprised enough by parts I don't remember that I've decided that I need to watch this video at least once a year.

Certainly there are some things I've just forgotten, and others I just wasn't ready to hear.


> the equivalent question is internal implementation vs external dependencies.

Liabilities. Take the Windows EULA: it's a contract that states MS is not liable for anything, and standard software contracts state the same, so it boils down to being able to prove negligence, which can be sued for.

For example, do you trust the suppliers? If they are in a different country, what's the chance of legal recourse if negligence can be proved, knowing about political interference if the entity is valuable enough?

So yes, I agree: how do you assess simplicity? As Billy Gates would say... it's complicated!


Rich Hickey's Simple Made Easy presentation is a fantastic introduction to this philosophy: https://www.infoq.com/presentations/Simple-Made-Easy/

Simple isn't the same as easy, and it isn't always obvious where the complexity is. One should beware of "simple" solutions that either hide the complexity, or shove it someplace else. The skill is to identify and minimize unnecessary complexity, which is another way of phrasing "Do The Simplest Thing That Can Possibly Work".


Thanks for this, it's great. I've never explained it as clearly as these two do, but this has always been my philosophy and what I try to aim for when developing software. I find that a lot of the time people opt for easy, thinking that it's simple, but down the road they find out it is actually complex. I wonder if we will ever see a real shift toward focusing on simplicity and the gains that come from it?


I wish I could convince product teams that the MVP is often just a single feature. Like a search bar + results page.

Something we can ship very fast, then we can add the banners, tracking for marketing, account creation, user ratings, community forums, results commenting and sharing, image carousels and a mobile app with push notifications that the results changed. You know, the regular MVP stuff.

So many people think agile means waterfall using sprints.


My voice is hoarse from saying this so many times. It's a constant battle trying to explain that yes, it's not perfect, but we can't improve it based on feedback if it's not done.


It often seems to be the illusion (confusion?) that adding enough features will somehow make the product useful, because 1) I know about facebook/amazon/google and 2) it's successful and 3) it has all the stuff.


MVP: Minimal Viable Product.

The greatest example of this is Unix.

Multics was a huge project that failed (initially). Bell Labs washed their hands of it and didn't want anything to do with operating systems again.

Ken Thompson wrote an initial scrappy version of Unix in 3 weeks. Rewriting it in C was a tremendous move because it meant that Unix could be ported easily to many other systems.

I heard someone say that the genius of Dennis Ritchie was that he knew how to get 90% of the solution using only 10% of the work.

I'm working my way through Unix Haters Handbook [1], and it's a good read, even for someone like myself who really likes Unix.

Unix and C are the ultimate computer viruses -- Lawrence Krubner

[1] https://web.mit.edu/~simsong/www/ugh.pdf


I thought C was primarily developed for the purpose of being the language used to write Unix, that their development happened practically one after the other, and that Ritchie and Thompson were colleagues at Bell. Was C designed with portability in mind?


I could be wrong, but I think Unix was originally written in assembler, which isn't portable.

Unix first appeared on a PDP-7 (not PDP-11). PDP-7 was pretty old even by the standards of the time.

"Originally, UNIX was written in PDP-7 assembly, and then in PDP-11 assembly, but then when UNIX V4 began to be re-written in C in 1973 and was run mostly on the PDP-11.

So far as I can tell, there is no Ancient C compiler that targets the PDP-7, nor any provision for running UNIX V4 or later on the PDP-7" [0] The link also contains some other interesting commentary.

I seem to recall that Thompson wanted to write code in Fortran.

I'm probably getting a few details wrong. The systems were extraordinarily constrained, something like 4K of RAM. "++" exists because it was more concise than "+= 1" (although K&R C uses "=+ 1", I think). They really wanted to make every byte count.

[0] https://retrocomputing.stackexchange.com/questions/6194/why-...


Thanks for painting a more elaborate picture of how it all went. Of course C had to be compiled on some system, and there were probably a good variety of systems around back then.


As a relatively newbie software developer, I'm going to ignore this advice and just try to cobble together something that works and I'm deliberately not going to worry about whether or not there is a simpler, cleaner solution to the problem. The rationale is, if I keep searching for the simpler cleaner solution I'll keep falling down rabbit holes and never get to the point of having a solution to the problem at hand. After the fact, if someone comes along and says, 'hey, here's a simpler solution' that's great, but if I don't at least have a working project, nobody will even bother to deliver that helpful input.


> As a relatively newbie software developer, I'm going to ignore this advice

As a relatively senior software developer, I'd say don't worry about it too much. The article accepts that reducing complexity is hard, and it's ok if you can't make it any simpler. Try not to add intentional complexity when you can, because statistically speaking, YAGNI.

This industry is full of clowns trying to upsell things that nobody needs, just don't fall for it.


Then launch that MVP into production, and never be given an ounce of time by your PM to fix any of the shortcomings that you thought were not part of "Do the Simplest Thing". Iterate on that principle for new features, too: what's the simplest way to get this new feature out? And this one?

And then find yourself surrounded by tech debt and a system that was cobbled together, not designed.


This is a very common case, sadly. But this is due to people failing to use MVPs correctly. Instead of being a tool for the sole purpose of rapid learning and iteration, it is used to falsely accelerate delivery. When done well, you build a series of prototypes/MVPs with the sole purpose of learning faster what customers really need. You should then put all your effort into building that really, really, well, and kill off anything that didn't work out. Ideally, you should always have new, minimalist code for new features you are exploring, and lots of old, extremely well designed, implemented and well-tested code for all the areas you already know are critical for your users - and nothing in between (no "nice to have" features, no failed experiments that linger on and contribute to your tech debt...)

This takes a ton of discipline, but in my experience the only alternatives are to either build up a ton of tech debt, or build things extremely well from day 1, only to end up dying due to low velocity (even if you get some critical decisions spot on in the beginning, no PM or engineering team that I've ever seen has been able to make only good decisions over several years...).


I think “simplest” needs to be applied to the whole system, not just to the change at hand. If you keep the system overall as simple as what could possibly work overall, you’re probably already reducing tech debt. Tech debt usually implies that things are getting unnecessarily complicated (accidental vs. essential complexity).


Old but relevant article. Figuring out the intersection of "simplest" and "useful" features is the trick.


I agree, but I'd add in the caveat of "... in the context of a fleshed out bigger picture for what you're trying to solve".

A lot of software lacks a clear plan. A big patchwork of local maxima commits that won't get you where you need to go.

So I say go ahead and draw some pretty pictures. What's the overall vision here?


If this is applied to programming, then just be aware that "doing the simplest thing that can possibly work" integrated over time typically won't result in anything good. For any given task, the simplest thing that can possibly work will often have other effects that are hard to quantify on the spot (like increased tech debt).

If you're working on things that are intended to be short lived, then just do whatever is needed to get the job done and move on. If you're working with something where you know there's a good chance it'll be around for some time, then every once in a while, someone will have to take on the role of saying "no, we're not gonna do the simplest possible thing right now".


Maybe. But it's often a lot easier to get from having something simple and working to something more complex and also working than it is to spend the whole time with nothing working until the complex part is completed.



Don’t draw diagrams? That seems like forgoing a very valuable tool. Diagrams help you think, and thinking can save you time by avoiding building the wrong thing or building it the wrong way. The diagram shouldn’t take ages, and pencil/paper is fine.


The sentiment is nice but ceases to be useful when people have trouble distinguishing what could possibly work from what appears to work for a bit and then breaks down horribly.

And it's not really about the fallibility of people. Often in engineering you're designing in a space with a lot of unknowns that simply can't be resolved without building things out a bit to explore the space. In such cases, some level of future-proofing is warranted.

I'm kind of suspicious of adages like these that assume perfect information.


I always look at adages like these as something to keep in mind for the future. We can choose the simplest thing now and make it easy to swap it out for the more correct and more time-consuming thing later.

Sometimes the difficulty in distinguishing what the simplest thing could be comes from being in a group setting where people have equal say in the matter.

I think everyone has personal anecdotes to support the idea of doing the simplest thing suitable for that moment. But how to convince the group? I'm not sure, I don't always succeed.

A situation where I did do just the simplest thing is when I was asked to use project management software and a build server for a very early stage project with only myself as a developer. I declined. Instead I made a script to compile and package everything and emailed that to the others. We used an instant messenger for communication. It worked great for the early stage when the focus is on the MVP, though the project didn't go anywhere due to business reasons.

It will always still be possible to use the project management software and build server later. But it wasn't necessary at the very start.


Yeah, sometimes you have to do the complicated thing; the saying "you can't build a ladder to the moon" comes to mind.

I do think that many people make the wrong tradeoff in terms of complexity to features ratio though.


That is, I think, why a lot of people add "that can possibly work"


Yes, exactly. The simplest thing that "works" is string and duct tape.


This is effectively meaningless (and the article even recognises that) because it delegates all meaning to the definition of "works".

Even "passes all the tests" isn't a great definition. What are you testing?

For example think about build systems. "Works" could be "builds everything correctly" in which case the simplest thing is just a shell script with all the commands written out.

That's obviously terrible, so then "works" becomes "doesn't unnecessarily repeat work" and you end up with Make.

But then Make doesn't scale to large monorepos with CI so then "works" becomes "and doesn't allow undeclared dependencies" and you come up with Bazel.

So the same meaningless advice can justify wildly different solutions.

I think better advice is just "try to keep things simple where possible". It's vague because it requires experience and design skill.


I far prefer the original question "What is the simplest thing that could possibly work?"

It is far more active than the imperative version.

One might say the interrogative is the simplest thing that could possibly work...


I've learned that when I ignore things, they often go away. That is the kind of simplicity that has served me well for decades as a developer.


I feel like this advice works really well in some places and really poorly in others.

I think those using safe languages and broad frameworks have a much greater ability to execute on "keep it simple" than those who use something like C and build 100% of their code in-house.


Yes, the old days

https://c2.com/xp/DoTheSimplestThingThatCouldPossiblyWork.ht...

"XP" almost, but not quite, became a real cult.


I think we can't avoid all difficulty, and the difficulty then becomes determining exactly which bits we do need.

But it is a nice counter to some people's decisions to go for overly complex or risky (unproven?) technologies or designs.


What I frequently see is "the simplest thing that could plausibly work"


In practice, in most situations, the simplest thing would be to cheat.

So, grain of salt, and all that.


Sometimes.

Other times, do the simplest thing that will most simplify similar tasks in the future.


The acronym should be simpler. KISS

Keep It Simple Stupid



