No one wants simplicity (lukeplant.me.uk)
215 points by todsacerdoti on Aug 23, 2023 | 212 comments



"Simplicity is a great virtue but it requires hard work to achieve it and education to appreciate it. And to make matters worse: complexity sells better."

-- E.W. Dijkstra

If you want simplicity you have to do hard work. It doesn't come from simply removing libraries. Some of that may be incidental complexity, and removing it will get you better performance. But the essential complexity of the problem remains: making a solution that solves it in a simple way requires thinking. And if there's one thing I've learned in my career, it's that people hate thinking.

Businesses tend to prefer quick solutions and will pay whatever cost to have it now rather than later. Elegant, simple solutions require time and thought. That is expensive and slow. Since businesses pay our salaries we do what we can and we slap on a library or use a framework and move on.


The road to hell is paved with products that pursued simplicity but ultimately couldn't afford it.

* Just Doesn't Work

* Your simplicity becomes my complexity (fussy compatibility, batching creates latency, etc)

* "One-button" interfaces where you have to learn a morse-code dialect and timing to access the necessary features

* Bonus points: the software is laggy and the one button is mechanically unreliable, compounding the difficulty of getting the timing correct

* Useless error messages that make simple problems complex

* Similar: Necessary complexity (e.g. radio reception, authentication, updates) hidden behind a featureless spinning wheel for maximum time wasting when you can't tell what the holdup is


> "One-button" interfaces where you have to learn a morse-code dialect and timing to access the necessary features

Flashlight enthusiasts, I'm looking at you. Doubly so when there isn't a way to know the current state without already knowing the current state.


I feel attacked! :)

I mean, it's a hard problem - for many reasons you only have one button, with no way to display information. And you have to cram possibly many functions into that one button.

It's a really interesting exercise in UI, and there are benefits to different approaches. I actually like comparing the different ways manufacturers have chosen to solve this.


I really want more Bluetooth support in flashlights. Not only would you always know where they were, and their state of charge, but if you leave your bag in a corner somewhere at a location, it could ping you if it gets moved or opened but isn't within a few feet.

BLE mesh means you could link them in groups for lighting large areas that don't have power yet.

You'd never have a mode you didn't like: just turn off the dang flash mode or map it to a long press. The dim mode could be as dim as you wanted it, down to sub-milliamp for when you're working in a dark theater and need to read your notes.

You could set it to turn on when unplugged for an emergency light, glow all the time faintly to find it....

I wonder if there's a market for boutique handmade one-off lights... Making a few a year might be fun but I sure wouldn't want to start my own production-scale anything.

Although these would really be at their best if they were mass produced and cheap enough to have several.


That's an interesting idea. I'm not very versed in the flashlight world, so I don't know if it exists, but I at least haven't heard of a "pocket" flashlight that has bluetooth support.

Olight does make these "bulbs" that have Bluetooth support, you can configure color settings there through an app.

As far as a market goes, there probably is one if you're good. There are a bunch of people who collect all manner of flashlights, and boutique one-off things are usually a great thing.


Adding Bluetooth to a flashlight probably adds at least 2 dollars to the BOM: a CPU upgrade and a Bluetooth module (for reference, one of the cheapest might be an Espressif chip in the $1.50 range, but they don't do low energy very well when Bluetooth is involved). So it'll cost roughly $18 more than the next flashlight. So maybe? I've worked on BLE mesh, and it's enough of a pain at the moment that I wouldn't be willing to implement this on something as cheap as a flashlight, but then again I was just poking around and found flashlights easily selling for over $100. So there may be a market for this.
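
To spell out the arithmetic, here's a toy sketch in Python; the BOM-to-retail multiplier is my own assumption chosen so the parent's $2 / $18 figures line up, not an industry constant.

    # Rough BOM-to-retail arithmetic for adding BLE to a flashlight.
    # Every number here is an illustrative assumption.
    ble_soc = 1.5        # cheap BLE-capable chip, roughly the Espressif price class
    cpu_upgrade = 0.5    # assumed extra cost for a beefier controller
    bom_delta = ble_soc + cpu_upgrade   # ~$2 added to the bill of materials
    bom_to_retail = 9                   # assumed multiplier implied by the $2 -> $18 estimate
    print(f"retail price increase: ~${bom_delta * bom_to_retail:.0f}")  # ~$18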


Unless it got popular enough that you were making bazillions. It really shouldn't need to cost much more than a normal flashlight plus the $2 Bluetooth SoC. An existing manufacturer would probably charge a premium, but a new company could be like "two bucks more and you get Bluetooth".

Actually seems like something Pine64 would do really well.

(I could also be talking out my ass here though, I know nothing about business, and Bluetooth anything might attract technophobia and make some people actually less interested).


>I mean, it's a hard problem - for many reasons you only have one button, with no way to display information.

Yes, there is a way to display information: put a screen on the device. My bicycle headlight has this: https://www.niterider.com/products/6780-lumina-oled-1200-boo...

Of course, this costs more. The OLED screen version of this headlight costs significantly more than the plain one-button version.


> Your simplicity becomes my complexity (fussy compatibility, batching creates latency, etc)

Forth is simple. (It is not that hard to write a simple Forth system that implements the language in a workable fashion.)

Python is simple. (It is a good choice for a first programming language, and plenty of people use it to rough out ideas because it allows them to ignore some kinds of resource management details and comes with a large standard library.)

Equivocating on 'simple' is a neat trick if you can pull it off.


Python is easy, not simple. Lisp is simple.


> Lisp is simple.

Look at the Common Lisp FORMAT function and say that again.


[R5RS][1] is simple.

To get any work done with it, though, you need to write your own Python.

[1]: https://conservatory.scheme.org/schemers/Documents/Standards...


That is admittedly a bit of an abomination (very powerful though) but the language itself is very simple. It just allows the creation of DSLs which is what format really is.


Not sure you're using simple in the same way other people use that word :)


I believe GP is using it in the literal etymologically correct way in the vein of Rich Hickey: https://youtu.be/LKtk3HCgTa8?feature=shared

Pedantically, Common Lisp being a Lisp-2 vs a Lisp-1 arguably makes it a duplex vs a simplex…


I don't think your examples feel like the simple options to me. For example:

Batching should solve a specific problem, it's not really complicated to implement but without a need adds unnecessary complexity to the system. Simplicity in this case would be no batching and just handling the inputs as they come.
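
To make the contrast concrete, here's a toy sketch (not any particular system): the unbatched path is a single call, while the batched path needs a buffer and a flush policy, and every event that sits in that buffer is added latency.

    # Simple: handle each input as it arrives.
    def handle(event, process):
        process(event)

    # Batched: extra state and an extra decision (when to flush), traded for throughput.
    class Batcher:
        def __init__(self, process_batch, max_size=100):
            self.process_batch = process_batch
            self.max_size = max_size
            self.buffer = []

        def handle(self, event):
            self.buffer.append(event)  # the event waits here; that wait is the added latency
            if len(self.buffer) >= self.max_size:
                self.flush()

        def flush(self):
            if self.buffer:
                self.process_batch(self.buffer)
                self.buffer = []

    batcher = Batcher(print)  # toy usage: prints the accumulated list once 100 events arrive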

One-button interfaces, while minimalist, are not simple; you've already covered why they're complicated. They can be simple though, if used correctly, but you're intentionally over-complicating to make a point. Even no-button interfaces with gestures can be simple and generally intuitive.

But you're correct that my simplicity may be your nightmare. I use DWM because I don't need or want the complexity of a full desktop environment. Most of what I do uses the same vim keybindings and runs in the terminal. The settings for all of those are in a git repository, and I can set up a new machine or VM with a single line that I've memorized at this point.

To you that may seem like a brittle hell, to me it's the simplest most stable system I can design.


> And to make matters worse: complexity sells better

Yes!

The article claims people think they want simplicity, but aren't willing to sacrifice anything to get it. But the reality is much worse.

People, at least business people (people making decisions, not end users), hate and loathe simplicity, and love complexity. Or, if not complexity per se, all the benefits it brings. Esp. bragging rights.

You have to be able to boast the number of engineers working on the project, the millions spent... and even the bad performance!

Bad performance is proof of the difficulty of the problem. If something actually works, it means it wasn't that hard to begin with.


> If something actually works, it means it wasn't that hard to begin with.

This one affects a huge proportion of modern software developers too, and has two close relatives: we wildly over-reward those who fix things heroically at 3am while under-appreciating those who prevented it breaking at all, or my favourite, "what I do is hard, important and valuable, what you do is trivial, so why don't you just give it away?"

Making things that appear to work first time has been one of the defining repeated errors of my career.


Feynman wrote about this. Having already figured out the first two digits of a locked filing cabinet's combination over the course of months, he famously sat down and read a magazine for a length of time before "cracking" the lock when someone lost the key.

Said he learned from a locksmith that people hate it when he makes it look easy.

For some reason people value effort over expertise.


> The article claims people think they want simplicity, but aren't willing to sacrifice anything to get it.

I love CNN lite: https://lite.cnn.com

I guess I am weird but the way websites are designed confuses the crap out of my brain.


> I love CNN lite: https://lite.cnn.com

Such a refreshing site to see. It is like early 2000s internet before the introduction of the dynamic web.

https://www.w3schools.com/asp/asp_ajax_intro.asp#:~:text=AJA....



Wow what the hell! How am I just now seeing these for the first time? They're so fast, simple, clean, and to the point.

Every news outlet and blog should have something like this. Seems relatively quick and easy to put together compared to their main sites, and I'm sure tons of people will use & appreciate it.


There was some discussion of text-based news sites on HN a while back. You may enjoy the coverage.

https://news.ycombinator.com/item?id=35313232


oh gosh this is amazing


Not only that, but horrible maintainability is also proof of difficulty. If something requires multiple engineers to babysit, 24-7, then certainly that's not something inherent in the design, it's inherent in the problem.


And this is the essence of the issue: complexity in one place often makes for simplicity elsewhere.

Did it make sense for Microsoft Word back in the day to have a Fax option? Of course not. Unless you had a fax machine and needed to do a lot of faxing, in which case it was super convenient that your word processor knew exactly how to do that.


That seems like a feature, not complexity? It's not like it made deploying MS word any harder to add the fax button.


The underlying OS could already do faxes (by treating them as a special case of printers and popping open an OS-controlled dialog to accept phone number &c). But Word bypassing all of that with its own fax drivers and protocols meant it could provide a full UX for the end-to-end of faxing (including, probably most importantly, scripting the fax send so you could merge from a phone number database)... At the added complexity of replicating an entire feature the OS already offered a slightly different way.


Features interact combinatorially, which introduces complexity.

Faxing isn't a great example, since from the perspective of most other features, it's probably just a special case of printing, but you can imagine it made it harder to build a web-based version of Word (what number are you faxing from? How does the computer in the MS data center contact or emulate it?)


Heh, there is some saying along the lines of "Any one customer only uses 10% of the features your software provides; the problem is each customer uses a different 10%"

It's almost always a better bet to add more complexity to your product to capture more market than to make it as simple as possible.


Having been involved in some of those enterprise product evaluation decisions, that isn't how it really works. What they do is gather and prioritize a huge number of requirements from various internal stakeholders (some of which have the power to block the project if they don't like where it's going). And many of those requirements really are "must haves" to run the business, or even achieve legal compliance. If a product doesn't meet the key functional requirements then it's a non-starter regardless of usability or simplicity.


A fair few nice-to-haves get hidden amongst the must-have lists too, just to make matters worse. This is how you end up with a product that has features literally no one ends up making much use of.

Also, if someone particularly wants a piece of software for their role, they may try to sell it to the rest of the business with ideas of what it can do more generally. This is why kitchen-sink products sometimes do well (but ultimately make no one happy, as they don't do anything perfectly while trying to do everything well enough to claim it is supported).


If some employees are unhappy that is an acceptable outcome as long as the work gets done. We can't afford to make everyone happy.


Which employees would those happen to be?

Never seems to shake out that it's the ones at the top.


Exactly this!

My first principle in building software is:

Avoid Writing Code.

Code takes time to run. Code is habitat for bugs. Avoid it. Do everything possible before writing it to minimize it — think, plan, select better algorithms, data structures, process models, etc. Write and test toy versions to select the most streamlined. Of course some code must eventually be written, but doing the work to avoid it pays off.

Car analogy lesson: my coach in sportscar racing once asked me what things I do as a driver that slow me down on the track. I started to say something about "when I initiate the turn too harshly it generates more scrub and....". "No, no, I mean the big simple things...". "Well, that would be braking, turning, lifting off the throttle?". "Yes, so always avoid braking, turning, and lifting.". Obviously, doing all three becomes necessary at the end of the first straight going into Turn 1, but the point is that there are hundreds of times where we'll do those things without thinking, and eliminating the unnecessary things that slow us down is one of the primary keys to fast lap times. The fastest drivers make FEWER moves and are much calmer in the cockpit. That simplicity takes a LOT of dedicated work.

Similarly, eliminating unnecessary code takes a lot of dedicated work, but it is critical to software that performs.

Spend the effort; it's worth it.

Avoid Writing (and including) Code.


But no company would give you time to think that much. On the contrary, we have to hurry to code and show something by the end of the day.


> Avoid Writing (and including) Code.

What's your thoughts on no code platforms, such as:

-bubble.io

-airtable

-power apps

-nintex/appian/etc.

I find myself looking at these platforms because the client-server model we have all been doing still has a lot of boilerplate that needs to be set up.

At least with these, the selling point is we can focus on the logic and look, which is all the paying clients really care about.


I haven't tried any of them, but they could be fantastic for prototyping, first throw-away versions, and projects that don't really matter.

For real r2.0+ production versions on projects/products that matter, I'd be suspicious that there are huge layers of libraries that get included, lots of code to run slow and provide bug habitat. But, it could also be that the developers have some optimization passes that strip out unnecessary code. It's something to check out carefully before committing. Of course, using one for a first throw-away version and examining it in detail can provide much info.


...In defense of boilerplate, think about what you're actually doing.

You're plumbing a pipe of tangible meaning through networks of functions as implemented in electronic signaling devices.

Creating and propagating that significance takes work. Embrace the boilerplate. Also acknowledge that every no-code tool has about a gazillion engineer hours lying in wait because what you want to do is hidden and sieved through somebody else's abstractions.


TBH if we could boil software metrics down to an immediately gratifying metric like lap time or race position, it would make it a lot easier to figure out what simple means.


First thought: 'Ya, that would be nice'.

Second thought: 'Wait, we have metrics on execution time, time-to-load/display, time to download, file size, etc, and can even count clock cycles required.'....

And, it's kind of like auto road racing, where the times only matter in comparison to other times of that particular car class, at that specific track, in the configuration for that race, on that day's weather conditions, etc. Software is similarly comparable only to software of the same type/class, and how well the earlier versions of that software performed.

It's almost like we can gather more performance data about software than we can about sportscars...?


Which is better: a car that completes a two-lap race with laps of 1:00, 1:00, or one with laps of 1:30, 0:15?

Which is better: software that, used twice, has a time to first paint of 3 seconds then 3 seconds, or of 4.5 seconds then 1 second? The second obviously benefits from caching.

There, it's no longer obvious because there are competing goals.


Said like you are proud to have found some kind of "gotcha"; it's a cute example, but I don't see the relevance.

The goals the article, GP, and I mentioned were simplicity and reducing the amount of code, and the comment was about how we have the ability to measure our code's performance.

Yes, immediate fetch every time vs caching is a question of competing goals. The default approach would be to avoid adding the code and complexity of caching. BUT, and that is a big "BUT", if the context of the software's use requires it (e.g., cached uses are far more numerous than fresh and using cached values will not screw up the results), then add it, and be efficient about it.

What's the big deal, what am I missing?


Other than basic politeness (that's a rude and dismissive opening on your post), my point was that it's easy to know what "optimal performance" means when racing, or when generating a profit. Software requires optimizing some aspects at the cost of other aspects, which is a business decision.

Just as the best race car finishes the race fastest, the best company makes the most money over the time period you care about. The best software could be optimized to load fastest for new users to reduce new-customer bounce rates, or to work best for returning loyal users to reduce churn, or for several other metrics. It's a legitimate question of what you want to optimize for.


Yes, it is clear that software can have different options or questions of what to optimize for.

That still seems orthogonal to, or at least a separate consideration from, the question of minimizing the code.

Of course what to optimize for should be as carefully considered as any other factor. It is kind of the core point of a design effort.

The developer or team should figure out what to optimize for, then figure out how to implement that with the least possible code. Of course, some optimizations will require more code than others, and that should be one consideration (e.g., "yes, feature XYZ is cool, but is it worth the amount of code — slowness & bug habitat — that it will require?") in deciding whether or not to implement it.

So, I'm still not seeing how your point is an argument for writing more code, or invalidating the principles in TFA or the above posts? It just seems an offtopic distraction?


It's not always like that.

Take for instance washing machines. They aren't conceptually complicated from user perspective. More or less, they have well-defined configuration parameters. You don't need a complex model of a washing machine in order to successfully operate it by any stretch of imagination.

That is, unless you buy a modern washing machine... where instead of having separate controls for water temperature and time you get a single knob. In most cases it doesn't even have a line on it to show which direction it points. It's just a round knob you can rotate endlessly. When you rotate it, in seemingly random order the lights near cryptic icons go on and off. Even if you can sort-of guess what a particular icon might mean, you will have a very hard time finding a combination of the desired temperature and time.

And this is not unique to washing machines. Try a toaster oven, a water boiler, a vacuum cleaner, an air conditioner... modern-day household electronics have the most idiotic design in their entire history. And they aren't conceptually hard to use. The people who came up with the interfaces made them unnecessarily complex and convoluted. And it's not a one-off event. This keeps happening all across the board, different brands, different kinds of equipment...

My explanation for this is not that it sells better: I'd absolutely buy a washing machine with text labels instead of icons. It's some kind of bizarre trend where manufacturers think it's more "stylish" this way.

Hell, take modern furniture or bathroom equipment for example. There's a bizarre trend to avoid showing the screw caps. It makes installing stuff much harder and the final installation more fragile and susceptible to accidental damage... and yet it's nigh impossible to find simple(r) alternatives that would be durable and easy to assemble.


The weird thing is that what you are describing is a very American washing machine. What I'm used to from Europe looks more like [1].

You select the desired program with the knob based on what you loaded, then adjust temperature and spin speed with the touch inputs (some programs will block out some options, e.g. wool won't let you do more than 40°C or 800 rpm), you can add some options like more water, and based on that the machine does its thing, running a program specialized to achieve the desired outcome. It tells you how much time it needs, but it's not something you have manual control over.

And of course everything is labeled ... printing different text on a $200-$2000 machine doesn't break the bank for non-american manufacturers.

1: https://media.miele.com/images/2000018/200001856/20000185612...


>The weird thing is that what you are describing is a very American washing machine. What I'm used to from Europe looks more like...

Absolutely true. Machines in Japan don't resemble his description at all. My machine has a touchscreen with a ridiculous number of options and settings, all printed in Japanese, along with 4 physical buttons (on, off, start/pause, and back). The cheaper machines don't have a touchscreen, but instead a bunch of buttons that basically do what yours does. Knobs don't seem to be a thing here (on washing machines; they are on the fancier microwave ovens though, strangely).


Refrigerators are another one. A top freezer is simple because, on the coolant loop, that's the coldest area: keep it cold until it reaches refrigeration-level temperatures, and then as the coolant warms it returns to the compressor at the bottom using gravity. But everyone wants freezers at the bottom, or French doors, so complexity is added to fight physics.

Last air conditioner I bought (and returned) had Alexa built in but not the ability to keep the fan running all night.


That old school simpler design you refer to has its own problems, relying on a fixed ratio of cooling where you often get either a warm freezer or cold fridge. See https://www.youtube.com/watch?v=8PTjPzw9VhY

Modern fridge/freezer combos want to control the ratio of cooling. This is often done as simply as controlling a fan that moves cold air between the compartments.


There is a difference: a freezer on the bottom or French doors isn't just aesthetic. It changes how people use the fridge. Bottom freezer drawers are easier to random-access, for instance, and don't lose as much of the cold when open, and thus are a good solution for people who freeze a lot of stuff.

That is, it makes the physics harder but produces a payoff.


It’s probably true of most complexity, it’s added for a functional purpose and payoff, but takes away from the simplicity.

People choose the complexity of a bottom freezer, but in this case the complexity of the choice isn’t even apparent to them. It’s unlikely to keep running as long as a top freezer fridge from the 80s or 90s, but they don’t know that when buying.


These designs are usually added to show the executives of the company that a branch of the org chart and its Important People can "add value".

Conversely, the fastest way to turn around a failing company is to search through company e-mail archives, find everyone who has ever used the phrase "add value", and purge them from the company.


> the fastest way to turn around a failing company is to search through company e-mail archives, find everyone who has ever used the phrase "add value", and purge them from the company

but you'd purge the entire c-suite if you did that!


I fail to see the problem with that


> That is, unless you buy a modern washing machine... where instead of having separate controls for water temperature and time you get a single knob. [...] When you rotate it, in seemingly random order the lights near cryptic icons go on and off.

Maybe washing machines are different in your market to in mine? In my region washing machines [1] have a dial, labelled in english, listing programs like 'cotton' and clearly labelled push buttons for temperature and spin speed.

I could only see one that doesn't have such an interface [2] and it comes at a substantial premium.

The UI situation with washing machines is much better than for things like smart TVs.

[1] https://www.johnlewis.com/browse/electricals/washing-machine... [2] https://www.johnlewis.com/miele-wer865wps-freestanding-washi...


This is the one similar to the one I have: https://www.lg.com/us/washers-dryers/lg-wm6500hba-front-load... . I also had one before it from a different manufacturer, but cannot find it at the moment.

Also, I really don't want programs like "cotton" either. This just doesn't mean anything really. It's just noise. I want two dials, for time and temperature.


> I want two dials, for time and temperature.

For me a washing machine without a wool setting and where I cannot control the spin cycle is an instant no. On the other hand, I have no use for a time dial, as I would expect the washing machine to work that out for me. Which is the problem manufacturers have: your must-have feature is useless to me and my must-have feature is useless to you. So either they have to make two different models of washers or they have to try to combine all the features into one machine.


A modern washer will self-select time based on the amount of clothing. This often cuts down the time needed from what the user thought was needed.


The icon thing is so they can sell the same appliance everywhere without localizing it. (Of course, they still have to localize the manual but their inventory management is simplified.) That doesn't help you though.


You really think it would make it very difficult to localize "time" and "temperature"? -- because these are the only two labels that are actually needed for a washing machine. I would even take icons for both. Something that looks like a clock face and something that looks like a thermometer would work just fine.

The problem is that it's not even close...

Also, I worked in printing, specifically flexo / silk / tampo, i.e. the stuff that gets printed on all sorts of curved surfaces, stickers, souvenirs, mugs etc. Yeah, it adds a bit of complexity... but compared to the price of equipment, at least the printing part is nothing. It might have to do with packaging and managing inventory for different languages... but then again they also need other localized stuff... so who knows.


> It might have to do with packaging and managing inventory for different languages

Yeah -- it's not a localization problem, it's a supply chain problem. By removing the need to localize, they can manufacture and deliver a single SKU to any location. It's the same reason manuals are printed in 12 languages.


Precisely. If you want text labels and you're trying to sell the same washing machine in Japan and Germany, you've just given the user interface designer the challenge of writing labels where there's a 10x difference in the average length of the symbol.

Making up novel hieroglyphics and letting the user memorize them is just cheaper.


>If you want text labels and you're trying to sell the same washing machine in Japan and Germany, you've just given the user interface designer the challenge of writing labels where there's a 10x difference in the average length of the symbol.

No one sells the same washing machines in Japan and Germany. In Japan, the washing machines are made only for the Japanese market, not for export (some are imported from China, but these too are specifically made for the Japanese market, and are very different from machines sold elsewhere).

The cheaper machines have all text labels, and a lot of buttons, all in Japanese. The fancy machines like mine have touchscreens, with an enormous number of different settings and functions, and again it's all in Japanese with no ability to change languages. There are also some icons along with the text, but not enough to use the machine without reading Japanese.


Can you match the icon to the icons in clothing labels?


Thanks for finding the perfect quote for what my thoughts were as well.

I'll also add that simplicity also comes from a lot of experience, since the most dangerous developers now seem to be the ones with minimal skill, who don't have the track record to understand that they can't predict what the future will hold, and who would do better to just solve what's required while keeping their options open.


Your comment is also a quote by itself


But all too often people fail to appreciate a task's inherent, essential complexity that draws a simplicity line.

Sure, in the general case people are way above that line, but we also see the reverse (e.g. claiming that systemd is “bloated”, when much of that complexity is required for that problem space).


Too true. I think the quote hits on this part where Edsger is saying that it takes education to appreciate simplicity. Some problems have an inherent complexity to them, and to appreciate an elegant solution that is sufficient to the task requires some training and understanding of the problem. It's a bit subtle perhaps but I think it's important to understand!


It's also a fantastic recipe for unjustifiably getting out of touch with the rest of the world and shooting down completely valid input.

"You just aren't intelligent/smart enough yo understand/appreciate the elegance of my design" is the cope of many a fragile engineer in the face of a real world bashing their head against something rendered unnecesarily complicated for reasons completely tangential/unrelated to intended use.


The proof of the pudding is in the tasting in such cases. If you can't explain it to someone else then you definitely don't understand it. Ask a few questions and fragile egos tend to reveal themselves.


Oh, it can be the case that you absolutely do understand it: you can explain it, you can point out line by line and class by class what you wrote and why it makes life easier for the people you wrote it for. You can also document it and provide references to external docs relating to the frameworks you depend on.

That still will not convince some that cannot be bothered to track where their bits are.

I bring up the point because I've whipped up a system specifically intended for shipping a declaratively defined DB state to a particular user-specifiable environment, whilst keeping the actual implementation of said data as close as humanly possible to forms that were already being written.

There are just some times when the needed solution will not be accepted until people have rammed up against the problem they don't even know they have. Such is the curse of the implementer who prevents rather than remedies.

Make a problem impossible to occur and people think you over-engineered it. Sit on the answer till everything is on fire and the migration cost is maximized, and you're looked on as a visitation from on high.

It...really...sucks.


Oh I get that.

Sometimes people mix up simple with familiar, easy, or shallow. They expect that they won’t have to learn anything or that understanding should come without effort. Unfortunately this is not the case.

I often use the word sufficient in place of simple when trying to convey this to an audience that isn't familiar with the subject matter. It's as simple as one can make it without hand-waving away the essential complexity of the problem itself.

The proof of Cantor's theorem is conceptually simple, but there are plenty of people who cannot appreciate it because they know too little maths. And yet the theorem serves as a basis for more interesting theorems and practical applications.

I also try to avoid the use of the term, over-engineering. I find it is often thrown around too easily and used derogatorily.


The average HN user has some idea of what is simple and intuitive. This has very little to do with what the average non-technical web user thinks or feels.

I've made many decisions in my career that boiled down to: we're not going to do what the users want in this case because the complexity of maintaining that feature is not worth the value it provides. And many times, that's true. Sometimes, it's not true.

At Netflix, it's common for engineers, especially new engineers, to say, "Who even uses dubs? Just use subtitles." The answer turns out to be "at least 50% of people when watching content not in their native language."

There is a floor on app complexity that naturally results from trying to build something that many people want to use. HN User #82983 may not see the point in a web-based note taking app because he just puts all his notes in a `notes.json` file. But, that app has many users who love it, and the developer doesn't want to tell them to kick rocks just because it would corrupt their precious codebase.


> The average HN user has some idea of what is simple and intuitive.

"Simple" and "intuitive" are rather orthogonal concepts.

See eg Lisp: an objectively simple language which many find unintuitive. WordPress I guess might be considered intuitive, but is not simple in any way.


Something is intuitive if it jibes with your preconceived notions. When you experiment with it, you form certain hypotheses about how it will react or how by what means to get a certain behavior. If those hypotheses are usually true, then it's intuitive.


And the question "who uses dubs?" belies a fundamental misunderstanding of use case: A lot of users aren't watching Netflix. They put it on the background for noise. They need dubs to follow the plot at all.


While true, I don't think this is even close to a majority of that 50% of users who use dubs. (Though I'm just guessing)

I'd say the more common use case is people in the many countries that are used to dubs, culturally, vs. using subtitles. I've found it to be very much a cultural thing.

Also, kids. Kids movies/shows are probably a not insignificant fraction of Netflix watch time, and below a certain age subtitles are unhelpful to them.


> "Who even uses dubs? Just use subtitles."

The Netflix subtitles and dubbing for foreign media are both so bad, we just don't watch any of it. My friends and family are not the only ones. I'm pretty sure we're a dark demographic.


This is very culturally dependent though. In Germany, for example, dubs are typically the default, and people would very rarely use subtitles for foreign media. If you want to be able to show non-German films and series in Germany, you need to be able to support different audio tracks.


It came with age and time for me. I don’t care for bells and whistles nearly as much as I used to. And I would rather pay more for something that does its thing really well and reliably.

I’ve learned that I don’t get as much real world value from the extras and they cost me more up front and are an under appreciated burden for the duration I own it.

As it relates to this article, the last 3 web applications I've built have used PHP, are not SPAs and store data in flat JSON files (e.g. https://fiers.co - I don't think you'd be able to tell). There's some stuff that's hard to do, and those things don't get done because I don't want the complexity.


FYI it doesn't render too well on my phone. The image overflows horizontally.


See how well he cut scope to keep things simple? "Works on mobile" was simply unnecessary complexity :)

Only half-joking btw. I thought the article was way more about "cut scope ruthlessly so you can use simpler tools" than it was about "use the simplest tool for the job".


FTA:

> what are you willing to sacrifice to achieve simplicity?

I'm willing to sacrifice a website that tries to keep state up to date without requiring a page refresh so that I can avoid all the complexity and bugs which come from it.


For many, probably most, websites this is fine.


Thanks - needs an overflow:hidden in the CSS. Will fix :)


And the fix was probably editing 1 line of code then running `git pull` on the server. If even.

That's simplicity!


Precisely :)


I like the concept that there are accidental complexity and essential complexity.

Essential complexity is that of the problem you are solving itself. How complex the use case itself is.

Accidental complexity is how much more complicated your implementation makes things. This is the one people want to keep out, and where they favor simplicity.

If you remove scope, you're removing essential complexity by making the problem itself simpler.

To remove accidental complexity you have to find a solution to the same problem that adds little more complexity over it.

People want to be able to deliver essentially complex use cases while adding as little accidental complexity as possible on top. But doing so is hard, hard things are hard to achieve, and so most of the time we end up with a lot of accidental complexity added, because it's easier that way. Not because people don't want simplicity.


This is a very important distinction. I think it stands to reason that an overall industry trend towards more complexity must be due to essential complexity. Could you even have an enduring trend that is not due to essential complexity? Wouldn't people get better over time, or wouldn't the accidental complexity lose out to the non-accidentally complex?


A better term is "non-essential complexity", since "accidental" implies that it wasn't intended. The reason it's a durable trend for non-essential complexity to increase is that there are common and powerful forces that push for increased non-essential complexity.

The prime example is short-term technical debt taken on to speed the release of something new. It's not accidental, the team intentionally chose to (temporarily, hopefully) take on the added complexity.


Not to be rude, but these terms are pretty firm at this point.

If it helps, think of "accidental" as "incidental". It's not referring to any actor's intent or lack thereof.

Frankly, you're introducing accidental complexity by trying to refer to it as "non-essential complexity" ;)


It doesn't generally help to think of one word as another word. That doesn't tend to work out well when trying to parse semantics.

"Incidental" is correct here, but not ideal for communication because of the eggcorn with "accidental". The word "accidental" is wrong no matter how you slice it.


The programming language itself is also a complexity on top of the problem's essential complexity and the complexity of the algorithm solving the problem.


"The price of reliability is the pursuit of the utmost simplicity. It is a price which the very rich find most hard to pay." — Tony Hoare

It is impossible in practice to persuade people to pay the same amount for a simple, reliable product that they would pay for a complex, unreliable one with more features. Reliability is a promise, and they have to trust. Features can be demonstrated now, so less trust is involved (or so it appears).

And then the features stop working.


Depends on who your customers are. I have always found breaking features up between a "basic" user and a super user to be a fairly good approach. Keep the advanced and complex things on a separate page that requires navigation, and keep the 80% use case as the sane default given to most users. The loud few get what they want and the bulk of the users are served reasonably. I have also found consistency to be significantly more important than excellence. Any time someone requests that we take away an option or feature for some good reason, I am extremely hesitant to remove it, as I am confident that a customer will suddenly provide a valid use case for its existence.
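
A minimal sketch of that split as a settings model (the names here are hypothetical, just to illustrate "sane defaults up front, exotic knobs behind a separate page"):

    from dataclasses import dataclass, field

    @dataclass
    class AdvancedSettings:       # only reachable from a separate "Advanced" page
        retry_backoff_s: float = 2.0
        raw_export: bool = False

    @dataclass
    class Settings:               # the 80% use case shown to everyone
        notifications: bool = True
        theme: str = "system"
        advanced: AdvancedSettings = field(default_factory=AdvancedSettings)

    defaults = Settings()         # most users never need to touch anything beyond this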


I thought the nouveau semi-riche liked complex flashy things, while properly wealthy people tended more towards ultra-high-quality but simple and classy?


It's definitely not impossible to do that, there are a ton of people (on this very website, I dare say) who'd pay good money for a non-smart TV with the latest panel technology and 4 HDMI ports.

However Samsung and LG wouldn't make as much money from those models, and the potential customers for them will probably suck it up and buy a smart TV anyway. So why would they bother?


Perhaps the path to simplicity in that case is not buying a TV


Been there, done that! Then got married. Wife not appreciative of no-TV lifestyle, and no-wife lifestyle is slightly too big a sacrifice for simplicity.


Totally understandable


I find that the vast majority of assessments like this are rooted in the mistaken belief that there are common, well-understood, and widely shared meanings for words like "simple" and "complex".

The post isn't observing a lack of desire for simplicity, they're observing a very common case where people evaluate "simple" relative to their own familiarity with tools or techniques rather than a comparison of perceiver-independent properties of the tools and techniques vs their alternatives.


This is also evident in the extremely overused phrase "simple yet powerful" that software libraries/tools/etc use when pitching themselves. Everyone apparently thinks their own inventions are Simple Yet Powerful.

https://github.com/search?q=%22simple+yet+powerful%22&type=C...


You are probably already aware of this excellent talk that discusses exactly the points you made: Simple Made Easy by Rich Hickey. I assume this has already been linked 30 times in response to this article but here it is again:

https://www.infoq.com/presentations/Simple-Made-Easy/


Let's take cars. Is it "simple" to have one tablet on the dashboard for all controls? Or having physical buttons for commonly used functions is actually the "simple" option?


There are two people involved in this scenario: the car driver and the car manufacturer. For the car driver, physical buttons are simpler (less cognitive load and safer to operate). For the car manufacturer, a single tablet is simpler (less manufacturing complexity and cheaper).

EDIT. I forgot with the Right to Repair issue, there is a third person, the neighborhood general mechanic. With physical controls, the mechanic is more likely to be able to repair the broken switch/electrical relay. With the tablet, the mechanic cannot.


The simple option is no dashboard at all. The AC can be set at the service center.


A significant portion of developers want to dazzle with complexity. First and foremost, it's a flex and a resume builder - everyone wants to have K8S on their resume (even though it's ancient at this point). Second of all, people are simultaneously lazy - when you tell them to put the customer list in the database, suddenly it becomes a later issue. So you get "solve it in a really complicated way, but because you made it so complicated, barely make it work, and when it breaks, no one will be able to figure it out."

This is a pattern I see again and again. I constantly get downvoted here when I suggest that new devs should be able to get up and running within a couple of hours and that should be one of the priorities (at least on the "core" project that they will be working on). And yes, I "get it" - the company I work for just acquired a bunch of other companies with various stacks, on various clouds. But at the end of the day, people are way more worried about introducing a 1000 new libraries and writing ABSOLUTELY useless unit tests where everything is mocked than actually creating one source of truth for the data.

As you can see, I am a bit bitter about all of this.


Always, always, always - when I'm faced with a new project using a new technology, I have two choices:

1) spend an indeterminate amount of time reading documentation until I actually understand what I'm doing, and have nothing to show for the time I spent gaining a deep (or at least decent) understanding.

2) Just jump in and start coding, bugging people to look at error messages for me, google when that doesn't work, skim the documentation when that doesn't work, cut and paste examples when that doesn't work, but have something to "show" at the end of the day which is far more complex, slow and unpredictable than a well-thought-out solution would be.

Every year of my 30 year software development career has been spent under software management fads that insist on the second approach because it provides an illusion of productivity and predictability. I actually had some hope when XP came out in the late 90's that the tide was turning here, but XP became Agile, which was "meet the new boss, same as the old boss".


the way to go is do 2, read what you wrote, then throw it out and do 1. now you know better what docs are worth reading, and you'll digest them faster with context gained from your throwaway.

it will be only a little bit slower than 2 on its own and it will be written better because it's not just accumulated debris from your learning


Just as long as you avoid the trick where you complete the POC, and then your manager moves you onto something else and reassigns a junior to productionize the POC.


> reassigns a junior to productionize the POC

At least until the deadline passes and he still doesn't have it working, then you have to do the something else you moved on to _and_ productionize the POC that was already overdue last week.


I 100% agree with this. I have always called it `throw-away development`, where you build a prototype just to understand what it is that you are actually solving/building. Then, once you understand, implement the _actual_ thing you need to, taking the learnings from the throw-away. It takes a _tad_ longer than if you had magically known what to do from the get-go, but the learning is invaluable.


The problem when you do 2 'just enough' for it to (barely) work: if you show it to your manager, they'll Ship It and you'll get put on a new project to hack together.


I've come to understand that the most important part of any software project is the Critical Twenty, the first 20% of the project's timeline where developers need to do things like gain a deep understanding from reading documentation and lay down the infrastructure, architecture, and foundation necessary to support features above.

On a 10 month project, this translates to reading docs and putting down foundation stuff, and delivering no features, for the first two months.

Attempts to abrogate The Critical Twenty are what causes the majority of problems later on in most software projects.


How common are 10-month-long projects? I've always had the never-ending milestone marathon. At best we had a 2-month-wide sprint.


I have regular pain with the team because they don't know anything besides option 2. No middle ground where we dig in and plan at least to some extent. It's a strange swamp: they justify it because that's what agile is (according to them), but they feel angry every month, and then do less and less work because they think it's too hard and requires a higher TC.


Well - are you breathing down their neck for status reports and time sheets and closed tickets? If you are, you're forcing option 2.


I'm the lowest pawn on the ladder. But we have time to adjust our ways, so I asked them, but they just refuse. It's all block and nap then rush (with some "can you skip lunch maybe?").


Method 1 is definitely a culture that needs to be encouraged and nourished (and sometimes people need to be trained). And it's fragile: If there's even a hint of "What have you coded in the last week/month" from anywhere up the management chain--poof! It's gone and everyone is motivated to move to Method 2.


Thanks. I'm very much interested in the psychology of social structure / workgroups. These are indeed very brittle and it doesn't take much to make a race to the bottom.

I'm always thinking about relay-like and continuous improvement (team processes and mastery, not CI/CD) between team members to promote curiosity, high drive, creativity and friendly competition.


The problem with #1 is that it's only useful if the system is conceptually simple... and the article in question is spot on.

If there is nothing deep to understand, the only thing you can gain from the docs is encyclopedic knowledge of all the toggles and switches. So there's no reason to ever study them instead of just starting to use the thing.


"The management question, therefore, is not whether to build a pilot system and throw it away. You will do that. Hence plan to throw one away; you will, anyhow."

Fred Brooks.

Edit: Now I'm sad, I hadn't noticed that he died last year.


> 1) spend an indeterminate amount of time reading documentation until I actually understand what I'm doing, and have nothing to show for the time I spent gaining a deep (or at least decent) understanding.

Tough to explain to management or colleagues sometimes. (On the other hand everyone seems to "know" it's the right approach because even today just looking up something on SO is not seen as the highest standard)


I usually end up doing it anyway and taking my "lumps" for the week that I spent learning but not producing. Gets frustrating when the #2 crowd starts demanding that I drop everything and help them since I'm now the "expert", though.


I've found that even if you start with #1, you end up doing #2 anyways, whether it's because the documentation wasn't complete or you didn't realize what you were trying to build until you started building it. I always start with #2, since in my mind, it often encompasses #1. I've also found #1 leads to analysis paralysis.


I remember a time when HTTP was used to transfer HTML _documents,_ and not these "progressive web app" monstrosities. It felt a bit unusual to visit blogs or news sites and be bombarded with megabytes of JavaScript from over a dozen domains. Isn't the focus of these sites supposed to be on the textual, written content? How much JavaScript is actually required to deliver what could have been (if you squint hard enough) a Word document? I mean, that's what HTML documents _were;_ after all, it's called the document object model.

And then I remembered that I wasn't the actual user of these sites; no, I was the one being used. The reason all of this JavaScript is necessary is to load tracking pixels, analytics agents, and advertising networks onto the page to generate revenue and data ("the new oil!") for the site's operators.

Don't get me started on web developers' penchant for pursuing whatever front-end framework is in fashion these days. I get it, I've written my own monstrosities in highly tortured, purely functional TypeScript with fp-ts with ADTs and monads and Tasks; sometimes it's too difficult to resist from scratching that intellectual itch. But after a few years of bouncing from prototypeJS to jQuery to backbone to angular to react to vue to svelte - goddamn, does it ever end? Is anything really gained from this never-ending, ever-accelerating treadmill of newer and newer frameworks which seem to pop up faster than they can be replaced in extant codebases?

All of this at the expense of the end-used, er, end-user.

Having spent a little while exploring the metamath site [0] and its associated program, I am stunned at how snappy, simple, and _long-lasting_ the whole thing is. There are theorems here dating from fucking _1994,_ and the site is still being actively updated. There is no framework, only one binary that generates static HTML documents, which are then served to your browser, no need for client-side rendering or XHR or advertising or cookie popups, or hell, even JavaScript entirely.

But I am just shaking fists at a sky that will never fall, thanks to the shoulders of the distributed Atlas of hordes of developers keeping this digital house of cards standing.

[0] https://us.metamath.org/mpeuni/mmtheorems.html


> not these "progressive web app" monstrosities

And these progressive monstrosities came from somebody's attempt to "simplify" things down to where somebody who didn't actually understand what they were doing could produce something that was usable.

> resist from scratching that intellectual itch...prototypeJS to jQuery

See, I don't think that in most cases this comes from developers, but rather from non-technical (or worse, formerly technical) managers who read somewhere that "this new jQuery thing makes app development get done really fast". At the end of the day, the only thing that ever matters, the only thing anybody ever measures, and the only thing anybody ever remembers is "meeting the date".


So true for these people, and all they have is NOW, this moment on their way to meet that one date they will not remember.

However, it's not all wasted, so don't worry, be happy and remember: the best times as a developer are when you ride your bike or take a walk or swim in the lake, preferably in summer time :)


> Is anything really gained from this never-ending, ever-accelerating treadmill of newer and newer frameworks which seem to pop up faster than they can be replaced in extant codebases?

It slowed noticeably after React took over the market. Svelte and SolidJS are the first entries since 2013 that offer something new.

Meanwhile large organisations often use Angular and have been since its introduction.

But to answer the question: yes. Svelte in my view is one example of how complexity is slowly but surely dialing down thanks to the lessons that were learned along the way - for the first time in almost a decade you're getting a legible stack trace and this is a huge, unexpected boon associated with the approach used there.


You might like https://no-js.club/


Simplicity is not about making complex things simple, it's about making it simple to do complex things.


This. Let's not ride the hype train and overengineer anything. Just use any simple solution that can get the work done.


I read the article as promoting "let's use a simpler solution than that, and cut scope from the work to be done instead". If you are not willing to cut the scope of the problem to the absolute minimum, you are prioritizing something else over simplicity. That something else might be higher profits, ease of onboarding, keeping your boss(es) happy or even things like better error logging and observability. Most software projects, both for-profit and not, have a huge laundry list of things they have rightly prioritized over simplicity.

I don't even think the article tried to argue that it was wrong to want those other things over simplicity, just that most people out there only claim to want simplicity but then make another statement entirely with their actions.


I feel like a point the author is trying to make is that oftentimes "simple solutions" have their tradeoffs.


Like Perl's: Make the easy things easy, and the hard things possible.


Or, at least start by keeping simple things simple.


People want different things.

Someone could build a simple tool to do what I want. Someone could build a simple tool to do what you want. They would be different tools, though. Nobody wants to create different tools for me and you. So they create one tool, and it has to be more complex.

As the article says, the same thing happens with business units. Every VP wants something that does what they need (or think they do). And it might be simple to do any one of those. It's not simple to do all of them put together, though. (In fact, I wonder if it doesn't happen more because of different VPs than because of different customers.)


Exactly this. Much like the tale of people only using 10% of Microsoft Word, yet everyone uses a different 10%.

Strange the article doesn't address this tension between all the various stakeholders and tension among their priorities. I want the simplest thing that satisfies my needs and wants, yet no simpler. If others are using it too, then it'll either already be more complex, or one or both of us must compromise.


> Strange the article doesn't address this tension between all the various stakeholders and tension among their priorities.

I thought it did:

> Of course we all claim to hate complexity, but it’s actually just complexity added by other people that we hate — our own bugbears are always exempted, and for things we understand we quickly become unable to even see there is a potential problem for other people.


Fair, I was hoping it'd go deeper because even an individual has tension between different priorities that can require complexity.

This whole point kind of invalidates the conclusion that no one wants simplicity despite their claims. They really do want simplicity, despite struggling to understand both the depths of what's needed to satisfy their less visible constraints and the needs and priorities of everyone else using the thing.

Subconsciously or consciously folks may have a sense of the tradeoffs, which is why they keep choosing more complex solutions. Not because they're being dishonest or are hopelessly ignorant.


True, but why? "A Plea for Lean Software" by Wirth https://cr.yp.to/bib/1995/wirth.pdf explains how complexity sells. Playground IT attempts to explain the "psychology" of that complexity: https://bitslap.it/blog/posts/playground-it.html


My god, that first article is from 1995, and almost 30 years later, it is still true.


I told my boss the following during my interview:

"If I'm doing my job right, you'll see me come up with solutions so obvious and clear, you'll wonder why they took so long." It is hard to come up with really good answers to questions. To make sure that the obvious, is really correct, etc.

I've given code reviews where I've told people to remove hundreds of lines of code and use a module we already use.

My way is not the easy way today. My way is the easy way over the years.


If it was simple, they wouldn't be paying us for it.

The people who want simplicity don't need experienced professionals to do those simple things. So we don't tend to hear about them, and we don't get to do them.

I've been asked to do things so simple I don't even want to go to the trouble of negotiating a contract, I just recommend an off the shelf solution.

I've been asked to do complicated things that should be simple, so I've recommended doing the simple thing. They just ask someone else to do the complicated thing instead.

If someone reading HN is involved, so is complexity. The best thing we can hope to do is wrangle it into isolated and manageable little chunks.


The article is talking about implementation complexity.

Say a client asks for features A, B, C, D which reach the bar of complexity where someone from HN has to get involved. Now two things can happen, and in both cases the same feature set is delivered:

1. All of those features are implemented as a server-side web app, with little to no JS and a simple CSS stylesheet, and each page is under 100K and loads almost instantly. There is also a simple caching strategy (but not necessarily an obvious one) that speeds the application up further and allows it to scale. The DB design is good, proper indexes have been added, the queries have been thoughtfully planned, etc. (A rough sketch of this style follows at the end of this comment.)

2. The features are implemented as a SPA, and a large framework has been pulled in. Many UI components also pull in other large dependencies. A CSS framework has also been added even though only 1% of it is used, and no attempt to tree-shake the CSS or JS has been made. The page size is over 3MB. No DB optimizations or caching have been added, etc.

Note this is far from a straw man. Option 2 is by far the most common approach, even among "professionals".
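
To make option 1 concrete, here's a minimal sketch of the shape I have in mind: a server-rendered page with a naive in-memory cache, using only Node's built-in http module. Everything in it (the route, the data, the TTL) is hypothetical.

    // Minimal server-rendered page with a naive in-memory cache (TypeScript).
    // Hypothetical example: the route, the data and the TTL are made up.
    import { createServer } from "node:http";

    const CACHE_TTL_MS = 30_000;
    const cache = new Map<string, { body: string; expires: number }>();

    // Stand-in for a real, properly indexed DB query.
    async function fetchProducts(): Promise<string[]> {
      return ["Widget", "Gadget", "Gizmo"];
    }

    function renderPage(products: string[]): string {
      // A real app would HTML-escape these values.
      const items = products.map((p) => `<li>${p}</li>`).join("");
      return `<!doctype html><html><body><h1>Products</h1><ul>${items}</ul></body></html>`;
    }

    createServer(async (req, res) => {
      const key = req.url ?? "/";
      const hit = cache.get(key);
      if (hit && hit.expires > Date.now()) {
        res.writeHead(200, { "Content-Type": "text/html" });
        res.end(hit.body);
        return;
      }
      const body = renderPage(await fetchProducts());
      cache.set(key, { body, expires: Date.now() + CACHE_TTL_MS });
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end(body);
    }).listen(8080);

No framework, no client-side JS, and the page is a few kilobytes; the cache is the "simple but not necessarily obvious" part.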


Option 2 would of course be a SSR React MPA these days, not a SPA, so we have to worry about NPM supply chain attacks server-side too.

Option 1 can be done by one sensible person with intermediate PHP/Python knowledge, and maintained fairly easily and cheaply forever.

They're different jobs. You can take the interview for that option 2, and you can push for doing option 1 instead. What will end up happening is that they employ someone else.

(I speak as someone who spent a couple of months in the mid-2000s mostly writing a partial XPath implementation in Flash ActionScript, to be used in a single product to interface with one other 3rd-party system. It was a stupid idea from day 1 and definitely not mine, yet that was the job! The insanity is not new.)


> Option 1 can be done by one sensible person with intermediate PHP/Python knowledge, and maintained fairly easily and cheaply forever.

This is a vast underestimate of the amount of knowledge it takes to do option 1 well, for a project of any complexity. Doing the simple thing well is hard. Indeed, one of the reasons that option 2 has skyrocketed in popularity is because a larger number of mediocre devs can "get the job done" using that method.

> They're different jobs. You can take the interview for that option 2

What you are saying is that there is a lot of non-technical pressure to do option 2. I couldn't agree more. It doesn't make it any less absurd.


I agree to an extent, but a person who's managed to end up doing the simple thing (option 1) has a superpower! They're able to say "no". That makes simplicity much much more achievable. If you can't refuse to keep adding crap then yes, simplicity is very very hard.

I honestly think most people who are currently doing an option 2, and are capable of some restraint, would find it easier to do option 1. And if they were allowed, and if it paid as well or better than option 2, and if their career prospects were improved by doing it, they would.

Absurd is the right word, but that's just the world we live in. Why am I paid more than a doctor? Madness.


Arguably, option 2 soft-caps their technical progression because their day is spent untangling things, keeping up with dependency updates, and other makework that can spawn from not corralling complexity. However, in some sense, they have consciously chosen this approach, and associated with like-minded people.

Using a big brush here, but webdev has a LOT of this “just throw tons of libs at it and hope for the best.” I ran away screaming once I got a taste of that. It is antithetical to what I want out of my career. And it seems to breed a lack of rigor, exemplified by how many people complain of “slow tests” but refuse to refactor in such a way to make them faster. IOW, you don’t want fast tests as much as you want to bitch on Twitter.

Plus, long-term I believe option 2 is somewhat dangerous: there’s a huge pool of applicants looking to employ the same approach. Why pay a premium for those skills? What’s your differentiator?


Well said.

> Using a big brush here, but webdev has a LOT of this “just throw tons of libs at it and hope for the best.”

My experience matches this. And not only that, many of them will passionately argue in favor of this approach, under mantras like "I ship" and "it gets the job done", and dismiss attempts at corralling complexity as fussy and wasteful.


Yep. I feel like I’ve watched the web lay waste to any sort of collective technical aesthetic. It’s a combination of huge numbers of new devs entering, SEO, and everyone chasing the short tail of devs who are 1-5 years into their career.

It almost feels like it is a bit gauche to actually talk about programming itself. Instead, you’re supposed to be talking about microservices vs monoliths, k8s, monorepos, or some other inanity. Though this is not exclusive to devs: guitarists write books on gear and then a few sentences on actual technique in most forums. Something about our platforms seems to actively resist deep content.

It’s been pretty alienating for me. I work with other technologists in R&D and that is great, but when I glance at industry, I see a ton of self-inflicted wounds paired with denial that there’s even a problem.


You're just talking about a completely different thing than the article, and many of the commenters. You're talking about a simple problem. The rest of us are talking about simple solutions. The ability to come up with a simple solution to a complex problem is possibly the most valuable skill a human can possess, so of course they should be paying us for it.

Bad developers write complex solutions for simple problems.

Good developers write complex solutions for complex problems.

Great developers write simple solutions for complex problems.


I think one of us has read the post.

Points noted:

* "17 javascript trackers"

* "sacrificing complexity"

* "control how a checkbox animates when you check it"

* "adding an abstraction with thousands of lines of codes"

* "whether you are able to remove things you have added"

* "add even more stuff to fix it, or is it to remove and live with the loss?"

* "learning to say no"

It's not about simple solutions to complex problems. It's about avoiding complex problems and doing something simpler instead.


17 javascript trackers is a solution to a problem, not a problem in itself. You don't need 17 javascript trackers to creep on your users. In fact you don't need to creep on your users at all. So the real problem is the simplest possible problem: there's no problem at all. So the 17 javascript trackers are 100% accidental complexity in the solution to a nonexistent problem.

Controlling how a checkbox animates when you check it is also a solution, not a problem. Problems are things like: users don't know whether they've selected this value or not. Users think our application is boring. Users are completing their tasks too quickly and we need to find ways to annoy them and destroy their productivity. All of those are the possible problems that micromanaging a checkbox's animation might solve. There are almost certainly simpler ways to solve those problems.

Adding an abstraction with thousands of lines of code is, obviously, not a problem, it's a (probably poor) solution. I don't understand why I need to explain this one.

The rest of it is suggestions on how to let go of your overly complicated solutions and find more simple ones.

The article, and most commenters, are basically assuming a fixed, relatively complex problem. Your point seems to be that experienced professionals should not work on simple problems. Okay? And you should wear your seatbelt while in a car. That has nothing to do with finding simple solutions to a fixed problem.


No, 17 javascript trackers IS a problem - because somebody in your org or your client's org wants them there, and somebody else has accepted that request and is taking money for it and thus it has become your task.

A "simple solution to the problem" could be to abstract away the tracking code into a system of well documented and typed generic internal events with a plugin architecture. Rather than scatter hundreds of tracking calls everywhere you just need one in each place. Then write a plugin for each of the damn trackers to translate those internal events into whatever is required for each tracker.

Getting rid of all the trackers is avoiding the problem entirely. I agree, that's what we should do! But it's not a valid answer when your boss comes to you and says "now we're adding 17 javascript trackers, and yes we do have to do it".

The checkbox animation is your problem, as the developer, because the designer has decided that's what should be done and has already sold the concept to the customer. The designer is an idiot, but an elegant technical solution to the task of animating the damn checkbox is not "I've decided not to do that because you guys are idiots". Not doing it is just doing something simpler instead.


"Conservation of complexity" is a useful concept in my experience. You can shuffle the complexity around, but usually you can't eliminate it.

In practice, most software solutions I see that emphasize "simplicity" are really putting the complexity elsewhere, typically on the user. More complex solutions have more features because the user needs to get stuff done. The simple solution doesn't have those features, so the user now needs to either do more manual work or rely on additional tools. The simpler tool has made the user's life more complex.

My pet theory (that I just made up) is that what you really want to do is start with something ugly that gets at the real shape of the solution, and then simplify it over time. So "simplicity" is a refinement process like moving from the rough cut of a sculpture to the finished version. But I'm skeptical of simplicity as an up-front design requirement. If you value simplicity so highly that you're never willing to have ugly complexity, then you miss so much of the design search space that you're unlikely to find a good local maximum.


There’s accidental complexity and essential complexity. The former can in principle be eliminated.

The web tech stack is brimming with accidental complexity.


That's a Fred Brooks distinction, and not one that's (AFAIK) used outside of software consulting literature. In his essay, he basically uses "accidental complexity" to mean complexity from the past and "essential complexity" to mean "current complexity". He also says all software complexity is essential:

> The complexity of software is an essential property, not an accidental one. Hence descriptions of a software entity that abstract away its complexity often abstract away its essence.

This is pretty close to what I was saying.

There are, of course, more standard notions like Kolmogorov complexity. You could argue that the essential complexity of something is its Kolmogorov complexity. But I don't think anyone means that when they talk about essential complexity.


I’m reminded of Alan Kay’s observation that software is a pop-culture.

Simplicity is great, how can we combine it with the 17 other frameworks we saw on HN this week and have to use?


Simplicity has many fathers while complexity is an orphan. No?

This just seems like a hopeless thing to discuss because everyone says that they want simplicity (or at least: not complexity) but then nobody will admit to being the root cause of some specific kind of complexity; if someone has made something “complex” then it was because they needed something to work with something else that was complex (pragmatism).


The curse of software "engineering". Well, I don't think you can call our discipline engineering at all.

If buildings were built the same way modern software gets built, we would never have the Burj Khalifa, the Shanghai Tower or the Tokyo Skytree. We'd probably still be living in holes in the ground, because those are the only fucking 'buildings' we'd be able to build.


I think this article confuses "simplicity" with "less code". Often when we talk about simplicity, it's about the overall conceptual design, and on the note of dependencies, it's often _more_ simple to use something that does the job and brings in a few more dependencies than it is to aggressively prune and end up with a larger maintenance burden.


You might be confusing ease with simplicity. It's /easier/ to use something that pulls in more dependencies rather than write that bit myself, but the complexity of my software goes up. It's just entropy.


End users often do want simplicity but face a choice between products all optimised to extract value, which is different from giving them what they want. Sometimes that optimised path is simplicity, especially if it’s a new class of product. The result is delightful but rarely lasts as the market matures.


Simplicity usually has a large tradeoff. Complexity tends to mean "we accept a greater set of responsibilities" than the simple version. You might get 66% of the power with the simple version. Which could be a great value if it means you need 50% of the staff, expenses, etc. But you have to be OK with being handicapped in exchange. You will have to say "We can't do that," or "We have to tell the customer 'no'", which will never fly on the business side, and often not with engineering.

More often than not I think it boils down to this. Saying no to growth, basically. Which might even be the right choice sometimes. And sure, sometimes people just over-engineer things completely unnecessarily. But that's not why there is an enduring problem of choosing complexity over simplicity.


I think there's different contexts around simplicity and what your goals are.

For example, yesterday I was learning about VPNs in more detail and got WireGuard installed on Debian. It took 3 hours to get it working with no knowledge of WireGuard or how VPNs really work beforehand. Then I came across PiVPN, a user-friendly front-end for WireGuard (or OpenVPN) that works with or without a Raspberry Pi, and I got that working in literally 5 minutes.

I'm happy I took the 3 hours to better understand how all of the pieces come together but I'm way happier using PiVPN in the end because the interface to use it almost can't be easier given how much complexity it's hiding away.


There’s a difference between complexity that’s inherent to the problem, and complexity that’s added by developers who have drunk the architectural Kool-Aid.

This is an example where all of the complexity is caused by rigid adherence to the most popular architectural patterns of about 10 years ago.

https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...

It looks completely ridiculous to modern eyes, but during peak OOP it was just how you should do it.

If you like simplicity then your fizz buzz implementation would be a few lines.
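
For example, a sketch of the few-line version (TypeScript here, but it looks much the same in most languages):

    // FizzBuzz without factories, strategies or visitors.
    for (let i = 1; i <= 100; i++) {
      let out = "";
      if (i % 3 === 0) out += "Fizz";
      if (i % 5 === 0) out += "Buzz";
      console.log(out || String(i));
    }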


What? No, this was always supposed to be a parody. Did you find this 10yrs ago and think it was serious?


I’ve worked on code bases like this. It is a parody, but it’s not too far from reality.

Reading my original post, I probably shouldn’t have said it’s how you should do it. I mean that lots of engineers created loads of pointless complexity with OOP.


Is there really a complex version of that algorithm?

Or perhaps that's the joke?


The problem with the web is the browser, probably the most complex project humans maintain.

I chose to make my own native app (that uses HTTP too) and I can now see these same complexities mount.

The trick is to say no and stop features.


> …when a problem arises with some of the complexity you’ve added. Is your first instinct to add even more stuff to fix it, or is it to remove and live with the loss?

This is a double-edged sword. Removing is generally dangerous and can cause things to break, while adding is generally safer… which might explain the growing complexity in most software systems.

I suppose engineers need to be brave and ruthless to remove code and fight growing complexity…


IMHO the problem is that people don't realize the cost of complexity in terms of labor and cognitive load. Complexity is extremely expensive and the cost tends to increase exponentially rather than linearly. It also greatly increases the odds of bugs, vulnerabilities, and other defects.


A bit ironic that the same quote of Dijkstra appears on both comment sections. He is also infamous for:

>[...] software engineering has accepted as its charter 'How to program if you cannot.'" [0] (1988)

The context is that CS has to tackle two "radical novelties" which aren't reflected accordingly in real world applications and education:

(1) Deep conceptual hierarchies:

>From a bit to a few hundred megabytes, from a microsecond to half an hour of computing confronts us with the completely baffling ratio of 10^9! The programmer is in the unique position that his is the only discipline and profession in which such a gigantic ratio, which totally baffles our imagination, has to be bridged by a single technology. He has to be able to think in terms of conceptual hierarchies that are much deeper than a single mind ever needed to face before. Compared to that number of semantic levels, the average mathematical theory is almost flat. [EWD1036-7]

(2) Large-scale digital device:

>It is possible, and even tempting, to view a program as an abstract mechanism, as a device of some sort. To do so, however, is highly dangerous: the analogy is too shallow because a program is, as a mechanism, totally different from all the familiar analogue devices we grew up with. Like all digital encoded information, it has unavoidably the uncomfortable property that the smallest possible perturbations - i.e. changes of a single bit - can have the most drastic consequences. [For the sake of completeness I add that the picture is not essentially changed by the introduction of redundancy or error correction.] In the discrete world of computing, there is no meaningful metric in which "small" changes and "small" effects go hand in hand, and there never will be. [EWD1036-9]

[0] https://www.cs.utexas.edu/users/EWD/ewd10xx/EWD1036.PDF [EWD1036-10]


> For another, the ability to control how a checkbox animates when you check it is of course a valid reason to add another 50 packages and 3 layers of frameworks to their product

To be fair, this sort of complexity usually results from "UX" requirements. Just think about what it takes to materialize those damn Figma designs. Google's Material Design is not a simple "skin".

When it comes to business applications (forms, reports, requirements strongly influenced by end-users and evolving in ways hard to predict), my most successful experiences consisted of multi-page apps + vanilla js for validation.

My last project in this field failed though. The client was infected by the Figma virus, and I failed to reconcile this methodology with the UI fluff they so fiercely demanded.


Simplicity sucks. Functionality rules.

Reliability: I don't want a single-host EC2 instance, I want a multi-AZ Kubernetes cluster.

Security: I don't want an EC2 instance behind a firewall, I want a zero-trust architecture.

Performance: For an auth service, I don't want a boring Spring Boot app, I want a performant Rust server.


Complexity and simplicity have a duality. Cars of sixty years ago were much simpler, but required more complexity to own and operate. More skill, more discipline, more maintenance. New cars are FAR more complex, and are far simpler to operate.


Though I've found that I'm far more of a digital minimalist than most people, I don't entirely agree that complexity == slow.

Today's web technology is perfectly capable of doing very dynamic and complex things without being slow. The reason for slowness, while entangled with complexity, is almost always founded upon poor decision making; such poor decisions are usually the result of bad leadership, inexperience, laziness, programmer dogma, and of course the business putting too much pressure on developers to rush things out. What you get out of these conditions is an app that "delights" the user, but takes several seconds to merely load because someone thought it was a great idea to load all the user's records on the frontend at once, and is laggy because a crapload of elements are being inserted even when they aren't visible. As far as the dynamics of the web app, lagginess or weird behavior is often caused by the developer not actually understanding the runloop or the process that leads to painting things on the screen (ex. how merely observing a property of the DOM or changing something seemingly innocuous can cause a redraw).

> Of course we all claim to hate complexity, but it’s actually just complexity added by other people that we hate

Yes and no.

Of course we prefer complexity we intimately understand to complexity we don't understand that well.

That doesn't mean all complexity we don't like is just a matter of "RTFM, bro" or "hit the ground running, bro." For example, I've come to believe that most OOP principles lead to unnecessary complexity most of the time. When I see complexity caused by OOP, it's not that I don't understand it because someone else wrote it, but because it's a pattern that I believe adds more complexity than is necessary. I know this because I've gone back to code I've written using OOP and immediately became confused, whereas my more functional code is usually easier to follow and has fewer gotchas. It's a matter of taste, as many people prefer OOP, but it doesn't just boil down to someone else having written the code.


I'm not sure what 'simplicity' really is. If I program in assembly language I don't need a huge compiler to generate the executable, so the system is simpler; but for the programmer it's also more complex, because assembly languages are complex and any mistake is really hard to find.

Same for APIs, and the worse-is-better debate: is an OS which retries automatically in case of interrupts simpler for the users? For most users yes, but for those who care about low latency, no!


Simplicity is the utmost sophistication. But nobody notices your hard work.

To keep your job secure and well paid, the better strategy is to introduce tons of complexity into the system, which looks good to those who aren't maintaining it.


Wow. Most comments in this thread get it completely wrong. The article is about simplicity of implementation, not simplicity of features.

Given two different implementations of software that give users the exact same features, you will often find that one implementation is simpler than the other: fewer libraries, a simpler build system, less code, etc. That's the kind of simplicity we should strive for. Not software with fewer features.


Complexity and simplicity are... Complex. Brainfuck is simple in its own design but using it to do anything in particular is not.

Some Java code bases are borderline patronising in their apparent simplicity but are actually built on many layers of complexity which is then hidden.

So when you say you want "simplicity" I have to ask what you mean by that. Where do you want simplicity, and where can what you think of as simplicity show up as complexity elsewhere? I think there are more nuanced trade-offs here than unqualified notions of "simplicity" and "complexity" can capture.


The main issue is that when you start a new project almost any approach is "simple enough". The problem is that an approach that is good long term will look like overkill complexity in the first stages of a project. My own creations, SwarmESB and the swarm communication idea, are a good example... they could make microservice systems simpler, but only after a certain level of complexity, and by then it is too late to invest in a new approach. Therefore such technologies don't get adopted; instead other complex workarounds become successful.


Survivorship bias. The current set of people who elect to work on web frameworks don’t want simplicity.

Everyone else does, including the people who never entered the field because it is now incomprehensible.


One line that demonstrates the agony and the ecstasy of simplicity: $ man ls | grep "\-\w," | wc -l

On the one hand, beautiful, recursive composability of small, universal tools.

On the other hand, the tool we're introspecting is 'ls'. If we truly valued simplicity, ls would have no flags - you would pass its output to awk to extract the data you want.

A more thoughtful writeup: https://danluu.com/cli-complexity/


What is simplicity? Each micro-service in and of itself is simple, but the network of message passing is complex. OK. So a monolith is simple, until it’s been maintained for a decade or more and has been patched and re-patched by generations of engineers who never really had the ability to maintain the original simplicity in the face of feature creep and practical development issues. OK. Maybe the problem isn’t simple. Or worse, there is a simple and easy-to-understand general solution that makes “everyone else” think they can do better, but the reality is that the problem is chock full of edge cases and special considerations, leading to endless criticism and “bike shed”-ing.

An aside about what engineers want: I do feel that the desire for a “simple” or “elegant” solution to problems is as outdated as The Enlightenment itself. We seem to want to force order onto a disorganized world and exalt and marvel at the rare situation where it actually works out that way. But it seldom does work that way.

I can understand that the article seems to be focused on “unnecessary complexity” introduced by large frameworks for creating seemingly simple websites… but this sounds a lot like folks complaining that Java code is less efficient and/or larger than assembly. Of course you can write “simple” code for your project, but there are many concerns that engineers hold in high regard beyond simplicity: correctness, reproducibility, maintenance, to name a few. And we cannot call ourselves engineers if we don’t adhere to the most basic engineering principle of “standards”. That last one especially rings true for me: if every problem requires a scientific exploration then we’re not doing engineering work, we’re doing research. So the use of frameworks, even “bloated” ones, brings us much closer to standardization than one-off boutique, yet (possibly) simple, solutions.

Anyway, back to complexity: a perfectly fine website may require a 30-second load time… try walking to a store and finding the good you want and standing in the checkout line and missing your bus and having to carry the thing all the way back home and opening the box to find out that it’s broken and hauling it back to the store and then they tell you that that was the only one and then having to wait 4-8 weeks for a new one to be ordered and delivered and repeat the process all over again. Perspective is important on what “complexity” even is.


Simplicity is like performance or good design. It's valuable but it takes a disproportionate amount of effort. But it's worth it regardless if you want your product to become hugely successful because software popularity follows a power law curve.

It can take 5x longer to make a product 20% simpler and 30% faster. But that might result in your product becoming 10x more popular. Power law dynamics are very counter-intuitive that way.


Somewhat related: cookie banners. What value do those optional cookies really generate, and is it enough to be worth annoying all of your visitors? After all, if they had only "essential" cookies, they could skip the banners.

Do businesses ever consider the cost of maintaining and serving huge, dynamic web pages, when simpler ones would do? What savings in hardware (or cloud services) and staff time would be possible?


> The reason that modern web development is swamped with complexity is that no one really wants things to be simple. We just think we do, while our choices prove otherwise.

Tangential, but I've repeatedly found this to be a very useful and generalizable insight.

If you want to know what's really important to someone (including yourself!), ignore what they say and focus on what they actually do (or don't do).


The Ineos Grenadier SUV/Truck is about to put this to the test in the automotive space

https://www.thedrive.com/car-reviews/2023-ineos-grenadier-pr...

Do people want straight axles, reliable engines, and big physical knobs and dials?


One part of this is that everyone wants something that is bespoke. They want their system to be unique and custom, which inherently adds complexity. I've worked on lots of projects where a customer likes what we're doing but has a specific way they want the system to work. This leads to a lot of the complexity in products I've worked on.


> We all claim to hate complexity, but it’s actually just complexity added by other people that we hate — our own bugbears are always exempted, and for things we understand we quickly become unable to even see there is a potential problem for other people


This x5 for security.

As someone in software security, I tell people I'm the fourth or fifth priority for any product. Function, Speed, and Simplicity are all ahead of me on the priority totem pole. I get it, but it's frustrating.


Simplicity is a word with a lot of meanings. In the OP I'm witnessing a million poor souls trading low time-to-market and immediacy for users against years of maintenance on a few thousand moving parts.


Look up Sunk Cost Fallacy.

No, you can't remove all that code. Think of all the person-months we sunk into it!

There's almost nothing harder, organizationally, than taking something out once it's in.


Sometimes tools are deceptively simple. Small examples magnify benefits. However, in the large, problems become intractable. But the sunk cost fallacy is there to prevent change.


The Apple ecosystem is somewhat insulated from this logic, though I find myself wanting more complexity than I am offered, particularly from Apple's cloud services.


I want it, though.


So what are you going to cut from your current software project then? Or was this a "I want this, but my manager won't let me" type of post? :)


At work? None. I have no attachment to my work apart from the business perspective. Outside of work, I don't even have a personal computer anymore.


I do think simplicity is good, but it is interesting how complex nature is while working so well.


Two major problems with this:

1. It's a very naive idea that our actions can always be construed as evidence of what we want. The fact is, humans aren't purely rational beings, and we don't always do things that lead to what we want. Particularly, short-term dopamine-rush type desires can override our more fundamental wants and even needs. The person who wants to lose weight gives in to a short-term craving for a cookie, the guy who wants a relationship gives in to a short-term fear of rejection and doesn't approach the girl, the person who wants to save money buys the cigarettes to deal with their stress. The idea that anyone is immune to this is arrogance, and it's uncompassionate to not allow for this in our understanding of others.

2. Simplicity is defined in a lot of ways, many of which don't make much sense. For example:

> A lot of developers want simplicity in the same way that a lot of clients claim they want a fast website. You respond “OK, so we can remove some of these 17 Javascript trackers and other bloat that’s making your website horribly slow?” – no, apparently those are all critical business functionality.

This is assuming simple == fast, but that's not in evidence. One might plausibly improve speed here by loading the 17 Javascript trackers asynchronously after the visible parts of the page have loaded, thereby creating the visual impression that the site is faster, which is likely what the client means when they "claim they want a fast website". By any reasonable definition this is more complex, but it's faster by the definition the client is likely using.
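
As a sketch of what that could look like (the tracker URLs below are placeholders, not real vendors):

    // Inject third-party tracker scripts only after the page has finished
    // loading, so they don't compete with the content the user came for.
    const TRACKER_URLS = [
      "https://analytics.example/t1.js", // placeholder
      "https://ads.example/t2.js",       // placeholder
    ];

    function loadTrackers(): void {
      for (const src of TRACKER_URLS) {
        const script = document.createElement("script");
        script.src = src;
        script.async = true;
        document.body.appendChild(script);
      }
    }

    // Wait until the page (including images and CSS) has loaded.
    if (document.readyState === "complete") {
      loadTrackers();
    } else {
      window.addEventListener("load", loadTrackers);
    }

Whether this counts as "simpler" is exactly the point of contention: it's more code and more moving parts, but the page feels faster to the client.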

In optimization this is super common: quicksort is faster but more complex than bubblesort, hash tables are faster but more complex than lists of key/value pairs, etc. "Fast" is a really poor approximation of "simple" in this context.

Really this is an example of really poor listening on the part of the author. They're not taking the time to understand what the client wants to be faster, and are instead incorrectly telling the client that they need to remove core business functionality in order to do that[1].

[1] As a side note, I've managed my entire career to avoid working for companies that have 17 Javascript trackers installed, and in all likelihood you can too. If you don't like working on sites that have trackers installed, then I think it's more mature to be honest about that with both yourself and your clients, rather than setting up artificial roadblocks like "this can't be fast because of trackers" which is passive-aggressive if you're saying it just because you don't like the trackers.


Obligatory Worse Is Better Link for those who haven't seen it: https://dreamsongs.com/WorseIsBetter.html

Sounds like Richard P. Gabriel never really figured out an "ideal strategy" after decades of trying.

Simplicity may be the kind of thing that feels close but is ultimately impossible due to varying tastes, interests/priorities. You can get far within little bubbles of coherent taste/interest, but then as the bubble expands, entropy (aka some value diversity metric) breaks things on you.. Like a kibbutz vs. capitalism. You're always tempted to think you can have just a little bigger tent/bubble if only...


htmx for the win.


I really wonder why htmx hasn't caught on. Everyone should be using it at this point.


Oh, it’s catching on


I cannot speak to modern Web development, as I only get a glimpse over the shoulder of someone else doing it every once in a while. But I see a similar problem with development environment configuration and management. It's common to have multiple configuration files for multiple utilities that are supposed to do something in the development environment: run tests, lint source code, spawn a mock-up of the system being developed, perhaps some VCS automation, and often all sorts of integrations with other tools used in the project.

Projects become cluttered with various "dot-files" with various configuration formats, and because those tools need to cooperate somehow, you also see bits of configuration for one tool embedded (while escaped) in configuration for another tool, and sometimes even multiple layers of escaping.

There are usually straight-forward alternatives that simplify the process, make it more uniform. My explanation for why those alternatives aren't used is vanity. Users of projects with unwieldy configurations pride themselves on knowing and using a bunch of (unnecessary) tools, they like to appear savvy to their peers by discussing advantages of and new developments in these tools. And because the explanation is so simple and so unflattering, few will dare to acknowledge it.

It's also usually the case that one would need at least some years of experience to accumulate all the useless knowledge of development infrastructure tools, and so these "opinions" about tools will often come from more senior programmers in the group, often driving the junior programmers to unquestionably accept the unnecessary complexity of their environments as status quo and come up with all sorts of rationalizations for why it's normal to have this many cross-referencing configuration files and various infra tools.

I've lost count of arguments I've lost when suggesting a simpler automation solution which didn't use (an unnecessary) tool. Here's just one example:

A project aims to deploy a Python application with external dependencies (packages) in containers. Using pip to install stuff in containers was problematic because it took forever (due to solving dependency constraints, downloading from PyPI, installing a bunch of stuff irrelevant to the target application such as scripts or tests for dependency packages, and, finally, poor network utilization). I suggested having a CI job which pulled vetted versions of packages on demand, stripped those packages of unneeded cruft, put them in a tarball, and used that for building containers.

You had to watch the guy who was a level above me in work hierarchy and was somehow responsible for CI get red in the face and start making saliva bubbles as he was getting ready to destroy and ridicule my "insane" idea... all because it didn't use pip.


If you can’t see it, is it real? The boss is probably right here: simplicity is a mirage; it takes a decade to move the needle (like in science), and this blogger’s perspective is thoughts and prayers. I say this as an entrepreneur who has dedicated an actual decade of my life to trying to move this needle.


I don't agree with this at all. It's some kind of eat-your-own-dog-food thinking. These web complexities exist because the web plus browsers is simple as a host platform. The groups creating the complexity make it their job. The web is still simple. A lot of these complex web things don't exist in C/Java because the people who create them aren't clever enough, or are too lazy, to go to an actually complex platform to honeypot devs. Their business model would fail.



