
> As long as the abstraction layer works well for you without getting too much into the details of the implementation, it's a simple solution.

But this is where the engineering intuition has to come in. "As long as you will not end up spending more time debugging the system than implementing it" is an equivalent statement -- and that requires predicting the future. If I'm going to spend hours staring at signals on a 'scope to debug the system, I'd way rather they be RS-485 than 10BASE-T1, for reasons of simplicity -- but I don't know, today, if I will or not.

Layering works /great/ during implementation. Layering is a strong impediment to understanding during testing and debugging. Debugging a system efficiently requires being able to bridge between layers of the system and see where your assumptions break down. And once you're going between those layers, you're exposed to the complexity within them.

So: simplicity in implementation, or simplicity in debugging?




Then comes the engineering maxim that you can only componentize things that have standardized features and quality.

Software engineering draws the short straw, because there's a strong force stopping standardization and pushing components toward a single implementation. It then becomes a judgement of trust, not of requirements satisfaction.


I like to use SMBC's take[1] on the "Watchmaker Analogy" - complexity comes from, in order:

(1) number of things interacting

(2) complexity of interaction

(3) complexity of thing

So simplicity is then an inversion of that. You can "maximize simplicity" by:

(1) minimizing the number of things

(2) minimizing the complexity of interaction

(3) minimizing the complexity of each thing

This ends up reinventing many of the things you find elsewhere (think SOLID, the same-level-of-abstraction principle, etc.), although I also generally find it's the first one - the most important one - that gets fucked up the most (one example: "type explosions", when you end up with a bazillion slightly different types).
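
As a contrived sketch of what I mean by a "type explosion" (all names invented here), compare a pile of near-duplicate types with one composable one:

    // A pile of slightly different types: every caller has to know which
    // variant it holds, and conversions between them multiply interactions.
    interface UserSummary { id: string; name: string }
    interface UserDetail { id: string; name: string; email: string }
    interface UserWithPosts { id: string; name: string; posts: string[] }
    interface AdminUserDetail { id: string; name: string; email: string; role: "admin" }

    // One thing with optional facets: fewer interacting types overall.
    interface User {
      id: string;
      name: string;
      email?: string;
      posts?: string[];
      role?: "admin" | "member";
    }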

Also, on a broader level, there really do seem to be two kinds of systems: Engineered systems, which (notionally) attempt to minimize those things, and "evolved" systems, which somewhat maximize them - both economies and ecologies have (1) many different interacting things, (2) with complex interactions, and (3) which are themselves complex.

You're right that it's an intuitive sense, but I do think the right advice and perspectives can give you a leg up on learning and applying that sense.

[1] https://www.smbc-comics.com/?id=2344


What's interesting in Rich Hickey's video is that he talks about prioritizing minimizing what each thing does over minimizing the number of things (which you can ignore anyway).

Having more things doesn't make systems more complex in itself if they can be combined differently as requirements change.


I agree and disagree! That talk is a favorite - and it's why I say "number of interacting things".

If we're weaving together three strands (basic braid), that's fine - we've got three interactions. If we take that braid and two more and weave them together, IMHO we're only adding three more interactions (now we're at 6), but if we take all nine original strands and weave them all together, we're up to, what... at least 72 "interactions" (each of the 9 has an interaction with 8 others), and that's before asking if any of the "interactions" themselves become "interacting things" (and then we get a combinatorial explosion).

If instead we take those nine, and, say, braid three together for a bit, then swap one strand out for another, braid for a bit, repeat until we've gone through all nine - each strand is interacting with, hmm... 4 others? (two, then a new one, then a second new one) So then that's "36".
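
To make that counting concrete, here's a rough sketch (counting unordered pairs of strands; the 72 above counts each direction separately):

    // Unordered interaction pairs among n mutually woven strands:
    const pairs = (n: number): number => (n * (n - 1)) / 2;

    console.log(pairs(3)); // 3  -- one three-strand braid
    console.log(pairs(9)); // 36 -- all nine strands woven together
                           // (9 * 8 = 72 if each direction counts separately)
    console.log((9 * 4) / 2); // 18 -- the swap strategy caps each strand at
                              // ~4 partners, or "36" counted directionally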

It's not really a precise measurement, but I do find it a useful question both when investigating a system and when designing one: "how many things are interacting, and how can I reduce that?" (systemic complexity), followed by "how can I simplify the interactions themselves?" (abstraction leakage), followed by "how can I simplify the things?" (cleaning up well-encapsulated code).

A practical example: If I want to create a test factory for an object, how many other related objects must I create for that first one to exist in a valid state?
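
For instance (a hypothetical sketch, all names invented), a factory where one valid Invoice drags three more objects into existence:

    // Four things must interact before a single Invoice is valid --
    // a rough proxy for how many things in the system interact.
    interface Plan { id: string; pricePerMonth: number }
    interface Account { id: string; plan: Plan }
    interface Customer { id: string; account: Account }
    interface Invoice { id: string; customer: Customer; amountDue: number }

    function makeInvoice(): Invoice {
      const plan: Plan = { id: "plan-1", pricePerMonth: 10 };
      const account: Account = { id: "acct-1", plan };
      const customer: Customer = { id: "cust-1", account };
      return { id: "inv-1", customer, amountDue: plan.pricePerMonth };
    }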

A practical application: I can get away with complexity in well-encapsulated code, because it's easy to come back to and fix; I won't have to modify anything "outside". But I can't get away with complexity between things, because then in order to come back and fix it, I have to deal with chunks of the entire system.


> If we take that braid and two more and weave them together, IMHO we're only adding three more interactions (now we're at 6), but if we take all nine original strands and weave them all together, we're up to, what... at least 72 "interactions" (each of the 9 has an interaction with 8 others), and that's before asking if any of the "interactions" themselves become "interacting things" (and then we get a combinatorial explosion).

You're totally right about that.

But the huge mistake I made just recently was to create a very simple interface that hides lots of different features behind a few elegant flags. Although it's a super tiny interface that's easy to understand, the interactions became very complex.

Instead of using my library, people started creating another one that does just one thing, and can't take advantage of my hard work even if they wanted to.

Had I created 10 totally independent components that use the same basic data structures (with a bigger total API surface), people could have used just the 2-3 they need in their own system, and would have been able to understand (and even report / fix / debug) the interactions.

And actually everybody wants something a bit different; nobody really wants all nine features.
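
The contrast looks something like this (a hypothetical sketch, invented names):

    // One "elegant" entry point whose flags interact: every caller has to
    // reason about the combinations and the internal order of operations.
    function processFeed(
      data: string[],
      { dedupe = false, sort = false, limit = Infinity } = {}
    ): string[] {
      let out = dedupe ? [...new Set(data)] : [...data];
      if (sort) out.sort();
      return out.slice(0, limit);
    }

    // Versus small independent components over the same basic data
    // structure -- people take just the 2-3 they need and compose them:
    const dedupe = (xs: string[]): string[] => [...new Set(xs)];
    const sorted = (xs: string[]): string[] => [...xs].sort();
    const take = (n: number) => (xs: string[]): string[] => xs.slice(0, n);

    const result = take(10)(sorted(dedupe(["b", "a", "b"])));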

This experience is what resonates with me right now when listening to the video.


Yup yup! It's like Asimov's Three Laws; you want to end up with a balance between the principles, and the "more important" ones just get higher weighting. It's totally possible (and common, I'd say) for the "weight" of the third one (complexity of the things) to reach the point where it's better to shift the complexity onto the "number of things".

(actually, on that note, a piece of my life philosophy is to have "opposing principles", since it's only through forces in opposition that balance is possible).


easy =/= simple

While I get Rich's epistemological framing -- composing with coherent, independent units -- "embracing" certainly does not ipso facto imply 'complex'. As a matter of fact, that line of thinking smells like a tautology.

Let's assume that if X is complex in one embodiment (say, as software), its analog will also be complex in the mapped domain. The most common occurrence of this is when we describe a system. As it happens, our brains are much, much better at assessing language constructs than material constructs. Simply describing system X will go a long way toward gauging its complexity. A comparative description will make it crystal clear.

p.s.

Fully embracing simplicity:

https://architizer.com/blog/inspiration/industry/japanese-ar...

Description: Traditional Japanese joinery is made entirely without the use of metal fasteners or adhesives, relying on compression forces and friction of interleaving pieces.


The question is about simplicity of core implementation.

Easy debugging is a different goal, and you can have an infinity of such additional goals, but a solution obviously can't be equally simple at everything at the same time (because of conflicting responsibilities).


In my experience, most of the complexity doesn't come from adding stuff (where intuition is the only thing you have, and this rule doesn't help), but from removing/refactoring stuff, or the failure to do so.

A recent well-known example is Elon Musk removing a lot of services at Twitter that were built up over the years. Every addition probably improved the system's functionality, but the more complex a codebase gets, the harder it is to change its separate pieces (by the definition of complexity).

I believe buying Twitter was a big business mistake for him (especially as Tesla is getting competitors, like BYD growing by 100% a year), but removing services in itself probably makes the code more manageable for a smaller team.


Seeing as how the service has been way more buggy and unreliable since then...

If I compared it to a monkey with a wrench in a server room, I'd be doing the monkey a disservice.



