The Efficiency-Destroying Magic of Tidying Up (florentcrivello.com)
594 points by cryptozeus on Nov 17, 2019 | 190 comments



> When computers design things, they look very different.

Yes, because the computer assumes they're not going to change. The "tensile structure" looks cool now, but throw it in the back of a truck for 3 months and see if it's still algorithmically perfect. Parts get beat up. Tabs get bent a little. Maybe we'll want to grind off one of those tabs that we're not using because it's in the way, or weld on a new one. With the old structure, I can see which parts are relevant to maintaining the integrity of the system.

It looks like an un-optimized binary, which is very close to its source code. That's a feature. I can reason about the parts on their own.

The "topologically optimized" one is like an optimized binary. It's great for saving a few grams as long as you never have to change anything. The downside is that it's impossible to reason about. If one tab gets a little bent, that may be harmless, or it may cause the entire structure to lose its strength. You don't know.

Similarly, the truss on the catwalk over the crazy-looking Wendelstein 7-X is made of nice even rectangles and right triangles, and I guarantee that's not because it's a topologically optimal shape for this truss.


There are ways to make such "organic" structures more resilient. Life is one huge proof of that. One of the reasons biologists and medical researchers have so much trouble figuring out how anything works in living organisms is that in biology, there are very few clear boundaries; every process is mixed up with a lot of other ones. And yet the final result is incredibly resilient.


Sure, but there are some downsides to that. If a part in my car fails, the mechanic pulls it out and bolts in a new one without much consideration for the rest of the system. If a part of my body fails, there's a slim chance I might be able to get a replacement from another person, and I'll have to be on immunosuppressants for life, and very expensive people will be needed to perform the installation, and things often go wrong even so.

The house I'm living in has been around for 30+ years, but in the last 5 years, I've witnessed several trees in the yard die.

I guess what I'm saying is that there's a trade-off between easy to understand and fix (orderly) and resilient (biological), and even then I see a lot of orderly things outlast the chaotic systems.


A good point, and cars are a perfect example. Older cars tend to be less efficient than new ones, but there was a point in time not too long ago when you could repair anything in your car with a welder, some elbow grease, and a free afternoon. New cars of today tend to require somewhat expensive specialists to fix, and I've been hearing that the future is in cars that refuse to start unless the ECU can cryptographically verify that each and every part is authentic - that's like an immune system of sorts, except one that isn't protecting your car, but the manufacturer's ability to take your money. Unfortunately, it seems that more and more complex products are heading this way.


> New cars of today tend to require somewhat expensive specialists to fix

Not for any practical reason. The tolerances on some parts are tighter, but in general anyone who takes the same interest that the average man took in the '80s can repair the vast majority of a current vehicle, safely, as long as they're not deliberately impeded by a computer.


> as long as they're not deliberately impeded by a computer

But I meant exactly that - at least from the stories I hear (I don't own a new enough car), half of the breakage in modern cars seems to require interfacing with the computer to at least clear an error flag. I once helped a guy with a software project, and learned that he operates a workshop fixing a specific car brand. He showed me the device he uses to interface with the computer, and explained to me how the official software costs such ridiculous amounts of money that he instead hired some Chinese company that would remote-connect to his laptop and do some trickery to keep the software working without the license. Neither the official nor the "unofficial" route seems accessible to a regular car owner.


>half of the breakage in modern cars seems to require interfacing with the computer to at least clear an error flag.

Yes, you need an OBD-II tool. They are not expensive by the standards of decent '80s automotive tools (the cost of 'minimum viable' analog tools has dropped precipitously during my lifetime).

The cheapest OBD-II tools are Bluetooth, and there is a cornucopia of apps in the app store to interface with them. For $100 you can get one that is easier to use and doesn't require a phone, and it will work for most problems on most cars.

(Of course, the more expensive OBD-II tools are better, I'm given to understand, and allow you to do more, but you can do a lot with the cheap junk, and resetting the codes is generally the most basic functionality; unless you've got a fancy car, even the cheapest one that fits your brand should work for that.)

Having come of age at a time when I was driving and repairing (older) carbureted vehicles, I personally think that fixing a carb is like a thousand times harder than interfacing with the OBD system. Modern injection systems just solve so many problems without trying.

My experience of the modern diagnostic systems is that it's actually way easier. The scan tool saves you so much time and effort vs. the old manuals' "go to page 5 if it doesn't X, 32 if it does". A lot of the time, the cheap scan tool gives you a code and description; you punch that into a search engine and you get a goddamn video of someone doing the repair. It's amazing compared to screwing around with an exploded parts diagram (I mean, from the perspective of someone who isn't really a car guy). There are always problems the scan tool doesn't catch, but I'm talking about all this from a shadetree perspective; there have always been a lot of automotive problems I couldn't fix, just 'cause I'm not an automotive specialist.
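
For the curious, the cheap-tool workflow is simple enough to sketch in a few lines. Here's a minimal sketch using the python-obd library, assuming an ELM327-style Bluetooth/USB adapter; the port detection and example code shown are illustrative:

    # Read and clear diagnostic trouble codes (DTCs) with python-obd and
    # an ELM327-style Bluetooth/USB adapter (adapter setup assumed).
    import obd

    connection = obd.OBD()  # auto-detects the adapter's serial port

    # Mode 03: stored trouble codes, e.g. ("P0325", "Knock Sensor 1 Circuit")
    response = connection.query(obd.commands.GET_DTC)
    for code, description in response.value:
        print(code, description)

    # Mode 04: clear the codes / reset the check-engine light. Use with care.
    connection.query(obd.commands.CLEAR_DTC)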


An OBD-II scanner is helpful for basic stuff, but it doesn't get you very far with modern cars. Most manufacturers now have specialized proprietary scanners that are needed for any complex repairs, especially for anything electronic. Those scanners are extremely expensive and sometimes only sold to authorized service centers.


Are these manufacturers called BMW, by any chance? They sell this authorized "computer module" replacement part for $400. But if you take it apart, there's a 50-cent fuse inside that was blown. Replacing the fuse does no good without access to the proprietary BMW software, because the module will refuse to work after the fuse is replaced until it is reset using their software. Which means you have to pay for a whole new module. However, not all manufacturers are like that. After I change the oil on my Honda, there's a silly little dance to reset the oil life indicator, but nothing too crazy. The Toyota Corolla is a favorite among self-driving-car enthusiasts due to the hackability of its lane-guidance system into something fuller-featured. I'm also disappointed in some of the directions the future has taken us (it's frustrating how hard it is to replace user-serviceable parts inside a lot of laptops, e.g. Apple's), but not all manufacturers are the same, and talking about the situation as if they were is too abstract to be useful, say, when looking for your next car.


If you don't want to do complex diagnostics, that kind of stuff is unnecessary for the overwhelming majority of repairs on the overwhelming majority of vehicles.

I think you're both overestimating the complexity of modern vehicles and underestimating the complexity of old ones. Back in the carburetor days, people were complaining about vacuum-line spaghetti to run the emissions system. People always complain about new stuff.

The reason new stuff is harder for DIYers to work on is that there isn't yet a body of knowledge on how to work on it without all the stuff a shop has.


As someone who had only ever done oil/battery-change-level auto work, this was surprisingly exactly my experience replacing a bad knock sensor in my truck. OBD tells you the problem (a sensor is out of voltage range) and YouTube tells you how to fix it step by step. At one point my family saw the intake manifold on the floor and thought the car was never going to run again, but it's really just plugging and unplugging things from a big computer now. If you can write software, you can do (most) auto work.


Through a process where things that are suboptimal die, and where change happens over tens of thousands of years.


> Life is one huge proof of that.

No! Life is huge proof that efficiency lowers maintenance possibility.

> And yet the final result is incredibly resilient.

It depends on whether you consider a species or an individual. Evolution is great at having species adapt and survive. But this also means that the individual is helpless against major incidents.


> No! Life is huge proof that efficiency lowers maintenance possibility.

What? Your body maintains itself daily in a harsh environment, under the onslaught of other life that tries to eat you (mostly microscopic). Once it stops maintaining itself, irreversible damage occurs in minutes and you physically fall apart in days.

Your body can keep maintaining itself for many decades. About the only natural non-living things that can last that long are rocks. And that's only because they don't move. Show me a non-living solid thing that can move for 70 years at a human pace.

A complaint that you can't swap a liver the way you swap a car battery completely neglects the fact that the liver can maintain itself without moving anywhere, gradually rebuilding itself in place for decades despite being regularly poisoned.


> A complaint that you can't swap liver...

I'm not complaining! I'm grateful to evolution for providing me such a wonderful mechanism called body.

But, I suppose you don't work in software development, do you? I say so because you, rightfully, took "maintenance" in its literal meaning: maintaining the thing the way it always was.

Unfortunately, in software development (and maybe other professions) maintenance simply means that people changed their minds or discovered a new business case, and your product must adapt, WITHOUT CHANGING ANY OTHER INTERACTION!!!

In real life, you can say: "he died because he drank poison", and it is a sad but accepted statement.

In software development, the same sentence will be rephrased like: "he died because the crappy developer left an unfixed bug in his Liver plugin".

Again, life is a wonderful thing, and evolution clearly has to optimize for maximum efficiency. But in business there are many times when you must give up some (or even much) efficiency for adaptability.

It looks like the author of the article does not understand the need for this tradeoff.


> "he died because he drank poison" > "he died because the crappy developer left an unfixed bug in his Liver plugin"

Rather "He died because dna replication wasn't perfect and caused his liver to develop cancer", but I don't fully agree with article author either.

Chaos is a result of optimizing for efficiency, but also of doing so very gradually, in a piecemeal manner. This kind of process can lead to local optima that are hard to get out of if you don't stop occasionally to rethink things and clean up. The thing is that cleaning up must also be very thoughtfully driven by efficiency. When it's done for aesthetic reasons, you end up losing much of the efficiency you worked so hard to discover by making a mess.


>Rather "He died because dna replication wasn't perfect and caused his liver to develop cancer", but I don't fully agree with article author either.

I think they meant the "unfixed bug" was that the liver can't process the lethal poison (as it does other toxins).

It would be less of a bug than a vulnerability though; imperfect DNA replication leading to cancer is spiritually closer to a bug, IMO.


The original author is attempting to make a point about software development. It is tempting to use examples like the one you just gave, but the problem is that you've over-extended the metaphor when you do.

The reason your metaphor doesn't work is actually the same reason the original author is wrong as well. Biology is a science that begins with reverse engineering and a lot of guesswork, and the human body has no official documentation or handbook.

When developing a new software system it's extremely important to document the business rules implemented, and to use an organized architecture because the guys who built it will very likely not be the people maintaining or updating it.

The frequency of "Phoenix" projects is often the result of code becoming unmaintainable because of how chaotic the design and architecture has become. Even the best documentation and the best engineers cannot make poorly architected and disorganized code maintainable. This is because as a system's code base grows, if the complexity of the architecture also grows it becomes more difficult to make changes without causing defects. This inevitably results in a point where it's faster to completely re-develop the system from scratch than to maintain it in order to add a new feature or even resolve defects.

I've seen this exact scenario play out in every development project I've ever seen that didn't accept the very easy design principle of using and documenting a set of standard design patterns that follow a clear architectural framework.

Losing some efficiency in code is frequently necessary to make the application maintainable beyond the foreseeable future. Those last few words are where everyone seems to get it wrong. A chaotic design is maintainable for the foreseeable future, but the guys who will maintain the application for its intended life span are beyond the foreseeable future. They will likely have to work with insufficient design documentation and a lack of subject-matter expertise. Because of these factors, a design which is insufficiently legible is therefore unmaintainable.


Sure, but that’s the result of almost 4 billion years of trial and error. I don’t think comparing that to a couple years of engineering trial and error is quite the same :p


Why not? Our brains let us iterate orders of magnitude faster than biological evolution can. It's how we built civilization.


But our chief technical innovation was factoring problems into smaller ones. This is such a vital innovation that it's practically synonymous with engineering. So even superficially complex, organic-looking, highly optimized solutions will need to somehow factor nicely in order for us to be able to keep iterating on them. So in the end, it will have to just be a more clever or elegant factoring of the problem (unless we augment our intelligence in some way that lets us solve much more complex systems without factoring).


I think they do factor nicely, in the design space. Those highly optimized solutions don't appear out of the blue, but from the set of more or less explicit constraints encoded in optimizing software. We can move forward with treating a complex part as an atomic unit, something just fabricated to computer-generated design, and iterate on the set of properties we're interested in.

(Whether that's ultimately a good approach, I don't know. I haven't thought about this too deeply yet.)


> Our brains let us iterate orders of magnitude faster than biological evolution can.

I think you need to provide a citation for this, because I do not accept this assertion at face value: there are 10x the number of single-celled organisms on your body as you have human cells. Biological evolution is proceeding on your own body faster than you can write software. And it's all happening in parallel and concurrently. In fact, every cell, even every neuron, provides feedback into the system of "biology" that includes the entire planet.

To assume that our own cognition is somehow superior to life itself feels full of hubris.


There's a lot of hubris in imagining that we can develop adequate models of reality. It's relatively simple in an enclosed space, less so "in a truck bed" as my fellow commenter puts it.


"Adequate" is in the eye of the beholder. We've been developing "adequate" models of reality ever since humans gained sentience; it's our needs and desires that grow, changing the criteria of adequacy with them.


Life is incredibly resilient. Intelligent life is incredibly delicate and brittle.


Or as Frank Vertosick pointed out in his book, mother nature wants life in general to continue, but doesn't give a shit whether any one individual lives or not.

Man-made structures do have some redundancy, but in general we don't build like that. When we install brackets like the one in the picture, we don't install a few extra ones with the expectation that some will fail. We make them much stronger than necessary ("safety factor") so we're sure none will ever fail.

Mother nature can't afford to put all her eggs in one basket. Humans often can't afford not to.


Many things are made with finite durability. Perhaps most. Paint chips, air filters clog, lubrication is scratched away, wood rots, tires and brakes wear away.


I see it more like a rigid grid architecture vs. organically built/evolved structures, like in the book "A Pattern Language: Towns, Buildings, Construction" by Christopher Alexander et al.

Here's some pictures https://medium.com/design-matters-4/a-new-approach-to-design...


The other point I find missing is that this kind of algorithmic optimization tends to miss solutions that are quite obvious to "system designers", due to local maxima (evolutionary roadblocks).

For instance, nature has not figured out how to evolve wheels, even in flat regions.


Counterpoint: the tumbleweed.

Also, you're assuming that wheels are the most efficient design for traveling, but billions of dandelions seem to spread their seeds every year, along with dozens of other plants.


Nature favors balls over wheels, because wheels are much harder to stabilize.


On our tour at work, they show off a 3D-printed, computer-optimized part. It looks very similar to that tensile structure; very organic-looking. The customer didn't like the look, and they had to fill in some areas to make it look less alien.


Once, I found a curious challenge in optimizing a bitwise formula for a particular computation (by hand it took hours; by brute force, a program found the essentially-unique optimal solution in a fraction of a second). I showed it to a fellow mathematician... he was mad because he couldn't determine rhyme or reason in the solution I found. Wicked smart guy, outclasses me as a mathematician by far... but he essentially rejected my result because he couldn't see how to reproduce it by hand (despite my having a relatively simple proof of optimality).
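
(For anyone wondering what such a brute-force search looks like, here is a toy sketch in Python. The target function, isolating the lowest set bit, is my own illustrative choice, not the actual problem from this story.)

    # Toy sketch of the brute-force approach: enumerate depth-1 bitwise
    # formulas over 8-bit inputs and return one matching the target
    # everywhere. The target (isolate the lowest set bit) is illustrative.
    from itertools import product

    WIDTH = 8
    MASK = (1 << WIDTH) - 1

    def target(x):
        return x & (-x & MASK)  # lowest set bit of x

    TERMINALS = [("x", lambda x: x), ("1", lambda x: 1), ("-x", lambda x: -x & MASK)]
    OPS = [("&", lambda a, b: a & b), ("|", lambda a, b: a | b),
           ("^", lambda a, b: a ^ b), ("+", lambda a, b: (a + b) & MASK)]

    def search():
        for (n1, f1), (n2, f2) in product(TERMINALS, repeat=2):
            for opname, op in OPS:
                if all(op(f1(x), f2(x)) == target(x) for x in range(1 << WIDTH)):
                    return "(%s %s %s)" % (n1, opname, n2)

    print(search())  # prints "(x & -x)" in a fraction of a second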


>The customer didn’t like the look, and they had to fill in some areas to make it look less alien

Instead, they should have sold it to the customer as more organic and advanced, and explained why it is so, like the article did...

That would have gained them even more respect from the customer, whereas now they merely caved in and looked incapable of getting it right the first time (since the customer now thinks it's his complaints that made them finally do it properly).


You're assuming they didn't


From the end result described, either they didn't try or they weren't successful. If they had been, there wouldn't have been any changes made to appease the customer.


The fact that the customer was buying 3d printed computer optimized parts in the first place probably indicates that "more advanced" was explained to them.

There are any number of reasons they might not have liked it. Maybe it looked odd because it didn't match the rest of the design. Maybe they were worried about their own customer (of the end product that the part goes into) thinking it didn't look sturdy enough.

If you're designing parts for a rocket then by all means worry about every ounce of weight, but for most stuff if the customer wants it to look less weird you can just make it look less weird. And then charge them for the extra material.


I think I have to quibble with this:

> Here, I propose Scott’s Law: never put order in a system before you understand the structure underneath its chaos.

James C. Scott would probably never underwrite re-ordering systems from the top down.

Central to his argument is that viewing complex systems from any singular position requires a process of simplification (legibility) that prevents a complete understanding.

The presumption that one has gotten to a place of "understand[ing] the structure underneath [the] chaos" is in fact the false confidence he attributes to most of these ordering projects.

I think if you want to wrangle a suggestion from Scott's book, it's more about making lots of small pokes at a system and seeing how it reacts, and slowly building on positive reactions from the system.

(Also, messiness and complexity are not intrinsically linked to the efficiency of a system — systems can optimize for lots of variables and it's really context-specific. So as much as you shouldn't take order to be innately good, don't take messy to be innately efficient!)


> "I think if you want to wrangle a suggestion from Scott's book, it's more about making lots of small pokes at a system and seeing how it reacts, and slowly building on positive reactions from the system."

I can feel for that a bit. I have worked on a complex multi-million-line codebase. There was this predictable behavior when a new guy came to join us. Many times they had never worked on such a large project.

First they would rant about how terrible the system is, then tell us how wrong the system is, followed by advice about how we should use this or that method or design pattern to make things right.

Only after months of working with the code would the guy cool down. When a (software) system is really complex, you have to get familiar with it. No matter how you structure it, a minimum amount of logic and dependencies will always exist. Restructuring will not take away the complexity; it just relocates it.

Of course a clear structure is better for your understanding than a messy one, but even in the most clearly written code a lot of complexity can exist. I think Scott is talking about this: the point beyond clear structure.

That's complexity :)


Have an upvote.

By coincidence I just read Scott’s book not too long ago and can attest that it contains not a single suggestion that imposing any sort of order on a system is ever a good idea. Quite the opposite, in fact.

There may be lessons for city planners and agriculturalists in the book, but I’m pretty sure there isn’t a single one for software developers.


Au contraire. The world wide web is justified by Scott's book. As is JavaScript and the crazy things we do with it. Worse is Better. Understanding how distributed systems actually work, beyond how we wished they worked.


> Here, I propose Scott’s Law: never put order in a system before you understand the structure underneath its chaos.

Previously formulated as Chesterton's Fence [0], among others.

There's definitely a blind spot in software for the general principle that you should understand a thing well before you decide to remove it. Anyone want to propose some theories on why disregard for an extant body of work tends to plague software so extensively?

[0] https://www.chesterton.org/taking-a-fence-down


Because we suffer from the attitude that we don't have to understand a thing (say, by reading the article) before attempting to correct it. ;)


It wasn't really intended as a correction -- more just an addition -- but I didn't feel like clarifying just to save a few imaginary internet points. Apologies to anyone who interpreted as attempting to correct an article I didn't even read. :)


The OP explicitly mentions Chesterton’s Fence.


> Anyone want to propose some theories on why disregard for an extant body of work tends to plague software so extensively?

Removing stuff isn’t restricted to software practitioners but there’s no need to reach for a flashy theory for something that can be adequately explained by human nature, in this case hubris.

Hubris is bound to manifest in any situation where a person with an incomplete understanding of the situation acts like they 'know best'.


Incidentally, I think the producers of Star Trek nailed this with the design language of the Borg, perhaps accidentally.

For those who don't know, the Borg are a machine intelligence hive mind species that roams the galaxy in their cubes, spheres, diamonds and other ships shaped like basic solids, destroying or assimilating anything interesting in their path, and speaking a lot about "bringing order to chaos". Yet seemingly despite their focus on order and perfection, the individual drones and the microstructure of the ships both look like a haphazard bundle of lights and wires. This is in line with the article - machine intelligence optimizing for extreme efficiency across multiple dimensions isn't going to create sleek-looking constructs.


I suspect it ain't all that accidental. Each Borg ship behaves (ostensibly) as a single giant organism (and the Borg within it as its cells), and multicellular organisms are indeed pretty messy inside, even if they look externally simple and uniform.


The drones are messy because they have ugly messy human bodies. The ships are highly patterned and repetitive.


I'm a 'lover of chaos'. My working desks always gather strata of documents, more often than not disheveling onto the surrounding floor.

This article however rubs me the wrong way right from the start. 'Cars running at the same speed' aren't "efficiency-destroying" nor about spurious 'appearance'. A controlled laminar flow is incredibly more efficient than chaotic turbulence.

It does not get any better later on, when the author waxes lyrical about the 'beautiful equilibrium that evolved to satisfy a thousand competing constraints' in the absence of planning, casually glossing over the part where communal zoning regulation not only prevents races to the bottom in externality dumping, but also provides a basis for non-speculative investment for those who can't afford to gamble or enforce their own "manu militari" regulation.

While self-proclaiming "not suggesting all chaos is good", that is exactly what the rest of the article's suggestive language tries to convey. The deregulation agenda, while not explicitly spelled out, is omnipresent in the tenor of the writing.

P.S. I'm not surprised to learn that the author works for Uber.


Very well said, and I think you've hit the nail on the head with regards to the 'deregulation agenda'. The subtext of articles like this is always that interference, in the form of regulations, is a negative action, and by stripping away the messy human meddling we'll reap the rewards. It's never considered that the human meddling is so often done to protect the humans themselves, most often the very weakest or most powerless.


Isn't the trend in cities for more mixed-use space?

And wouldn't Uber be more profitable if people lived and worked in different parts of the city?


Sorry, but there's no excuse for crap code. If it's hard to understand, then time is lost every time you have to work with it.

You shouldn't defend it by trying to assign positive attributes to it ("it's efficient"... yeah). Maybe there's nothing good about bad code, and we should use being called out on it as an opportunity for improvement.

No, let's double down on our delusions of superiority by telling ourselves that the crap we wrote is somehow efficient in some way.

I'm tired of it. It's not efficient. It's bad and you should feel bad.


When applying this article to code, I don't think it has anything to do with spaghetti being good. It could be viewed instead as an argument against "architecture astronautics", the "15 layers of abstraction to print 'hello world'" school of software design.


Exactly. I remember being called in to consult on an accounting system for microfinance; the target audience was small to medium-sized organizations. The code had an absurd amount of layering; one path I traced copied the data 8 times from fetch to render, each time into a different set of objects that had basically the same fields but were conceptually different.

When I asked about this, I was told it was "best practice" and that if they ever needed to scale, there were now many places they could separate things. I pointed out that for the target audience, they probably wouldn't need to scale. But that if they did, it would be because they were doing 7x the work necessary.

The code was certainly "tidy" from the perspective of the guy who got paid a lot of money to produce architecture diagrams. But it was a nightmare from the perspective of an individual programmer trying to add a feature. They would have been way better off without a lot of quasi-religious design theory slowing them down.
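
To make the anti-pattern concrete, here is a condensed, hypothetical version of that layering in Python, with three layers standing in for the original eight; all names are invented:

    # Condensed, hypothetical illustration of the layering described above:
    # the same fields copied between near-identical classes at every
    # architectural boundary (three layers here standing in for eight).
    from dataclasses import dataclass

    @dataclass
    class AccountRow:        # persistence layer
        id: int
        balance: int

    @dataclass
    class AccountDTO:        # service layer
        id: int
        balance: int

    @dataclass
    class AccountViewModel:  # presentation layer
        id: int
        balance: int

    def row_to_dto(row: AccountRow) -> AccountDTO:
        return AccountDTO(id=row.id, balance=row.balance)

    def dto_to_viewmodel(dto: AccountDTO) -> AccountViewModel:
        return AccountViewModel(id=dto.id, balance=dto.balance)

    # Every new field must now be added in three places (eight, in the
    # original story) before it ever reaches the screen.
    vm = dto_to_viewmodel(row_to_dto(AccountRow(id=1, balance=500)))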


I used to write code like that: architecting the whole application up front, creating layers upon layers of abstractions. Experience taught me to do the reverse now. Start with the simplest version that works, keep going until writing code gets tough, then switch to the whiteboard and use what I learned along the way to design a proper solution (and if any part of the design process starts getting tough, switch back to writing the simplest thing that works). Rinse, repeat. It's not about rushing to release a barely working pile of spaghetti, but about recognizing that programming is an exploratory activity, and you don't know enough to do a complete design up front.

And really, it turns out most of the time that not only do you not need the complex abstractions designed early on, you actually need a different set. That's why keeping the design process continuously grounded in reality is important.

My solution for not producing spaghetti code with this method? I don't release the first version that works. I don't mark the ticket as "done", and I don't even push it out of my local repo. Instead, I clean it up, or even straight-up rewrite it, until it reaches a sleek and acceptably elegant state. It's the responsibility of a programmer to decide when the code is ready, and that doesn't have to be the first moment it passes all the tests.


Maybe the most astonishing thing I've learned in my career is that it's far better to design your system after the code is written and working!

Because that's the time when you've learned to understand the problem, and you already have working code to move around.


This, absolutely. The time to design properly isn't before you write code. It's when the requirements start to become fixed rather than fluid.


While I do wholeheartedly agree with everything you wrote, I still feel the need to point out a limitation I don't see (or feel) solved: how do you scale this approach to n>1 developers? In a large brownfield project you can assign quasi-solo projects in different corners, isolated by sufficiently wide buffers of legacy code you won't touch, but a multi-developer greenfield needs architectural structure simply to give the parallelly-written code something to integrate with.


In one of my previous jobs we faced this issue, and we approached it like this: we would start with a design meeting for a simple solution, divide it up into pieces that could be worked on independently, with agreed-on interfaces. Then each of us would work independently on their piece, and we'd revisit the design questions as issues cropped up. Is this the best approach? I don't know. But I don't think it's bad.
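
To illustrate what "agreed-on interfaces" can look like in practice, here is a minimal Python sketch (names are hypothetical, not from that job):

    # Each developer codes against the Storage contract; implementations
    # and consumers can then be written in parallel.
    from typing import Protocol

    class Storage(Protocol):
        def put(self, key: str, value: bytes) -> None: ...
        def get(self, key: str) -> bytes: ...

    # Developer A implements the contract...
    class InMemoryStorage:
        def __init__(self) -> None:
            self._data: dict[str, bytes] = {}

        def put(self, key: str, value: bytes) -> None:
            self._data[key] = value

        def get(self, key: str) -> bytes:
            return self._data[key]

    # ...while developer B writes code that depends only on the contract.
    def cache_result(storage: Storage, key: str, value: bytes) -> None:
        storage.put(key, value)

    cache_result(InMemoryStorage(), "answer", b"42")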


Sounds similar to what Joe Armstrong reportedly did all the time with his programs: write an implementation, identify the faults, rewrite it entirely, identify the rewrite's faults, and rinse/repeat until it was good enough.

On reflection, I've had a similar process too, though now I'm at least consciously aware of it as an actual development strategy, rather than just believing I'm a shitty programmer who's forced to rewrite things because he can't fix his prior horribly-broken implementations, lol.


> It could be viewed instead as an argument against "architecture astronautics", the "15 layers of abstraction to print 'hello world'" school of software design.

It's a long way from printf to framebuffer pixels. There are good reasons for every layer in between, too. (I like C compilers, format strings, buffered I/O; I like file descriptor semantics; I like having an operating system that provides terminal emulation and framebuffer text-rendering services!)

So I'm fine with 15 layers of abstraction to print hello world.


It's 15 layers in the system; I was talking about 15 extra layers in your own code, introduced up front, before any of the meat has been written.

I'm not saying abstraction is bad - just that it's constraining, and prematurely introducing a whole ladder of constraints is going to grind code evolution to a halt.


> it's efficient... yeah

> I'm tired of it. It's not efficient. It's bad and you should feel bad.

You're ignoring that sometimes, measurably more efficient code is less readable, possibly much less readable. If this weren't the case, there'd be no such thing as sophisticated data structures and algorithms.
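
A small classic example (mine, not the parent's): both functions below count set bits in a 32-bit integer, but the branch-free bit-twiddling version, the faster choice in low-level languages, is much harder to reason about at a glance:

    # Both functions count set bits in a 32-bit integer. The second is the
    # classic branch-free SWAR version: faster in low-level languages, but
    # far less obvious at a glance.
    def popcount_readable(x: int) -> int:
        count = 0
        while x:
            count += x & 1
            x >>= 1
        return count

    def popcount_swar(x: int) -> int:
        # Sums adjacent 1-bit, 2-bit, then 4-bit fields in parallel.
        x = x - ((x >> 1) & 0x55555555)
        x = (x & 0x33333333) + ((x >> 2) & 0x33333333)
        x = (x + (x >> 4)) & 0x0F0F0F0F
        return (x * 0x01010101 & 0xFFFFFFFF) >> 24

    assert all(popcount_readable(n) == popcount_swar(n) for n in range(10000))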

This doesn't mean anyone should make an uncommented ball of mud, of course.

> No, lets double down on our delusions of superiority by telling ourselves that crap we wrote is somehow effecient in some way.

The usual wisdom about efficient code answers this: if you aren't measuring performance, you don't really care about performance.

If you're going to great efforts, and writing less readable code, on the hunch that this is more efficient, then sure, you're doing it wrong.


I actually think that having business people control what programmers do is very often the mistake. It leads to very bad decisions for the codebase as a whole, and the programmers often don't know the best way to fit in the random business needs because they are never told why the feature should exist. I think we need a better method for making the right decisions for the code while still achieving the desired outcomes, without making quite such a mess. Maybe something like sessions where the business has to persuade the developers to build the features, in detail, rather than throwing random unclear solutions over the fence and saying "we need this".


Huh, the part about software developers having to understand client's needs and work with them is what I've been taught. I'm guessing that in big enough companies, management starts to mess with processes that they should not touch?


Most of the reasons why code is hard to understand will be on a fairly small scale. It is possible to yank that out and replace it, while honoring and respecting any underlying chaotic organization that may have occurred despite the crap quality of the code. (As the article says, this sort of chaotic adaptation requires flexibility and crap code only inhibits that.)

That's pretty much what I've done in this last calendar year, taken two messy systems chaotically (in this article's sense) interfaced with the rest of the company, and upgraded them. I gave them new, hopefully-non-crap code, certainly better documented code, documentation on the system structure, better operational deployment, massive upgrades to security, general speed improvements, and in one case, a fundamental architectural change at the most foundational level even though the surface that the users interacted with hardly appeared to change.

And yet... despite all that, I would not say I "rewrote them from scratch", because I did not simply start with a blank sheet of paper and start scribbling what I think the ideal solution would be. I took the underlying adaptations, respected them, assumed that they likely had a reason even if I didn't know what they were, and built systems that largely dropped into place on top of the old ones rather than being totally foreign bodies that cause cascading requirements for other systems to also be significantly rewritten.

If, in the future, those other systems do get rewritten, I even have some paths prepared (but not yet written) in the code for true architectural upgrades to occur in the future. But in the meantime, I have improved systems that work now.

It's a much better approach to replacing a system than denying the chaotic adaptations. Even "crap code" can be mined for them, and they are not generally the crappiness itself.


I disagree with such a blanket statement. Interfaces must remain simple, while implementations are free to be as complex as needed. Take a look at this article -- clearly the SIMD code is dense to get through, but it's very much worth it for the performance gains.

https://lemire.me/blog/2017/01/20/how-quickly-can-you-remove...
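
As a hedged sketch of that division in Python (illustrative names, not the article's actual code): the public function keeps a one-line contract, while the implementation behind it is free to be dense:

    # The public function keeps a one-line contract; callers never see
    # which implementation runs behind it.
    def remove_spaces(text: bytes) -> bytes:
        """Return text with ASCII spaces, tabs, and newlines removed."""
        return _remove_spaces_fast(text)

    # Straightforward reference implementation: easy to verify by eye.
    def _remove_spaces_simple(text: bytes) -> bytes:
        return bytes(b for b in text if b not in (0x20, 0x09, 0x0A))

    # Denser implementation: a single pass through a deletion table,
    # executed in C inside bytes.translate. Faster, but the mechanism is
    # no longer obvious from the call site.
    _DELETE = bytes([0x20, 0x09, 0x0A])

    def _remove_spaces_fast(text: bytes) -> bytes:
        return text.translate(None, _DELETE)

    assert _remove_spaces_simple(b"a b\tc\n") == remove_spaces(b"a b\tc\n") == b"abc"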


"Interfaces must remain simple, while implementations are free to be as complex as needed."

IOW, local complexity in exchange for global simplicity is often a good tradeoff.


What you’re talking about is the MIT school vs New Jersey school. See Richard Gabriel’s essay: Worse is Better.

http://dreamsongs.com/WorseIsBetter.html


Strange, how did you come to the conclusion that this is a defense of "crap code"? For one thing - crappy code is most often less efficient than elegant code.


The example of the child being asked to clean his room reminds me a great deal of a bad engineer being asked to clean up his code.


"His room" and "his code" is not comparable in the context. "His code" is part of a business's product and can impact the livelihood of many people involved in the product including customers.

"His room" is a private space that doesn't affect other people. I don't tell my co-worker to clean up his side-project on github.


You could tell your coworker (or your direct report if we're taking this analogy seriously) to clean up his work area. A child's room isn't a completely private space, and it's where they do their "work" (getting dressed, homework, play time, etc.).


There is no globally applicable standard for what's crap code, so there's plenty of reason to excuse code some people might label "crap".

"Don't write crap code" is a terribly empty statement. Trying to "tidy up" "crap code" is also a great way to screw things up further if you turn out to be wrong about it.


Code, at least the way we currently write it, is a fundamentally lossy transfer. The coder has the context in their head; they understand what's happening up to that point, what they want to happen now, and what needs to happen later. Writing code is translating those concepts and ideas into a specialized step-by-step list of machine instructions.

In the course of writing the concepts in a dumbed-down format for an instruction machine, context is inevitably lost. You can say "write comments" until you're blue in the face, but it doesn't solve the problem.

Then someone new comes along, and they don't understand the context; they never knew anything about it, and you quit 8 months ago. This person must infer an accurate conception of the context from the pieces left behind.

The approach taken when one finds themselves in that position is one of the key tells of their skill and experience IMO. Regardless, it seems that if the software got out of the loose demo/messing around phase, it deserves at least some consideration before it's dismissed out of hand as "bad code".


Upvoted. See also: "Programming as theory building" by Peter Naur.


The more I code, the more I like messiness. By that I don't mean bad code. I mean code where all techniques, in and out of the books, are used together.

For example, a mix of exceptions and error returns, direct access to class members combined with accessors, etc... I like to see best-practice rules being broken when there is a good reason for it.

Rule of thumb: good code is short code. If your "tidying up" makes the code longer, then it is probably better to leave it as it is. What you are going to do is most likely add useless layers of abstraction, or try to make it abide by some made-up rule that shouldn't apply here.

As for "efficient" code. Most efficient code I see is actually quite good and readable. The worst code I see usually gives no fucks to efficiency. At least premature optimizers show some love to their code.


What about an MVP scenario? Many successful businesses have crap legacy code, but maybe they would never have become successful had they not rushed the code? Are you saying taking on tech debt is never good?

To me it seems like a failure to look at the bigger picture. In the end it is all about the value provided. You are creating a startup that has a 10 percent chance of succeeding, and only if it gets to market fast.

Also, it is never binary. It is always time invested vs. quality of the code. It is a spectrum. At what point can you call code crap?

The closer you get to perfect code, the more time it takes to make it better, as you near perfection.


I'm a big fan of designing two systems. If timelines permit, ship the second; else ship the first. The first code will almost always be crap code, because it takes designing the system to see how the system should have been designed.

Problems arise when the first system gets left in place for too long. It starts to grow; developers, sometimes whole teams, arise around it, and many people ask why but are too afraid to touch it.


Yes. Using the 'city' analogy, the 'city' might be the entirety of the AWS stack, whereas the 'building' might be what is produced by the two-pizza team. And the code produced by that team (like the building) should be as tidy as possible, except where there is a genuine need for performance that requires making things a bit messy.


Agreed, but this can also become subjective very quickly. Oh, this code is crap according to "me", so let's rewrite it the way "I" think is better, without understanding why it was written that way to begin with. If you apply this article to code, I would say it argues that you should find out the reason before just removing it.


Often true, but often 'beautiful code' does not impact the bottom line - even in the long term.


It does if it’s a requirement for keeping employees from quitting.


I can't imagine quitting because the code I work on sucks. It's not like I'm getting paid by the feature I ship. If it takes 20 hours to ship an 8-hour feature, it doesn't matter, I still get the same paycheque every two weeks. My job is to show up, and work with the situation that exists.

I can, however, imagine quitting if my manager is a shitty person.

People don't quit work - they quit managers.


I think this is a fine attitude in the short term, and an absolutely terrible one in the long. It's certainly bad for the business; companies that (by your numbers) are happy paying 150% extra in costs will have a hard time competing over time with companies that keep their costs low.

But I think it's also bad for the developer. Giving up and accepting bad code and low productivity means we develop habits and attitudes appropriate to that environment. It keeps us from getting better at our jobs, from keeping up with new technologies and new approaches. And given how our industry keeps changing, I think that's a recipe for disaster.


> It's certainly bad for the business; companies that (by your numbers) are happy paying 150% extra in costs will have a hard time competing over time with companies that keep their costs low.

If most of my income derived from being an owner of a company, I would care deeply about this problem.

Since it doesn't, it's really no sweat off my back either way.

> It keeps us from getting better at our jobs, from keeping up with new technologies and new approaches.

In my experience, tech churn is one of the top reasons for why code has gone to shit. "It's been three years since the last re-write, let's rebuild the product again, in a framework that nobody here knows how to use!"

On the bright side, both the re-write, and the cleanup of the resulting mess means steady employment.


> Since it doesn't, its really no sweat off my back, either way.

Again, only in the short term. A failing company is not a fun place to be, and a failed one even worse. And if you end up with a resume that has a long string of losers as your employers, it's going to get harder to get good jobs.


Do you think so? If you do, and you hire people, you should seriously consider not thinking in that way.

Any manager who can put two and two together should be well aware that the impact that an average IC has on the success of a failing company that's bigger than 100 people is near-zero.

It's just pedigree snobbery, to look at a resume, and go: "Oh, well, he worked for losers, he must be a loser, reject."


So would you work the rest of your life before retirement digging a hole and then filling it up again, over and over? Let's say you get to make 10x whatever you are earning now. Oh, and the manager is a nice guy; he gives you lemonade and stuff on breaks.


Sure.

Most people do exactly that for a living. I don't let my 9-5 define my life. It means I'll retire in two years, and be able to work for a cause that I deeply care about - or, better yet, for myself.

A better thought experiment is to ask yourself how many of your co-workers would come in tomorrow if all your code became the most beautiful code ever written, with rainbows and unicorns... but on the flip side, they stopped receiving paycheques.


I agree, beautiful code is often the other extreme. Plus it's often only beautiful to the person that wrote it.


> Plus it's often only beautiful to the person that wrote it.

From what I've seen, this may be true in some particular cases, but this is not a general pattern.

I've found that beauty, in software (as in art and design in general) very often has common elements. People certainly have different preferences in how the elements are put together, but I think many people can appreciate the underlying principles.

Here are some examples of the principles behind beauty in software:

* A component obeying the principle of least surprise

* A function having a clear purpose

* A function using a small (perhaps minimum) number of arguments for its purpose

* A codebase cleanly separating concerns

* A codebase reusing standard components

* A codebase having appropriate documentation suited to the team working with it

* A codebase having good testability

* An algorithm (e.g., one based on a published paper) solving a problem by doing less work

* A function having fewer lines of code

None of these are absolute. (For example, the last two may sometimes exist in tension with one another.) I see the concept of beauty as relative to a set of goals and values. Beauty almost always involves a sense of balance and proportion: trading off principles that are not perfectly orthogonal.
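
As a tiny, hypothetical before/after for two of those principles (a clear purpose and a small argument list), with all names invented:

    from dataclasses import dataclass

    # Before: six loose knobs; purpose and tuning are tangled together.
    def fetch(url, retries, backoff, timeout, verify_tls, log_level):
        ...

    # After: one clear purpose; related options grouped in one place.
    @dataclass
    class HttpOptions:
        retries: int = 3
        backoff_seconds: float = 1.0
        timeout_seconds: float = 30.0
        verify_tls: bool = True

    def fetch_url(url: str, options: HttpOptions | None = None) -> bytes:
        """Fetch a URL with retry/backoff; all tuning lives in HttpOptions."""
        options = options or HttpOptions()
        ...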

Of course, there are different perspectives on how to achieve a particular balance for a particular situation.

I'd like to add an unsupported claim (that I happen to believe, given a set of mostly rational people working together in a healthy work environment): if you get these people together and ask "is codebase X beautiful for purpose Y?", I think you'll find a lot of consensus. In areas where they disagree, I think the resulting discussion will likely be constructive. I would bet that the discussion will lead to a better design in the end -- as perceived by the participants. This assumes that the people learn from each other; e.g., proponents who tend to favor one principle are willing to listen to proponents of other principles.


Maybe by aiming for "simple", you can as a side effect eventually achieve "beautiful".

I suspect aiming for beauty gives you neither.


Depends on what you think is beautiful. I've seen code that was beautifully simple, and 'beautiful' code that was unnecessary, impenetrable wank.


Code that is easy to work with is usually that way because the author went out of their way to make it easy to work with; often this takes 2-3x as long as writing normal code that just works. Publicly exposed APIs typically get this special attention, and internal things get the normal spaghetti treatment.


I'm loving these intelligent criticisms of the article.

My own biggest beef with the article is it entirely misses the deep point of Marie Kondo's work, which is about efficiency and joy. If the article is not actually a response to her philosophy and definition of tidiness, and just needed a cute title, well, that's confusing at the very least. The commenters who point out that codebases, for example, need to be reasonable for humans to work in, or that it sure is easier to change source code than a compiled binary, are on the right track.

Kondo's method is all about identifying the clutter _that is actually clutter_, and deleting it. Would you rather maintain a program that's 1000 lines, or 100? If the 1000-line program was written by you, for you, over the course of your life, and by your own admission is extremely messy and moderately unpleasant to deal with, containing lots of unused stuff, yet you run it, and modify it, every day, no one is going to force you to clean it up, but you can find joy in doing so.


Based on the content, it seems to be primarily a clever title. I don't think the author is trying to refute a hypothetical group of engineers who are trying to apply Marie Kondo's philosophy to software. (Not that there aren't devs who are doing that, but that doesn't seem to be the author's goal.)


This made me smile (“refute a hypothetical group of engineers...”). But even if the author is not refuting Kondo’s methods specifically, is the author not saying that acts of intentional tidying should be viewed with skepticism for putting form over function? When Kondo’s approach is a perfect counterexample of the claim that decluttering is putting aesthetics over efficiency.

Actually, I think this article is not about software at all, or decluttering a room, it’s about “ complex systems — like laws, cities, or corporate processes,” and then uses the word “systems” as shorthand for this, and then talks a bit about managing software projects but it’s really still about good corporate process, and then mentions tidying a room but mostly as a joke.

I think the actual point is more about large groups of people or organisms that function well together, and how you shouldn’t assume it is better to impose uniformity. Even then, though, it is easy to argue both sides of this point, pick apart the examples given in the article, and use a codebase as an analogy to an organization.


Kondo is much more than just throwing away trash. https://www.rd.com/home/cleaning-organizing/marie-kondo-fold...


This article makes some good points. In particular, it emphasizes understanding why there is a “mess” in the first place — appreciating the role of things in a functioning system is an important step forward to any kind of constructive change.

Some elements of the KonMari method actually are consonant with that, since evaluating yourself and the place of objects in your life is a continuous part of the process. Just throwing all your stuff away every week, whether you need it or not, would definitely be efficiency-destroying, although it would also be very tidy. Speaking from experience, however, this is not a behavior that developers struggle with. The heedless piling up of trash, code, and tickets is far more common.


"When a God-level AI takes over in a science fiction book, it often remakes the world in its image: full of straight lines, smooth acceleration rates, and lots of chrome (AIs love that stuff). But as we start using algorithms to design things, we get results that look a lot more chaotic than that..."

It would be interesting to see a movie where an AI takes over and turns reality in to a Rube Goldberg machine.


Huge corporations and governments are kind of like hybrid silicon/meat AIs and that's exactly what they've done. I think this is the most likely outcome.


No, corporations and governments flatten and simplify everything until they can understand it. The next step, once we have powerful enough systems, is to embrace the chaos and use it.

E.g. older houses were hand-built, but nowadays houses are built with straight walls and standardised heights for everything so that mass-produced furniture can fit in them. But if in the future our furniture is made on-demand by AIs, then there's no reason to do it that way; you can have a non-standard height for your kitchen counter (for example) and if/when your dishwasher breaks, the AI system will fabricate you a new non-standard dishwasher to go under it.


I work for a large corporation, though one a quarter the size of many Fortune 100s. I guarantee you almost no one knows how things work or are done beyond their immediate scope. Much of my life is spent just trying to glean an understanding of some adjacent function, or trying to convince people above me that the things they believe about how things work are not actually reflective of how they work.

A Rube Goldberg machine usually accomplishes one task. Corporations at scale don’t resemble them mostly because, as there are a multitude of concurrent tasks in flight, the Rube Goldberg machines are neither complex nor absurd enough.


Maybe that's how Musk's ventures are different - he knows how everything in his corporation works?


I think the primary difference is that for Musk, his companies are actual means to a non-monetary end (electrification of transport, Mars colonization). This creates a tremendous level of focus. Most companies exist to make money for their owners, and any actual useful work done is only a side effect.


Everything? No way, that's not humanly possible. But he does know a lot and more importantly is a fantastic engineer and can understand any part he needs to.

The majority of companies are run by people who would have trouble understanding the details of what their company does.


You might like James C. Scott's Seeing Like a State, if you haven't already read it.


Hopefully with more advanced stone-cutting technology, I could have a house with Inca-style stone walls. No reason for bricks to be rectangular.


So, "Brazil", then.


Nice!

That means we already have AGI, right? The corporation.


I don't think AI would turn reality into a Rube Goldberg machine, but rather that it would create so much complexity that we can't see through it.


The Matrix.

I always thought it would be interesting to have a prequel about the first person to escape. Office Space meets X Files, and the order of the complexity slowly becomes more apparent, until the movie turns into Playtime.


There was no first person to escape, there were people who survived the war with the machines and were never "in the matrix" to begin with. You should check out The Animatrix.


MORPHEUS: When the Matrix was first built there was a man born inside that had the ability to change what he wanted, to remake the Matrix as he saw fit. It was this man that freed the first of us and taught us the secret of the war; control the Matrix and you control the future.


At the end of the second movie we learn that this is a creation myth, that Zion is actually part of the larger system of machine control over humanity, and that it needs to be "restarted" periodically. The problem is choice.


I think people who never got captured didn't know that the matrix could be changed, so they just hid from the machines. Then someone figured out that the matrix could be controlled, and the fight to free all began. I think the "freed the first of us" means "freed to believe that the matrix could be defeated".


"man born on the inside"

I take that to mean someone who was born into the matrix, and over time was able to see through its facade, until its rules stopped mattering to him.


I am not sure the story suggests a first person escaped. More like there were always some people who never got captured and added to the matrix who freed others.


It's been implied that the machines might have built Zion on purpose. Including freeing first people to live in it in every iteration.


I thought that it was implied Zion was still part of the simulation, part of the equation to balance things out and give the illusion of choice. Like a nesting of VM containers or Inception dives.


Yes, but I think that is probably a backsplanation from later movies. If you only watch the first one (xkcd:566), it feels more like there was a first person escaping.



Having seen AI play Mario, I'd expect to see it try to achieve its goals by exploiting bugs in reality based on whatever simulated reality it was trained in.


Yes, that AI is called Life, and some surprising optimisations have led to flying eight-limbed creatures, floating lizard-like beings, and a machine within the simulation that is supposedly capable of creating a simulation of reality itself.


> It would be interesting to see a movie where an AI takes over and turns reality in to a Rube Goldberg machine.

How would that be distinct from actual reality?


The movie Annihilation kind of evokes this, albeit without the AI (and a lot more slowly)


I mean, that image of the before/after "topological optimization" seems like a poor analogy.

>confirming that our intuitive preference for “straight line” designs has nothing to do with performance

The first is made the way it is because of its "straight lines": you can see the welds and the logic in it, as something easily made.

The second is something only achievable via 3D metal printing, or possibly very complex 5-axis CNC.


That's the first thing that struck me as well; a person like my dad (a retired blacksmith) could make the first one...


It's for that reason that I consider the first one to be the more efficient and "chaotic" design despite the straight lines. It's optimised for production, repair, or replacement in the field using nothing but metal stock, a welder, a saw, and a drill; indeed, it can be made by hand by a blacksmith. It screams function over form.


This article is wrong on so many levels:

1. Theoretical: chaotic and complex are not the same thing. Organized and tidy are neither the opposite of complex systems nor incompatible with them. Some very tidy systems can still be mind-bogglingly complex. Furthermore, chaotic systems have a single underlying order (it might just be impossible to discern), but they are not complex. Complex systems might have multiple orders, none at all, or anything in between.

For an in-depth discussion, see: The Collapse of Chaos: Discovering Simplicity in a Complex World, Jack Cohen & Ian Stewart, Penguin, 2000.

2. Practical: the amount of tidiness in a complex system says nothing about its efficiency, but neither does its untidiness. Complex systems might be untidy through sunk costs (like most cities), revolution, evolution, historical accident, or even attempts to impose an ordered system (like a system of law) that always leave some gap. The fact that a complex system exists says nothing about its efficiency or fitness for purpose. Like the three-year-old who thinks the mess is beautiful, but cannot find her favorite plush toy.

Changing complex systems is indeed hard, because different subsystems interact, have feedback loops, create externalities, exclude external information, or are nearly impossible to alter due to sunk costs. But that still leaves the calculation of how much more efficient or better the new or altered system might be than the current one, and what the risks involved are.

Tidying up complex systems has risks, but also benefits, even if you don't believe the second law of thermodynamics applies to human-created systems. Tidied-up systems are easier to reason about and thus can be more easily fixed, adapted, or expanded upon. The act of tidying up has the additional benefit of adding to our knowledge of how a system actually works.

3. This guy works for Uber: the cab company with a computer that wants to uproot the current (complex) systems and replace them with something simpler, yet fails to turn a profit. Do we need to read something into this article, or did he just not realize he contradicts his employer's mission?


All true, and I would also add a lesson from software design. Spaghetti code might, in some limited sense, be more efficient in some cases, but it is difficult to understand, and therefore difficult to fix when something goes wrong. Making things orderly, even from a top-down perspective, may be more important than being as efficient as possible, if you need to be able to understand what's happening in order to use (or fix) whatever it is.


3. I don't think Uber is less complex, just complex in different areas. Uber's sole purpose is to make its owners more money; currently that means upending the industry and providing humans easy access to rides. When those humans are no longer the best (or most obvious) way to make money, the capital and investors will move on.


I once cleaned up my sister's room. I thought I was doing her a favor, but she was mad at me for a week! In hindsight I should have known better. Spatial memory and all that: she knew where everything was even though it looked messy.

The article is not talking about the kind of tidiness that is the hallmark of a well-run workshop, IMO. That kind of tidiness is part of what creates efficiency in that context.


> Spatial memory and all that: she knew where everything was even though it looked messy.

This is something my wife and I have figured out over time - If it's your own space and nobody else uses it, it's fine to descend to 'chaos' because you can just remember where you last put something.

That model breaks if the space is occupied by more than one person, because where one person put something isn't witnessed by the other person. So, having designated places for things and returning them to their 'home' makes the space most usable by multiple people.

Personally, even for solo spaces, I still favor everything having a home because I don't have a good memory for where things were placed.


I've been mad at my parents plenty of times for cleaning up my room as a kid for the same reason: even though it might have looked like a mess, everything naturally ended up in the most obvious place for me; a kind of mind-environment feedback loop settling on an equilibrium.


Joel has a post about working in a bakery: https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...

His idea is that there is a pretty high standard of cleanliness in a bakery, just not the obvious one. A similar "hidden rule" presumably applies to cities etc.


I'm almost certain I've read this article before, but I think I've read a lot more about types since then. In particular, this line is hilarious:

> I’m using the word kind on purpose, there, because Simonyi mistakenly used the word type in his paper, and generations of programmers misunderstood what he meant.

It sure doesn't sound like Simonyi was the one making a mistake here! It sounds like Hungarian notation is what you use to compensate for missing types; it got its bad reputation from people using it for types that the language already provides, because that's easier.


I strongly disagree with this. Software should not be written to suit the machine. Software should be written to accommodate the weakest link: the humans who will need to figure it out six months from now, when its current programmer departs for greener pastures. If there's no rhyme or reason to it, it fossilizes immediately and costs a lot of money to rewrite (often into another fossil). So whatever effort you spend to reduce complexity and cognitive load will pay for itself several times over. As a rule of thumb, the hallmark of a good system is when your guesses about how something would be done are right almost 100% of the time. You don't arrive at this state by accident.


Yep. What this article skips over is maintainability in all of the things it discusses, and in most cases also flexibility.

The optimization it praises has only a single focus (operational efficiency), whereas if you take into account all the aspects of the things in question, you suddenly see that the way things are is usually a solid compromise between the aspects that really matter. That includes human interactions such as analysis, repair, modification, and so on.


Ditto testability. For most messy code I’ve cleaned up, no one could say if it worked correctly. (Spoiler: no.)


1. I'm sending this link to my partner to justify my messiness, not sure she will buy into it... but....

2. It's not too surprising given the way natural selection finds efficiencies in quite complex structures. But trying to reason about how those structures work is tricky. When engineering things, being able to reason about a system is often a lot more important than that system being close to maximally efficient. Main point is, I think you should always be aware of what trade-offs you are making... much like my point #1: if you use this to justify being messy, then it's probably the wrong trade-off, but shhh, my partner doesn't read this :)


Clutter and tidying is simply a time debit/credit ledger.

If you had unlimited time, you would tidy and organize everything in real time. However, we don't have unlimited time and so we borrow time from the future by leaving things slightly messy.

The behavior to avoid, as with money ledgers, is not the borrowing itself, but borrowing and never paying it back.


You gain back some efficiency by cleaning up all in one go at a later time! Think of trips to the sink with the dirty dishes.


Like all things, everything in moderation. Excessive clutter is choking and restricts the creative and productive workflow. Allowing yourself the room to make a mess while in a project though lets you work without constraints. Also I find myself tidying when I'm trying to avoid a problem. Which can be a waste, or an opportunity for your mind to work on a problem while your hands are busy (like a shower thought).


Very interesting examples in the article; the "topological optimization" one [1] made me think of the Vauban fortifications. As even a cursory look at the dedicated Wikipedia page shows, they're mostly pretty symmetrical and especially well organized, as can be seen in the Alsatian town of Neuf-Brisach [2]. I wonder if the "ideal" solution to the problem Vauban set out to solve (protecting towns against artillery fire) isn't a lot less symmetrical, like in that topological optimization example where the "ideal" AI-found solution is a lot more "messy".

[1] https://twitter.com/jo_liss/status/674332649226436613/photo/...?

[2] https://en.wikipedia.org/wiki/Neuf-Brisach#/media/File:Neuf-...


The author mistakes order for purity. The end of excessive purity is the same storyline as Aquaman. The dichotomy of chaos and order is still helpful.


I think the point he is trying to make is more about "...mistake complexity for chaos, and rush to rearrange it".


I learned this by observing wear patterns; in foot-traffic routing these are called "desire paths."

You can also see them in wear patterns on old handles and stairs.

Hence I think, before you try to change a system, it's important to know what your baseline outcomes are, and then instrument the system to measure where these "desire paths" exist in complex systems like computing.
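
A cheap way to do that in software is to count where the traffic actually goes before reorganizing anything. A sketch in Python (all names here are made up for illustration):

    # Find the "desire paths" in a codebase by counting how often
    # each entry point is actually exercised.
    from collections import Counter
    from functools import wraps

    usage = Counter()

    def instrumented(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            usage[fn.__name__] += 1   # record every call by name
            return fn(*args, **kwargs)
        return wrapper

    @instrumented
    def export_csv(rows):
        return "\n".join(",".join(r) for r in rows)

    @instrumented
    def export_pdf(rows):
        raise NotImplementedError     # rarely asked for, it turns out

    # After running for a while, usage.most_common() shows the worn
    # paths worth paving; the rest is grass nobody walks on.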


I really like this article. Certainly makes me think about my own biases towards "straight lines" as the most "efficient" way of doing a job.


Hmm, I thought that Amazon was famous for how they standardized their cross-company software? (With AWS as a side effect.) Or is this not the case anymore?


IIRC, they only standardized the interfaces, not the implementations. You could write your team's code any which way you wanted, but you had to expose a consistent REST API in front of it all and publish its specification to the rest of the company.


They didn't standardize the software; they standardized the requirement that all software be accessible by API, and that all software be designed in such a way that it could be opened to third-party users if the need arises.
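
The principle in miniature, with made-up names (not Amazon's actual APIs): the contract is standardized, and the implementation is each team's business.

    # A stable interface with interchangeable backends.
    from abc import ABC, abstractmethod

    class OrderService(ABC):
        @abstractmethod
        def get_order(self, order_id: str) -> dict: ...

    class PostgresOrderService(OrderService):
        def get_order(self, order_id):
            return {"id": order_id, "backend": "postgres"}

    class DynamoOrderService(OrderService):
        def get_order(self, order_id):
            return {"id": order_id, "backend": "dynamo"}

    # Callers depend only on OrderService; swapping the backend never
    # breaks them, because the contract is what was standardized.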


Right, the right way to do standardization! It's a shame the author didn't mention this very specific requirement, though; some readers might think that the interfaces should be messy too...


That's what I was thinking when I read the article.

When I worked at Amazon, everyone worked out of the same repo(s), used the same build system (Brazil), deployment system (Apollo), deployed to the same place (EC2), most APIs were written using the same framework (Coral), etc. This is not that unlike the other large tech companies, and now that I'm back at a large non-tech company where everyone builds their own slightly-incompatible tools and deployment pipelines, I do miss that standardization.

Sure, some people chose to use different tools here and there, but most people wouldn't choose to reconstruct the universe just to stand up a microservice. It's not like the "two-pizza teams" were all doing completely different things with incompatible tooling.

It's been a while since I worked there, so things might've changed completely since then. But as a shareholder, I hope not.


If standardized means APIs, yes. If standardized means anything to you about implementation, even at the level of “don’t reimplement what another team - or even another part of your own team - already implemented” then no.


Our industry is woefully ignorant of human psychology. We also invest very little in understanding qualitative improvements in software systems - as engineers we don't like anything we can't readily measure.

Anyone familiar with the magical number seven knows that systems need to be simple not for the sake of the computers, but for the sake of the people who work on it. As much as we like to think of ourselves as geniuses, at the end of the day we're animals with a finite amount of brain power. We cannot grasp large complex systems, and we need compartmentalization and simplicity so we can keep building on top of what we already have.

Maybe one day the computers will take over programming for us, but that day isn't today.


> London's tube map only uses 45° angles to aid its human readers. Now can you see the humanness in mainboard design?

What was that Russian electronic board design software where nothing is vertically/horizontally aligned and no trace is straight?



Don't know about the Russian software, but for any high-speed PCB design the trace layout matters, and 90-degree bends are to be avoided. On motherboards you'll often see traces with an extra squiggle or two to equalise the length (and thus delay) because they happened to have a shorter distance to cover.
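
Back-of-envelope on why the squiggles exist (my numbers; ~170 ps/inch is a rough figure for outer-layer FR-4 traces and depends on the stackup):

    # Skew from a trace-length mismatch vs. a bit period.
    PS_PER_INCH = 170.0                # rough FR-4 microstrip figure

    def skew_ps(mismatch_inches):
        return mismatch_inches * PS_PER_INCH

    # At DDR4-3200 a bit period is ~312 ps, so half an inch of
    # unmatched trace already eats ~85 ps of the timing budget:
    print(skew_ps(0.5))                # -> 85.0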


Mostly, 90-degree bends are avoided (in favor of two 45-degree bends) because it makes the routing easier, or because it looks better to the person doing the layout. If you are operating with signals fast enough that a 90-degree bend is a problem, you are also avoiding layer changes and many other things like the plague.


Interestingly, I contend that the prevalence of 45-degree angles is in large part due to the capabilities of ECAD software. If you look at old manually-routed PCBs, 45-degree bends are rare. If you try to make a design with non-45-degree bends in most ECAD software, prepare for pain, as many of the features of the layout engine fail to work properly (I experienced this after being inspired by some images of TopoR-designed boards).


DOS based?


Take a moment and consider the quest to find the fastest multiplication algorithm. For millennia, we used the classical method. (Shift and add: 3 lines of pseudocode.)

Then, along came Fast Fourier Transform based algorithms.
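
Roughly this, as a sketch in Python (operating on bits rather than decimal digits):

    # Classical shift-and-add multiplication: O(n^2) bit operations
    # for n-bit inputs.
    def shift_and_add(a, b):
        result = 0
        while b:
            if b & 1:        # low bit of b set: add the shifted a
                result += a
            a <<= 1          # shift a left (multiply by 2)
            b >>= 1          # move to the next bit of b
        return result

    assert shift_and_add(37, 41) == 37 * 41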


The Karatsuba algorithm was found before Schönhage-Strassen.
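
(And it's a nice middle step: by splitting each number in half and reusing one product, Karatsuba needs only three recursive multiplications instead of four. A sketch:)

    # Karatsuba multiplication: O(n^1.585) instead of O(n^2).
    def karatsuba(x, y):
        if x < 16 or y < 16:                   # small-number cutoff
            return x * y
        n = max(x.bit_length(), y.bit_length()) // 2
        hx, lx = x >> n, x & ((1 << n) - 1)    # split x into halves
        hy, ly = y >> n, y & ((1 << n) - 1)    # split y into halves
        a = karatsuba(hx, hy)                  # high * high
        b = karatsuba(lx, ly)                  # low * low
        c = karatsuba(hx + lx, hy + ly) - a - b  # cross terms, reused
        return (a << (2 * n)) + (c << n) + b

    assert karatsuba(1234567, 7654321) == 1234567 * 7654321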


> Streets arranged in grids, people waiting in clean lines, cars running at the same speed…

Grids make it easier for humans to navigate, make public transport easier to plan, and improve the city's connectivity.

Waiting in clean lines reduces the chances of fighting in the queue.

Cars running at the same speed dramatically reduces the chances of a lethal impact.

And what exactly is the cost of these things? Is a grid that much harder to make? Does anyone want to cancel the dedicated highway, or let cars stop in the middle of it?

I think this author just doesn't understand the concept of efficiency at all.

> “Please clean up your room,” asks the mother. “Fool,” retorts the three-year-old with an eerily deep voice. “Can’t you see the beauty in my glorious chaos?”

Sounds like he's still upset about that...


I wrote the following:

> Symmetry underlies almost everything in mathematics and nature.

> It's much more reasonable to assume that our computer programs are not yet good enough to recover that symmetry, than to take the output of current programs as some sort of evidence that asymmetry itself is some sort of ideal.

And then I looked out my window at a tree without leaves, between two buildings, and I'm looking at how unorganized the branches look, and I'm not so sure anymore. It's like the "spherical chickens" joke.

(Sidenote: it's incredible how angry and volatile this comment section is --- my original comment included. Seems this subject can really strike a personal nerve in many of us. Why?!)


Unorganized at first glance, right? And yet a tree (on average) is perfectly balanced in terms of weight distribution around the trunk.


Both messiness and orderliness have their own place in work.

I find that to be particularly true when you are starting to experiment with some product or solution. At that time, you need to be messy (hacking a solution). The initial need is the ability to move quickly, trying ideas and discarding the ones that don't work.

Then you reach a point where you have figured out a very elegant solution that a user of the product would just love. That is the right time to optimize for legibility and orderliness (refactoring), before your solution turns into a mess that works very well but that no one can understand or update.


Does it follow from this that while you could train a large machine learning program to do a task, separating that monolith into independent parts would be highly unlikely to succeed? For example, from the original large program you could train an algorithm that does object identification, and then train an algorithm that does object boundary identification considering only the red value in a given photo. However, if you start with the components of an object identifier, you might not be able to reach the state of the art, or at least those two approaches would look very different.


Chaos is not necessarily so bad, and order is not necessarily so good. Many people think they are, and they are wrong. I think even the Principia Discordia mentions something like this, doesn't it?

And yet, many people today don't like C and assembly language, and hate "goto" and other such things, but I think they are good.

(Also, I do not always clean up my stuff, because wherever it is, I know where it is, and I do not have to move it again when I next use it.)


Struggling to remember, but aren't the 45-degree bends on a circuit board something to do with etching the traces consistently? In a lot of electronics the board is optimised for RFI, or for having the lowest inductance (necessary for passing high frequencies). The grid pattern is optimised for cheap assembly.

I think the author may be wrong here; the pattern is actually the way it is for very complex reasons.


I just started playing Opus Magnum and so this article resonated with me. In playing, I've found myself throwing out all optimization on the three metrics used: component cost, speed, and size.

Instead, I've been designing for aesthetics, for symmetry or clean lines or linear solutions. When I've given in to the desire to chase the metrics, the results have been much messier in appearance.


I spent much of today doing some woodwork in the garage, and due to its untidiness, probably spent as much time looking for various tools as I did using them. (Not to mention tripping a time or two).

Working in a large code base that has little structure/consistency is another example of high cost/friction.

There is some benefit to organization.


> both these algorithms and nature are optimizing for efficiency.

nature certainly doesn’t optimize for “efficiency”


It's funny how my mom always told me to clean my room, while I see her own room as messy. It seems that efficiency and tidiness are also in the eye of the beholder.


Now we have a reference we can refer people to when they complain about our spaghetti code...


Hmm, point taken. And yet, legibility is itself valuable and efficient too.


Great refutation of functional programming.


Scott Alexander wrote up a review of this book on Slate Star Codex. At over 11,000 words it's pretty hefty and goes quite in-depth.

https://slatestarcodex.com/2017/03/16/book-review-seeing-lik...


Just like my codebase


This is a strawman argument. And I wish people would stop dismissing this type of system refactoring as some idiotic attempt to institute order for the sake of order.

Tidying systems up so they are organised "from my perspective" is not done "for my benefit". It's done to make things easier to change.

Yes, it can make things less performant; no, most of the time that is irrelevant.

Refactoring is not optimisation. They are different things and both need to happen.

They both require you to make trade offs against future difficulties.


This reminds me of that time blowhard Richard Dawkins dissected an animal (can't remember which) and proceeded to bang on about how stupid the vagus nerve is, wandering all over the place as it does for no apparent reason. People used to say the same of the appendix and adenoids: that they're pointless vestiges, the blindness of evolution, contingent structures. Dawkins said obviously the nerve should simply connect directly instead of following its ancient evolutionary program up and down and around the organs.

A classic case of Chesterton’s Fence


He said it would have been stupid had someone designed it that way, but given that it evolved, it made perfect sense why it was the way it was.


He said it was understandable why it was the way it was, but implied that it "would be better" if it went directly; his tone was scoffing.


Are there any interesting theories? I think a lot can be attributed to just randomness. At some point in the evolutionary process we got infected by a bacterium; the mitochondrion is now a vital part of our body's energy system...


It's the recurrent laryngeal nerve: one of the nerves that goes from the brain to the larynx, and it is hooked under the top part of the aorta.

In the original version, in some animal like a fish, the top part of the aorta was in the gills, so the path from the brain to the larynx through the gap below the aorta was straightforward.

A few million years later, when we lost the gills and got a neck, the top part of the aorta moved into the chest to avoid a stupidly long trip to the neck and back (which would probably have cost a lot of blood-flow speed). The nerve was still hooked, so as the aorta got shorter, the nerve had to go down into the chest, pass below the aorta, and then go back up to the larynx.

In the case of the giraffe, the trip down to the chest and back up to the larynx is stupidly long.

The problem with evolution is that mutation can only make small changes (most big changes are simply deadly). The connections of the nerve and the artery form very early in the development of the fetus, when it still looks almost like an ugly fish. You can't untangle the nerve and the artery later.

More details: https://en.wikipedia.org/wiki/Recurrent_laryngeal_nerve


It was a giraffe, for which that nerve just seems out of place.



