Getting Ahead by Being Inefficient (fs.blog)
244 points by yarapavan on Jan 28, 2019 | 76 comments



Most of the commenters here get it.

Efficiency is an outcome of optimization, and optimization is a form of specialization with respect to the environment/assumptions. With any kind of specialization there's a trade-off. If the environment or set of assumptions changes, one can be worse off than if one had not optimized at all.

Also, optimization at the wrong level of detail/abstraction can be costly. One example is in auto manufacturing.

U.S. car manufacturers have traditionally tended to spec tight tolerances at the component level, with the assumption that everything will fit when assembled. This can be very expensive to get right, and the assumption of perfect final fit is not always borne out.

Japanese auto manufacturers, however, despite their reputation for perfectionism, have tended to be looser with component-level tolerances, but paid more attention to assembly tolerances (functional build [1]). They understood which tolerances needed to be tight and which mattered less in the final build. They embraced natural imperfections and made sure the rest of the system accepted the lower part tolerances. It turns out this led to higher overall quality at lower cost. (Detroit has since embraced functional build.)

(The Japanese approach is analogous to focusing on integration testing as opposed to exhaustive unit testing)

[1] Functional Build https://www.adandp.media/articles/building-better-vehicles-v...
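
A rough Monte Carlo sketch of why this works (toy numbers of my own, not from the article): component deviations add in quadrature, so an assembly can stay within its spec even when each part's tolerance is loosened considerably.

    import random

    # Toy model: an assembly "gap" is the sum of 5 component deviations.
    # Functional build specs the assembly, not each individual part.

    def fraction_in_spec(part_sigma_mm, n_parts=5, n_assemblies=100_000,
                         assembly_spec_mm=0.5):
        """Fraction of assemblies whose total stack-up stays within spec."""
        ok = 0
        for _ in range(n_assemblies):
            stack_up = sum(random.gauss(0.0, part_sigma_mm) for _ in range(n_parts))
            if abs(stack_up) <= assembly_spec_mm:
                ok += 1
        return ok / n_assemblies

    # Deviations combine as sqrt(n_parts) * part_sigma, so tripling the part
    # tolerance from 0.05 mm to 0.15 mm still leaves roughly 86% of assemblies
    # inside a 0.5 mm assembly spec -- far cheaper than chasing 0.05 mm parts.
    for sigma in (0.05, 0.10, 0.15, 0.25):
        print(f"part sigma {sigma:.2f} mm -> {fraction_in_spec(sigma):.1%} in spec")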


Extending your example: I've heard from mechanics that looser tolerances also lead to cars that age much better as the components change in shape due to wear and corrosion. Expensive cars have expensively machined parts to avoid the need for gaskets, while ordinary cars use gaskets to avoid the need for expensively machined parts. This trade-off is counter-intuitive for most car buyers.


I too watch Scotty Kilmer.

https://m.youtube.com/user/scottykilmer


That guy is way more entertaining than he has any right to be. His voice, the way he waves his hands around when he's talking, his love for ancient Toyotas: he just has a great personality for that kind of thing.


Most cheap gaskets age less than gracefully.

Soft, elastic (but not ductile) parts with good aging properties are where it's at. Well fastened, of course.

Spring steel is best but heaviest, then certain aluminum alloys, and finally a few specific plastics.


Gasket material is probably one of those components where you should have tight tolerances.


There are more dimensions of efficiency than are really covered in that article though.

To continue in the automotive space, building a system to efficiently retool a production line will pay longer-term dividends when the consumer market shifts, as it often does. Whereas learning to build an SUV more cheaply ends up being a liability every time there's an oil shock.

The main negative consequence of building an efficient CD pipeline is dealing with all the pushback at the beginning from people who think it's 'too hard' or 'not worth it'. All that work is still valuable as your customers shift focus, or you change your market mix.

Meanwhile, learning to build the best React apps possible will end up being a problem when React goes out of fashion.

Both can be painted with the same efficiency brush, if you're not very careful with your definitions.


Reminds me of the origin of jitter: planes used to be built with the highest manufacturing precision, but engineers realized that the more precise the parts, the higher the failure rate. Too much precision created coupling and nodal points of failure. Loosening things up gave errors a way to work themselves out through the mechanics.


Sounds almost like a bias/variance tradeoff!


In the computer world, a similar idea was building a highly-available cluster using consumer hardware. This was also counter-intuitive and revolutionary when it was introduced (20 years ago?).


Tom DeMarco wrote Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency[1] back in 2002. It talks about how keeping slack in human systems makes them more resilient and humane. I first heard about it from Joel Spolsky, who started Trello and Stack Overflow[2].

Slack is a great read for those responsible for managing software teams and illustrates that there are no new ideas under the sun; there are just repackaged ones.

[1] https://www.amazon.com/Slack-Getting-Burnout-Busywork-Effici...

[2] https://www.joelonsoftware.com/2005/11/22/reading-list-fog-c...


That's exactly where my mind went as well.

You want some amount of inefficiency in your system to accommodate sudden changes.


Oh boy, the rabbit hole. Check out this: https://en.wikipedia.org/wiki/Slackware

Yep, a Linux distro based on the idea of Slack, first released in '93.

And just as there's the Spaghetti Monster, there's even a religion around Slack: https://en.wikipedia.org/wiki/Church_of_the_SubGenius

Founded in '79. A short quote from Wikipedia: "the group holds that the quality of "Slack" is of utmost importance—it is never clearly defined"


I've been using Slackware as my primary OS for a few years now (even my work computer runs it). In retrospect, Slackware fits the article's mentality remarkably well; instead of the approach taken by most Linux distros (where everything is neatly and tightly integrated with a dependency-resolving package manager, a dependency-resolving init system, and all that jazz), I work with a system where, sure, maybe some of the pieces don't fit together perfectly, but they're readily adaptable to all sorts of different situations. It's a less fragile system precisely because it's built around accepting the components for what they are instead of trying to patch them to "perfection". And of course, the conservative component choices certainly help, too.


Efficiency is a sub-goal of getting things done. If you cannot get the job done, you are out of the running no matter how efficient you are at your failure.

Henry Ford got more and more efficient at making the Model T. At first his customers appreciated the lower prices he was able to deliver. However, as time went on, his competitors, who were not as efficient at building any one car, were able to build new cars with features like electric start that were worth paying extra for. The Model T could never get those features: the assembly line was too efficient to allow the extra steps needed to add new parts, and the jigs had been optimized to the point where they couldn't be changed without a large set of other changes that his lines couldn't handle.


In other words, he overfit his model (T).


Somehow this resonates with the demise of Nokia. Every bit of the manufacturing and supply chain was tuned to the top but the product to be sold couldn't be changed to something customers wanted to have.


Is that a fact or an analysis? Why is it hard to add a step to the line, but easy to train all the car assemblers to install/assemble the electric starter? That's the whole assembly-line innovation, which is going strong to this day.


Various books report that, but I don't know how true it is (given the politics in Ford it is probably impossible to say).

Ford optimized his assembly line. To add a step you need to physically move all stations on the assembly line - these were bolted (or even welded) in position. Plus you would have to move all the sublines feeding the line after that step over one as well. Before you can start that, you need to expand the building, because everything was designed to fit exactly what the Model T needed. Sure, you could add a step, but only at great expense (and you would have to deal with the politics; CEO Henry Ford was against changes to his car).

Today all manufacturers know their assembly lines need to last longer than the cars. They design slack in from the beginning so that an additional station can be added someplace. They also make sure that operations can be reconfigured easily. That isn't to say an assembly line can produce anything, just that the line will be built with enough room that minor changes can be made from year to year. Every few years they will tear down an assembly line and rebuild it over a month to make a whole new type of vehicle.

Note that Honda runs one line for everything, while Ford has assembly lines dedicated solely to the F-150. There are trade-offs to flexibility as well. Honda cannot make large trucks because they wouldn't fit in its stations. Honda is also limited in how many models it makes, because eventually it runs out of space for jigs at each station. By contrast, Ford doesn't have to pay for space to store jigs that aren't being used at the moment, or for all the complexity of getting the right one in place as needed. These are complex trade-offs, and both companies have made their bets. Both have been successful.


This is a very good point; I missed it earlier when reading about Ford. I'll remember it for the future. Thank you.


I remember in the late 90s when the place I worked at was all about JIT. Not compiling, but physical stock - having the items they needed just as they were needed. This was the Physical Plant for a large university campus, so considerable money was involved in storing and shipping goods.

The execs were delighted with the idea of saving money.

The grizzled workers just raised an eyebrow, made a few passive-aggressive comments about how, in their day, using up the last box of something with no backup on hand was considered a disaster waiting to happen, then shrugged and did as they were asked.

I left before I saw how it played out, but I suspect that there was some good trimming of backlogs and storage of materials that really could afford to wait, but far too many cases where JIT wasn't IT enough. It's like...creating a breeding ground for Black Swans.

The article made me think of that - high efficiency all the time means little-to-no-buffer for unusual needs.


I worked in manufacturing (semiconductors) just long enough to see JIT go out of fashion and get replaced by bottleneck optimization. The idea is that you distinguish between those cases where it's OK to have extra stock (because it's cheap, durable, and not bulky), and those where it's not. Then you have a small number of bottlenecks that you have to watch like a hawk, because having extra capacity there would be expensive. You can react quickly to problems at a bottleneck, because your attention is always there, but for most items you don't try to keep the least possible inventory. It was a sort of optimized compromise between the older system (lots of extra stock for all steps) and JIT (minimal stock for all steps).
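
A toy sketch of that split (invented item data, not my old employer's actual rules): cheap commodity items get generous buffers, bottleneck or costly-to-hold items get minimal buffers plus constant monitoring.

    from dataclasses import dataclass

    @dataclass
    class Item:
        name: str
        unit_holding_cost: float   # cost of keeping one unit in stock
        is_bottleneck: bool        # scarce capacity you can't buffer cheaply

    def stocking_policy(item, weekly_demand):
        # Cheap, durable, non-bulky: just hold generous stock and forget it.
        if not item.is_bottleneck and item.unit_holding_cost < 10.0:
            return f"{item.name}: hold ~4 weeks ({4 * weekly_demand:.0f} units)"
        # Bottleneck or expensive to hold: minimal buffer, watched like a hawk.
        return f"{item.name}: hold ~3 days of stock, alert on any variance"

    for item, demand in [(Item("solvent", 2.5, False), 40),
                         (Item("photomask slots", 900.0, True), 12)]:
        print(stocking_policy(item, demand))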


This is an interesting article from 2002 about Dell and their JIT process. Compaq is mentioned as an aside, but Compaq is a good comparison of what can happen when you have too much inventory on hand.

https://www.chicagotribune.com/news/ct-xpm-2002-07-29-020729...


> If the company receives orders for an unusually large number of a component, such as a certain kind of Pentium 4 chip, Dell goes on red alert, rushing to wring out emergency shipments from suppliers.

> If the components are selling too fast, executives can instruct Dell's Web site administrators to offer customers a deal--a discount on a better component or, in dire emergencies, a free upgrade.

These are the vital parts - if you have no buffer to handle irregularities, how do you handle them? In Dell's case, it's (presumably) paying extra to get emergency shipments and losing some revenue via discounts to adjust demand, and those situational costs are (presumably) worth the general case improvement.

If the math works, great! Neat! Inspirational, even. I think some people were inspired who didn't consider the math for their industries though.


I didn't read the link, but JIT done right means you figure out what kind of buffer you need and keep it as small as possible, but no smaller. Dell keeps a buffer, just not a large one. Yes, Dell sometimes loses by having to pay extra for emergency orders. But sometimes the alternative loses from having parts in the warehouse nobody wants, which have to be disposed of.
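
The textbook version of "as small as possible but no smaller" is a reorder point. A minimal sketch using the standard formula with made-up numbers:

    import math

    # Reorder point: restock when on-hand inventory drops to
    #   ROP = (avg daily demand * lead time in days) + safety stock
    # where safety stock covers demand variability during the lead time.

    def reorder_point(avg_daily_demand, demand_std_dev, lead_time_days, z=1.65):
        # z = 1.65 targets roughly a 95% service level (normal-demand assumption)
        safety_stock = z * demand_std_dev * math.sqrt(lead_time_days)
        return avg_daily_demand * lead_time_days + safety_stock

    # e.g. 20 units/day on average, std dev of 6, 3-day resupply:
    print(round(reorder_point(20, 6, 3)))   # ~77 units, a far cry from Qty=1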


The article OP posted was from 2002 and as of 2018 it looks like Dell is still using JIT methods. Looks like the math was right.


Shouldn't an efficient JIT process define an ideal inventory quantity level that triggers restocking?

"Qty=1" is definitely inefficient.


My local grocery store switched to JIT stocking of shelves: they’re now perennially out of 1 in 10 basics, and I just shop online. Amazon Fresh can routinely be out of things too, but at least they don’t waste my time doing it.

The only value in a grocery store was that it was a warehouse of food stocks that smoothed out the irregularities suppliers face. If I have to deal with supplier problems because they JIT stock, I may as well just get deliveries directly from suppliers.

You can’t “JIT” when your entire value is supply smoothing, because you’re increasing the failure modes of your core business for tangential benefits.

People keep showing me math that says it works, and places keep trying it, but I’ve yet to see it do anything but significantly damage their core business for questionable benefit.


It can be done well as seen with Japanese convenience stores. They keep no stock but never seem to be out of anything.

I’m not an expert but I think it’s a combination of insanely good modeling for each specific store and a super efficient and fast restocking pipeline.


I can't seem to find a reference, but I remember hearing years ago about someone trying to adopt the Japanese convenience store stocking strategy. Apparently, the small stores have good relationships with their peers and exchange goods as needed. The distributed network substitutes for deep stock.

Similarly, when I worked for Pizza Hut years ago about once a month we'd run low or out of something and would phone up 1 or 2 nearby stores and send a driver out to make the swap.


Walmart has JIT down to a proactive science. If they see a heatwave coming, they allocate more ACs and water to a region.

It takes years to train good JIT supply chain models.


While we’re calling out good examples: Toyota’s Kanban system, as a formalized method, is the root of most JIT manufacturing and business practice. Toyota was very successful with it.

I didn’t mean to imply it couldn’t be done, but rather three points:

- It’s very challenging to get right, requires a lot of buy in and restructuring, and takes time to transition to.

- It’s an extremely risky thing to do if, unlike Toyota, your value proposition isn’t manufacturing, but instead smoothing the supply between a manufacturer and a customer.

- Most people implementing JIT have a poor model of the variances, and undervalue disruptions (in terms of harm to customers, lost goodwill, etc — the intangibles).

Taken together, I think it’s extremely challenging to do JIT for grocery stores, and they’re basically building a black swan nest, no matter how careful.

Spoilage in exchange for consistency is what I pay them for: removing that risks removing the value they deliver me.


I think people are taking me way too literally here -

Your company uses toilet paper in the bathroom. No one suggests that you have no extra toilet paper on hand. Instead, someone says, "we use 3 rolls/week on average".

Dumb person: "So let's never have more than 3 rolls spare"

Better: "While 3 is the average, 5 covers 80% of the weeks. The remaining weeks can be covered by having John run down to the corner store to grab an extra." (Note: ASSUMING someone has done the math and decides the opportunity cost of storage is greater than the opportunity cost of those 20% of times)
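
(The "5 covers 80% of the weeks" step is just a percentile of the usage history. A sketch with made-up numbers:)

    import statistics

    weekly_rolls_used = [2, 3, 3, 2, 4, 5, 3, 2, 6, 5, 4, 3]   # made-up history

    mean = statistics.mean(weekly_rolls_used)
    p80 = sorted(weekly_rolls_used)[int(0.8 * len(weekly_rolls_used))]

    print(f"average: {mean:.1f} rolls/week, 80th percentile: {p80} rolls/week")
    # Stock to the percentile you can live with, handle the tail another way
    # (John's corner-store run), and price in the weeks when both fail.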

My concern: Joan's retirement party serves bad tuna casserole on the same day the corner store is out of stock.

JIT can totally work - cutting the money wasted on storing something that might end up thrown away can be an easy win. But you have to consider the other cases. To tie this back to the article: if you're operating at capacity all the time, what happens when something comes up and you need more capacity?


Until the week there's a national toilet roll shortage and you have to shut the office for a week because you don't have any toilet roll and lose $100,000s.

Both our examples are glib, but it's very easy for JIT to turn in to a "penny wise, pound foolish" scenario, where you're saving small amounts of money, but occasionally losing big amounts of it because of those savings.


Yes, and universities in particular have a level of responsibility to their customers that is much higher than other organizations.


That's great. This probably wasn't the first dumb thing the execs asked them to do.


I do have a story about how several mid-level managers met weekly for months to develop a "more efficient" order of tasks for janitors cleaning the bathrooms. They claimed the new process would save $100,000 over 3 years (IIRC). Reading the fine print revealed that they had also transitioned to transparent sandpaper as toilet paper, which accounted for most of the savings.

I never got to ask a janitor, but I'm pretty sure I can guess how much respect a "do these tasks in this order" list would get, coming from people who had never done the job.


there's a good chance they were put on that task in order to keep them from improving efficiency in other areas


Inefficiency isn't the same as adaptability. It has no intrinsic value. Inefficient people aren't automatically great generalists, and inefficient companies aren't better suited to adapt to new market trends.

This is a bad case of one-dimensional thinking.

All the anecdotes in this thread are about optimizing for a particular outcome and failing to anticipate variations in it. Even the human body stores fat not because it's an inefficient machine, but because it has adapted to (sometimes rare) situations of low food availability.

So, let's not celebrate slacking by claiming it has some intrinsic value.


OK, no they're not the same things, but there is a trade-off there.

Large companies in stable markets reward efficiency. Staff KPIs are all about reducing costs, because that's where the profit growth is (because the market is stable, and so is revenue).

But efficiency is achieved by streamlining processes. That streamlining is almost always at the expense of adaptability. The processes become optimised, but brittle and resistant to change. Which is obvious if you think about it - the process becomes designed to do one thing incredibly well, and the staff become adapted to that one process. Trying to change the process (or staff) makes it more expensive, by definition: because if it made it cheaper it would be an optimisation and would have happened already. "cutting out slack" does equate to making the process less adaptable.

Then something changes and it's hard, if not impossible, to get the process to change and the people to think differently. Managers whose bonuses are tied to their KPIs are reluctant to make necessary but costly changes to their department if those KPIs are all linked to efficiency. This is why large organisations are doing their innovation thinking in smaller, separate "skunkworks" or "labs" units.

Also, why large companies are incredibly efficient at producing profits from a stable market, but get out-competed instantly by smaller, more adaptable, less efficient startups.


> Also, why large companies are incredibly efficient at producing profits from a stable market, but get out-competed instantly by smaller, more adaptable, less efficient startups.

Because if they're worth their money, they can afford it. Large corporations routinely buy small competitors with the profits made from their streamlined operations.


true. And then immediately destroy them by trying to make them efficient ;)


> So, let's not celebrate slacking by claiming it has some intrinsic value.

This is the exact reason why slacking evolved. We are unable to expend all of our energy, because those who could, and who faced an emergency at the wrong moment when they were already exhausted, were either unable to deal with the problem or collapsed and died.


> This is the exact reason why slacking evolved.

Speculation! Slacking individuals probably got turned into food more often than not, until we discovered agriculture.

> those who were and faced an emergency at the wrong moment when they were already exhausted were either unable to deal with the problem or collapsed and died.

Those who weren't prepared probably died. Preparation could mean stocking up in supplies, training, building weapons.


> Speculation! Slacking individuals probably got turned into food more often than not, until we discovered agriculture.

Speculation, followed by your immediate counter speculation!

Yuval Harari, using examples from still-living hunter-gatherers, calculated that their average 'working' week was ~20 hours.


Was about to make a case around how indigenous cultures work, but you nailed it. I'll go a step further:

I think the future holds for the world an emergent culture of compassionate giving. A world designed to be loving as a first principle.

In that world, "slacking" is what you do when you let out some rope. Or it's a word that old people use to signal when they're either having a flashback, have chosen to carry on judgmental ways of the past, and/or are triggered. Unless we're talking about the loving world beyond the point where all the slacker-haters died off.


Hi, I'm a recovering information addict. My particular outcome that I optimized for is to learn to sustainably coevolve with information, something I am and am immersed in, according to the perspective that all existence is information.

Inefficiency is not the same as adaptability; it is a strategy for adaptation & one that's more sustainable than perfectionism.

What you subjectively call slacking, I call rest or emotional avoidance. Rest is extremely undervalued in this capitalistically enslaved world; people are literally working themselves to death or debilitation. Emotional avoidance is highly promoted in America; escapism is the norm for those who distract themselves from their emotional growth, whether it be via video games, binge watching stuff, following and debating politics, or just learning things. I'm not saying those activities are bad, btw...simply that people are using them addictively to avoid working on themselves & we can see it reflected in social media all over the place.

Slacking to me: remaining the same person I was 3 months ago. If my personality isn't changing, I'm certainly not growing.


The example given in TFA is poor, as is (IMHO) the entire thesis. The actors change their behavior on an evolutionary timescale, whereas the environment sees periodic rapid events ("punctuated equilibrium") to which no individual, specialized actor can react quickly enough.

Whereas we, as SWEs or ops or devops or business managers, can react to a changing environment. When you are in a highly competitive environment, it pays to be more efficient. Unlike the bird, whose beak (e.g.) may be specialized to reach a specific berry on a specific bush and is immutable, we can change our beak as needs demand.

The article isn't even very good for specialized industry, say making metal coils. Your machinery has to be specialized for it. You don't have a machine that can make both coils and tubes. Because you're able to make very, very specialized coil winding machines, you corner the market on coils. Then the coil market collapses (thanks for nothing, disruptors!). OK, your business fails. SO WHAT! You move on to make specialized tube flatteners or some such.

A better thesis would have been along the lines of not being static. Specialization is good.


I would take a different approach to reach a similar conclusion.

Assign a task to a junior dev.

- Efficient: blindly copies a solution from Stack Overflow in 30 minutes

- Inefficient: copies the same solution but tries to understand it before committing, taking 90 minutes

The inefficient lad is on his way to becoming a senior engineer. :)


There's always a trade-off between exploration/exploitation - in any optimization, we're incapable of enumerating every path to our intended goal or the associated cost. We need some time to experiment, fail and learn - and ultimately discover a more optimal pathway.
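
The standard toy model here is a multi-armed bandit with an epsilon-greedy policy: spend a small, deliberately "inefficient" fraction of your effort exploring. A sketch (illustrative payoffs, not anyone's real planning process):

    import random

    def epsilon_greedy(true_payoffs, epsilon, steps=10_000):
        # Explore a random option with probability epsilon; otherwise
        # exploit the option with the best estimated payoff so far.
        estimates = [0.0] * len(true_payoffs)
        counts = [0] * len(true_payoffs)
        total = 0.0
        for _ in range(steps):
            if random.random() < epsilon:
                arm = random.randrange(len(true_payoffs))            # explore
            else:
                arm = max(range(len(true_payoffs)), key=lambda a: estimates[a])
            reward = random.gauss(true_payoffs[arm], 1.0)
            counts[arm] += 1
            estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean
            total += reward
        return total / steps

    payoffs = [1.0, 1.3, 0.8]   # unknown to the agent
    print(f"pure exploitation: {epsilon_greedy(payoffs, 0.0):.2f}/step")
    print(f"10% exploration:   {epsilon_greedy(payoffs, 0.1):.2f}/step")
    # Pure exploitation tends to lock onto the first decent option it finds;
    # a little exploration reliably discovers the 1.3 arm.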

I guess that's the motivation behind Google's 20% time (if that was/is ever a thing).

I'd also be interested to know how Amazon approaches the problem. From the outside, they look like a much more top-down organization than Google - keen to hear if/how that gels with "encouraging experimentation and failure".


So back when I was much much younger & brasher, my email signature was "Laziness is an Optimization Protocol". I used that email for applying to my Master's programs, including some of my country's most prestigious (and ergo competitive) places.

And I think one reason I got into one of them is that my future advisor saw this, chuckled, but then proceeded to have a conversation with me about the importance of being "smart" about how we approach questions, and of being efficient about resource use (including time).

I was being cheeky; but he saw that I was kinda / sorta aware of a deeper idea, and helped me develop & identify it. It has stayed with me since.


There is a similar argument about how you don't want your systems running at capacity because there is no headroom for an emergency.


That's a dogmatic approach. A practical one would be to estimate the frequency and duration of emergencies based on experience, plus the cost of being out of service, and compare that with the cost of running below capacity all the time.


Sure, or you could run low-priority tasks that can be dropped when it becomes necessary. This increases utilization without increasing risk.

An example is using water in "wasteful" ways during wet years in order to make sure there is something easy to cut back on during a drought year.

Another example is using flood-prone land for recreation rather than building housing there.
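
In server terms this is load shedding: fill spare capacity with droppable work, and evict it the moment something important arrives. A minimal sketch (hypothetical scheduler, not any particular framework):

    import heapq

    class Scheduler:
        # Keep the machine busy with droppable filler; shed it under real load.
        def __init__(self, capacity):
            self.capacity = capacity
            self.running = []   # min-heap of (priority, task); lowest = most droppable

        def submit(self, priority, task):
            if len(self.running) < self.capacity:
                heapq.heappush(self.running, (priority, task))
                return f"started {task}"
            lowest_priority, lowest_task = self.running[0]
            if priority > lowest_priority:          # higher number = more important
                heapq.heapreplace(self.running, (priority, task))
                return f"dropped {lowest_task}, started {task}"
            return f"rejected {task} (nothing droppable left)"

    s = Scheduler(capacity=2)
    print(s.submit(1, "batch re-encode"))   # filler work
    print(s.submit(1, "cache warmup"))      # filler work
    print(s.submit(9, "user request"))      # sheds a filler task immediately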


This also applies to human resources.


In the machine learning world, this is the equivalent of overfitting to the training dataset. You can have a model overoptimized to the data used for training, and it craters in production because the production data has drifted over time, or because the training set was not representative, for a range of reasons. This is why early stopping is often a good idea when training models, rather than squeezing out the most optimal model for the data being trained on.
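
A minimal sketch of early stopping in plain Python (train_step and validate are stand-in callables for your framework of choice):

    def train_with_early_stopping(train_step, validate, max_epochs=100, patience=5):
        # Stop when validation loss hasn't improved for `patience` epochs,
        # instead of squeezing the last drop out of the training set.
        best_loss = float("inf")
        epochs_without_improvement = 0
        for epoch in range(max_epochs):
            train_step()              # one pass over the training data
            val_loss = validate()     # loss on held-out data
            if val_loss < best_loss:
                best_loss = val_loss
                epochs_without_improvement = 0
                # (in practice, checkpoint the model weights here)
            else:
                epochs_without_improvement += 1
                if epochs_without_improvement >= patience:
                    print(f"stopping at epoch {epoch}: now fitting noise")
                    break
        return best_loss

    # toy demo: validation loss falls, then rises as the model "overfits"
    epoch_counter = {"n": 0}
    def fake_train_step(): epoch_counter["n"] += 1
    def fake_validate(): return abs(epoch_counter["n"] - 20) / 20 + 0.1
    print(f"best val loss: {train_with_early_stopping(fake_train_step, fake_validate):.3f}")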


I believe I've got the gist of it: Keep your options open, and don't put all your eggs in one basket, because s* happens.


Does it make sense to apply the idea from the article to the native vs webview apps competition?

Writing an app natively (C/C++, or Java for Android, Swift for iOS) will give you the best efficiency, but the app will not adapt to platform changes as easily as a webview-based one.

It is also interesting to think about whether VMs or transpiler techs (e.g. web assembly, Haxe, GraalVM) will give us the best of both worlds.


This only works if you're strong enough to be able to endure the increased costs. If you're in an environment where suboptimal performance means elimination, deliberate inefficiency like this can break you.

Can't say I know what to do if you're in a situation like that though.


“If we all reacted the same way, we'd be predictable, and there's always more than one way to view a situation... It's simple: overspecialize, and you breed in weakness. It's slow death.”


It's interesting, since one would assume that most people here would be aware, at least to some degree, of information theory and the necessity of redundancy.


Sure, guess trying to be the best programmer is bad coz in case of a nuclear war i won't be as good at digging as i could.. [/s]


This is a good article in context of what I have been going through over the past few months. The lure of perfection is too great.


We could soon see the impact of a lot of efficient JIT supply chains being disrupted if the UK leaves the EU without a deal.


Not sure why you're being downvoted, as this will be a huge, sad show when it starts. The Economist's article on no-deal disruptions:

https://www.economist.com/briefing/2018/11/24/what-to-expect...

They even stockpiled paper themselves to survive the potential disruption and keep on printing:

> Disclosure: The Economist is stockpiling around 30 tonnes of the paper on which the covers of our British edition are printed, which comes from the Netherlands.


It's impossible to make a factual statement on Brexit without drawing flak. (I'm actually in favour of it, for long-term reasons.) But the car industry and fresh-food supply chains will be affected in the short term. How can they not be? It'll be a good test of the article's thesis.


Efficiency that can adapt to change would be even better


Inefficiency saves us from over-optimization. And over-optimization is the biggest problem of our capitalistic society. A lot of the time, people think only about optimizing and reducing costs, etc. But they fail to recognize that one can over-optimize things as easily as under-optimize them.

So control your greed, don't over-optimize the money you can make from your customers. Otherwise your marvelous growth will break down immediately when the climate just changes a little bit. Think about what happened to https://en.wikipedia.org/wiki/Kodak#Shift_to_digital .


This is a very important article.

It's also absolutely vital to understand that efficiency is not subject to the greedy algorithm. If you try to make every single part of your organization efficient, you will kill your organization.

It is easy to understand this if you are a software developer, because you get to deal with servers. If you run a server at 100% load (for any of the definitions of load -- 100% CPU, 100% memory, 100% bandwidth utilization) what happens? Obviously very often something will eventually mechanically fail, but what happens before that? You see a latency spike -- 100% load is practically the definition of how DDoS works. The efficient utilization of the one component translates to a globally worse outcome, roughly because the component you're optimizing for is not the "right" component.

Similar things often happen when a company decides to fire a worker and rebalance their load across their peers: very often this looks very attractive on paper at first, since you save costs on a salary, but then it starts to hit the remaining deadlines pretty hard, a swift kick right in your revenue stream. Which is not to say "never fire anyone", just that a lot of folks do not evaluate this sort of effect on their cashflow position before they start issuing layoffs.

I know of one company (but not directly, I was not a part of this, take my words with a grain of salt) which had an office in NYC, got into a bit of a tight position, was acquired strategically by a company in Chicago, and got caught in this death spiral. After a year or two the entire NYC office was closed and the new Chicago CTO had to drive a U-Haul from NYC to Chicago with whatever supplies he could salvage; the whole company was shrunk to just two or three folks working out of the sister company's office in Chicago -- I don't know if they had to relocate or were new hires, but if they relocated then it was presumably on their own dime. All of this when the situation seemed quite tractable and solvable before those first layoffs started. There may have been problems I don't know about -- the company might have been in much worse shape than advertised, so again take my analysis with a grain of salt -- but my understanding is that the latency caused some already a-little-dissatisfied customers to bail, which caused another round of layoffs, which caused a bunch more latency, which caused many customers to get really extremely pissed and bail, which caused the closing of the NYC branch, so that only the least-dedicated customers, who were the least likely to notice the changes, were left as a trickle of the former revenue.
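
The latency blow-up near full utilization is textbook queueing theory: in the simplest M/M/1 model, time in the system scales like 1/(1 - utilization). A quick sketch:

    # M/M/1 queue: mean time in system W = 1 / (mu - lambda), where
    # utilization rho = lambda / mu. Latency explodes as rho -> 1,
    # long before the server hits "100% load".
    service_rate = 100.0   # requests/sec the server can handle

    for utilization in (0.5, 0.8, 0.9, 0.95, 0.99):
        arrival_rate = utilization * service_rate
        latency_ms = 1000.0 / (service_rate - arrival_rate)
        print(f"utilization {utilization:.0%}: mean latency {latency_ms:6.1f} ms")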

It works the other way, too. The above is about "inefficiency is excess capacity is fat," and warns that fat is biologically necessary to cushion you against variability. But very often you don't just "trim" the excess capacity; you make more work so that you can utilize a resource to 100% capacity. So instead of just shrinking your server, we're talking about the equivalent of pre-calculating as many web pages as possible so that you can serve them from a cache. This can be a great idea, in moderation -- it is an awful idea if you drive the server or cache to 100% load this way, again because of latency spikes on the inevitable unforeseen loads.

On the human level, this is not a layoff but rather yelling at folks who are idly talking to each other, "you lazy folks, wasting the company's time!" And in addition to adding latency, because your one-off requests now have to wait for a developer to task-switch, you lose oversight of your organization. Everyone is "busy" with something, but mostly it's something unimportant, so it is harder to see "here are the places where people are stuck on important tasks" and intervene.

Say your caching server is very well-behaved and yields to any other requests you have, but it periodically drives the database load to 100%: now whenever you look at your system as a whole, you are desensitized to anything else that drives the database load to 100%. "Probably just the caching server caching that one expensive page" -- but no, it's not, something is seriously wrong and someone is suffering quietly because of it.


I think there is generally a trade-off between optimising for efficiency at the cost of robustness, or optimising for robustness at the cost of efficiency. E.g. one way to get more robustness in IT systems is to add redundant infrastructure. There's even more robustness if the redundancy is decorrelated: e.g. decorrelated in space (multi-region) or decorrelated in terms of vulnerability to other risks ( https://rachelbythebay.com/w/2011/10/27/monoculture/ ). It's clearly cheaper in the short run not to invest in any diversity: have one of a thing, or a monoculture of 12 things.

There's also another aspect: becoming more efficient at doing something often comes at the cost of invested capital and increased ongoing maintenance for the new systems the efficiency requires. Maybe achieving the first efficiency takes low investment for a relatively large reward - a "low-hanging fruit" - but the second or third gives diminishing returns, i.e. more cost of capital or ongoing maintenance drain versus the reward, though still enough reward to be worth doing. Then if the context changes - so the specialised task is no longer worth doing - you're left with the upkeep and opportunity cost of all this now-pointless specialised infrastructure.

Joseph Tainter argues something vaguely along these lines for civilisation collapse, with the diminishing returns of increasing efficiency from increasing social complexity:

> For example, as Roman agricultural output slowly declined and population increased, per-capita energy availability dropped. The Romans "solved" this problem by conquering their neighbours to appropriate their energy surpluses (in concrete forms, as metals, grain, slaves, etc.). However, as the Empire grew, the cost of maintaining communications, garrisons, civil government, etc. grew with it. Eventually, this cost grew so great that any new challenges such as invasions and crop failures could not be solved by the acquisition of more territory.

> In Tainter's view, while invasions, crop failures, disease or environmental degradation may be the apparent causes of societal collapse, the ultimate cause is an economic one, inherent in the structure of society rather than in external shocks which may batter them: diminishing returns on investments in social complexity.

https://en.wikipedia.org/wiki/Joseph_Tainter#Diminishing_ret...

Nassim Taleb has written a bit about robustness, fragility, "antifragility", of investments or occupation, if you can deal with the writing style it's worth reading a book or two. E.g. the concept of a "barbell strategy" to diversify: https://www.nuggetsofthought.com/2018/04/02/nassim-taleb-sen... .

Taleb also spends some time writing about the difference between the realised outcome and the distribution of possible outcomes: the focus should be on the process (was the decision-making, and the understanding of the probabilities involved, sound?) and not on the outcome. Applying this perspective to the bird and the mammal in the Farnam Street post: before the change in environment, focusing on realised outcomes, we might rank the bird population as more successful than the mammal population -- perhaps there is a larger population of birds, or they get more leisure time, or whatever. But when trying to assess the unrealised outcomes, we might conclude that the bird population is in a far less robust position; their outcome under changes in environment or context is much worse than the corresponding outcomes for the mammals. So in some sense, not focusing on the current realised outcome, the mammals are "doing better" even before the turquoise-berry-bush catastrophe.


The point about how quickly investments can turn into liabilities is really insightful - thank you!

There's a social aspect to it, as well. Within one person's head, it's the sunk-cost fallacy. I'm not sure what the term is when the fear of change is spread across multiple groups (coordination problems? I'm open to ideas). But it's clearly a huge problem for complex systems that must manage change.


I haven't yet read the article, but the title immediately made me think of Paul Graham's Doing Things that Don't Scale[0].

[0] http://paulgraham.com/ds.html


After reading it, it also has some "perfect is the enemy of good" messaging.

It also argues that being too efficient (and therefore specialized) reduces agility, and can be dangerous by preventing pivoting.


Overoptimized systems often lose resilience and become fragile. Slop provides a buffer for give and take. This is analogous to materials that have a wide plastic region vs materials that have very high rigidity but a small plastic region.


I've often seen this with coding -- another reason (if you needed another) to avoid premature optimisation is that, in my experience, it often specialises code to one narrow purpose, and can make it harder to make changes later.

Often a simple implementation of an algorithm can be within 10% of the performance of a much fancier one, and is much easier to adapt later to a new use. Even if the less optimal implementation is ten times slower, that still doesn't matter if it isn't on the critical path.


Recalls a fundamental lesson from Nassim Taleb about nature/reality preferring redundancy at the expense of pure efficiency: https://youtu.be/AcTvkt8k0sE?t=563

Pure optimization is a very ruin-prone path to take over time.



