> What about event planners, nurses, military officers?
As a Dutch ex-Navy officer, I can say we just called this "friction": everyone had read von Clausewitz during officer training and was familiar with the nuances of the term. Militaries overwhelmingly address this problem by increasing redundancy, so that there are as few single points of failure as possible. It is very rare to encounter a role that can only be filled by a single person; a well-designed military organization will always have a plan for replacing any single individual, should they unexpectedly die.
Also wrt. solutions in the military setting: a strong NCO corps, competent and empowered to make on-the-spot decisions.
But it is not "oh, we have solved friction". It trades the "combat" friction of having to wait for orders (possibly compounded by the weather, comms jamming, your courier stepping on a mine, etc.) for the "strategy" friction of subordinates taking initiatives they shouldn't have taken. But I'd argue (like modern armies do) the tradeoff is worth it, and the strategic level has more resources to overcome their friction than combat troops engaged in a fight. But it wasn't always the case (cue the famous tale of a Roman consul, Manlius [0], who executed his own son for disobeying, even though he was victorious).
> But I'd argue (like modern armies do) the tradeoff is worth it, and the strategic level has more resources to overcome their friction than combat troops engaged in a fight.
I think the tradeoff is practically mandatory for modern armies. The high mobility they require just to avoid artillery strikes and engagements with armor makes top down command impossible to implement in a symmetric conflict.
I have heard that there is much less of this in the Russian army than in NATO countries' armies--and before that, less in Eastern European countries' armies. The Ukrainian army apparently had to be trained out of that (although IIUC, that training began soon after the fall of the Soviet Union).
I'd enjoy hearing any comments about this--true, false, true-but, etc.!
> It's noticeable how few computer wargames simulate any of this, instead allowing for frictionless high speed micromanagement
In military Command and Staff Training (e.g. for training large HQs), the solution to this is that the trainees don't use the simulations themselves. Instead they issue commands using emulated C2 systems to role players ('lower controllers') pretending to be subordinate forces, who then execute the orders using the sim and report back what has happened, as far as they can tell. This generates quite a lot of useful friction. Another group of role players ('higher controllers') represents the HQ superior to the trainees' HQ and in turn issues them orders. The role players and opposing force are also following direction from exercise control (EXCON) and can easily be used to dial up the pressure on the trainees. There is a small industry (e.g. [0]) supporting the exercise management systems that keep track of the various 'injects' that are fed to the trainees via the role players, or by simulated ISR assets, etc.
That sounds great. A lightweight version of this exists in the Battlefield games, where each side is divided into 5- or 6-person squads; one player is the squad leader and can give orders (e.g. capture or defend this point). Because it's a video game and most people can't or don't want to communicate, it's done that way, and squad members are given a reward for following the order.
In some of them you have a single person who acts as commander of the whole fight and can give orders to each squad. But since only one person can fill that role and many people want it, I don't think they kept that feature for long.
But it's the kind of game where I think if you had a big group of friends with a chain of command and good communication you could easily win any match against an otherwise unorganized enemy, even if their individuals are better players.
> It's noticeable how few computer wargames simulate any of this, instead allowing for frictionless high speed micromanagement.
Friction is simulated in many computer games, the problem is that taking it too far would make them unenjoyable or too niche. Remember they are games first and simulations second (with exceptions; precisely the ones that are too niche).
Friction in computer games is simulated in multiple ways:
- The most obvious one: randomized results. Your units do not do a set damage nor do they always succeed, but instead the PRNG plays a role (e.g. most combat rolls in most wargames, but also whether a missile launched within parameters tracks or fails to in DCS).
- Fog of war: many wargames do not show areas you haven't explored or where you do not have scout units.
- Morale: many wargames simulate morale; units may break if sufficiently scared (e.g. the Total War games do this), and some may even rush to charge without your command, jeopardizing your plans (e.g. Total War, Warhammer: Dark Omen). In the Close Combat series, your soldiers may become demoralized or even refuse to follow orders if you order them to walk through enemy fire or they take too many casualties.
- Some have external unpredictable hazards jeopardizing your unit (e.g. sandworms in Dune II).
And many others. So wargames do attempt to model friction; the problem is that if you make this too extreme, the game stops being fun for the majority of players. The illusion of control is an important aspect of gameplay.
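As a rough illustration of how those mechanics layer together, here is a toy sketch (not taken from any particular game; all names and numbers are made up) of a turn resolver that combines a randomized combat roll, a morale check, and fog of war:

```python
import random
from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    strength: int
    morale: float        # 0.0 (broken) .. 1.0 (steady)
    position: tuple      # (x, y) on a grid

def visible(observer: Unit, target: Unit, sight_range: int = 5) -> bool:
    """Fog of war: you only act on what your own units can see."""
    ox, oy = observer.position
    tx, ty = target.position
    return abs(ox - tx) + abs(oy - ty) <= sight_range

def resolve_attack(attacker: Unit, defender: Unit) -> str:
    """Randomized results plus morale, two common friction mechanics."""
    # Combat roll: damage is a noisy function of strength, not a fixed number.
    damage = random.randint(0, attacker.strength)
    defender.strength = max(0, defender.strength - damage)

    # Morale check: a hard hit may break the defender regardless of orders.
    defender.morale -= damage / 20.0
    if defender.morale < 0.3 and random.random() > defender.morale:
        return f"{defender.name} breaks and routs after taking {damage} damage"
    return f"{defender.name} takes {damage} damage and holds"

if __name__ == "__main__":
    a = Unit("Cavalry", strength=10, morale=0.9, position=(0, 0))
    b = Unit("Militia", strength=6, morale=0.5, position=(3, 2))
    if visible(a, b):
        print(resolve_attack(a, b))
    else:
        print("Target not spotted; the order is wasted this turn")
```

Dial the randomness and the morale thresholds up and you get more "friction", but past a point players stop feeling that their decisions matter at all.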
That first quote is normally attributed to Charles de Gaulle. [0] I wonder if it would have been in character for Napoleon to reflect on the indispensability of anyone but himself.
There are tabletop wargames for the consumer/hobby market that do try to include various kinds of friction in the gameplay. Both the classic Memoir 44 [1] and the Undaunted series [2] have you issue orders from a hand of cards drawn from a deck.
Memoir 44 divides the board into three segments (a center and two flanks), and your cards to issue orders always apply to a specific segment (e.g. right flank). Lacking the cards in your hand to issue the orders you might want simulates those orders not making it to the front lines.
Undaunted explicitly has Fog of War cards which you can't do anything with. They gum up your deck and simulate that same friction of imperfect comms.
Atlantic Chase [3], a more complex game, uses a system of trajectories to obscure the exact position of ships and force you to reason about where they might be on any given turn. The Hunt [4] is a more accessible take on the same scenario (the Hunt for the Graf Spee) that uses hidden movement for its friction.
I don't know how many of these ideas leap across to computer games, but designing friction into the experience has been a part of tabletop wargames for a long time.
> It's noticeable how few computer wargames simulate any of this, instead allowing for frictionless high speed micromanagement.
Games are entertainment, and as with a novel or film, the authors pick and choose to get the level of verisimilitude they think the player/reader/viewer might want. Who wants to take an in-game break to go to the bathroom? When you pick something up (extra ammo) it's instantaneous -- and how come there are so many prefilled magazines lying around anyway? And when you get shot your shoulder doesn't hurt and you don't spend any time in the hospital.
Wargames tend to try to be fun, as opposed to being a realistic simulation of war. Imagine you are playing Napoleon at Ligny: how much fun is it that your reserves receive conflicting orders all day from a field marshal fighting on a different nearby battlefield, and that there are similar town names on the map, leading to your troops coming in late and in a useless location?
You shouldn't even be able to watch the action in detail, Total War style, as you might only have a hill, some messengers, and low-power binoculars. Games have attempted to copy this, but it's a curiosity, not something that drives sales.
A lot of 4X games - including the Total War series - progress from being close to the fight and micromanaging forces to zooming out to an empire view and letting the fights play out without your oversight; it's not the same, but I'd say it's pretty similar. That is, even though you start and end as emperor over everything you control, you can choose how much micromanagement you do. An example is Stellaris, where you can either micromanage your forces, planets, ships, etc., or let them duke it out on their own and hand over micromanagement of your planets to governors by giving them high-level targets, etc.
I think quite a few wargames, both computer-based and pre-computer, simulate friction at some level.
The original Prussian Kriegspiel involved opposing players being in different rooms having information conveyed to them by an umpire (must have been a lot of work for the umpire).
The wargames used at Western Approaches to train WWII convoy commanders made players look through slots to restrict what they could see.
Computer wargames like 'Pike and Shot' often won't show you units unless they are visible to your troops. Also your control over units is limited (cavalry will often charge after defeated enemies of their own accord).
In the novel Ender's Game, the Command School training takes an interesting approach.
Ender is able to see the full battlefield (modulo fog of war) because of ubiquitous faster-than-light sensor technology. But he doesn't control any ships directly. Instead, he issues orders to his subordinates who "directly" control squads of ships.
I've always wondered if anyone's ever made something like this: a co-op war simulation game with instant visibility but divided, friction-laden actions. Nothing about it would be technically difficult. It would probably be socially difficult to find enough players.
> Instead, he issues orders to his subordinates who "directly" control squads of ships. .... I've always wondered if anyone's ever made something like this
See my other comment - lots of real military command training involves the trainees issuing orders to subordinates (role players) who interact with the simulation.
> It would probably be socially difficult to find enough players.
Military training finds them by using real soldiers as role-players (understanding how to handle an order is a useful secondary training effect) and there are also loads of ex-soldiers who will happily (for a small consultancy fee) support an exercise for a few days.
This is how a lot of MMOs like Eve Online worked. You'd have a person or group of people leading the fight and they could see what was happening and would issue orders. But then it would trickle down to different groups and that friction made combat really interesting. Plus there was always latency between issuing a command and the ship acting on the command that was proportional to how massive the ship was. So you could find yourself out of position and unsupported if you moved out of step, and you always had to rely on someone else for the overarching strategy and target priorities.
Eve Online goes even further, with empire leadership making political decisions, alliances, etc. That said, it feels like that aspect of the game is focused on avoiding conflicts, because it's oftentimes a net loss if they cannot control the newly captured territory for long.
It's one reason why I stopped playing, it's the kind of metagame I can't get into without dedicating tons of time and communicating with others. I just want to fly ship and go brrt without fearing other players or having to cooperate with them.
Battlefield 2 (from 2005) and some of the later Battlefield games have a dedicated "Commander" role like that. [0] The friction would be in the fidelity of how your squad spots enemies (allowing the Commander to see them on their map) and whether they actually follow orders (which on public servers was always a question). It was actually a ton of fun if you took it somewhat seriously.
There are some hybrid RTS/First Person Shooter games sorta like that.
A commander who can place buildings or resources, ping locations, and see a bird's-eye view, and then grunts on the ground trying to do what's actually needed.
There was a sci-fi story decades ago (probably in Analog) on this theme. A very realistic war game was set up, which two real-world opposing nations decided to use in lieu of losing real men. The friction caught them off guard. The one incident I recall was when one side deployed a biowarfare agent, but the wind changed and they ended up infecting their own troops. There were other incidents of friction.
Your best bet is probably actually shooters. There are several games that integrate elements of RTS games on top of FPSes, like Planetside 2, Natural Selection 2, Hell Let Loose, Squad, etc. In all of these the individual soldiers are individual players, so you can hardly micromanage them even if you wanted to.
Doubtful tbh, at least in the context of the article. The problem for games that simulate war (or any other environment with "friction") too closely is not that the AI is not good enough, it's that such environments are just inherently not "fun" and thus not good material to make games out of.
Games work on a tight gameplay loop where the player can have feelings of agency (they can influence what happens at all) and mastery (they can get better at influencing what happens with practice). For this you need a relatively predictable relationship between actions and outcomes. Having the game randomly lose the orders you give to a unit, without any feedback, is kind of the opposite of that.
Requiring you to manage a messenger corps in order to dispatch armies past your borders might be a good example of a mechanic that generates friction, though.
That's a nice mechanic and many games have something equivalent to that, but if you can merely pay a resource tax to have everything working perfectly again then it's not friction in the sense that the article is talking about.
The problem with real friction is that, even if you did everything perfectly, orders may still not make it to the unit that has to execute them, or the unit may do something else for reasons neither of you foresaw, or the enemy forces you saw on the minimap are only half the forces that are actually there. Imagine if you were playing some shooter and, randomly, 25% of the time your controller did not respond to inputs at all and another random 25% of the time the inputs were reversed. That would be a super frustrating game to play.
I am surprised by the discussion so far, which at the moment appears to be people poking holes in the shortcomings of friction as a model, and a few talking about the unreasonable effectiveness of some processes.
My surprise is that neither discussion really leans in on the metaphor. Friction, as a metaphor, is really strong, because the way you deal with it changes vastly as a technology matures. Consider how much extra lubricant is necessary in early combustion engines compared to modern electric motors.
More, as you cannot always hope to eliminate the external cause of friction, you can design around it in different ways: either by controlling which parts of a system are replaced more frequently, or by eliminating the need for them entirely, if and when you can. You can trace the different parts of a modern car to see how this has progressed.
Indeed, the modern car is a wonderful exploration, as we had some technologies that came and went entirely to deal with operator involvement. Manual transmissions were largely replaced by automatics for a variety of reasons, not the least of which was wear and tear on the clutch. And the transmission seems to be going away entirely now, given how different electric motors are from combustion engines.
Just FYI, in Europe we mostly use manual transmissions, and the clutch is generally so robust that something else breaks way before it does.
Also, a lot of automatic transmission designs use a clutch behind the scenes, at least in the older models. But I am nitpicking the analogy as it transfers to the clutch system.
I fully agree otherwise that friction is the best term to describe what is happening across the system and within social interactions.
Right, this is part of my point about the metaphor. There is not necessarily a single solution that is obviously superior to the others. You can get lucky and find one, of course. Often, though, it is largely driven by what tradeoffs can be made.
An electric motor is very different from an internal combustion engine. I'm not sure where you were going with the analogy, but an electric motor does eliminate the lubrication needs that an internal combustion engine has (no valves, crankshaft, pistons, transmission, etc.). Analogous, somewhat, to a bad software architecture needing constant care vs. a better one that just works and eliminates a lot of that extra care.
I've learned about this term from the economics side rather than the military side. It's all the hidden factors that make things more expensive. Transaction costs. I do think this is a good analogy for "drag" in software development, something along the lines of "technical debt".
Right, that was my point. You can find ways of eliminating some sources of friction in a system, as was done with electric motors. Before you get to the electric motor, though, you will almost certainly have to deal with friction in other ways. Path dependence is hard to ignore in what we have done to deal with friction over the history of the car.
My assertion doesn't lean on "bad architecture," as I feel those are just different choices and tradeoffs. I do think you should often look for improvements in the tech you are working with rather than replacements. Replacing tech can, of course, be the correct choice. Often, it is a death knell for the people being replaced. We solve that at the societal level in ways that are different from how you solve it at the level of local technical choices.
The article seems insightful on the surface but falls apart very quickly when you take time to analyze what the author is actually saying in each sentence. Pretty much every statement is logically false, a bad argument, or at least requires a lot of supporting material to be convincing.
Take the following sentences for example.
> If people have enough autonomy to make locally smart decisions, then they can recover from friction more easily.
Having autonomy has no relationship to recovering from friction more easily. And why would autonomy cause one to make locally smart decisions? The person having the autonomy might be the one causing the friction in the first place, and might also be the one making bad decisions.
> Friction compounds with itself: two setbacks are more than twice as bad as one setback. This is because most systems are at least somewhat resilient and can adjust itself around some problem, but that makes the next issue harder to deal with.
Why would being resilient to one type of problem cause not being resilient to another type of problem? And why would this cause the friction to compound itself?
Incidentally, ChatGPT does produce an equally (if not more) plausible article when I ask it to produce an article on software friction.
Going up the chain for orders is itself a source of friction: communicating the situation on the ground, dealing with transmission issues like staticky radios, waiting for command to have time to deal with your issue (they may be dealing with other units having similar issues, especially if you have a command structure that doesn't delegate), etc. It's uncommon for higher levels of leadership to have a better understanding of what lower-level units are dealing with than the lower-level unit itself.
In my experience, the tooling that causes the most friction when it doesn't work is also the most likely to be abandoned, community-supported, or supported only by a team in India (requiring an overnight wait for each round trip of communication). Directors and VPs talk a big game about prioritizing developer productivity, but when it comes time to staff a support channel, prioritize a bug fix, or choose whom to lay off, it always turns out that they were lying.
Thriving as a SWE in a medium-to-big company is not about algorithms and data structures, it is about coping with and recovering from environment breakages, and having the skills to diagnose and fix the environments that you were forced to adopt last quarter and by this quarter are deprecated.
Historically, 90% of my Indian coworkers have had a much higher pain threshold than 90% of my domestic teammates. I can think of two individuals with a low tolerance for bullshit and I always wonder how they fit in socially over there.
I have to dig a lot or try to bring a problem into N.A. office hours before I see how much rote work is required to do a task and it’s almost always shockingly high. We write software for a living. Nobody should be running a static run book. We should be converting it to code.
Keep in mind how immense income disparities are there. For someone living in India, getting a job that exists anywhere at all on the US payscale pretty much guarantees living comfortably and being able to save tons of money on top of that.
The problem applies to any pair of sites with a 12-hour timezone offset, regardless of culture. PDT<->IST happens to be the one that practically occurs for Bay Area tech companies.
It kind of makes sense for EST+IST teams but PST+IST makes no goddamned sense at all.
There is no time of day when you can hold a meeting that doesn't piss absolutely everyone off. There are only times when you piss one group off more than the other.
This is nothing but the second law of thermodynamics.
Viewing friction as the principle of increasing entropy helps.
You can think of a graph with nodes being the states of the various systems (humans, software services, databases, etc.) and edges being the dependencies between them. Reducing the states directly reduces the entropy. Reducing the dependencies reduces the rate at which entropy increases in the system.
This directly leads to various software principles like separation of concerns, parse-don't-validate, denormalisation, state reduction, narrow interfaces with deep implementations, KISS, readability, etc. All of these reduce friction.
As such I find the "Addressing friction" section in the article lacking, but it does highlight some useful points.
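To make the graph view above a bit more concrete, here is a toy sketch (purely illustrative; the "entropy" is just a crude proxy for the size of the combined state space, and the component names are made up):

```python
import math

# Each component is listed with the number of distinct states it can be in.
components = {
    "web_frontend": 4,   # e.g. ok / degraded / deploying / down
    "api_service": 4,
    "database": 3,
    "cache": 2,
}

# Dependency edges: (dependent, dependency)
dependencies = [
    ("web_frontend", "api_service"),
    ("api_service", "database"),
    ("api_service", "cache"),
]

def state_entropy(components) -> float:
    """Crude proxy: log2 of the size of the combined state space,
    assuming components are independent (they never quite are)."""
    return math.log2(math.prod(components.values()))

print(f"combined state space: {math.prod(components.values())} states")
print(f"entropy proxy: {state_entropy(components):.1f} bits")
print(f"dependency edges: {len(dependencies)}")

# Merging or removing components shrinks the state space directly;
# cutting dependency edges slows how quickly a change in one component
# forces new states (and new failure modes) in its neighbours.
```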
Once you're familiar with friction, you start seeing it everywhere. And then you hate how much of it there is. I'm sure there's a philosophical lesson in there somewhere.
As far as battling it goes, my experience is that you can get a lot of mileage by just spending an extra minute or three making something a little cleaner, more readable, less prone to failure, etc.
The follow-up comment about the Marines' "hot washes" retrospective meetings is interesting. I would love to browse through the Marine Corps Center for Lessons Learned library that's referenced.
The example about the software updates resonates with me. My usual policy for the team is: if you can upgrade your dependencies, just upgrade now. I have seen so many companies take the short-term view again and again, only to realize, oh, now it is too big a step to update anything, so let's... wait? Friction is much easier to absorb amortized over a longer time, so you have to bake it into the everyday processes: oh, an update? We are not in a bind, just upgrade! It is related to tech debt, basically: avoid accumulating it, because it compounds very badly.
We had a hell of a time getting a fix for a CERT advisory deployed because we were several versions behind and there were breaking changes in the way. The idea of rushing a change to make a system more robust is absurd because all of the rushing is your surface area for regressions.
Our solution was that at least once a month we had a story to upgrade deps. But as each new person got the assignment they would immediately ask the question, “but upgrade what?” I didn’t have enough information at that point to care, so I told them to just pick something. Our dep pool wasn’t that big and any forward progress was reducing the total backlog so I figured they would pick something they cared about, and with enough eyeballs we would collectively care about quite a bit of the project.
Now part of the reason this ranked a story is that we were concerned about supply chain attacks on this project, so it wasn’t just a matter of downloading a new binary and testing the code. You also had to verify the signature of the library and update a document and that was a process that only a couple of us had previously used.
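For the verification step, something along these lines is what I mean (a minimal sketch, assuming the upstream project publishes a detached SHA-256 checksum file next to the artifact; the file names are hypothetical):

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded artifact."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(artifact_path: str, checksum_path: str) -> bool:
    """Compare the artifact's digest against the published checksum file."""
    with open(checksum_path) as f:
        expected = f.read().split()[0].strip()
    return sha256_of(artifact_path) == expected

if __name__ == "__main__":
    artifact, checksum = sys.argv[1], sys.argv[2]
    if not verify(artifact, checksum):
        sys.exit(f"checksum mismatch for {artifact}; do not ship this dependency")
    print(f"{artifact}: checksum OK")
```

Proper signature verification (e.g. GPG) adds another step on top, but the shape of the process is the same: the upgrade isn't done until the artifact is shown to be the one the maintainers published.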
Don't forget deprecation. Today Apple and AWS (among a great many others) are notorious for unilaterally turning off services that people have built businesses around.
The responsible thing to do is to create a new service and then write wrappers that emulate the old service's interface and business logic, before finally turning off the old service at some point in the distant future.
But it's more profitable to make a shiny new service and end support for the old one. Capture the profits and pass the costs on as externalities to developers.
This may seem like a small inconvenience, but I have watched basically all of the software that I have ever written over a career become unrunnable without significant modification. The friction of keeping up with deprecation has expanded to take up almost all of my time now. In other words, the rate of deprecation determines the minimum team size. Where once 1-3 people could run startups, now it's perhaps 5-10 or more. It's taken the fun out of it. And if it's not fun anymore, seriously, what is the point.
I probably should have said Apple APIs or libraries. It's been a while since I did macOS/iOS programming, and I got out of it because I got tired of seeing stuff like this:
It was rough 20 years ago with the Carbon to Cocoa transition, Objective-C to Swift, CoreGraphics to OpenGL to Metal, etc. Always a moving target, never cross-platform. It's all so opposite to how I would do things. I remember when the US national debt was $3 trillion in the 80s, now that's Apple's market cap. Makes it hard to focus and make rent these days when so many other people are rolling in dough.
For anyone curious, with AWS I get notices every few months about some core service or configuration option that's getting discontinued in 6 months. Last year it was launch configurations for ECS (maybe the EC2 portion) that had to be migrated to launch templates (can't recall exactly). A deploy failed before I could finish a large feature I had been working on, which caused me to drop everything, which led to overlapping merges in git and associated headaches that set us back weeks. I should have been ahead of that, but it couldn't have come at a worse time.
I'm curious what AWS services you have in mind. I can definitely think of services that aren't actively updated and that don't integrate smoothly with newer services (thinking of Elastic Beanstalk here), but outright deprecation/removal seems pretty uncommon.
I was using Elastic Container Service (ECS) like I mentioned in my other comment. I first used it 3 years ago when it was still evolving, and in 2023 they turned off their launch configuration portion in favor of launch templates. I can't remember the exact details, but I think this covers it:
I use that concept constantly in my work for backwards compatibility, but basically never see it from service providers, which I find sad and somewhat irresponsible or at least discourteous.
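For what it's worth, the shape of that concept is basically the adapter pattern. A minimal sketch (all class and method names hypothetical) of keeping an old interface alive on top of a replacement service:

```python
# Hypothetical: OldStorageShim preserves the deprecated interface callers
# depend on; NewStorageClient is the replacement with a different API shape.

class NewStorageClient:
    def put_object(self, bucket: str, key: str, data: bytes) -> None:
        print(f"stored {len(data)} bytes at {bucket}/{key}")

    def get_object(self, bucket: str, key: str) -> bytes:
        return b"..."

class OldStorageShim:
    """Emulates the old client's interface on top of the new service,
    so existing callers keep working while they migrate at their own pace."""

    def __init__(self, bucket: str):
        self._bucket = bucket
        self._new = NewStorageClient()

    # The old API took a single path argument; translate it to bucket/key.
    def save(self, path: str, data: bytes) -> None:
        self._new.put_object(self._bucket, path, data)

    def load(self, path: str) -> bytes:
        return self._new.get_object(self._bucket, path)

# Existing code keeps calling save()/load() and never notices the migration.
client = OldStorageShim(bucket="legacy-data")
client.save("reports/2023.csv", b"a,b,c")
```

The shim can also log each call, which gives you real usage data on who still depends on the old interface before you finally turn it off.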
I kinda glossed over the list of suggestions on addressing friction until I got to the games and checklists items. I think that’s really the only way to deal with the moving target that is developing software: regular checkins to make sure things work the way you think they will. The other points are all susceptible to the element of surprise when something inevitably goes wrong. You have to have practiced and internalized the most important parts of your process (shipping and root cause analysis) in order to adapt and respond.
This kind of friction is so expansive and ubiquitous that another, less negative word for it is simply “life.”
It’s like John Lennon said:
“Life is what happens while you are busy making other plans.”
Instead of trying to eliminate or stigmatize it, it can be more productive to think of it as a creative input into your static system which can be harnessed for unexpected good.
(And personally I would rather live by the worldview of Lennon than a 19th century German general.)
There's always a nonzero number of coworkers who seem to identify too much with our captors (the machines). It's shocking to them if someone suggests that human errors should be accommodated by the system instead of ruthlessly vilified. Thinking you can do something perfectly every time is a topic for you and your therapist. Expecting others to do the same is toxic.
A small caveat is that the idea of friction suggests resistance to a necessary change, and the concept drives one towards more flexibility and power.
In my experience, it is simplification that reduces friction: accepting constraints and limitations, focusing code and architecture. Removing degrees of freedom reduces execution risk exposure.
The main feature is the working state: how to keep it accurate, working, and replicable. Avoid long transactions, splayed-out invariants, backup intervals, complex restart procedures - any exposure to being outside the working state, or in some provisional or pending state.
>>Is friction important to individuals? Do I benefit from thinking about friction on a project, even if nobody else on my team does?
>Even if you were to eliminate a lot of friction, the profit would go to the business anyway.
At the intersection of software development, the military, and whether or not friction is important to individuals... I'm reminded of the USDS, and IIRC some of their work to improve workflows and discoverability around (specific) VA benefits.
If you've ever listened to vets talking about the VA, they're rarely complimentary about it, frequently complaining about how hard it is to find the entry point to get the needed benefits, and how hard navigating process is after the entry point is found.
Reducing that friction means more benefits are exercised, at a higher cost to the government. OTOH, maybe the overall cost is lower, if fewer phonecalls are answered explaining how to do a thing, and fewer forms are filled out justifying a thing.
> Agile supports some uncertainty, but often a mile is taken when an inch is given.
An inch or a mile, over time and with local information only it has you running around in an ant mill pattern[0]. I've seen my share of such projects going nowhere fast.
Friction comes up--although not under that name--in The Mythical Man-Month. (People do still read that, don't they?) One example Brooks gives is the hard drive that goes bad, perhaps losing code, but also preventing people from doing work until it's replaced. (This essay was written about a software project on a mainframe, when "the hard drive" was a real, singular thing.)
Most important point on keeping friction to a minimum is the reduction of moving parts in your app. Find the most architecturally simple way to build, without relying on many third parties or external dependencies. Think long and hard every time you sign up for an API key somewhere. Baroqueness doesn't necessarily win the game.
A lot of the friction mentioned in the article revolves around tooling. Anyone with more than a few days in an engineering org will witness this. What is not mentioned is human friction. Oversimplifying, but an engineer's job is to write code and push to main. Anything that gets in the way of that I categorize as friction.
Can you elaborate on how you made the connection between the article and BEAM languages? I suppose you experienced a lot less friction when working with Erlang or similar, care to share your experience?
Well, friction and execution risk aren't exactly the same thing as I understand it, and it's not really "repurposing" the military term so much as using it in a broader context than it was originally used in. A lot of military stuff is simply one particular usage of some field: deciding how many subordinates a general should have is a military usage of organizational dynamics; getting bullets from the warehouse to the soldier is logistics; friction is simply a property of complex systems, regardless of what the system is.
The only hope is to minimize LOC - keep the entire system down to 10,000 LOC. Which we don't know how to do yet (and even if we did, the migration path would need to be solved too, which is a human problem more than a technical one).
These physics analogies always fall down on the details. I had a mentor who tried to model story throughput as fluid dynamics which sort of worked but he downplayed the psychological aspects, like the author here is doing.
> Friction matters more over large time horizons and large scopes, simply because more things can go wrong.
“Scope” is doing a lot of heavy lifting here and I have met so many people who don’t get it that I find it dangerous to sum things up thusly. There’s a nonlinear feedback loop here that is the proverbial snake in the grass. Many people in your org think of incidents per unit of time, not per unit of work. If you have a hundred developers the frequency of a 1% failure mode becomes a political hot potato.
When managers or QA complain that something is happening "all the time", they mean it figuratively. I've seen it from multiple people about an event that happens on average once a month but happened twice in one week, and that seems to be where the confirmation bias starts.
If you have a big enough drive array you will need to order new disks “all the time” and someone will question if you’ve picked a shitty brand because they personally have only experienced one dead drive in their professional life. It’s because humans are terrible at statistics.
Now, as to the friction of someone leaving to go home ("don't deploy on Friday"): this is also a psychological problem, not friction.
The problem isn’t people going home. The problem is people rationalizing that it’s safe to leave or skip a check. They are deluding themselves and their coworkers that everything is fine and their evening plans don’t need to be interrupted. You can have the same problem on a Tuesday morning if there’s a going away lunch for a beloved coworker. Time constraints create pressure to finish, and pressure creates sloppiness, complacency. (It’s one of the reasons I prefer Kanban to Scrum.)
Don't start things you can't finish or undo at your leisure, because you will trick yourself into thinking you have met your standards of quality when you have not. As Feynman said, the trick is not to fool yourself, and you are the easiest person to fool.