If we're going to talk about how to "scale care" and how not to, then Buurtzorg has to be mentioned[0][1]. It's a Dutch home-care company that was started by nurses who basically got frustrated that managers got in the way of them doing their job. It operates by having a flat organization working with small teams of highly trained nurses, and trusting them to make the right decisions. The result is a low-cost provider of high-quality health-care.
So when the author says care "doesn't scale", they obviously mean "you need a roughly one-to-one ratio of caretakers to patients", which I fully agree with. But what they're also doing in the process, perhaps accidentally, is explaining why creating bigger teams with bigger hierarchies and structures does not appear to increase the efficiency of care.
Some projects do need big teams with hierarchies - even in healthcare. Effectively responding to a pandemic comes to mind. I suspect it's when the core problem to tackle most naturally breaks down like a tree - a hierarchy then mirrors the way the problem breaks down. For some projects, like general homecare, the efficiency sweet spot is a flat structure of autonomous individual teams, because it doesn't scale beyond those teams anyway.
And just like the author concludes, I find that there's something comforting about that.
Thanks for elaborating further what we mean by "scaling".
Couldn't it be said that when organizations acquire ranks, it is to solve coordination problems that arise with scale, but that in this case there isn't much that is solved by more organization?
There's also a different kind of coordination problem.
The incentives of an organization - or at least of a corporation - are to make money and to continue to exist and to enrich and empower the people who control it. But employees' incentives are usually pretty different. Nurses usually get into nursing because they, at least to some extent, want to help people. Engineers get into engineering because they, at least to some extent, want to build something really good. Teachers get into teaching because they want kids to learn. And so on.
If you're the principal of a school, and your goal is for a teacher to teach, you don't have a coordination problem. Most teachers want to do that. But if your goal is for teachers to maximize standardized test scores, or to minimize the number of frustrated parent calls, etc, you do have a coordination problem, because your goals are misaligned with your employees'.
If you're the CEO of a tech company, and your goal is for your engineers to build good software, you don't have a coordination problem. Your engineers want to do that. But if your goal is to maximize conversions, or to ship faster, or to raise money, etc., you do have a coordination problem.
If you run a hospital, and your goal is for your nurses to care for patients, you don't have a coordination problem. But if your goal is to maximize profits, then you do.
It's usually these kinds of coordination problems that managers are, effectively, there to enforce the organization's side of. Their job is not to help individual employees achieve their own goals; it's to make employees feel (sometimes truthfully) like their goals are aligned with the organizational goals, or to use the threat of loss of income or work to force them into line.
I can confidently say this is not true most of the time. Engineers generally want to build software they think is a good trade-off between fulfilling the spec and not being too hard to build.
This does not generally translate into good software. It's not a character flaw or anything like that; they're just lazy, like most people are.
Classically people got into software engineering because they liked writing code and they needed a way to pay the bills. Plenty of retired software engineers still code because it is part of who they are.
Most software engineers, especially now, are definitely not plucky kids who just love computers. They are in it entirely for the money. It's a completely different kind of person; the motivation comes from a completely different place.
A company should be able to build working software even without good engineers. Just like how TCP can deliver ordered packets over unreliable connections - you change the processes and systems to do it. Of course, you give up some things - such as speed and agility in that case.
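To make the analogy concrete, here's a minimal sketch (nothing like real TCP, just a made-up stop-and-wait scheme in Python) of how sequence numbers, acknowledgements, and retransmission turn an unreliable channel into reliable, in-order delivery, at the cost of many extra sends:

    import random

    random.seed(42)

    # Toy sketch, not real TCP: a stop-and-wait protocol over a lossy channel.
    # Each message carries a sequence number; the sender retransmits until it
    # sees an acknowledgement, and the receiver only accepts the next expected
    # sequence number, ignoring duplicates.

    def lossy_channel(packet, loss_rate=0.3):
        """Deliver a packet with probability 1 - loss_rate, else drop it."""
        return packet if random.random() > loss_rate else None

    def send_reliably(messages, loss_rate=0.3):
        delivered = []       # what the receiver has accepted, in order
        expected_seq = 0     # next sequence number the receiver will accept
        sends = 0

        for seq, payload in enumerate(messages):
            acked = False
            while not acked:
                sends += 1
                packet = lossy_channel((seq, payload), loss_rate)
                if packet is None:
                    continue                       # data packet lost: retransmit
                pkt_seq, pkt_payload = packet
                if pkt_seq == expected_seq:        # in order: accept it
                    delivered.append(pkt_payload)
                    expected_seq += 1
                # the receiver acks every packet it sees; the ack can be lost too
                acked = lossy_channel(pkt_seq, loss_rate) is not None

        return delivered, sends

    delivered, sends = send_reliably(["a", "b", "c", "d", "e"])
    print(delivered)  # ['a', 'b', 'c', 'd', 'e'] despite ~30% loss each direction
    print(sends)      # noticeably more than 5: the price of the reliability

The individual packets are unreliable, but the process around them still produces the right result; the cost shows up as extra round trips rather than wrong answers, which is roughly the trade-off I mean.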
For most people, a job is a job. It's not something they want to dedicate their life to - that's for artists and craftsmen, and not many people are of that ilk in the modern day.
When I graduated (2006) it was still mostly that way, a few kids in it for the money tried to get a CS degree and failed out and changed majors.
Most of the people I've had the pleasure of working with have a love for the craft.
Sadly, I understand that with how messed up the economy is nowadays, making a good living in a US coastal city doing anything but software is rather unrealistic...
I'm generalizing, obviously, but I think most engineers would rather not build horribly tech-debt-y things all else equal. Yes, there are some business-minded engineers, but just looking at my own data set of job seekers, people seeking code quality far outnumber people seeking rapid product iteration.
If you're focused on teaching, building or caring instead of test scores, conversions and profits, you're going to lose customers and opportunities.
Parents choose schools with better test scores, not schools with better teaching. Customers use products with better conversions; this is literally what the word "conversions" means.
If you focus on making your employees happy instead of making your customers happy, you'll quickly find that your employees have no more customers to serve. You will be outcompeted by other companies in the great rat race.
This is why most of the companies we have now are the way they are. It's not because all CEOs are evil, it's because the non-evil ones can't survive.
> Parents choose schools with better test scores, not schools with better teaching.
Only because other signals don't exist.
Parents know test scores don't correlate perfectly with learning, but what other metric do they have to judge by?
There are some schools with a stellar reputation for learning, and they get to charge an obscene premium. Amongst private schools, the ones with the best word of mouth reputation get to charge a lot more!
Yeah, my post was not intended to be a value judgment.
Personally, I think it's somewhere in between. Yes, some degree of "actually make things people want" is an essential component of a business...but I think the amount of "required evil" is usually overblown.
If markets were so ruthlessly competitive that no one could afford to be not-evil, no one would be making a profit. Every company would be barely surviving and profits would be minimal. The fact that profits exist - indeed, are much greater than they have been in the past - means that markets are NOT that ruthlessly competitive, at least not between actors with comparable access to capital resources.
Every dollar of profit a company makes is a dollar they could afford not to make and still survive. And given that the feasibility curve of profits <> level of evil is likely pretty convex, choosing not to make $1 likely pays out many dollars of not-being-evil externalities.
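To put toy numbers on that convexity claim (the curve below is completely made up, purely to illustrate the shape): suppose the "evil" required to extract p dollars of profit grows like p squared. Then the marginal evil of the last dollar dwarfs that of the first:

    # Entirely made-up convex curve: evil(p) = p**2, where p is dollars of profit.
    def evil(p):
        return p ** 2

    # Marginal "evil" of the p-th dollar of profit, i.e. evil(p) - evil(p - 1).
    for p in (1, 5, 9, 10):
        print(p, evil(p) - evil(p - 1))
    # 1 -> 1, 5 -> 9, 9 -> 17, 10 -> 19: forgoing the last dollar of profit
    # removes ~19x as much "evil" as forgoing the first one would.

If the real curve is anything like that shape, the last dollar of profit is by far the most expensive one in externalities.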
The goal of "getting money" is the problem here. For what? To do nice things with it? Can't you just do nice things in the first place? If we could agree on something more than money as our medium of exchange, working on/in the humanities would not be such a grind. The idea of "you take care of this, I take care of that" is completely lost.
That sounds about right. If you watched the video then you also will notice that the further up the management chain we go, the more abstract the goals become compared to the most concrete ones that the nurses are dealing with. One could say that if most care work starts and ends with those concrete individualized issues, the cost of handling (or worse, prioritizing) abstract goals is far greater than the benefits that they add.
Compare that to the pandemic response example I gave: that is the kind of challenge where there are concrete problems at individual, city-wide, national and international scale to tackle, and they're all interconnected too. So plenty of those coordination problems you mentioned.
Your reply seems focused on size, but this line caught my attention:
>trusting them to make the right decisions.
I'm wondering if trust isn't the real issue. For example, wouldn't a large high-trust organization be able to provide good care? Or is there something intrinsic to scale (e.g. diffusion of responsibility) that makes "high-trust" and "large organization" incompatible?
My mom is a (retired) preschool teacher. By the end of her career, she was working for a school affiliated with a large chain, and maybe 1/3 of her time was spent on filling out all the paperwork required by management. That was time she absolutely was not spending with the kids.
My dad is a (retired) doctor. Similar story there. His hospital ultimately ended up associated with a regional health care network, and the paperwork load ultimately got so bad that they ended up having to hire additional staff dedicated to help the actual health care providers fill out all the paperwork so that they could spend more time actually providing health care.
I switched primary care providers over this a few years back. I had a great doctor, but her practice got bought up by one organization, which then got bought up by another, and over time working with her office became a huge bureaucratic quagmire. I switched to a different clinic that's still local (although also chain with multiple locations), and everything's easier again.
The same thing happens at my own job, software. The bigger a company I'm at, the more of my job consists of filling out paperwork about the work I'm doing, rather than doing the actual work.
>The same thing happens at my own job, software. The bigger a company I'm at, the more of my job consists of filling out paperwork about the work I'm doing, rather than doing the actual work.
There's some irony here considering software has been promising for decades to help businesses scale information sharing.
Oh, it has. The scale at which we share information has never been greater.
Concrete example:
15 or 20 years ago, we tracked user stories, the status of the sprint, etc., with sticky notes on a wall. We didn't have to record a lot of information on them because they only needed to track status at a high level. Details were communicated through conversations. That did mean that a lot of that information was tribal knowledge, but that was actually fine, because it was of ephemeral value anyway. Once the work was done, we'd throw the sticky notes in the trash can and forget the tribal knowledge. Reporting out happened at a much higher level. We'd report the status of projects in broad strokes by saying what big-picture features were done and which ones were in progress. We'd fill operations in on the changes by telling them how the behavior of the system was changing, and then let them ask questions.
Nowadays, we put it all in Jira. Jira tickets are extremely detailed. Jira tickets live forever. Jira tickets have workflow rules and templates and policies that must be complied with. Jira rules make you think about how to express what you're doing in this cookie cutter template, even when the template doesn't fit what you're actually doing. Jira boards generate reports and dashboards that tell outside stakeholders what's happening in terms of tickets and force them to ask for help understanding what it means, almost like you're giving them a list of parameters for Bézier curves when what they really wanted was a picture. Jira tickets have cross-referencing features, which creates a need to do all the manual data entry to cross-reference things. Jira tickets can be referenced from commits and pull requests, which means that understanding what changed now means clicking through all these piles of forever-information and reading all that extra documentation just to understand what "resolves ISSUE-25374" means when a simple "Fix divide-by-zero when shopping cart is empty" in the commit log would have done nicely. etc.
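For what it's worth, the change behind a commit message like that is usually tiny. Here's a hypothetical sketch (not from any real codebase) of what "Fix divide-by-zero when shopping cart is empty" might amount to, just to underline how little extra context a reader actually needs beyond the one-line message:

    def average_item_price(prices):
        """Average price of the items in the cart; prices is a list of numbers."""
        if not prices:           # the fix: an empty cart used to divide by zero below
            return 0.0
        return sum(prices) / len(prices)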
We communicate so much more these days. Because we can, because we have all this communication technology to facilitate all that extra communication. What we forgot is that, while computers can process information at an ever faster pace, the information processing hardware inside our skulls has remained largely unchanged.
I think that highlights the issue I'm poking at. "Good communication" doesn't just mean a firehose of information at your fingertips. It means getting the right amount of information at the right time. Developing systems like the latter is much harder than the former, but they both get the same sales pitch.
This is also where I really dislike a lot of this more recent push toward automating communication.
One person deciding what needs to be said, to whom, and when, can have a LOT of leverage in the productivity department, by reducing the time that tens or even hundreds or thousands of other people lose to coping with the fire hose.
Microsoft Copilot has been yet another downgrade in this department. Since it got adopted at my job, I've seen a lot of human-written 3-sentence updates get replaced with 3-page Copilot-generated summaries that take 10 minutes to digest instead of 10 seconds.
At my company we are aggressively rolling out policies to forbid the use of AI. I'm one of the bigger folks behind it. I just see no benefits. I have no desire to debug AI generated code, I have no desire to read pages and pages of AI generated fluff, I have no desire to look at AI generated images. Either put the work in or don't.
If you use AI like a quick answer machine, or quick example machine, they all outdo Google by a large margin.
The friction of moving between, and knitting together, different systems and languages that I don't use frequently enough to be fluent in has been lowered by an order of magnitude or two, because small knowledge gaps get filled the instant I need them to be.
The same with getting a basic understanding (or a basic idea) about almost anything.
My AI log documents many stupid questions. I have no inhibitions. It is a joy.
> If you use AI like a quick answer machine, or quick example machine, they all outdo Google by a large margin.
I mean, A) hallucinations still happen, and B) Google sucks anyway. I don't know of anyone at the company still using Google, because we're largely an engineering outfit and we all watched as Google's search features slid into uselessness.
I find that the code I get from Copilot Chat frequently fails to do exactly what I asked, but it almost always at least hits on the portions of a library that I need to use to solve the problem, and gets me to that result much more quickly than most other ways of searching do these days.
Hallucination (or, more correctly labelled, confabulation) is a property of human beings as well. We fill in memories because they are not precise, sometimes inaccurately.
More to the point, once you know that, having a search engine for ideas that can flexibly discuss them is a tremendous and unprecedented boon.
It's a new tool, many (many) times better than Google ever was for many ordinary, and sometimes extraordinary, tasks. I don't understand the new "the gigantic carafe of water is only half full" viewpoint. Yes, it isn't perfect. It is still incredibly useful.
> Hallucinations (or more correctly labelled, confabulation) is a property of human beings as well.
Yeah and if, when I asked a coworker about a thing, he replied with flagrantly wrong bullshit and then doubled-down when criticized, I wouldn't ask him anything after that either.
I will say I have warmed to GitHub Copilot's chat feature. It's a great way to look up information and get answers to straightforward questions. It feels similarly productive to how well just Googling for information was back in the 2010s, before Google went full content farm.
We can't. Paperwork exists to transfer knowledge and liability. It isn't meant for you, and it is mostly a cost for your company. It's for lawyers, insurers, investigators, auditors, your future replacement, etc.
Ha, no instead we're going to eventually have to have AI workers getting AI salaries and to spend on AI products. Then the AI governments and eventually AI wars...
Related: every few years someone posts "Paperwork Explosion"[0] again, and people here rediscover that there's nothing new under the sun.
> In 1967, Henson was contracted by IBM to make a film extolling the virtues of their new technology, the MT/ST, a primitive word processor. The film would explore how the MT/ST would help control the massive amount of documents generated by a typical business office. Paperwork Explosion, produced in October 1967, is a quick-cut montage of images and words illustrating the intensity and pace of modern business. Henson collaborated with Raymond Scott on the electronic sound track.
Usually the discussion quickly converges on how automation in administration is a prime example of the Jevons Paradox[2] in action.
Well, Buurtzorg is a large organization; it's just that it does not have a large hierarchy. I suspect you're really asking about "high-trust" and "large hierarchy". In that case there are plenty of causes to point at. I'll just give a few off the top of my head.
First, note that any organization that breaks into different departments (hierarchical or not) at least partially does so to let each department "abstract" away the other ones - if you don't have to worry about issues outside of your responsibility, you can focus more on yours. That is actually a form of trust.
In the case of a hierarchy, however, that means each layer abstracts away the layers above and below, and since going up multiple levels in the hierarchy happens indirectly, the further away, the more abstract things become. So some kind of structure is often needed to regain the trust that is lost by dealing with abstract departments - leading to bureaucracy.
On top of that, usually more power resides higher up the hierarchy. That means that without explicit structures to compensate for this, people lower in the hierarchy lack individual leverage to protect themselves against bad decisions made higher up, that may not even be malicious or intentional but just a consequence of the aforementioned abstraction.
Of course, most structures that are created to fix this are themselves abstract procedures, meaning they barely help with our instinctual "I cannot attach a face to this" type of distrust. Bureaucracy can create leverage, but it rarely creates trust. Which also explains why, quite often, talking to someone in person can make such a difference in being allowed to "go ahead" or not: it can provide the more "natural" sense of trust that bureaucracy is supposed to provide but barely does.
Not OP, but orgs that scale usually rely on metrics. It's the metrics that are easier to measure (e.g. lines of code written per day, points closed per sprint), not the ones that actually measure system performance, that get selected. Then management lambastes workers for not meeting those metrics (they need to prove they're doing something, and can't lose control), regardless of how the system is performing. So trust erodes.
I honestly think most orgs would leap forward considerably (with some pain) by doing a severe reduction in middle management, basically making them prove why they are actually providing value, and removing them if they aren't.
Tons and tons of middle managers do absolutely FUCK ALL in terms of delivering product, meeting goals, and serving customers.
I generally dislike middle-management as much as the next IC, but I think these types of arguments tend to ignore latent or low-probability risks.
You see this all the time in discussions about quality or safety metrics. By definition, if those teams are doing a good job you won't see many quality or safety issues, which leads people to believe they are doing "FUCK ALL" and provide little benefit. Only in hindsight, after a low-probability but high risk event happens, does getting rid of them seem like a bad idea.
You’ve never worked in middle management, have you? Just because they do largely ‘soft stuff’, mediating between different departments, teams and layers of the organisation, and (hopefully) running interference so that their team can focus unimpeded on the actual fun part of the job, doesn’t render their contribution null and void.
I’ve dealt with bad upper management, bad project management, bad clients, bad suppliers, but only rarely bad middle management. (And no, I’ve never worked in middle management although at times I’ve been some of the other categories above. :P )
Most healthcare organizations in the US are profit-oriented, sometimes to the detriment of the patients. That is why we have large, unwieldy organizations surrounding the very few people that actually do hands-on healthcare. There’s also the issue of liability – it can be pretty litigious in the US, and companies are frequently wanting to limit their liability, which means having the paperwork to back up their decisions. Unfortunately, it also means they have to restrict their decision-making to a very small matrix.
>Most healthcare organizations in the US are profit-oriented
According to the American Hospital Association, less than 20% are for-profit [1]. I'm sure all are extremely budget-conscious, but that's not the same as being profit-driven.
It seems to me that the US optimizes for quality to the detriment of cost and, more recently, access.
But also be aware that non-profit does not mean non-profit-oriented. Just that any profit goes to executives [1] instead of toward community/charity services [2].
I think there's an error in conflating "not for profit" with "charity". You could provide all care at cost and have zero charity care. That doesn't imply all "profit" goes to executives, but rather toward keeping reimbursed or charged costs lower.
1. Thanks for the nuanced view. I agree that zero charity care doesn't necessarily mean greedy execs.
2. In my mind keeping costs lower is a form of charity. Especially with something as frequently difficult to understand as health care costs.
3. Executives do deserve to make a fair amount for their skills and effort. I'm not sure myself what salary I'd consider fair pay vs. taking greedy advantage of not-for-profit status.
On your last point: I think it's useful to think in multipliers and desired outcomes.
Do you want the best doctors involved in care for patients and training juniors, or do you want them to spend time jockeying for a position in the hierarchy because that's a plausible, but also the only, way to 2x their income?
This doesn't fully answer the question, of course, but it suggests that large pay disparities are extremely wasteful for society as a whole.
I couldn't find the 20% in your reference, but is it talking specifically about hospitals? I can believe that only 20% of hospitals are for-profit. But, if I do a maps search for all 'healthcare organizations' within 10 miles, the vast majority are for-profit.
Also, of the groups of ambitious hustlers that I know nearby, many are looking to get into running healthcare clinics because there's so much profit to be made.
Don’t be fooled by the “non-profit” label. Many are as greedy as for-profit hospitals, except that the money goes to execs and their friends instead of shareholders.
The first is legal exposure: an organisation is usually exposed in relation to its scale, so even if the misconduct was limited to one employee trusted to carry out work independently, the penalty will likely be related to the total size of the company.
The second is that if your scale is large enough that you are hiring from a significant portion of the workers with a given skill in a region, you have a harder time selecting for anything other than "holds a qualification" during hiring. This leads to all sorts of policy to prevent someone with qualifications but less integrity from causing issues.
I don’t think it’s that they are incompatible by definition. I think the large organization is a result of low-trust.
A trust-breaking event occurs, so a new form gets added, a compliance process is created with a new team monitoring, etc. etc. Have enough of those and you eventually get your typical, modern healthcare bureaucracy today.
I live in Switzerland and my girlfriend works as a doctor (surgery); in her experience it mostly has to do with the politics that come with hierarchy. As in nearly all companies, hierarchy lets politics and favouritism enter the playing field, which attracts people who do not act in the primary interest such a service should serve: the patients.
For example, you have senior doctors who prefer not to look at certain patients, even though those patients belong to them (from a specialization point of view), because they are "cumbersome" cases. That often leads to them "ignoring" a case for some time until someone else takes over, or delegating it entirely to people who are not a good fit for it.
It's a huge pain for me to hear this every day, because it sucks any desire to work as a doctor out of my girlfriend. At the same time, it's infuriating: we pay a lot each year, and more with every year, for services like this. If I were ever to win the lottery, I'd use that money to build my own hospital without all of this crap.
The other companies are also short-staffed (all of healthcare is at the moment), and the layers of bureaucracy just keep humming along... Buurtzorg is, however, cheaper, and since you cannot scale the actual care, that is what counts.
Yes, and the actual nurses also have more job satisfaction and their patients are happier too, so the quality also improves. The main explanation given for that is that compared to solutions that require a lot of time spent on administrative work, these nurses get to focus on the actual care work, and have more personal time with the patients, meaning a bond of trust can be established. Which is also an important aspect in the scaling trade-off I'd say.
Imagine if we had an openly adoptable Buurtzorg model alongside open-source software used for administration. With the power resting with the carers, the non-care admin work could be collectively automated through the open-source model.
I'm actually working on the software part of that, in the UK. It isn't open source yet (mostly because we can't afford the level of comfort we need about data security, given that this is very sensitive data) but that remains the intention.
But even with open-source software, there will still be the issue of how highly regulated the care industry is. There are significant variations even within the UK, and indeed, there are even inconsistencies between inspectors of the same regulator.
But below the level at which regulation kicks in, care work is—for some people—the most rewarding (not financially!) type of work and arguably will be one of the last professions to be replaced by AI (though there is plenty of work on robots).
Honestly, it sounds like that's Conway's law, but in reverse. Instead of a project's setup mirroring its organization, the organization mirrors what they see as the ideal project setup. That's something I wish more teams were willing to recognize as the best approach; Conway's law isn't necessarily something to be worked around, it's a tool.
Doesn't scale compared to what, though? Because if the comparison point is the more typical modern care-work organization, where tasks are divided into fine-grained categories of "high skill" and "low skill" work, outsourcing the "low skill" tasks to less-trained workers in an attempt to save costs, then Buurtzorg shows that that approach is not a net savings.
Because Buurtzorg has been shown to have lower costs and higher job satisfaction than its competitors, and the reasons are quite obvious too: the hierarchical outsourcing solution adds extra communication costs, administration costs, and similar organizational overhead. On top of that, patients are in a situation where they briefly see many different, unfamiliar care workers, meaning they do not get to form a bond of trust with any one of them. That is a pretty heavy cost that I don't even know how to categorize.
[0] https://en.wikipedia.org/wiki/Buurtzorg_Nederland
[1] https://www.youtube.com/watch?v=SSoWtXvqsgg