I was comparing the concept to the Actor model as I skimmed the piece, but I don’t think there’s much overlap. What about Smalltalk correlates to cells?
According to Alan Kay's 1971 description of Smalltalk:
“An object is a little computer that has its own memory, you send messages to it in order to tell it to do something. It can interact with other objects through messages in order to get that task done.”
Smalltalk concepts heavily informed the Service Oriented Architecture (SOA) tenets, e.g.
1. Services are autonomous - Cohesive single responsibility.
2. Services have explicit boundaries - Loosely coupled, owns its data and business rules.
Choosing good service boundaries is crucial to building a successful, resilient, maintainable system.
The impression I got from the document is that cells don’t communicate much with each other. They’re effectively multiple copies of the same cluster of services.
Cells could certainly encapsulate multiple microservices, but I don’t see a strong correlation between Smalltalk objects and cells. That may just be a lack of imagination on my part.
Hmm, while there might be some similarities, the concepts are not that close.
Cells are usually used to reduce blast radius and to force a design that can scale.
It is a bit like a multi-AZ/multi-datacenter architecture, except that cells can share the same AZ/datacenter, but nothing else.
So cells can share physical hardware but not logical components.
If you were to create a cell-based architecture, you might have 9 cells across 3 AZs, each using a different S3 bucket and DynamoDB table, for 9 S3 buckets and 9 DDB tables in total.
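To make that concrete, here's a minimal sketch of that layout. All the names (app prefix, bucket/table names, AZ identifiers) are hypothetical; in practice this would live in IaC like CloudFormation or Terraform:

```python
# A minimal sketch of the layout described above, with hypothetical names:
# 3 cells in each of 3 AZs, each cell owning its own S3 bucket and
# DynamoDB table (9 of each in total).

AZS = ["us-east-1a", "us-east-1b", "us-east-1c"]  # illustrative AZ names
CELLS_PER_AZ = 3

def cell_resources():
    """Yield (cell_name, az, bucket, table) for each of the 9 cells."""
    for az_index, az in enumerate(AZS):
        for slot in range(CELLS_PER_AZ):
            cell_id = az_index * CELLS_PER_AZ + slot
            yield (
                f"cell-{cell_id}",
                az,
                f"myapp-cell-{cell_id}-bucket",  # one S3 bucket per cell
                f"myapp-cell-{cell_id}-table",   # one DynamoDB table per cell
            )

for cell, az, bucket, table in cell_resources():
    print(f"{cell}: az={az} s3={bucket} ddb={table}")
```

The point is that no two cells share a logical resource, even when they land on the same AZ.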
While cell-based services make sense for critical applications, not all services require this approach from a Cost of Goods Sold (COGS) perspective. In practice, I've observed that companies often adopt an all-or-nothing approach to cell-based services, resulting in inflated COGS, increased operational overhead, and unnecessary complexity.
I worked for a large SaaS provider where the cell-based architecture was seen as "too cumbersome" and less cost-efficient. Ultimately, cell-based architectures can be great for the reasons they mentioned; they just need a bit more tooling around them to prevent operational toil.
I was thinking about cell-based distributed systems years ago, and I haven't yet figured out how to handle their complexity and unreliable connectivity, or, as always, what use cases there might even be.
> self-contained infrastructure/application stacks that provide fault boundaries
There. That's it, more or less. Having 10% of my customers sad is better than 100% of my customers being sad. It's better to have a handful of TAMs and RCAs to deal with than to end up on the front page of consumer news networks.
If you've invested sufficiently in the control plane and routing, it can also make incremental capacity allocation easier. The same goes for fractional deployments, security boundaries, etc. But those are all side effects you get from the (large) effort you put in to achieve fault isolation.
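For illustration, a minimal sketch of the routing piece, assuming hash-based customer pinning (the cell names and hash scheme are my assumptions; a real control plane would typically use an explicit mapping table so customers can be migrated or rebalanced for capacity):

```python
# Sketch: customers are pinned deterministically to one cell, so a
# failed cell only hurts that cell's slice of customers. Hash-based
# assignment is an illustrative assumption, not from the article.
import hashlib

CELLS = [f"cell-{i}" for i in range(9)]

def cell_for_customer(customer_id: str) -> str:
    """Stable customer -> cell assignment via hashing."""
    digest = hashlib.sha256(customer_id.encode("utf-8")).digest()
    return CELLS[int.from_bytes(digest[:8], "big") % len(CELLS)]

# Deterministic: the same customer always lands in the same cell.
assert cell_for_customer("acme-corp") == cell_for_customer("acme-corp")
print(cell_for_customer("acme-corp"))
```

The trade-off is that pure hashing makes moving a customer between cells hard, which is exactly where the control-plane investment comes in.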
That said, I never saw an implementation I was truly happy with. For cultural reasons, my previous employer lacked sufficient tooling to make the effort sustainable, much less consistent across teams. And too often the isolation boundaries ended up being largely hypothetical due to org structure or "efficiency" challenges.