CRDTs: The Hard Parts [video] (kleppmann.com)
404 points by benrbray on July 11, 2020 | 124 comments



See also automerge [1], discussed at the end. They are currently working on performance improvements [2]. Quoting from the repo, "automerge is a library of data structures for building collaborative applications in JavaScript:

* You can have a copy of the application state locally on several devices (which may belong to the same user, or to different users). Each user can independently update the application state on their local device, even while offline, and save the state to local disk.

* (Similar to git, which allows you to edit files and commit changes offline.)

* When a network connection is available, Automerge figures out which changes need to be synced from one device to another, and brings them into the same state. (Similar to git, which lets you push your own changes, and pull changes from other developers, when you are online.)

* If the state was changed concurrently on different devices, Automerge automatically merges the changes together cleanly, so that everybody ends up in the same state, and no changes are lost. (Different from git: no merge conflicts to resolve!)"

[1] https://github.com/automerge/automerge [2] https://github.com/automerge/automerge/pull/253
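For a rough feel of the API, the README's examples at the time looked something like this (a sketch only; exact function names and signatures may differ between Automerge versions):

    const Automerge = require('automerge')

    // Device 1 creates a document and makes an edit.
    let doc1 = Automerge.from({ cards: [] })
    doc1 = Automerge.change(doc1, 'add card', d => {
      d.cards.push({ title: 'buy milk', done: false })
    })

    // Device 2 gets a copy, then both devices edit concurrently while offline.
    let doc2 = Automerge.merge(Automerge.init(), doc1)
    doc1 = Automerge.change(doc1, d => { d.cards[0].done = true })
    doc2 = Automerge.change(doc2, d => { d.cards.push({ title: 'water plants', done: false }) })

    // Back online: merging in either order converges to the same state,
    // with no conflict markers to resolve.
    const merged = Automerge.merge(doc1, doc2)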


What does "no merge conflicts to resolve" even mean? Isn't it fundamentally impossible to have both no merge conflicts and meaningful data?

In case of a conflict you can either keep the most recent change or keep both changes. But while git keeps both changes with <<<< HEAD and theirs markers, the code is then invalid and won't compile. Suppose both changes were kept without conflict resolution; now you have two things that may interfere with each other.

I’ve had git mess up by trying to do an auto merge and still breaking the logic.

So I don't think there is a golden algorithm. Just a bunch of trade-offs, like any other problem.


The talk makes it clear: "no conflict" in the CRDT sense means that state updates, even when applied in different orders, converge to the same state.

In practical terms, it is last-write-wins baked into the CRDT design (you can choose alternatives, but they have to be part of the CRDT design, not something pluggable).
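For instance, a last-write-wins register can be sketched like this (illustrative names, not any particular library's API):

    // Minimal last-write-wins register: a (value, timestamp, nodeId) triple.
    // The merge is commutative, associative and idempotent, so replicas can
    // exchange states in any order and still converge on the same winner.
    function lwwRegister(value, timestamp, nodeId) {
      return { value, timestamp, nodeId }
    }

    function mergeLww(a, b) {
      if (a.timestamp !== b.timestamp) {
        return a.timestamp > b.timestamp ? a : b
      }
      // Tie-break deterministically so every replica picks the same winner.
      return a.nodeId > b.nodeId ? a : b
    }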


If the merge operation is commutative the order doesn’t matter.


Yes, but the question is whether the semantics is meaningful for the application (e.g. text editing). I can also make string concatenation commutative by generalizing the concept of strings to multisets, but it is not a given that this structure is useful for the applications where strings are needed.
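To make the multiset point concrete, here is a toy sketch: merging bags of strings is commutative, but the result no longer carries the ordering that makes a string a string.

    // A "document" as a multiset (bag) of strings: Map from string to count.
    // Merging bags is commutative, but the result doesn't say in what order
    // the pieces should be concatenated.
    function mergeBags(a, b) {
      const out = new Map(a)
      for (const [s, n] of b) out.set(s, (out.get(s) || 0) + n)
      return out
    }

    const left = new Map([['hello ', 1]])
    const right = new Map([['world', 1]])
    // mergeBags(left, right) equals mergeBags(right, left), but neither tells
    // you whether the text reads "hello world" or "worldhello ".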


> Yes, but the question is whether the semantics is meaningful for the application (e.g. text editing).

CRDTs are building blocks for larger systems. They themselves need to be composed into higher level constructs.

I think this is the crux of the CRDT: it assists the application designer in creating an algebra over the domain model such that state can get updated without having a single representation of that state space. It pushes that complexity back down, so that we can reason about it in serial code.


See also the PushPin project [3], which uses React+Automerge, along with Capstone [4]

[3] https://automerge.github.io/pushpin/ [4] https://www.inkandswitch.com/capstone-manuscript.html


It’s hard enough getting your own code to run and work correctly. When would this be a useful development paradigm?


CRDTs and operational transforms are most useful for live, online editing. Each client sees a nearly up-to-date version of the document, and any differences due to network lag are relatively small.

The idea is that normally each user will see other users' edits as they happen. They are trying to cooperate, not stomp on each other's edits. So long as the merge is reasonably intuitive, it can be fixed manually if it's not exactly what the authors wanted.

CRDTs aren't very good for writing code asynchronously, since you probably want each version to compile and pass tests, and sometimes do code review as well. Git works better for that. But they could be sort-of-okay for pair programming, though that might be an overly complicated solution and it might be better to use some kind of remote desktop.


Realtime collaboration on documents (e.g. source code, rich text editing, etc).


I understand how it's useful for documents. I don't believe source code is anything like a document and thus that mental model doesn't feel useful.


When I edit code it's broken most of the time. It wouldn't make sense to collaboratively break it in various parts for other people...


I have done remote pair programming many times though and in that case it’s perfectly ok, because you’re communicating with the other person/people and can let each other know if you will break something.

For that use case, I can see this being very useful.


While you could do real-time collaboration on code edits with this tool (and I've done that with some success via pair coding, where you commit to git after your shared edits are complete), this tool isn't strictly about code. It's about any shared content at all. Think Google Docs, but for any data, with offline/online sync. The offline/online sync might not be great for simultaneous edits, but in a shared "project" of, say, field research data, the combination of offline/online sync and soft-realtime shared documents at any hierarchy level is a nice "generally don't need to think about it" approach.

I'm not sure which kind of CRDT these are, but since all edits are part of history, as in git, you're not going to lose history if someone DOES inadvertently stomp on someone else's edits.


This is a library, not a tool.


That sounds similar in functionality to the protocol behind Google Wave, later Apache Wave, which was discontinued in early 2018 (https://en.m.wikipedia.org/wiki/Apache_Wave)

If so, what’s different? Better algorithms, assuming fewer collaborators and/or less frequent updates? Or is my understanding that these are similar in functionality incorrect?


Google Wave's algorithm is now used by Google Docs; it's Operational Transformation (OT) [1]. You can approximately view it as a special form of CRDT, but the theories underpinning each have separate origins.

OT is efficient and fast at real-time editing with a central server, whereas CRDTs are more capable in distributed/p2p scenarios but bring significant overhead as they store a lot of metadata.

1) https://en.m.wikipedia.org/wiki/Operational_transformation


You seem to be implying that OT requires a central server. OT was designed for peer-to-peer, at least as far as I know.

My OT knowledge isn't deep; perhaps in practice some implementations, like Google Docs, use an authoritative node?


My understanding is that most specifications for OT have obscure failure cases if a central server is not used. Some papers have fixed these issues to enable fully P2P OT, but IIRC the resulting ops require similar amounts of metadata per-change as sequence CRDTs.


Interesting. I thought that OT implementations require peers to have at least one authoritative view of the transformations somewhere, and then distribute different operation sequences (after transformation) to different nodes. That, in the simplest form, would imply a central server (otherwise you would need an alternative method to communicate the authoritative view between peers, which sort of defeats the purpose).


Yes. There do exist peer-to-peer OT algorithms, however they have not become popular in any production websites.

The popular production systems (such as Google Wave and Docs) are based on the Jupiter style of operational transform, which features a single central server with a single timeline. The clients try to send their edits to the server as the most-recent edit. If they fail, for instance because another client has made an edit, they rebase their edit on the new most-recent version and try to submit it again.
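The client loop looks roughly like this (a sketch only; trySubmit and transform are placeholders, not any specific implementation):

    // Jupiter-style client submit loop against a central server that keeps a
    // single linear history of operations. Illustrative sketch in JS.
    async function submitEdit(server, clientOp, baseVersion) {
      for (;;) {
        const result = await server.trySubmit(clientOp, baseVersion)
        if (result.accepted) return result.newVersion
        // Someone else got in first: rebase (transform) our op over the
        // operations we haven't seen yet, then try again.
        for (const serverOp of result.missingOps) {
          clientOp = transform(clientOp, serverOp)
        }
        baseVersion = result.latestVersion
      }
    }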

Keep in mind that OT and CRDT aren't actually algorithms -- they are perspectives from which a programmer might try to think of an algorithm.

The Operational Transform perspective says "think about how you can transform your edits (aka operations) so that they work when applied in a different order, on a peer with a different history of time."

The CRDT perspective says "think about how you can formulate your data structures so that you can apply any edit in any order."

In practice, programmers who took the OT perspective were able to create mostly-working systems pretty easily. But getting full consistency (aka correctness) was very difficult, because it was difficult to think of all the ways in which multiple operations could interleave and affect one another, especially without the constraint of a central server. Thus, it took many, many academic papers before anyone succeeded in coming up with a fully P2P algorithm that resulted in consistent synchronization after arbitrary edits.

This frustrated many of the academics enough for them to change their perspective. The first step towards this was a system called WOOT, which stands for "With-Out Operational Transform", where the researchers explicitly gave up on their old perspective, and started thinking along the CRDT paradigm.

The CRDT paradigm made it easy to get fully-consistent peer-to-peer systems. However, it has remained elusive to make one that's performant enough to be used in practice. They tend to require holding onto the entire history of all operations that have ever occurred, and each operation itself tends to require 10x or 100x overhead. Thus, you can edit a small text document and quickly end up with megabytes of data for a small string of text.

But there's a third paradigm here that isn't discussed as much -- Version Control. Think about git. It provides a DAG of versions over time, that branch and fork, and a way to consistently merge any two versions. From this perspective, it turns out that OT is the discovery of the rebase operation, and CRDT is the discovery of multi-way merge in a DVCS. OT people have been trying to simplify complicated merges by doing clever rebasing. This is much easier with a central server, and it happens to allow you to clean up old history more easily, which saves a lot of memory, making it useful in production systems.

In practice, I think that these three perspectives are all going in the same direction. If you build an OT system, and then try to make it fully consistent and peer-to-peer, you end up with CRDT algorithms. If you build a CRDT, and want more flexibility in memory requirements, a great approach is to throw a server into the mix and rebase (aka transform some operations). And git already has "operational transform" and "CRDT" algorithms in it.

My personal interest is in unifying these algorithms in the https://braid.news project. We have a unified protocol that allows OT and CRDT systems to communicate with one another, and we've got some great new algorithms for pruning old history in a fully-p2p network that I expect to release this summer.


So if I change "foo" to "moo" and you change "foo" to "boo", who wins?


So this is either represented as a delete followed by an insert (delete one character at offset N, insert "m" at offset N), or as a replace (atomically replace character at offset N with "m").

For an atomic replace operation, CRDT algorithms will solve this by having the last write win. What CRDTs give you here is a guarantee that the order is the same for every participant. So if you're building a collaborative text editor, for example, either everyone will see "moo" or everyone will see "boo".

For a delete + insert, it might not be atomic, in which case only the delete will "conflict". Since you both deleted at the same time, it's not actually a conflict (you both did the same thing), and the result will be either "mboo" or "bmoo". But again, it will be the same for everyone.
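To spell out why the result is the same everywhere: both replicas order the two concurrent inserts with the same deterministic tie-break, so both render the same string. A toy illustration (not a real sequence CRDT):

    // Two concurrent inserts at the same position, ordered by a deterministic
    // tie-break (here, replica id). Every replica applies the same rule, so
    // every replica renders "mboo" (or every replica renders "bmoo").
    const inserts = [
      { char: 'm', replica: 'alice' },
      { char: 'b', replica: 'bob' },
    ]
    inserts.sort((x, y) => (x.replica < y.replica ? -1 : 1))
    const merged = inserts.map(i => i.char).join('') + 'oo'  // "mboo"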


Interesting. What about the sequence "Alice deletes f, Bob replaces f with b, Alice inserts m"? I guess it doesn't matter, as long as all implementations do the same thing.

git could easily take an approach like this too, but there are obvious reasons why it doesn't. It feels like the people designing this algorithm believe the text being worked on is less important than source code.

I don't see how it's possible. I get Alice's changes, I spend 3 hours working on them, I get Bob's changes. The algorithm might be able to resolve these three sets of changes consistently according to its rules, but I've got no faith that the meaning of the text would survive the process.


CRDTs work best for high-contention situations such as online text editing (think Google Docs) where the conflict resolution can be seen right away and addressed by the user.

For offline sync, where someone edits a text document for an hour and then syncs, you're right: You can end up with something unintended, since each participant is editing based on ("branching off") a snapshot. For example, if I deleted a whole paragraph, and you edited it, what should the end result be? But at least the end result will be consistent in the sense that all participants end up seeing the same thing, though semantically it may be wrong.

Note that CRDTs go beyond just text. CRDTs can be used to represent arbitrary data structures and operations on them: arrays (insert, delete, append, etc.), numbers (increment, decrement, etc.), dictionaries (insert, delete), etc. A great implementation of this is Automerge [1].

[1] https://github.com/automerge/automerge


That makes sense, and is reasonable. Thank you for taking the time to explain. I am now wondering about the space where CRDTs and databases overlap.

(Very unformed thoughts follow) We're used to databases storing the system's current state. If we're lucky, we're writing changes to the database, rather than just the current state, so we can reconstruct the system's state at any point in the past. What would a database that not only stores changes but also resolves conflicts look like, I wonder. A database where CRDT was a column type, I guess.


Further thought: A list of deltas in a database is reversible - you can wind back to see what the database state was 3 weeks ago. Can you reverse a CRDT?

I'm not sure why I keep coming back to this. Maybe because it's a new structure I've never thought about before. Maybe I'll have to implement one, just to get a feel for them.


> either everyone will see "moo" or everyone will see "boo"

That seems in conflict with the claim elsewhere in this discussion that this works offline, too (https://news.ycombinator.com/item?id=23802495)

I guess everyone will eventually either see “moo” or “boo”?


Right!


Ideally, you'd get a popup asking you and the other party to reach consensus.


Depending on timing, you end up with "mboo" or "bmoo".


How about concurrently editing "your bonus is 1000" and "your bonus is 10000"?


So, literally a change nobody wanted? Seems like it would not work in any real sense. I change it to moo and add a line checking something on it. You change it to boo and add a line as well. Congrats, now neither of our additional lines makes sense...

This really feels like a solution in search of problems.


> Congrats, now neither of our additional lines makes sense...

What would you expect to happen? That one person's input is ignored? That's hardly expected for that person. If anything, it's much more confusing. This way, both people see both edits and can react appropriately. If they both remove the same thing, then no problem; if they keep stomping on each other's work, then they need to communicate anyway.

The important thing isn’t that you ended up with something neither of you wanted but that it’s consistent for all people. You see the exact same thing they see.

> This really feels like a solution in search of problems.

Hardly. As someone who once wrote a collaborative editor as a project long ago, this seems really useful to me. I've also worked on mobile sync (multiple devices that could be edited offline syncing with the online version), and again this would have been really beneficial, as the solution being used wasn't great at all.


Don't ignore it, but just like I expect my car not to start if I don't have my foot where it is supposed to be, I'd expect the editor to indicate to me that my edit could not go through.

And I think I wasn't clear. Collaborative editing at the character level feels like a misfire. Doing the same thing at a higher level of abstraction works. Merge in document changes in remote sections. Code merges with git work reasonably. None are bulletproof, and I expect conflicts at a level lower than a paragraph to almost always need an audit. Certainly lower than the line level.


This is how google docs works and this problem doesn’t come up much in practice. The reason is that usually when you’re collaboratively editing a document you do so in real-time. If we both see one another’s cursor on the same content, we pay extra attention and make sure our edits make sense.

For offline edits (eg multiple developers working on independent features in a codebase), generating merge conflicts is probably more appropriate. OT and CRDTs can be written that generate merge conflicts in cases like this - it’s an implementation detail. It’s just that most algorithms are written first and foremost with real-time collaborative editing in mind. And again, in the real-time collaborative editing case, merge conflicts aren’t what users want.


I recently did realtime collaborative editing of the same file with a Visual Studio Code plugin and have also often used Google Docs, and it's really not an issue in practice. If it were to suddenly stop me to notify me that there was a conflict, that would be a very jarring and unpleasant workflow.

As josephg says, you don’t want merge conflicts in realtime editing and for offline editing, a git merge conflict resolution style is probably more appropriate.


It’s an extremely valuable solution to a very real problem. Surely you must have tried Google Docs? The “mboo” is an indication that the two editors have different ideas about where they’re going. In a typical setup both editors will see that someone else has their cursor in the same place in the document, and they will very quickly see that a conflict has happened. Now they can coordinate on how to resolve it. It’s not a problem, but a desirable step in the process of two people working together on something that isn’t finished.


I've seen it all too often that people don't see the conflict because it allowed them to continue.

So editing a Google Doc is easily up there with many other experiences I don't like: it takes the act of editing, which used to be single-user, and forces it into distributed tricks from the get-go.

Yes, collaboration is distributed. And sometimes it is nice to both be working at the same place/time. Usually, though, a batch process is easier to reason about and execute.


A text editor could easily highlight passages where concurrent changes were made. (Perhaps only if a heuristic indicates that the merged change doesn’t make any sense.) The metadata is all there.


That's the best failure mode you can expect for the average user without diving into the usability pit that is git - both changes are kept and it's obvious to users that there is a merge conflict. Otherwise, one update would just eat the other silently, leaving one side confused and the other oblivious.


Meh. Git isn't so bad, all told.

But, the point is that you get a marked conflict. And take it back to the users.


What do you expect to happen instead?


Things are always sequenced. One will win, and the other will have to redo their change on top. Partially applying the change at the word level just seems way too fraught with false edits. It is already a source of a lot of bugs at the file and project level.


That is exactly what's happening here -- both deleted the "f", so there's no conflict, and their inserts of "b" and "m" are executed in order.


But one person edited the word foo. The other edited a word that no longer exists. I get why this feels like a clever combination of the edits. But I'm struggling to see how doing this at the character level makes any sense.

Consider instead that you could do this at the byte level, with equally off results.

At higher levels, this trick sounds useful. But you pick your abstraction height where all conflicts should just go back to the user.

So, people edit the same document, but at different paragraphs? Fine. They edit the same paragraph? Almost certainly a problem. No different than code.


Different strategies have different tradeoffs and work better for different use cases. If you have multiple people interactively editing the same document (i.e. syncs are happening regularly), it can feel more natural to err on the side of applying each user's edits and letting the humans work it out, rather than flagging a conflict that must be resolved before proceeding. When using Google Docs I've had the occasional awkward "after you, no after YOU" moment while trying to edit the same text, but it's pretty rare. You obviously wouldn't want to use this same approach for asynchronous/offline collaboration, where more explicit conflict resolution like a VCS offers is necessary.

The place where this kind of character-level approach actually does start to fall apart is when users can make larger structural changes to the document with single actions -- reordering lists, cutting and pasting chunks of text, etc. There are other options for that.


"Data laced with history" (2018) [1] is very relevant here. Interestingly, that extensive post obsessively vivisects RON 1.0 (Replicated Object Notation [2] as of 2017) which was based on columnar compression techniques Automerge recently implemented (53:24 in the talk).

Columnar formats have their upsides and downsides, though.

[1]: http://archagon.net/blog/2018/03/24/data-laced-with-history/

[2]: http://replicated.cc


It's worth noting that you are the author of RON :)


... and working hard to release the new version. Hearing the fuzzer buzzing as we speak...


Are there changes coming to RON that aren't currently mentioned on http://replicated.cc? Any clues as to what they might be?


I think the new RON oplog is the biggest change.


awesome, thank you for your work.


> Columnar formats have their upsides and downsides, though.

Oo, it sounds like you have some interesting thoughts. Could you elaborate?


Does anyone take issue with the fact that CRDTs seem to require keeping a history of every change ever made to the document?

Seems like it could get unwieldy very fast. Especially, in the face of a bad actor spamming a document with updates.

I've considered using CRDTs in a few projects now, but the requirement to keep a running log of updates forever has ruled them out. I've ended up using other less sound (more prone to failure), but more practical methods of doing sync.

Perhaps, I'm missing something. Wouldn't be the first time.

Are there alternatives without this requirement, or that would at least allow a cap on the update log?


You don't actually need to keep a log of all events. You only need to keep around just enough information to merge any 2 replicas.

Here's a very simple example: If there are only 5 users concurrently editing a document and each user has seen operations o1...oN, then you can safely compress the data for o1...oN.

Depending on the CRDT type you may not need to store any log at all. For an add-only set, for example, you only need to store the elements.

I think what's harder to solve is the metadata overhead problem. Most CRDT-based text editors have a huge per-character overhead. As Martin mentioned, Automerge used to have an overhead of ~200 bytes per character, but using a new binary encoding format they were able to reduce the overhead to 1.5-7 bytes per character. (https://speakerdeck.com/ept/crdts-the-hard-parts?slide=67)
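For example, a grow-only set needs no log at all: its entire state is the set of elements, and merge is just set union (illustrative sketch).

    // Grow-only set (G-Set): state is the element set, merge is union.
    // Union is commutative, associative and idempotent, so no operation log
    // is needed for convergence.
    function mergeGSet(a, b) {
      return new Set([...a, ...b])
    }

    const replicaA = new Set(['x', 'y'])
    const replicaB = new Set(['y', 'z'])
    // Either merge order yields the set {'x', 'y', 'z'}.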


That's not quite right. There are a lot of sound strategies for culling/merging/resolving CRDT state in-part or in-total depending on the use case and/or the topology of the system that interacts with the CRDT.

It's possible to construct a pathological case where it's impossible to soundly GC the CRDT state, and where you have to keep around an arbitrarily long list of per-agent updates or list of agents forever, but that shouldn't be the normative case.


Yeah, CRDTs don't require keeping a history of every change forever, or at all. In other words, all the changes coming from a bad actor can be merged locally into a single change, or a small set of changes, or whatever is appropriate, and only that will actually be propagated to other nodes. Plus, nodes can easily know how far all of them have progressed and drop history before the most-far-behind point. Only during outages should history grow a bit more than usual.


> Plus nodes can easily know how far all of them have progressed and drop history before the most far behind point.

That may be easy coordinating servers that are almost always online, but it's definitely not easy for desktop/mobile clients that go offline for long periods (and sometimes don't come back).


A middle ground could be a combination of "historic" nodes that keep all known history and "client" nodes that only care about history from their own moment of sync onward, then optimistically drop history via some heuristic.


Do you have any pointers to JavaScript libraries that work this way? All the libs I've looked at (not recently) require the server to keep a running log of updates for the life of the document.

For example, the automerge library linked elsewhere on this thread requires it.

My understanding (flawed) is that you need to keep all the changes on the server because you never know how long it's been since a client has pulled/pushed changes to a document.

I guess arbitrary limits based on number of updates or time can be imposed, but I haven't seen libraries that do that.

Thanks.


Did you look at hypermerge, also in the automerge org? It is based on the DAT protocol and hypercore, and is the p2p version of automerge, I think (just found the project).

https://github.com/automerge/hypermerge


No I didn't. I'll check it out. Thanks!


We tend to overestimate how often a document will be modified. For example, Wikipedia seems to keep "a history of every change ever made to the document" and they seem to be managing fine. If there are bad actors, what you need is better moderation. Having easy undo helps a lot in that case!


Depending on the use case, you can at a specific and suitable time create a snapshot of the value and clear the history of changesets.

I do that with my OT collaborative editor.


> I've ended up using other less sound (more prone to failure), but more practical methods of doing sync.

Could you share the alternative methods that you’ve used?


Stuff like diffing multiple JSON documents, merging adds, updates and deletes where conflicts did not exist, and overwriting conflicting attributes based on timestamps (latest wins). There are some reasonably good jsonpatch libraries out there that will do the heavy lifting.

Plenty of room for problems to arise, but it did not require keeping a log of updates. My use cases did not require real-time collaboration and the structure of the document was known beforehand, though.
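Roughly along these lines, perhaps (a hand-rolled sketch, not any particular jsonpatch library):

    // Field-level last-write-wins merge of two flat JSON objects, given a map
    // of per-field timestamps for each side. Real code would also handle
    // nesting, deletions and clock skew.
    function mergeByTimestamp(a, aTimes, b, bTimes) {
      const out = {}
      for (const key of new Set([...Object.keys(a), ...Object.keys(b)])) {
        if (!(key in b)) out[key] = a[key]
        else if (!(key in a)) out[key] = b[key]
        else out[key] = (aTimes[key] || 0) >= (bTimes[key] || 0) ? a[key] : b[key]
      }
      return out
    }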


Got it. I’ve typically seen Operational Transforms brought up as the alternative to CRDT for real-time collaboration. But, if you don’t need to solve for that use case and the format is JSON, it’s a different problem to solve and you don’t need that journal of changes.

If you have code available in the public domain, I’d love to see it.


Operational Transformation, though not the easiest to grasp, offers a solution to the same problem that is perhaps more loyal to the original user intent.


OT is just a fancy name for a CRDT


Is this really true (I just watched the video, so still learning these concepts, so please be kind)?

The linked video makes a clear distinction that OT and CRDT are different, as OT has the idea of a centralised coordinator to ensure consistency by mutating the proposed, conflicting operations whereas CRDT uses commutativity to attempt to make conflicting operations an inaccessible state


There is no formal distinction between an OT and CRDT algorithm.

It's true that the most popular OT systems have a central server -- but there exist OT algorithms without a central server.

It's also true that CRDT algorithms tend to require too much memory usage to be practical -- but there exist CRDT algorithms with bounded memory.

There are even algorithms that can be equally called OT and CRDT.


No, CRDTs and Operational Transforms are not the same at all.


Well, so OT is the same as CRDT except that you also have a centralised server that decides the merge conflicts.


Not true whatsoever.


Actually, the algorithms have a lot of overlap. The original CRDT (called WOOT, for "WithOut Operational Transform") was derived from an OT algorithm.

Historically, what happened was that the OT research community couldn't solve consistency without adding markers into their data structures that recorded deletions, and generally keeping those markers around forever. They called these "tombstones." People didn't like that you had to keep them around forever, but they seemed to solve the problem.

Over time, some researchers decided to just keep more things around forever, and promoted the idea that you don't need to do as much transforming of operations if you just preserve all operations in a spatial data structure: not just the tombstones, but everything. They came up with a new name for this style: CRDT.

But in fact, the technical definition of a CRDT actually fits OT algorithms quite well. A CRDT just means that you can accept any operation at any time. That's what a good OT system also needs to do. So there isn't a technical distinction between an OT or a CRDT. They each define a set of features, and your synchronizer can possess OT features, and it can possess CRDT features.


You could introduce a quiet period in which no updates are let in, coalesce changes into a snapshot, clear history and open the floodgates again.


“Ever” in practice is usually a few months or years.


There was a recent post of a series that dives into CRDTs [1] . Someone linked to a paper, Chronofold: a data structure for versioned text [2] dated this April, that attempts to map CRDT semantics onto text editing. It's still on my list.

[1]: https://news.ycombinator.com/item?id=23737639

[2]: https://arxiv.org/pdf/2002.09511v4.pdf


This spawned an idea, which I feel a need to post.

It seems the hard part about CRDT is choosing the correct commutative function, as merging two operations in line with user intent is non-trivial. Would it be possible to use a combination of superposition (Please correct me if this word is wrong) and pruning to derive user intent?

The idea being that instead of combine being (A, A): A (a commutative semigroup), couldn't we represent the operation as (A, A): Set[A] and have a way of showing the user the set of results such that their next operation tells us the "correct" interpretation?

He’s doing this implicitly with the file tree example, wherein operations that don’t create trees usually defy user expectations (Symlinks aside), so he decides to prune those choices from the result Set[A] before introducing a heuristic to further prune the set. There’s still an issue of users having opposite intent, but at that point it just seems like we need to introduce “Set pruning” or “Superposition collapse” operations as a user-level primitive and then rely on recursion to resolve the possible nesting of these result sets.

Does this riff with anyone / does anyone have further thoughts on this formulation?


This sounds similar to the concept of a "multi-value register" [1] I've seen in a few places while reading about CRDTs the last couple days. Is that what you're looking for? The idea is that each process maintains a vector clock with the latest timestamp from all other processes, and we don't delete values until one version can be verified to be "later" than all the others.

[1] Section 3.2ish of https://hal.inria.fr/inria-00555588/document
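Sketched out, the merge keeps every write whose vector clock isn't dominated by another's, so genuinely concurrent writes survive as a set for the application (or user) to resolve later (illustrative code, field names made up):

    // Multi-value register sketch. Each entry is { value, clock } where clock
    // is a vector clock: { replicaId: counter }. Merge keeps entries whose
    // clocks are not dominated by any other entry's clock. A real
    // implementation would also de-duplicate identical entries.
    function dominates(a, b) {
      let strictly = false
      for (const id of new Set([...Object.keys(a), ...Object.keys(b)])) {
        if ((a[id] || 0) < (b[id] || 0)) return false
        if ((a[id] || 0) > (b[id] || 0)) strictly = true
      }
      return strictly
    }

    function mergeMvRegister(entriesA, entriesB) {
      const all = [...entriesA, ...entriesB]
      return all.filter(e => !all.some(o => o !== e && dominates(o.clock, e.clock)))
    }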


This was 100% the path I think I was traveling (thank you for the link). I had a hard time grokking the complete specification from the paper, but the README of this repository that implements CRDTs in that frame was helpful [1].

The core part is separating the "merge" from the "resolve" step. Merging state can be done in a variety of ways, so in the default formulation there seems to be a focus on making the "merge" operation also "resolve" to only one value, when really there could be multiple formally valid merges that the client may desire (which is part of the difficulty the video notes, as he proposes a variety of possible file system mutations for a set of two uncoordinated operations).

The cleanest clarification of my thought process is similar to the following:

Given the operation Merge(4, 2), I could propose that there are two valid ways to perform this merge, addition and multiplication. This means the result would be either 6 or 8. The act of returning a single value (6 or 8) changes that proposal into a statement, though, which is the "resolve".

One subversion of this restriction is to return a set of all possible results for the operations we think are valid, so {6, 8}. At this point, the user can say (Either explicitly or implicitly) "I actually want 6" and we resolve it to the single value of {6}. There are also special cases like Merge(2, 2), where this whole situation is especially ergonomic because the merge operations are equivalent.
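In code, the difference is roughly this (purely illustrative):

    // The usual CRDT merge commits to one interpretation up front:
    //   merge : (A, A) -> A
    // The variant proposed here returns every interpretation considered valid
    // and lets the user (or a later heuristic) resolve it:
    function mergeToSet(a, b, interpretations) {
      return new Set(interpretations.map(f => f(a, b)))
    }

    mergeToSet(4, 2, [(x, y) => x + y, (x, y) => x * y])  // Set { 6, 8 }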

There are problems, of course, with this approach.

One issue is the need to categorize all possible operations the user may think are valid for Merge(4,2). If the user intended to do division, the result set proposed above will not include the state that they would expect. Still, this seems more general now, as we just need to gather the set of operations that the user may think are valid instead of assuming which one is valid. There's also a ranking problem that then exists at the UX level, as we need to find a way to cleanly propose this set of alternatives.

Another issue, of course, exists if both users propose conflicting resolutions (Actor1 says "I want multiplication" and Actor2 says "I want addition"). This is the issue with decoupling the "merge" and "resolve" steps, as now we may cause a fork in the model which causes a fundamental divergence of the collaborators' data.

[1] https://github.com/rust-crdt/rust-crdt


Oh wow! I've been studying CRDTs for a year but never understood what a multi-value register would be used for, so I'd ignored it... But I get it now. Also because it sounded like a concept from the 70s or something.


This might be a reasonable UX in some cases -- but it seems to assume that whichever user chooses an alternative also gets to broadcast that choice and have it immediately binding on all the others -- which is incompatible with some use cases for CRDTs.

Failing that, you've just kicked the can down the road. Consider two users A and B who make conflicting edits, and so wind up being presented with one of these choices. If they choose different options, and then make further edits, what do they see of each others' subsequent edits, and how do you explain the whole branched history to user C, who was out to lunch while this happened?


It does kick it down the road, but it also doesn't make it unresolvable — one issue the video seemed to articulate is that the writer of the "merge" function needs to make a choice for the user about what "combine" looks like, which is often the root of weird UX. The idea here was "maybe we can not make a choice, but still expose valid results to the user so that they can make a choice". It definitely isn't fully fleshed out, though.


From my (admittedly very limited) experience implementing CRDTs it became clear that even technically correct is not good enough. Optimistic merge strategies require a very clear understanding of user intent and expectation.

Properly consistent can still mean utterly confusing.


I think that's basically the point of this talk: correctness is the minimum, but for user collaboration at least, it's important to encode the user operations as 'close' to the CRDT as possible.

For example, in this talk he discusses the problems with moving items in a list as a pair of delete+insert operations. Then proposes adding a move operation to the CRDT that solves some of these problems.


Exactly this is what pushes many people away from CRDTs, despite their being mathematically proven correct, and towards Operational Transformation.


OT is just a fancy name for a CRDT.


No, it isn't.


First off, this is an excellent talk. The presenter explains the different topics in a way that even a layman like me can easily follow along. The compression scheme he presents at the end seems very interesting as well.

I do wonder if in practice OT isn't a simpler solution for most applications. He mentions the differences in the beginning of the presentation and the main advantage of CRDTs is that they don't need a central server. It seems to me that for e.g. a web app you have a central server anyway so all the extra complexity of CRDTs isn't needed. I know almost nothing about this though, so would love for someone more knowledgeable to explain why I might be wrong.


For single server/database web-apps, CRDTs might be useful because they allow offline edits, and (to me at least) they are simple to understand and implement. OT does allow offline edits too, but (I think) has poor performance if there are many offline edits.

For multi server/database web-apps, CRDTs might be useful because they reduce the centralization required for collaboration, and increase fault tolerance. In a load balanced web app, different clients could connect to different servers/databases and still achieve eventual consistency when those back-end systems sync up. If any of those systems go down, in theory traffic could be routed to other systems seamlessly.


I recently shared a thread [1] on Twitter with CRDT resources I found useful if you’re interested in that kind of thing.

[1] https://twitter.com/ollermi/status/1279067350269124609?s=21


CRDTs are great examples of what the NoSQL movement called Eventual Consistency. I never understood why this movement assumed that abandoning ACID-style consistency automatically gave you Eventual Consistency.

As a side question, have any new algorithms been developed over the last decade that have significantly improved automatic source branch merging?


Yes. Take a look at Pijul. Here is an article explaining the algorithm. https://jneem.github.io/merging/


Very good read. However, I'm confused: the article doesn't mention how to handle changes within a line?


I have no real understanding of this, but following the bread crumbs, I found this in Pijul's site (my emphasis below):

> In that paper, and in Pijul, the definition of a file is slightly expanded, to included these cases, and “artificially add” all pushouts of all possible patches to the space of files.

> The resulting structure is a graph of byte chunks (which can be for instance binary blocks, lines, or smaller parts, such as indentation).

> The relation between them, described by the graph edges, is the order in which they appear in the files. This graph acts as a “cache” of applied patches.

From: https://pijul.com/model/#pushouts-and-distributed-algorithms

The paper they mention is the same one in GP's linked article.

A Categorical Theory of Patches - https://arxiv.org/abs/1311.3903


Eventual consistency and ACID are not at odds, you can have one or the other or neither or both.


Depends how you interpret ACID. For some interpretations, only strong consistency qualifies.


Not true. Eventual consistency can not (provably) provide consensus.


Sure it can! A particle can be either this or that until someone looks at it, and when they do, the act of looking at it causes the state to collapse and propagate. Sure, a partitioned group may come to alternate realities before they reunite, but then you just get to do the same thing again! Like Conway's Game of Life, except with application state.

(I am joking of course)


If anyone is interested, I've been trying to think about the problem of moving ranges in a list-structured CRDT for a couple of weeks now for a side project, and I've got a candidate that seems to satisfy the most obvious constraints. I'd be really interested in any feedback / holes you can poke in my solution!

Rough notes are here:

https://docs.google.com/document/d/1p1K3sxgKGYMEBH72r-lnP9Gn...


This is interesting! I've been looking for a good sequence crdt to implement in my Rust CRDT toolkit [0], which is still very much a work in progress but I want to make it really useful. Do you know how this compares to yjs [1]?

[0]: https://github.com/anchpop/crdts [1]: https://github.com/yjs/yjs


I hadn't looked at yjs; I'll check it out! [edit: It looks to me like yjs is much more flexible than my design here, but doesn't include an ability to move ranges of lists to different locations]

Darn, and just after I'd implemented it myself in terrible beginner Rust! I might get started reimplementing it using your tool :)


What an excellent talk, many thanks.

One idea comes to my mind (a bit out of topic): as we can store the complete editing history in a document (even including mouse movements) with a fairly small overhead using the ideas of automerge, would this be useful for distinguishing texts that are generated by machines and by humans? Or to detect plagiarism?


I wrote the realtime sync and edit history code in cocalc.com, which is used by instructors to teach courses that use jupyter notebooks with realtime collab. Instructors do regularly suspect and diagnose instances of cheating by students due to us storing the complete edit history. We don't automate this - it's more that they are grading, get suspicious, and use the edit history manually to investigate further.


I thought about this once and came to the conclusion that the problem doesn't exist: anger is the only thing you can accomplish by typing letters into the sentence I'm writing. If the goal is not anger but to write text, only one person can write a sentence at a time.

The solution then becomes really simple: You insert your cursor where you want to edit. You wait for the text and/or background of that paragraph to change color. Light green would do nicely.

Other users will see that paragraph turn red.

The paragraph is now under your control. If someone else desires to edit a sentence in YOUR paragraph that sentence changes color again and the side bar displays a dialog allowing you (the owner) to 1) hand over the paragraph to the new author, 2) hand over only that sentence or 3) ignore the request.

Dumb software wins!
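A sketch of the ownership rule, ignoring timeouts, disconnects and offline editing (all names made up):

    // Paragraph-lock sketch: one owner per paragraph; anyone else who wants
    // to edit must ask the current owner via a dialog.
    const locks = new Map()  // paragraphId -> ownerId

    function requestEdit(paragraphId, userId, notifyOwner) {
      const owner = locks.get(paragraphId)
      if (!owner || owner === userId) {
        locks.set(paragraphId, userId)  // paragraph turns green for this user
        return 'granted'
      }
      // Owner chooses: hand over the paragraph, just the sentence, or ignore.
      notifyOwner(owner, { paragraphId, from: userId })
      return 'pending'
    }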


That falls apart as soon as the author with the locked paragraph leaves the document open while they’re doing something else, preventing others from editing.


Or if network connectivity isn't guaranteed.


I really don't get it. You want to add thousands of lines of code and complex data structures with tons of edge cases so that I can interrupt you while you write??? It's just not desired functionality. Poor network connectivity is not an excuse to break your workflow.

You simply do not assign 2 people to a task that can only be done by a single person.

It is like a system where 2 people can pour coffee into the same cup at the same time. We deal with the cup overflowing with some drainage (marvel at our creation, of course!) but then 12 people put sugar in the cup. Solution: we overflow the cup further to dilute it! We solve the mobility issue by spilling a bit more coffee out of the cup and wiping the bottom with just the right type of towel or napkin. The only problem that remains is both drinking from it at the same time.


Just put a timer on it?

You can also write comments above or under the section if it absolutely needs to be corrected immediately.


Hmm. Does anyone know of a transcript? I can’t watch right now, only read.

I wonder if there’s some free YouTube transcription service... even just showing the captions would work.

Oh, hm. It’s not on YouTube anyway.


Actually, the video on the page is a YouTube embed and the English captions are pretty good (although understandably it struggles with CRDT jargon).

I found that the last 4-5 references listed in the link are pretty accessible, and most of the diagrams from the talk are taken from one of the papers by Kleppmann.


Great talk! I look forward to diving into "Interleaving anomalies in collaborative text editors" to see how Kleppmann et al fixed the RGA interleaving issue.

The section on moving items in a list makes my mind jump to Causal Trees (≈RGA). The problem here is that a) we need the concept of a list bucket or slot, and b) we also need to preserve the sequential integrity of runs of items as in text editing. In CT/RGA, each letter/item is simultaneously a list bucket and its contents. I wonder if this problem could be solved by adding a "reanchor" op that moves the "contents" (and some/all children) of a letter op to the "bucket" of another letter op?

Each reanchor op would have the job of "moving" a subtree of text to a new parent bucket. (Under the hood, the subtree would obviously remain topologically unchanged for convergence purposes, but the reanchor op would inform how the tree is parsed and turned into a user-visible list or string.) First, the reanchor op would need to reference the start and end letter/item ops in a given subtree; and second, the reanchor op would need to reference the letter/item op into whose bucket the subtree contents will be moved. If the set of subtrees for a range of text/items contains multiple independent subtrees, they would each need their own reanchor op. Concurrent moves of the same subtree would be resolved as LWW, similar to the Kleppmann proposal.

Let's say you have a graph for string "ABCD" that looks roughly like this, sans metadata:

        +---+
        | A |
        ++-++
         | |
      v--+ +--v
    +---+   +---+
    | B |   | C |
    +---+   +-+-+
              |
              v
            +-+-+
            | D |
            +---+
If you wanted to move "CD" after "A", you would add the following op:

        +---+
        | A +<---------+
        ++-++          |
         | |           |
      v--+ +--v        |
    +---+   +---+      |
    | B |   | C +<---+ |
    +---+   +-+-+    | |
              |      | |
              v      | |
            +-+-+   ++-++
            | D +<--+ m |
            +---+   +---+
"m" references "C" and "D" as the subtree range, and "A" as the target bucket. The string would render as "ACDB".

We can get cycles with this scheme, but as far as strings or lists are concerned, this doesn't actually matter, since the parent references aren't reflected in the output (as they would be with an explicit tree data structure). If, concurrently to this, another device wanted to move "A" after "C", the merged graph would look like this:

    +---+   +---+
    | m +-->+ A +<---------+
    +-+-+   ++-++          |
      |      | |           |
      |   v--+ +--v        |
      | +---+   +---+      |
      | | B |   | C +<---+ |
      | +---+   +-+-+    | |
      +---------^ |      | |
                  v      | |
                +-+-+   ++-++
                | D +<--+ m |
                +---+   +---+
How does this get resolved? Well, we can simply read this as "move the contents of 'A' to the bucket for 'C', and move the contents of subtree 'C' through 'D' to the bucket for 'A'." The string would consequently render as "CDBA", as we would expect.

This is just a sketch, not sure if it would actually work in practice. Pain points are a) moving multiple subtrees in a somewhat atomic way (when the range to be moved covers more than a single run of items), b) sane results when overlapping ranges are moved concurrently, c) moving items to a bucket whose contents had already been moved before, and d) parsing performance, especially given the fact that we'll now have ops (and their children) whose contents might end up in arbitrary places in the output string or list. Might just end up with a slow, confusing, and labyrinthine data structure.

There's also the question of intent: does the user want to move a range of items to the slot currently occupied by a different item, or do they want to move that range to the right of that item? Lists of notes will want the former, while strings will usually want the latter. Perhaps the reanchor op could explicitly differentiate between these two cases.
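For concreteness, a reanchor op might be nothing more than a record like this (field names entirely hypothetical):

    // Hypothetical shape of the "reanchor" op sketched above: move the run of
    // ops from rangeStart through rangeEnd into the bucket of target. Ids are
    // (replica, counter) pairs as in CT/RGA; concurrent moves of the same
    // subtree resolve as LWW on the reanchor op's own id.
    const reanchorOp = {
      type: 'reanchor',
      id: { replica: 'site2', counter: 9 },
      rangeStart: { replica: 'site1', counter: 3 },  // "C"
      rangeEnd: { replica: 'site1', counter: 4 },    // "D"
      target: { replica: 'site1', counter: 1 },      // "A"
    }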


Interesting idea, but the devil is in the details. Especially two concurrent moves of partially overlapping ranges of characters is a tricky case to handle, and it is not obvious to me how your scheme would deal with this.

Another fun case is two concurrent range moves in which the destination of the first move falls within the second move's source range, and the destination of the second move falls within the first move's source range. How do you handle this?

I expect that any algorithm solving this problem will need a formal proof of correctness, because it's very easy to miss edge cases when using informal reasoning.


Incidentally, Martin's book ("Designing Data-Intensive Applications") is excellent and highly recommended reading. If you find yourself saying things like "this database is ACID compliant", "we have an SQL database with transactions, so we're fine" or "let's just add replication to Postgres and we'll be fine", you need to read this book.


> If you find yourself saying things like "this database is ACID compliant", "we have an SQL database with transactions, so we're fine" or "let's just add replication to Postgres and we'll be fine", you need to read this book.

Can you elaborate why? Are these sentences fundamentally wrong? They don't appear so.


The entire book is about when and how these statements turn out to not be absolutely true. And it's not a short book. I don't have my copy in front of me right now, so I won't get into specifics. But I consider it one of the most important books I've read, if only for making me realize how difficult it is to get distributed systems correct. Or, rather, learning that getting distributed systems correct is impossible and what sort of tradeoffs you can make in order to keep things mostly working.

And it turns out that most services that I find myself working on these days are distributed systems, so having a healthy respect for all the ways things can break is a useful place to be.


It's not that they are fundamentally wrong, but rather that these seem like simple, black-and-white statements/decisions, when they are anything but. When it comes to databases (especially distributed ones), the devil is in the details, and things like ACID compliance can surprisingly mean vastly different things in different systems.

+1 from me on Martin's book, btw. One of the best technical books I've ever read.


I found this short paper really interesting to understand the fundamental limitations of distributed systems: Consistency Tradeoffs in Modern Distributed Database System Design http://www.cs.umd.edu/~abadi/papers/abadi-pacelc.pdf . The theorem introduced in the paper (PACELC) provides a better way to understand the design of modern distributed systems than CAP.


Well, I recommended reading the book and I was serious. I can't "elaborate" on why — there is a very smart researcher out there who spent years studying this and wrote an entire book on it. Seriously, please get the book and read it. It's worth it!


It's simplistic. When you move into distributed systems, these claims are not sufficient to indicate "so we're fine". Even with ACID, the "I" has multiple levels, and databases often get them wrong (see Jepsen).


In the context of databases wrong is not binary. You need to understand the failure models of your database and the implications of your parameters.


Oh man, I was just about to recommend this book, you beat me to it. I am currently reading it, and I cannot recommend it enough. If you're interested in distributed systems and learning about database fundamentals, it really is a must read.



