It is fairly trivial to construct an example where editors are editing _different_ sentences and OT takes two locally semantically correct states and converges to a semantically incorrect (but grammatically correct) state. For instance, one editor changes "a dog" to "a cat" in sentence one while another editor appends "The dog barked." as sentence two; OT happily merges both, and the result is grammatical but no longer makes sense.
I think OT and other "real-time" collaborative editors are practical if you are willing to (or your use case can) live with "silent semantic errors".
The greater the document "interconnectivity" (e.g., paragraph A is semantically related to paragraph C), the greater the likelihood of having far-flung silent semantic errors.
For documents like spreadsheets this is very obvious because you start getting nonsensical results and (hopefully) errors very quickly. For Word-like documents, the errors are "silent" and thus much more insidious.
My point was that this is an aspect of OT which many users don't realize.
With regards to predictability, I would not call the results of OT predictable from a user's perspective. It is predictable only in the narrow sense that, for a given arrival order of operations AT THE SERVER, the outcome is deterministic.
However, it is impossible for a user to predict how their local operations will interleave at the server with other users' local operations. For all practical purposes the converged result is unpredictable from the user's perspective.
The only property which one can confidently assert with OT is eventual consistency.
Yeah, I guess I see what you are trying to say. I just want to clarify that when I say predictable, I mean that, given a set of operations, the result will be the same no matter the order they arrive in. This makes OT powerful in that everyone just needs the operations eventually in order to have a consistent document.

The only middle ground I could see that would allow predictability in the document, and help mitigate these silent errors, would be to notify users when they have both edited the same range before consistency was reached. This would catch "almost" any case that I think you are talking about, although it would of course miss the situations in which semantic errors arise from edits in very different parts of the document (e.g., referencing figure 2.1 while someone changes that figure to 2.2). But those errors can easily arise with a single editor too, and so are not really unique to OT. I do think it would be nice to have a solution to that problem though...
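The order-independence claim can be sketched with a toy transform for single-character-style inserts (the function names and tie-breaking scheme here are hypothetical; a real OT system must also handle deletes, ranges, and derive tie-breaking from site ids):

```python
def apply_op(doc, op):
    """Apply an insert operation (position, text) to a string document."""
    pos, text = op
    return doc[:pos] + text + doc[pos:]

def transform(op, against, against_wins_ties):
    """Shift `op` right if the concurrent `against` inserted at or before
    its position; `against_wins_ties` breaks same-position conflicts
    (in practice this flag would be derived from a site id)."""
    pos, text = op
    a_pos, a_text = against
    if a_pos < pos or (a_pos == pos and against_wins_ties):
        return (pos + len(a_text), text)
    return op

# Two sites start from "abc" and concurrently insert.
op_a = (1, "X")  # site A inserts "X" at index 1
op_b = (2, "Y")  # site B inserts "Y" at index 2

# Site A applies its own op first, then B's op transformed against it.
doc_a = apply_op(apply_op("abc", op_a), transform(op_b, op_a, True))
# Site B applies its own op first, then A's op transformed against it.
doc_b = apply_op(apply_op("abc", op_b), transform(op_a, op_b, False))

# Both sites converge despite applying the operations in different orders.
print(doc_a, doc_b)  # both "aXbYc"
```

This is exactly the narrow guarantee being discussed: the two replicas converge byte-for-byte, but nothing in the transform knows whether "aXbYc" still means what either editor intended.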