Actually, we use React + MobX on both our web and React Native stacks and it's been great. What we really like is that it's simple and extremely performant: in most cases it automatically re-renders only the React components that depend on the state that was modified.
We jumped directly from a Flux-like state management approach to MobX (skipping Redux), and development time has gone down significantly as well.
The other major issue with Blockchain is that "all nodes can see everything". This is theoretically a problem with distributed databases as well. However, in the case of distributed databases, the database nodes that can "see everything" are not the end users, whereas in Blockchain, because the nodes are untrusted, one must assume that the end user can see the entire Blockchain state.
This tremendously limits the use cases to those where "everyone-can-see-everything" is an acceptable tradeoff.
There are several ways around this:
1) Zero Knowledge Proofs. But these are highly specialized and resource intensive. To my knowledge we don't have these for generalized Smart Contracts.
2) Split the overall state into Channels, Subledgers etc. with narrower "viewing rights". But again this typically involves an application compromise.
3) Encrypt or cryptographically hash portions of the state. But by definition, this portion of the state cannot be acted upon by smart contracts. (A toy sketch of this follows the list.)
4) Use frameworks like Microsoft's recently released CoCo Framework which relies on Hardware Trusted Execution Environments (TEE). The issue here is that a compromise of a single TEE negates the whole scheme.
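For 3), here is a toy sketch of the hash-commitment idea (the helper names are hypothetical; the hashing is plain SHA-256): only a salted hash of the sensitive field enters the shared state, and the (value, salt) preimage is shared off-chain with authorized parties.

```kotlin
import java.security.MessageDigest
import java.security.SecureRandom

// Hypothetical sketch: commit to a sensitive field (e.g. a price)
// with a salted hash. Only the commitment enters the shared,
// everyone-can-see-everything state.
fun commit(value: String, salt: ByteArray): ByteArray =
    MessageDigest.getInstance("SHA-256").digest(salt + value.toByteArray())

fun main() {
    val salt = ByteArray(16).also { SecureRandom().nextBytes(it) }
    val onChain = commit("price=100.00", salt)  // goes into the Blockchain state

    // Off-chain: a party that receives (value, salt) can verify the commitment...
    println(commit("price=100.00", salt).contentEquals(onChain))  // true

    // ...but a smart contract holding only 'onChain' cannot read or
    // compute on the price, which is exactly the tradeoff in 3).
}
```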
In my opinion the privacy characteristics of Blockchain are a critical factor that needs to be taken into account when deciding on the suitability of Blockchain for an application.
Sure. Hyperledger Fabric 1.0 has the option of "Channels" to limit viewership rights. In particular, it restricts rights to a subset of the community. For example, you may have 1,000 parties in the community, but a particular channel may have only three (say A, B and C).
Now, if one takes a Supply Chain example (a domain I'm quite familiar with), most transactions cannot be restricted to just parties A, B and C. Some will involve A, B and D and some will involve B, C and F etc. So, it is difficult to come up with a suitable Channel membership model.
Even if the transaction is between A, B and C, the viewing rights are often not symmetric. For example, in a drop-ship case where A is the Buyer, B is the Seller and C is the Fulfiller, the price attribute may need to be visible to A and B but not to C. This is not possible with the Channel approach.
So this particular type of hard partitioning only works for the simplest Supply Chain examples.
Another type of hard partition is to partition by Transaction. But this involves issues such as synchronization between transactions. This becomes an off-chain concern with major consistency issues.
We're working on 5), a privacy option for public blockchains using secure multi party computation. Few application trade-offs (mostly around availability logic and additional cost), no private or trusted chains.
I don't believe that one should try to create a single tool that covers both synchronous and asynchronous collaboration. Their UX requirements are too different.
It seems to me a better idea to create an asynchronous deep collaboration tool and integrate it with synchronous tools like Slack.
The bigger question to me is what should an asynchronous deep collaboration tool do (if anything) beyond first-class threads.
In my mind the opportunity lies along three dimensions:
1) Bringing updatable content and structure to threads. This makes threads vastly more useful for real collaboration (not just communication).
2) Making "catchup" much more efficient, as this is the main problem with email.
If you think of one of the most successful asynchronous collaboration approaches, it's source control like git. The key thing that these tools do is make changes "diffable". That allows users to work at completely different points in time and still easily catch up with what's changed. (A toy diff sketch follows this list.)
3) Allow regular users to easily convert unstructured one-off threads to semi-structured template threads. This naturally and gently moves teams to greater process-orientation.
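As a toy illustration of the "diffable" point (a naive LCS-based line diff, nothing like git's actual algorithm):

```kotlin
// Naive line-based diff via longest common subsequence -- a toy version of
// what makes git-style "catchup" possible: show only what changed.
fun diff(old: List<String>, new: List<String>): List<String> {
    // lcs[i][j] = length of the LCS of old[i..] and new[j..]
    val lcs = Array(old.size + 1) { IntArray(new.size + 1) }
    for (i in old.indices.reversed())
        for (j in new.indices.reversed())
            lcs[i][j] = if (old[i] == new[j]) lcs[i + 1][j + 1] + 1
                        else maxOf(lcs[i + 1][j], lcs[i][j + 1])

    // Walk the table, emitting context lines, removals (-) and additions (+).
    val out = mutableListOf<String>()
    var i = 0; var j = 0
    while (i < old.size && j < new.size) {
        when {
            old[i] == new[j] -> { out += "  ${old[i]}"; i++; j++ }
            lcs[i + 1][j] >= lcs[i][j + 1] -> { out += "- ${old[i]}"; i++ }
            else -> { out += "+ ${new[j]}"; j++ }
        }
    }
    while (i < old.size) out += "- ${old[i++]}"
    while (j < new.size) out += "+ ${new[j++]}"
    return out
}

fun main() {
    val before = listOf("Agenda", "Review budget", "Ship v2")
    val after = listOf("Agenda", "Review budget", "Hire designer", "Ship v2")
    diff(before, after).forEach(::println)
    // prints:
    //   Agenda
    //   Review budget
    // + Hire designer
    //   Ship v2
}
```

A user who was away only needs to read the `+`/`-` lines, which is what keeps "catchup" cheap no matter when they left off.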
What about immutability? A key part of functional programming is using immutable structures to avoid side effects.
My understanding is that Swift has these to some extent ("let" vs "var", structs being immutable, etc.). And since Kotlin is built on JVM compatibility I _assume_ that Kotlin does not support an immutable style of programming. Maybe someone here has more clarity on this.
> And since Kotlin is built on JVM compatibility I _assume_ that Kotlin does not support an immutable style of programming.
The underlying host doesn't imply anything about language support as that's handled at the compiler level, not at the machine or VM level.
Even in Haskell, data isn't immutable if you have a debugger or modify the machine code. It's simply a tool provided and enforced solely by the compiler - other JVM languages, like Clojure and Scala, do support it.
You can have immutability by making the setter methods of a class private. So if you combine that with "val" you have immutable objects. It is a bit more work in Kotlin, but as it is also object-oriented you can achieve exactly what you want.
And for collections you have List, MutableList, Map, MutableMap, etc.
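A minimal sketch of that style, using nothing beyond the Kotlin stdlib (the Point class is just an illustration):

```kotlin
// A class whose state can't be changed after construction:
// 'val' properties get no setters at all.
class Point(val x: Int, val y: Int)

fun main() {
    val p = Point(1, 2)
    // p.x = 5          // won't compile: a 'val' cannot be reassigned

    // Read-only vs. mutable collection interfaces:
    val readOnly: List<Int> = listOf(1, 2, 3)
    // readOnly.add(4)  // won't compile: List exposes no mutators

    val mutable: MutableList<Int> = mutableListOf(1, 2, 3)
    mutable.add(4)      // fine: MutableList exposes mutators
}
```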
For immutability to be practical one would need to implement structural sharing in the collections; otherwise copying would be prohibitively expensive.
I get your point now. Yes, it is read-only, not immutable. For me, I only need read-only. I think Kotlin wasn't meant to be purely functional. I see it as object-oriented with some functional programming.
Out of curiosity, in what cases would you use an immutable collection instead of a read-only one? I can't imagine any use case where immutable is a gain over read-only.
"Read-only" would not implement structural sharing.
Consider the situation where you had a linked list with 10,000 elements and you wanted to return a new list with one new element added to the 'head' of the list. With "read-only" you would literally have to make a new linked list with 10,001 elements, which would kill performance.
While semantically correct, immutability via deep-copying is very impractical. So, immutability (of Collections in particular) needs to be implemented via structural sharing of elements.
In the above example, with structural sharing, the new list would hold the new element and internally point to the old 10,000-element list, but the whole 'structure' would appear to you as just a normal list.
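A minimal sketch of structural sharing (PList is a hypothetical cons-style list, not a stdlib type):

```kotlin
// A tiny persistent (immutable) singly-linked list with structural sharing.
sealed class PList<out T> {
    object Nil : PList<Nothing>()
    data class Cons<T>(val head: T, val tail: PList<T>) : PList<T>()
}

// Prepending is O(1): the new list reuses the old one as its tail.
fun <T> PList<T>.prepend(element: T): PList<T> = PList.Cons(element, this)

fun main() {
    var big: PList<Int> = PList.Nil
    repeat(10_000) { big = big.prepend(it) }  // the original 10,000-element list

    // One new node is allocated; all 10,000 existing nodes are shared.
    val bigger = big.prepend(-1)
    // 'big' is untouched and still usable; 'bigger' has 10,001 elements
    // but nothing was copied.
}
```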
Sorry, I don't see the difference. With read-only you can create a new object which is the first element of the list and also points to the rest of the list. Instead of a structure you have an "immutable" object.
Yes, but the resulting object is not a "List" and hence would not interoperate with any function that takes a List as input.
In essence if the original object implements a List _interface_, then the new one should as well. If you do all of this, then you've essentially implemented immutability and structural sharing. But then you have to do this for 10 other Collection types as well.
Now, you could do all this, but I could do it as well and do it differently. Then my function which took MyList as argument would not interoperate with your function that wants to pass YourList as argument.
Something as fundamental as an (immutable) collection needs to be standardized so that all functions can take these and return them and thus compose easily.
This is the case in languages that implement immutability, like Elixir, Haskell, etc.
There are already interfaces for immutable collections in Kotlin. And all the functional operators such as map, filter, etc. return them. But it is not true immutability, as you said, because under the hood they are normal lists. They aren't even read-only objects; you just encapsulate this behaviour under an interface.
This needs to be built into the language or somehow standardized by the community.
Without standardization of immutable collections, libraries would implement immutability in lots of different ways. This would lose one of the main benefits of functional programming (i.e. awesome composability).
This is already in the language. It is just an interface which makes the common lists visible only as read-only objects. But inside they are normal lists. And in this case the object is read-only, not immutable, as we were discussing in the other comment thread.
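A small sketch of that distinction using only stdlib types: Kotlin's List is a read-only view, so the underlying data can still change out from under you.

```kotlin
fun main() {
    val backing = mutableListOf(1, 2, 3)

    // 'view' is typed as a read-only List, but it is the SAME object.
    val view: List<Int> = backing

    println(view)   // [1, 2, 3]
    backing.add(4)  // mutate through the other reference
    println(view)   // [1, 2, 3, 4] -- the "read-only" list changed

    // A truly immutable list could never change after construction,
    // which is why a read-only interface alone doesn't give immutability.
}
```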
Ever-increasing distraction and "real-time" communication and collaboration are reaching a point of diminishing returns and are actually decreasing our productivity.
The book Deep Work, by Cal Newport, is a start on identifying the problem and a possible solution (i.e. isolate yourself for stretches of time to accomplish Deep Work).
Unfortunately, we live in a world where collaboration is necessary. So, what's the solution?
One possibility is to come up with a collaboration solution that is built from the ground up to be asynchronous in nature. (Deep Collaboration as the enabler of Deep Work)
Such a solution would complement our real-time collaboration solutions.
E-mail, wikis, discussion forums, and issue trackers are all collaboration tools built from the ground up to be asynchronous in nature. Are you looking for something else?
Agreed that all of these can be used asynchronously. However, I would refer to these as "incidentally-asynchronous", in the sense that their design goals were not primarily to be asynchronous (and hence support "Deep Work").
The major determinant of asynchronous deep collaboration efficiency is the number of "cycles-to-outcome" (where a cycle is roughly one request/response loop).
Email, because of its lack of shared state, generates lots of extra cycles due to confusion over the current state (i.e. the attachment nightmare).
Conversely, something like online document collaboration supports shared-state collaboration, but is really poor at "what happened". For all but the smallest of documents this leads to compounding "implicit document rot" on every iteration. Alternatively, it leads to ever-increasing time to "catch up", once again vastly expanding cycles-to-outcome.
Neither is good with accountability (something that issue trackers are good with). Lack of accountability is another driver of increasing cycles-to-outcome.
A deep collaboration solution that enables Deep Work could be designed from first principles based on minimizing cycles-to-outcome.
Deep Work is great! I agree that after reading it, the collaboration part is very difficult. Unfortunately the best solution that I've been able to come up with is to work on projects that can be accomplished by myself alone. It's obviously not ideal and dramatically limits the types of fun projects I'd otherwise take on.
"Unfortunately, we live in a world where collaboration is necessary. So, what's the solution?" - There are specific examples of Deep Work collaboration in his book. Deep Work =/= No collaboration.
The basic problem with OT (and current "real-time" collaborative editing approaches) is that they can only achieve eventual consistency.
While this sounds great, eventual consistency DOES NOT mean semantic consistency. This rules it out for many applications where semantic correctness is important.
Even for simple text documents you can get eventually consistent but semantically incorrect results.
For example, consider the sentence
"The car run well"
This has an obvious grammatical error.
Now imagine two collaborative editors.
Editor 1: Fixes this to
"The car runs well"
Editor 2: Fixes this to
"The car will run well"
Depending on the specific ordering of character inserts and deletes this could easily converge to
"The car will runs well"
Obviously this statement is both grammatically incorrect and semantically ambiguous. (However, both editors see the same result, so it is eventually consistent.) Worse, OT collaborative editing will silently do this and carry on.
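To make the mechanics concrete, here is a toy sketch of insert/insert transformation (not any production OT library) that converges to exactly that result:

```kotlin
// A toy OT sketch for concurrent inserts into a shared string.
// Positions are character offsets into the same base document.
data class Insert(val pos: Int, val text: String)

fun apply(doc: String, op: Insert): String =
    doc.substring(0, op.pos) + op.text + doc.substring(op.pos)

// Transform 'op' so it can be applied AFTER 'against' has been applied
// (simplified classic insert/insert transformation).
fun transform(op: Insert, against: Insert): Insert =
    if (op.pos >= against.pos) Insert(op.pos + against.text.length, op.text)
    else op

fun main() {
    val base = "The car run well"
    val op1 = Insert(11, "s")     // editor 1: "run" -> "runs"
    val op2 = Insert(8, "will ")  // editor 2: "run" -> "will run"

    // Site A applies op1 first, then op2 transformed against op1:
    val siteA = apply(apply(base, op1), transform(op2, op1))
    // Site B applies op2 first, then op1 transformed against op2:
    val siteB = apply(apply(base, op2), transform(op1, op2))

    println(siteA)  // The car will runs well
    println(siteB)  // The car will runs well -- consistent, but wrong
}
```

Both sites converge (eventual consistency holds), yet neither editor intended "The car will runs well" (semantic consistency fails).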
Now, for non-critical text where errors like this are ok, this may not be a big problem. But imagine contracts or white papers, or trying to use this on something like a spreadsheet where semantic correctness is critical and one can see why the current scope of collaborative "real-time" editing is very limited.
In general current "real-time" editing approaches like OT are outright dangerous.
I would expect that if an editor makes a change, their change should be preserved. There is no general case where you can decide which edit to keep, because in some cases (like the one you presented) people are editing the exact same sentence, but far more often people will not make edits to the same small part at the same time (at least in the real world). This makes OT very practical, since generally the eventual consistency can be reached quickly, and there is consistency, so the results with given inputs are predictable.
It is fairly trivial to construct an example where editors are editing _different_ sentences and OT takes two locally semantically correct states and converges to a semantically incorrect (but grammatically correct) state.
I think OT and other "real-time" collaborative editors are practical if you are willing to (or your use case can) live with "silent semantic errors".
The greater the document "interconnectivity" (e.g., paragraph A is semantically related to paragraph C), the greater the likelihood of having far-flung silent semantic errors.
For documents like spreadsheets this is very obvious because you start getting nonsensical results and (hopefully) errors very quickly. For Word-like documents, the errors are "silent" and thus much more insidious.
My point was that that is an element of OT which many users don't realize.
With regards to predictability, I would not call the results of OT predictable from a user's perspective. It is predictable only in the narrow sense that, for a given sequence of arrival of operations AT THE SERVER, the result is determined.
However, it is impossible for a user to predict how their local operations will interleave at the server with other users' local operations. For all practical purposes the converged result is unpredictable from the user's perspective.
The only property which one can confidently assert with OT is eventual consistency.
Yeah, I guess I see what you are trying to say. I just want to clarify: when I say predictable, I mean that given a set of operations, no matter the order they come in, the results will be the same. This makes OT powerful in that everyone just needs the operations eventually in order to have a consistent document. The only middle ground that I could see that would allow predictability in the document, and help mitigate these silent errors, would be to notify users when they have both edited the same range before consistency was reached. This would catch "almost" any case that I think you are talking about, although it would of course miss situations in which semantic errors arise due to errors in very different parts of the document (e.g. referencing figure 2.1 while someone changes that figure to 2.2), but these errors can still easily arise with a single editor, and so are not really unique to OT. I do think that it would be nice to have a solution to that problem though...
I can't think of a CRDT / anything that can handle your example...
but wouldn't this be somewhat (*not 100%) mitigated with UI? (e.g. showing caret positions + time-agos of different users, asking user(s) to resolve a conflict, etc.)
Yes, this problem could be mitigated somewhat by showing caret positions. But this method is very reliant on the users paying extremely close attention. It does not work at all when the silent semantic errors are caused by "far flung" edits, where you don't even see the other caret because it is "off screen" for you.
Asking users to resolve a conflict is not possible, because the whole idea of OT is to have conflict-free merges, so it would have no idea where the conflicts are.
If one absolutely wants real-time collaborative editing then the only (long-term) solution I see is something like a deep learning solution that continuously semantically analyzes the merged state for semantic errors. In a particular problem domain this might be 5-10 years out. In the general case, this starts to approach the level of difficulty of AGI and hence who knows when that'll happen :)
One practical solution is for the document to start out in a 'real-time collaborative editing' phase. After this phase is over, the document moves to a 'review' phase, where it is reviewed for semantic errors and those errors are fixed using a 'non-real-time' approach.
The only way I see at this time to avoid silent semantic errors in the first place is non-real-time approaches.
The best practices here are optimistic locking/leasing of "semantically-connected regions" (could be defined as a paragraph, document, multi-docset, worksheet, slide etc.) along with semantically useful diffs (diffs that are meaningful for an end user) for conflicts.
You could say that this is the approach taken by version control systems like git, where the semantically-connected region is the File/Document.
Semantically useful diffs for anything other than text documents are a non-trivial problem in themselves. But it is still more tractable than avoiding or detecting silent semantic errors with OT.
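A rough sketch of the optimistic-locking part (Region and commit are hypothetical names, not from any real system): each semantically-connected region carries a version, and a stale commit is rejected so the editor can be shown a meaningful diff instead of being silently merged.

```kotlin
// Hypothetical optimistic locking over a "semantically-connected region".
class Region(initialText: String) {
    var text: String = initialText
        private set
    var version: Int = 0
        private set

    sealed class Result {
        object Committed : Result()
        // A stale commit is rejected; the caller gets the current state to diff against.
        data class Conflict(val currentText: String, val currentVersion: Int) : Result()
    }

    // A commit succeeds only if it was based on the latest version.
    @Synchronized
    fun commit(baseVersion: Int, newText: String): Result {
        if (baseVersion != version) return Result.Conflict(text, version)
        text = newText
        version += 1
        return Result.Committed
    }
}

fun main() {
    val paragraph = Region("The car run well")
    val v = paragraph.version

    // Editor 1 commits first, against the version they read.
    paragraph.commit(v, "The car runs well")               // Committed

    // Editor 2's commit is based on the now-stale version and is rejected,
    // instead of being silently interleaved as OT would do.
    val r = paragraph.commit(v, "The car will run well")
    println(r)  // Conflict(...): show editor 2 a semantically useful diff
}
```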
The reason this is happening more and more is because teams are bumping up against the limits of synchronous communication. Especially when teams get large and everyone has an expectation of immediate response, the whole thing falls apart.
What's really needed is a communication platform that is geared towards asynchronous communication but integrated deeply with chat apps like Slack, Hipchat etc.
"In my opinion both of these posts get it partly right. Both of these articles also have as a subtext an implicit or explicit comparison with email. Both, Slack and email tend towards “communication overload”. However, the generic description of “communication overload” obscures the very different type of overload created by these two technologies. Understanding this distinction also leads us to a possible solution.