This is very interesting. It seems like a much more user-friendly version of TLA+. Wonder what Lamport thinks about it :)
I think the next logical feature would be to have Runway compile a model-checked specification to an actual C++/Java/Go/Python implementation, using something like Thrift or Cap'n Proto RPC codegen. There has been some work on doing similar spec->implementation compilation from Coq to Erlang [0], but the author hit a lot of obstacles.
That is exactly what our group has tried before (going from TLA+ to a real C++ implementation). The challenge is that the gap between the specification and the implementation is large, in terms of both implementation details and language expressiveness. As a complementary approach, we have come up with libraries for distributed system development with built-in implementation-level model checking support. You may check out https://github.com/Microsoft/rDSN for details.
Hmm, I find TLA+ to be very user-friendly. That's one of its major strengths (it was very important to Lamport to observe engineers as they learn and apply it). It is certainly easier than learning Haskell, or even OCaml or Rust. I think it's a bit like Lisp: a minimal language with very few concepts that may be unfamiliar at first and take a while to fully wrap your head around, but once you do, you have unlimited power. I think you can become seriously productive in it after about two weeks. All the tricky concepts (the TLA part of TLA+) are explained in this very short text[1] (I haven't seen the operator introduced in the Assumption/Guarantee section used in a spec). Once you get those, you know TLA+ well; the "+" bit is very easy, a little like Clojure, only simpler.
It's not only easy to learn, it's extremely powerful, mostly due to refinements (an advanced concept, better left for week 3 :)), which are absolutely necessary for large systems/algorithms.
There are also two breakthroughs in TLA, which you may come to appreciate: the expression of the program and the program's properties in the same minimal language, and the demonstration that simple, high-school math, plus one or two new concepts, is all it takes to specify and verify programs.
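To make that "program and properties in the same language" point concrete, here is a minimal, made-up TLA+ sketch of a bounded counter (module and definition names are mine, not from any real spec). Incr is an action relating the current state to the next state, Spec is the program, and TypeOK is a property the TLC model checker can verify as an invariant:

    ---- MODULE SimpleCounter ----
    EXTENDS Naturals
    VARIABLE count

    Init == count = 0

    \* An "action": a formula over the current state (count) and the next state (count')
    Incr == count < 3 /\ count' = count + 1

    Next == Incr

    \* The program itself, as a temporal formula
    Spec == Init /\ [][Next]_count

    \* A property, written in exactly the same language as the program
    TypeOK == count \in 0..3
    ====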
And yeah, code extraction from specification is an open research question, but I don't even consider it a top-ten desired feature, considering the amount of time spent speccing and verifying vs. translating to the implementation language.
The Banana and Elevator demos did not specify/verify temporal properties, so I suspect it doesn't support specifying temporal formulas at the moment (the T in TLA+). This means it can be used to check safety properties (invariants) but not liveness properties. Despite this limitation, I believe this is a fantastic tool, as it makes writing specs way more approachable.
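For what it's worth, here is roughly what that distinction looks like in TLA+, continuing the toy counter sketch upthread (all names are hypothetical). A safety property is a state predicate checked on every reachable state, while a liveness property needs the temporal operator <> ("eventually") plus a fairness assumption:

    \* Safety: nothing bad ever happens; checkable without temporal operators
    NeverOverflows == count \leq 3

    \* Liveness: something good eventually happens; needs the temporal operator <>
    EventuallyDone == <>(count = 3)

    \* Weak fairness on Incr is what makes EventuallyDone hold
    LiveSpec == Spec /\ WF_count(Incr)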
Very happy to see more tooling being created for distributed systems, it is much needed. Will have to check out the talk, too.
We've also released a tool for testing distributed systems, https://github.com/gundb/panic-server . It integrates with your existing tests (like mocha and others), and even reports/throws bugs back from the remote machines you're testing across.
> I’ve explored many techniques to help with distributed systems design,
> including formal specification, model checking, formal and informal
> proofs, simulation, and visualization. I’ve found all of these things
> valuable to learn about, and I think some of them are entirely practical
> to use. However, standard practice in industry is to use none of them.
This is what surprised me most when I started learning about formal specifications. I had been developing distributed systems for years armed with nothing but algorithms, papers, and my own sloppy thinking. I had worked alongside some very smart, senior people who never brought out any kind of formal specification.
It wasn't until I met a researcher developing his own specification method that I heard about such tools.
And I'm super-glad I jumped in.
I'm happy to see more tools entering the space. My feeling is that they are becoming sophisticated enough that we will see them adopted more widely in industry, even in open-source software. I believe they should be considered the state of the art. I also think that any distributed system should be published with a formal specification, if not a proof, before being widely adopted by industry.
Seems a lot more accessible than TLA+. I have tried to get started with TLA+ a few times and never been able to get going, so hopefully this yields better results.
Did you try the hyperbook tutorial? I found it significantly easier (and better documented) than learning almost any new programming language. The main concept -- that of the action -- is confusing at first, but once you understand it (which can be gradual), there's really nothing more to learn...
This tool seems to be equally useful for modeling any complex software at the concept level. Is there anything that would prevent it from being used in non-distributed-systems scenarios?
[0] - http://christophermeiklejohn.com/coq/erlang/2013/11/19/verif...