Modeling Message Queues in TLA+ (hillelwayne.com)
195 points by panic on Nov 1, 2018 | 46 comments



I'm reading Hillel's book right now and have just finished Chapter 2 (Intro to PlusCal). I really like it. I've had some failed starts in the past with TLA+. A few observations on the book:

a. The language is clear. I feel like I'm making progress at a fairly regular pace. The examples so far could be replicated without any issues. I have both my Kindle and cloud reader open when I read. The screenshots are better in the cloud reader.

b. I believe it was a great decision to use PlusCal instead of TLA+ itself. One of my issues getting up to speed in the past was grappling with TLA+ directly. PlusCal just makes it easier.

c. I hope that I can actually improve the quality of my C++ programs. I'm currently working on some infrastructure software for embedded systems and have implemented a Hierarchical State Machine "framework" in C++ (https://github.com/tinverse/tsm). My goal is to add a pub/sub mechanism and create an HSM that can be distributed over a network. Reviews welcome. Since I just came across this book, going forward I'll be attempting to develop my specs in PlusCal before actually writing code!

d. I think a promising (yet to be written) book or blog in this category would be something in the flavor of "Specs to Code" for common design issues. With its focus on concurrency and distributed systems, that implementation language could very well be Rust. Any pointers on porting C++ code to Rust? I'm trying to port the above-mentioned C++ code to Rust. Respond directly if you can; I don't want to turn this into a discussion on Rust.


> One of the issues coming up to speed in the past was grappling with TLA+. PlusCal just makes it easier.

I think this has both pros and cons. PlusCal, being essentially a formal pseudo-code, is much more familiar to programmers than TLA+, and familiarity can greatly assist learning. The downside is that TLA+ is much simpler -- though far less familiar -- than PlusCal (or any programming or programming-like formalism), because PlusCal shares the complexities of code (complexities programmers are simply used to ignoring). At some point, if you work with PlusCal, you will have to learn and understand TLA+ anyway, and go through the necessary process of "unlearning" code and thinking in simpler terms. Some may find it easier to start with TLA+ and see why any code is more complicated than you may think; after that, learning PlusCal comes for free once you see the translation to TLA+.

Personally, I find writing specifications in PlusCal less convenient and less clear than writing them in TLA+, unless you must reason at the code level (like in the case of low-level concurrent algorithms, where interleaving of particular instructions is crucial). PlusCal, however, has the advantage of being more communicable to those unfamiliar with TLA+, as it reads like pseudo-code.

> something in the flavor of "Specs to Code"

If a programmer can turn an informal specification (or even a set of requirements) into code, then transforming formal ones, which are more precise, shouldn't be an issue. But on a theoretical level, once you become more comfortable and experienced with TLA+ you'll see that the notion of specs-to-code is a special case of a very general and powerful concept called refinement, which forms the very core of TLA -- the Temporal Logic of Actions -- the logic at the heart of TLA+. In fact, in TLA, the ordinary logical expression A ⇒ B, i.e. A implies B, means A refines or implements B (and conversely, B abstracts A).
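To make refinement concrete, here is a toy sketch of my own (the module names and every detail are made up, and each module would live in its own .tla file): an hour-and-minute clock implements -- refines -- a plain hour clock, because every behavior of the former is a behavior of the latter once you ignore the minute changes.

  ---------------- MODULE HourClock ----------------
  EXTENDS Naturals
  VARIABLE hr
  Init == hr \in 1..12
  Next == hr' = (hr % 12) + 1
  Spec == Init /\ [][Next]_hr
  ===================================================

  ------------- MODULE HourMinuteClock --------------
  EXTENDS Naturals
  VARIABLES hr, min
  Init == hr \in 1..12 /\ min = 0
  Next == IF min < 59
          THEN min' = min + 1 /\ hr' = hr
          ELSE min' = 0 /\ hr' = (hr % 12) + 1
  Spec == Init /\ [][Next]_<<hr, min>>
  HC   == INSTANCE HourClock  \* reuse HourClock's definitions over our hr
  THEOREM Spec => HC!Spec     \* implication *is* refinement
  ===================================================

Steps that change only min leave hr unchanged, i.e. they are stuttering steps of the hour clock, and TLA formulas are insensitive to stuttering by construction; that is exactly what lets plain implication serve as refinement.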


I find that learning raw TLA+, to the point where you're comfortable reading it and internalizing what the spec is saying, is a critical skill for writing specifications successfully. This is because PlusCal translates directly to TLA+, which means that:

1. To debug errors given to me by TLC, I generally find that it's easier to look at the TLA+ spec directly.

2. It is sometimes easier to see what your algorithm is doing directly in TLA+. I find that I will write the algo in PlusCal, it gets translated, and I can get a better understanding of how the system works by looking at the TLA+ spec. Example: the "process model" of PlusCal does hide some of the ideas. Multiple processes are just P1 \/ P2 \/ P3 ... (see the sketch after this list).

3. There are some oddities within PlusCal (label placement, the with statements) that make much more sense if you understand the TLA+ foundation.
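To make points 2 and 3 concrete, here is a rough sketch of the kind of TLA+ the translator emits for a single-label loop that appends to a queue (the identifiers are mine, and real translator output is noisier):

  EXTENDS Sequences
  VARIABLES q, pc
  lbl  == /\ pc = "lbl"               \* the label becomes a guard on pc...
          /\ q' = Append(q, "m")
          /\ pc' = "lbl"              \* ...and an update looping back to it
  Next == lbl  \* with several processes this is just p1 \/ p2 \/ p3 \/ ...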

What I really wish is that I could write partly PlusCal and partly TLA+ for maximum flexibility. However, it's kind of awkward to do within the current framework.


Right. In addition, you miss out on the notion of refinement (ordinary implication in TLA+) which is the most powerful (and most foundational) idea of the formalism. But I think many developers may be threatened by the unfamiliarity of TLA+, and PlusCal serves as a gateway drug, as it's often useful enough on its own (you're right that you need to know at least a bit of TLA+ to understand some of the tooling's output). If "PlusCal first" makes TLA+ less threatening and hopefully more approachable to many developers, then the approach is doing its job.

(BTW, I wouldn't call it "raw TLA+" as "raw TLA" is a particular logic that Lamport uses to introduce TLA, so the term may be confusing.)


P.S.

I think that learning TLA+ is easier (and takes less time) than learning Python or JavaScript, provided you can forget everything you know about programming, or, harder still, remember that despite superficial similarities, TLA+ is not programming (confusing a TLA+ operator with a subroutine, or thinking about things like evaluation order, is bound to bite you sooner or later). This is, I think, the hardest part about learning TLA+ (at least, it was the hardest part for me).


Can anyone comment on where exactly the line lies between "too simple to need formal specs" and "crazy to build without them"?

What would be the simplest system for which you would recommend taking the time to make a formal spec?


I readily use TLA+ or PlusCal whenever I am reasoning about a system which may (if buggy) experience deadlock, starvation, race conditions, etc. This includes most systems involving two or more communicating processes. (The exceptions are those systems, such as a simple pipeline, where communication is trivial and acyclic.)

It's perfectly suited to this domain of problems, which also happen to be those which are difficult for humans to reason about (owing to the sheer number of possible interleavings of execution between two or more processes).

And TLA+, and especially PlusCal, are simple enough that no system is really "too small" to be modeled. E.g. a PlusCal model of a simple pipelined server (say, reading stuff from the network and writing asynchronously to disk) is maybe ¼ to ½ the size of the actual implementation, with half as much again worth of invariants.
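To give a sense of that scale, a minimal sketch of such a pipeline in PlusCal might look like the following (the names are mine; as usual, the algorithm lives in a comment inside a module that EXTENDS Naturals and Sequences):

  --algorithm pipeline
  variables buffer = <<>>;                 \* the queue between the two stages
  process Reader = "reader"
  begin
    R: while TRUE do
         await Len(buffer) < 4;            \* bounded buffer: back-pressure
         buffer := Append(buffer, "req");  \* "read from network", abstracted
       end while;
  end process;
  process Writer = "writer"
  begin
    W: while TRUE do
         await buffer /= <<>>;
         buffer := Tail(buffer);           \* "write to disk", abstracted
       end while;
  end process;
  end algorithm;

An invariant like Len(buffer) =< 4 then sits alongside and gets checked by TLC.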


Exactly what I was feeling in my gut... but then what is the overwhelming advantage over Rust, which as far as I know is designed specifically for this kind of problem but is much easier to reason about, at least coming from standard programming?


Rust is much less expressive (meaning, it is not able to say many interesting things about the system) and far harder to reason about than the much simpler and more expressive (though less familiar and non-compilable) TLA+. In fact, here[1] is some work by someone who designs concurrent/distributed algorithms in TLA+ and implements them in Rust.

This is not a shortcoming of Rust in particular. No programming language can be as expressive and as simple as a specialized specification language like TLA+. (It could come close by essentially embedding a specification language in some specification tier of the language -- like the type level in languages with dependent types, such as Idris, or the contract level in languages with formal contracts, such as Java with JML, or SPARK -- but this comes at the cost of less expressive power and/or much increased complexity.)

[1]: https://github.com/spacejam/tla-rust


The primary benefit of TLA+ over any general-purpose language is that it – as a modeling language – allows you to elide those parts of your system that "don't matter". To continue my earlier server example, when modeling, I probably don't care specifically which functions I'm using to read data from the network, under what conditions they produce which errors, or how the OS might decide to schedule them. I just care that such a function exists, that it may error out, and that it gets scheduled somehow.

Then, as a consequence of clearly expressing all the behaviors my system might express, TLC (the TLA+ model checker) is able to exhaustively search these possibilities for bugs. This is at best intractable for any system where you are unable to abstract out the things that "don't matter", like in Rust. There are just too many variables (literally) in the search space.

That said, there's nothing particularly special about TLA+, or especially PlusCal. They just provide language features such as first-class sets and clearly-defined atomic state transitions that make it very easy to describe a system without too much irrelevant "stuff". The only particularly notable feature of the modelling portion of the languages is nondeterminism, which is necessary to express the boundaries of your model (e.g. nondeterminism is how you model the "receive from network" function as "a function that either returns some data or throws some error").
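For instance, the "receive from network" boundary might be modeled with PlusCal's either construct (a hypothetical fragment; all the variable names are made up):

  Recv: either
          msg := Head(network);    \* the happy path: some data arrives...
          network := Tail(network);
        or
          failed := TRUE;          \* ...or the call errors out, for any reason
        end either;

The model checker then explores both branches at every receive, which is exactly the boundary of the model doing its job.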

One could even imagine using Rust (to use your example) as a modeling language. You'd need to add nondeterminism (or another means of specifying contracts), and a way to specify temporal invariants to check. And also annotate which functions act as mutexes and which may block waiting for I/O. And you'd probably want to stub out container types so the model checker doesn't have to model their implementations. But once you've done all that, your mutant Rust is starting to look a heck of a lot like PlusCal, only more complicated.


Doing the comparison TLA vs Rust as programming languages is a misunderstanding. It's exactly equivalent to the misunderstanding in a comparison like "what's the advantage of SQL over Rust, that as far as I know is designed specifically for this kind of problems". Now, one could believe, perhaps, that Rust is "designed specifically" for the implementation of database "kind of problems". But the conclusion that there is therefore no "overwhelming advantage" to databases would not follow from that belief.

TLA and the TLA tools form a model checker. The TLA language is not used to do computations but, rather, to describe properties and behavior of complex systems. The TLA tools then machine-validate the descriptions to assure that the evolution of a system with the given behaviors will satisfy expected correctness properties. A TLA specification tells you that if your system is implemented (in a computer language, like, say, Rust) according to the description, then the system's operation will satisfy the validated correctness properties.
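For example (a hedged sketch: the names queue and Message are assumed, and Seq comes from the standard Sequences module), the "expected correctness properties" are themselves formulas in the same language:

  TypeOK == queue \in Seq(Message)   \* safety: must hold in every reachable state
  Drains == []<>(queue = <<>>)       \* liveness: the queue empties infinitely often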

TLA is about answering the question "Do I properly understand my problem and will my solution logic satisfy problem needs?". Rust doesn't help answer that question.


What happens when your Rust has to coordinate with an unreliable vendor system written in QBasic? What if a sensor is affected by bad weather? What if there are malicious clients? With specification, we can talk about systems in a wider context than just individual programs.


Awesome response, esp. the concrete example


There are two parts to this: writing a spec and formalizing it.

Writing stuff down helps you organize, clarify and communicate your thoughts. You should write down a spec if you think the spec you hold in your mind can benefit from organization, clarification and communication at a level that's higher than the code.

Formal means mechanical. You should write a formal spec if you feel it is complex or subtle enough to benefit from machine-aided analysis. Code is a form of formal specification (it is a specification because it describes what the system does; it is formal as it is interpretable by a computer). But code is a formal specification at a very particular level of abstraction, that may not be convenient to answer your questions.

Some people think that high-level programming languages are convenient enough for the kind of analysis you want. Sometimes they are right, but often they are wrong. Explaining exactly why a language like TLA+ is more convenient than any programming language that exists or could ever exist requires some technical details (I've attempted to give a high-level explanation for why that is so here[1], but roughly, some useful levels of abstraction cannot possibly be compiled/interpreted -- as doing so in general is uncomputable -- yet compilation/interpretation is a strong requirement of programming languages). However, a more concrete example of why code is insufficient is the case of a distributed system, where you want to concisely describe not just the behavior of a single program, but the interaction among a set of communicating programs, the network environment, and the behavior of the users (the last is relevant for any interactive system, which is a special case of a distributed one).

As a rule of thumb I'd say that algorithms/systems with any non-trivial amount of concurrency (i.e. concurrent algorithms/data structures and distributed systems) can greatly benefit from a formal specification. Sequential programs may also benefit, if there is a complex interaction of rules. For example, I saw a talk that suggests formalization of the tax code as a means to find inconsistencies[2], but a business system that encodes many interacting business rules could also benefit.

[1]: https://pron.github.io/posts/tlaplus_part3#algorithms-and-pr...

[2]: https://youtu.be/EshxZVMURt4


For those who have used modeling tools like TLA+ and Alloy in industry -- could you give an example of at what point(s) in the development process you used the tool and how long it took?

I'm intrigued, but having trouble visualizing how even lightweight formal methods can fit into an agile development process at a startup building a web application.


You might create a TLA model during the initial specification process to assist you in thinking through your problem. You would then update the model going forward as your understanding of the domain improved and as system requirements changed. However, you could write a specification for an existing system as well. Any processing where the complexity seems greater than one's reasoning ability, or that needs assurance guarantees, could signal the need for a high-level description to guide engineering efforts.

An initial model might take 4 hours to put in place. The time would be spent thinking through how best to abstract the modeled process and its logical properties, slowly building up a more and more complete set of events, and checking the model by running it as one goes. With an initial model, additional hours would probably be spent here and there adding enhancements and finding ways to do things better, just like one would do with regular code. The model would effectively exist over the lifetime of a corresponding code artifact and guide work on the artifact.

For a standard web application that mostly does reads and writes to a database, there usually wouldn't be any need for a tool like TLA to help you reason. A general use case for TLA is describing systems where there are multiple processes or threads coordinating on some sort of shared state. There would be indeterminacy in the order in which events happen. There could be the possibility of failure and the need to handle it gracefully.


Web applications can be surprisingly complicated distributed systems, when you take into account...

- the JS frontend that communicates with AJAX and/or WebSockets IPC with the backend

- the backend may actually be a bunch of microservices

- the fact that there are many concurrent frontends (users)

- messages from frontends may end up reordered/dropped/rerouted on their way to different backend ipc channels (microservices or replicated backend)

- the frontends are untrusted (the user or browser may tamper with your code running there)

- the backend server is also often replicated / sharded

- there are caches (think Redis) that the backends share in addition to the main DB.

- load balancers, high-availability reverse proxies with various failover properties, etc

- various caching behaviours that are only partly under your control (browser-side content caching, DNS, etc)

- all of the above are interconnected over unreliable channels

- there are secrets and security involved so functional assurance ("pressing button X always makes function Y happen") doesn't tell you enough

(I have no TLA+ experience so can't say how one would use it with a web app, though).


You won't use this to model your web application, but the core logic. Usually it's really important that this core logic doesn't break.

Imagine something like Twitter: why would you model it? Once you start thinking of having N servers, and you have to deal with caches and consistency of messages, then it might make sense to model it to make sure that tweets get to the right users, are not lost, and that duplicates don't happen. You are not exactly modeling the web or mobile application but the "system". At the end of the day, your goal is a logically sound system. So figure out what could make your web application logically unsound; if that would produce a terrible experience and is hard to reason about, then you might want to model it.


OK, but could you model Twitter in TLA+ and then would TLA+ be able to find inconsistencies in the model given that you relax certain assumptions?

Can TLA+ direct you to which of the caches is being incorrectly invalidated given that you have provided that 10% of the servers online are still using the old version of the software?

That seems interesting, but haven't yet played with TLA+ enough to know that.


You can't know which of the caches is incorrectly invalidated, but if you specify the behavior of both new and old versions, you could check to see if mixing them could cause problems.

But the more elaborate the specification, the harder it may be to answer the question. Using tools like Coq, you essentially need to write the proof (say, of the claim that mixing the versions could cause no issue with data consistency) yourself. With TLA+ you can do the same, but you also have the option of letting the model checker check the theorem automatically for you -- though the more elaborate the spec, the longer that would take.

This is why you need to learn to specify at a level of detail that's right for answering the questions you want answered -- no less detailed (more abstract), or there may not be sufficient detail to answer them, and no more detailed (less abstract), or answering may take a long time. Sometimes you'll find that you're specifying the same system at several levels of detail. One of the cool things in TLA+ is that you can check that the different levels are indeed related (i.e., that they are descriptions of the same system at different levels of detail).


On a more practical side of things -- how do you mix your application code with TLA+?

E.g. do you have a "proof" project and the code in different directories, with comments in the TLA+ pointing to the files in the actual code?

It would be incredibly nice if I could put TLA+ proofs in the comments next to every function or module I am working with, and then choose which of them should be proved. But that seems to be way beyond what the toolset allows you to do.


1. Writing formal proofs in TLA+ (or in any other proof assistant) is almost always a waste of time. If the proof is very easy, you can certainly do it, but then you don't really need to unless it's a strict requirement, or an academic project whose entire purpose is to show that a formal proof is possible rather than to meet some functional correctness requirement. And if the proof is not easy, then there are better methods, like using TLA+'s model checker.

2. Putting references to the TLA+ spec in the code could be nice and useful.

3. You should realize that the benefit from a powerful tool like TLA+ comes when it helps you reason about things that are hard for you, and those are hardly ever at the code level. So while using a code-level tool like Why3 to prove some routine is cool, it is hardly ever done in practice, because that's hardly ever where the problems you need help with are. Those are usually found in complex interactions between machines, or between your program and the user, etc. So often TLA+ is most useful when the specification is so far above the code level that a direct link is not entirely useful.

Having said that, you could use TLA+ to actually prove at the code level by compiling your code to TLA+, as has been done with Isabelle or Coq. This has been done for TLA+ with C and Java, but in practice, code-level verification is not where you'll get the most bang for your buck.


I've encountered TLA before and am a complete newb. That being said, I've wanted to use it for modeling distributed systems but felt the learning curve was too steep. Now, many years later, I think I'm finally going to take another go.

Would love to know where the TLA community hides. I don't see much real info on it in the wild.


https://groups.google.com/forum/#!forum/tlaplus

This is the Google group and it has Lamport and others chiming in with answers to questions regularly. Though it's not terribly active.

https://www.reddit.com/r/tlaplus/

The subreddit for it. Hillel Wayne (author of this article) and Ron Pressler (pron here, pron98 on reddit) both frequent the site and post content to it. It's not terribly active, but it exists.

I think this is something that doesn't quite have a widespread and public community. Which is unfortunate, but it's just advanced enough for many to feel it's beyond them, and just abstract enough for others to feel it has no (or very limited) utility.

I will recommend exploring it though. I spent some time about 2 or 3 years back learning it to a moderate degree and successfully applied it to some problems at work. Though I haven't used it much since then.


> Would love to know where the TLA community hides.

Currently, the (tiny) community hides mostly in large companies and doesn't blog or evangelize much. Nevertheless, because it is used in such large companies on such critical and popular systems, its direct impact is far larger than that of say, Haskell, despite fewer people knowing it.

Here is a talk[1] about Microsoft's experience with TLA+. They do a three-day workshop for developers, which is sufficient to specify and verify the design of large, complex, real-world systems.

[1]: https://youtu.be/ifFfxRCX_jw


Haven't used it, but this site looks like a great resource and is in my queue to go through.

https://learntla.com/introduction/


I was working on gathering my links and that was one I was going to suggest. It's a very approachable intro, and also written by Hillel. I've added his (now published) book to my queue to purchase and read through. I may do that sooner rather than later so I can provide feedback here on other discussions regarding it, beyond saying that it exists.

The talk linked at the top of that page is pretty good as well as a tutorial, though hardly complete. As I recall he presents his examples well and has good motivating examples for using TLA+ in it.


While I'm reading this, I feel more and more like asking: am I the only one who finds TLA+ much easier to read and write than PlusCal? It's an odd thing to say, with PlusCal positioned as the easier option.


Nah, different things click with different people. In my experience more beginners find PlusCal easier to start with than "raw" TLA+, which is why I focus on presenting it first. But plenty of people find raw TLA+ more natural to start with. Use whatever works best for you -- once you have one, learning the other is really fast. What matters much more is getting your foot in the door, y'know?

Personally, if I'm not writing teaching or workshop materials, I write specs in raw TLA+. Some of that is due to fixable limitations in PlusCal, but a lot is because it's fundamentally a really elegant way of expressing specifications.


Personally, I keep trying PlusCal and keep going back to plain TLA+. Every time I get bogged down trying to learn it, and instinctively switch back because it's faster (or maybe it just clicks better for me). I got Hillel's book, so I'll probably give it another shot soon.

The one thing I really like about PlusCal is that variable updates don't need to be complete, so you don't have to write stuff like:

  foo' = [foo EXCEPT ![i] = bar]
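(For comparison, in PlusCal that whole update is the single assignment below, which the translator then expands into the EXCEPT form above.)

  foo[i] := bar;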


At the risk of being too opinionated: PlusCal is a needless and unproductive diversion. The TLA language and source file structure are mostly standard mathematical notation, using ASCII characters. Most people will already be familiar with that mathematical notation before any exposure to TLA, and I can't recommend that people learn yet another novel representation for something they already know. Learning PlusCal to reason about logical propositions strikes me as comparable to learning an XML notation for manipulating formulas in differential calculus. Yes, we are developers; yes, we are used to XML; and, still... I don't see how it improves upon writing math.

Mathematical notation is a concise, time-tested language for enhancing thought. People should use that to their advantage. As an added advantage, unlike PlusCal, facility in the language of propositional logic generalizes widely to other tools used in formal methods, as well as to other technical fields.


I didn't know about the PlusCal option until I saw Hillel's book and it clicked with me instantly. Might be the damage done by programming.

Objectively, PlusCal is a higher-level language. So it should be easier than TLA+, right? More specifically, look at https://pastebin.com/cwZaApmH. The top half is PlusCal and the bottom half (after \* BEGIN TRANSLATION) is the translated TLA+ code. PlusCal reads more like pseudocode. Maybe with some more training/effort the TLA+ would be obvious as well.


One thing I want to caution is that that's the translation of PlusCal to "raw TLA+". If you were not using PlusCal, you might write it a different way. For example, you wouldn't need a `pc` variable because you could express sending and receiving as short guarded actions.
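A hedged sketch of what that hand-written style looks like (the constant and names are assumed, not taken from the pastebin):

  EXTENDS Naturals, Sequences
  CONSTANT MaxQ
  VARIABLE queue
  Send == /\ Len(queue) < MaxQ            \* guard: enabled only when not full
          /\ queue' = Append(queue, "m")
  Recv == /\ queue /= <<>>                \* guard: enabled only when nonempty
          /\ queue' = Tail(queue)
  Next == Send \/ Recv                    \* no pc bookkeeping anywhere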


PlusCal is not higher level, it just has constructs that programmers are used to, like assignment, sequential composition, processes and procedures. Even in this particular case the specification would be clearer in TLA+ than in the PlusCal version, had it not been a translation of the latter. For example, the PlusCal version contains an unnecessary and confusing sequential composition of the send and receive bits.


I found it easier to read TLA+ and easier to write PlusCal. This may be because of "software-first" experience.


For people who have experience with TLA+ do you generate implementation code from the TLA+? Or is the idea to only use it for some formal design specifications?


TLA+ is only for specification. In one of the papers on it, Lamport mentions just putting a copy of the specification in a comment above the implementation. If you get a bug, you just need to find where your implementation differs from your specification, which he finds is usually pretty easy.

If your code doesn't have a specification (usually a less precise one, of course), fixing bugs is incoherent. How can you say some behavior is a bug?


Yeah but why not generate code from your specification? Or create a model of the specification that you can interrogate in code?


You would need to include irrelevant details. TLA+ is an actual declarative language, so it's not executable, but it can be model-checked.
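Concretely, "model-checked" means handing the spec to TLC along with a small configuration file that names the specification formula and the properties to check. A minimal one, assuming your module defines Spec and TypeOK, is just:

  SPECIFICATION Spec
  INVARIANT TypeOK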


This seems as good a time as any for my mild rant about the formal modelling toolchain...

A few things recently made me want to try out these techniques at work. Basically some issues we'd been facing, combined with reading Hillel Wayne's "Augmenting Agile With Formal Methods" blog post.

So I read a little bit (I have no prior experience with any of this stuff) and decided to try out TLA+

First downer is I seem to have to download a 500MB Eclipse-based dedicated IDE to be able to follow any of the tutorials. Seems like a bad omen.

Next I have the inevitable problem with wrong JVM runtime on my system... as ever, the "write once, run anywhere" promise pushes these issues down to the end-user rather than having the author compile working binaries for common platforms. https://github.com/tlaplus/tlaplus/issues/194 No matter, solved easily enough.

Ok so I'm stuck with a clunky unfamiliar IDE, but that's ok for now because to start out I will just be copy and pasting code from Hillel's beginner tutorial https://learntla.com/introduction/example/

Within minutes I have put the IDE in an unusable state: https://github.com/tlaplus/tlaplus/issues/195

At this point I'm thinking - these are just text files like any other language, why can't I edit them in my usual IDE and then run the model checker from the command line?

This is technically possible. But this scenario is not well documented. You can download the tools from here https://lamport.azurewebsites.net/tla/tools.html The tools have no --help output explaining what args they need.

They are not stand-alone command line tools either. They're just the source code, which you run directly against java.

There is a .jar file available for download which might be some attempt at a packaged version of the tools, but who knows how to run it... Googling for instructions turns up a thread from 4 years ago with someone asking exactly this question, which goes unanswered: https://groups.google.com/forum/#!topic/tlaplus/R9amI13F_-M The thread ends with a dismissive post from Leslie Lamport ridiculing such stupid ideas as having some documentation published on a website, and recommending everyone use the Toolbox IDE instead of trying to run the CLI tools themselves.
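For the record, the invocations that eventually worked for me -- pieced together by trial and error, so treat them as a sketch -- were:

  java -cp tla2tools.jar tlc2.TLC MySpec        # run the TLC model checker
  java -cp tla2tools.jar pcal.trans MySpec.tla  # translate PlusCal to TLA+

None of which is written down anywhere a newcomer would look.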

At this point it's quite clear to me why more people don't use TLA+, and it has nothing to do with the weird syntax or unfamiliarity with temporal logic.

Since I still wanted to experiment with modelling I switched to Alloy. This also requires you install a dedicated (and rudimentary) IDE. At least this one actually works and I was able to experiment with some basic models of my own design.

I could not see any stand-alone cli tools for Alloy but they do provide some basic instructions for using it programmatically in your own Java project http://alloytools.org/documentation/alloy-api-examples.html (I'd prefer something accessible from Python, like a C library, but that's ok)

I do understand that visualisation (Alloy) and presentation of complex results (TLA+) are an important part of the development process, and this is presumably what motivated both teams to ship their own IDEs.

But clearly where the expertise on these teams lies is in the computer science, logic, and model-checking areas. Tying all of this brilliant and clever stuff to a cumbersome GUI app -- which is not your core proficiency to deliver -- seems like a very bad idea. A dose of the 'unix philosophy' would be beneficial, I think.

Think from the user perspective - I'd much rather use my usual IDE, which has great editing tools, a nice colour scheme, doesn't crash or become unusable, and is totally familiar and comfortable.

Ok, it still needs a front-end for model-checking results... but imagine if the checker tool was well documented and provided structured output in a common format (json, graphviz .dot, whatever)... if these tools really are useful then I'm sure pretty soon you will have multiple front-ends created by people who are good at that.

It seems sad to me that a small and brilliant community are wasting so much effort on making bad software, instead of making great tools which other people can build great software with.


I'm not the only one, it seems: https://lobste.rs/s/8rfign/modeling_message_queues_tla

That said, I bought Hillel's book and will come back and try again with TLA+


Hopefully you already know what TLA+ even is, because the article won’t tell you.


Does every article need to be an introductory article to a topic including a summary of what the thing is? Should every Python article start with:

  Python is a language developed by Guido van Rossum and
  named after the comedy troupe.
?

Hillel even gives you an out early in the article:

  This post assumes knowledge of TLA+
Followed by links to two earlier articles which offer better intros to TLA+ itself.


In case it helps, from Wikipedia[0]:

TLA+ (pronounced as tee ell a plus, /ˈtiː ɛl eɪ plʌs/) is a formal specification language developed by Leslie Lamport. It is used to design, model, document, and verify concurrent systems. TLA+ has been described as exhaustively-testable pseudocode,[4] and its use likened to drawing blueprints for software systems;[5] TLA is an acronym for Temporal Logic of Actions.

For design and documentation, TLA+ fulfills the same purpose as informal technical specifications. However, TLA+ specifications are written in a formal language of logic and mathematics, and the precision of specifications written in this language is intended to uncover design flaws before system implementation is underway.[6]

Since TLA+ specifications are written in a formal language, they are amenable to finite model checking.

[0] https://en.wikipedia.org/wiki/TLA%2B


Not every technical article or blog post assumes a beginner audience, nor should they.





