The first synchronous programming languages were, AFAIK, Statecharts and Esterel, the former designed by David Harel while he was working alongside Amir Pnueli, who would later win the Turing Award for introducing temporal logic to computer science. https://www.wisdom.weizmann.ac.il/~harel/papers/Statecharts....
Eve also combined synchronous programming with logic programming: https://witheve.com
This is also how PLCs in industrial automation work.
In general, any large hard real-time system works this way. Factory and assembly-line automation is built as a system of state-updating devices.
They were initially programmed using Ladder Diagrams, then ST (Structured Text, a Pascal-like language), IL (Instruction List), Sequential Function Chart, or Function Block Diagram. These all share the Common Elements of the IEC 61131 standard, so you can mix languages.
Author here, I'm aware of Lustre/Esterel/Signal/StateCharts/etc - there are a _lot_ of examples of prior art, each with their own interpretations of how time should work in a programming language. I'm hoping that Temporal Programming can serve as both a reboot of Synchronous Programming, as well as a fresh generalization of the ideas from all the various branches of research that have popped up over the years.
There is also functional reactive programming, virtual time in TimeWarp, and some other things. See the related-work section of our managed time paper.
This idea has a history! The reason most language research programs like this had limited success is as follows.
There are precious few programs for which a|b ~ a;b is true for all atomic programs a,b where:
| is a parallel composition operator
; is the normal sequential composition operator (defined in the completely standard way in terms of updates to a global state or heap/stack-like structure if you want to be fancy)
~ is any sort of equivalence relation on reasonable and realistic semantics.
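A concrete (and deliberately trivial) illustration, as a C++ sketch rather than any formal calculus - two atomic programs over a shared variable for which no reasonable ~ can make a|b equivalent to a;b:

    #include <cstdio>

    int main() {
      int x = 3;
      auto a = [&] { x = x + 1; };  // atomic program a
      auto b = [&] { x = x * 2; };  // atomic program b

      a(); b();                     // a;b : (3 + 1) * 2 == 8
      std::printf("a;b -> %d\n", x);

      x = 3;
      b(); a();                     // b;a : (3 * 2) + 1 == 7
      std::printf("b;a -> %d\n", x);

      // a|b (running a and b concurrently) may observe either order
      // (or worse, with non-atomic updates), so no reasonable
      // equivalence ~ can make a|b ~ a;b hold in general.
      return 0;
    }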
Moreover, there are precious few programs that easily decompose into parts A,B,... such that A|B ~ A;B is true for each A and B in the set of parts. You can give theoretical characterizations of this fact, and many have, but the pragmatic point is more convincing.
Associativity and even reflexivity often fail as well.
This line of thought is alluring, but alas... The "just make a simple algebra and arrange your programs to fit it" research program died early on for a reason. In the context of parallel and concurrent programs, it is hard or impossible to solve the decomposition problem at the language level of abstraction.
That pretty much summarizes the outcome of a big chunk of program analysis research from the 70s to the early 90s, unfortunately.
Let me briefly illustrate using OP's example:
`a' = b + 1; b' = a + 1`
from which I've deleted the errant trailing semicolon (it's `1+1` not `1+1+`!)
Suppose we instead have the program:
`a' = a + b + 1; b' = a + b + 1`
Now, the ordering does matter! No worries, so we order.
Additionally, suppose we put these programs into an outer while(true) loop and that's our whole program. Now what have we bought ourselves? Not much.
In this case we can solve that problem by... well, by solving some sort of integer recurrence equations that give us a program without loops which we can then think about algebraically. But of course this gets difficult or impossible very fast. (And, btw, the algebra part here did not buy us much. It was solving integer recurrence equations that saved the idea in this example). We haven't even added real data structures or external state yet.
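For concreteness, here's roughly what the looped, simultaneous-update reading looks like as a C++ sketch (my own illustration, not OP's syntax). Note that it's the recurrence c' = 2c + 1, not any algebra over the two statements, that tells you what the loop computes:

    #include <cstdio>

    int main() {
      // Simultaneous update: both right-hand sides read the *old* a and b.
      //   a' = a + b + 1
      //   b' = a + b + 1
      // After the first step a == b, and both obey c' = 2*c + 1;
      // starting from a = b = 0 this gives a = b = 2^n - 1 after n steps.
      long a = 0, b = 0;
      for (int step = 1; step <= 10; ++step) {
        long a_next = a + b + 1;
        long b_next = a + b + 1;
        a = a_next;   // commit both new values together,
        b = b_next;   // after all right-hand sides were evaluated
        std::printf("step %d: a=%ld b=%ld\n", step, a, b);
      }
      return 0;
    }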
Anyways, looks like OP is building something around this idea. Good luck! I'll be curious to see how you handle loops inside of blocks that contain @'d variables with recursively defined quantities :)
Sorry, I edited a bunch and the point was that the ordering can matter due to references to the new value on the rhs or due to a loop. So
`a' = a + b + 1` and `b' = a + b + 1`
should be
`a' = a + b' + 1` and `b' = a' + b + 1`
Or alternatively they can have non-primed right hand sides but occur in a loop and you have the same problem even though the rhs's are at-free. A syntactic constraint won't work unless it includes "and also no loops in at-mentioning blocks" which... well, yeah, we know how to reason about NFAs :)
Sorry again, trying to make too many points with a single example
Could you not tackle this problem by not allowing this mingling of 'next' and 'current' variables? You would only allow a next variable to be a computation of current variables. Not sure to what extent this would limit the application of the paradigm, though.
You can allow mingling as long as you don't produce an evaluation loop. The existence of evaluation loops can be detected by the compiler (Metron does this already).
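Detecting them amounts to a cycle check over the dependency graph between primed variables. A minimal sketch of the idea (this is not Metron's actual implementation, just the textbook DFS version):

    #include <map>
    #include <set>
    #include <string>

    // Each primed variable maps to the set of *primed* variables its
    // right-hand side mentions (unprimed reads are harmless: they see the
    // old value). An "evaluation loop" is a cycle in this graph.
    using DepGraph = std::map<std::string, std::set<std::string>>;

    static bool hasCycle(const DepGraph& g, const std::string& node,
                         std::set<std::string>& visiting,
                         std::set<std::string>& done) {
      if (done.count(node)) return false;
      if (visiting.count(node)) return true;   // back edge -> cycle
      visiting.insert(node);
      auto it = g.find(node);
      if (it != g.end())
        for (const auto& dep : it->second)
          if (hasCycle(g, dep, visiting, done)) return true;
      visiting.erase(node);
      done.insert(node);
      return false;
    }

    bool hasEvaluationLoop(const DepGraph& g) {
      std::set<std::string> visiting, done;
      for (const auto& entry : g)
        if (hasCycle(g, entry.first, visiting, done)) return true;
      return false;
    }

    // Example: a' = a + b' + 1, b' = a' + b + 1  ->  a' <-> b' is a cycle:
    //   DepGraph g = {{"a'", {"b'"}}, {"b'", {"a'"}}};
    //   hasEvaluationLoop(g) == true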
Author here, you are both correct and not correct - in your example the two equations represent an impossible constraint (there's an evaluation loop), thus a program (or TLA proof) containing those two statements would be invalid.
Well of course I disagree with that statement. My work on Metron suggests that we've bought ourselves a way to write programs that can execute on both CPUs and FPGAs, which is a valuable thing. The ability to more directly translate the behavior of our programs into a proof framework like TLA is also quite useful, and I think folks who are deep into functional programming also like the lack of "side effects" since the process of computing x' is guaranteed not to change x (or any other observable property of the previous program state).
I once glanced through some explanation of monads by Simon Peyton Jones. He said that in the ALGOL specification, the semicolon operator is formally defined as taking the current state of the world, transforming it by the statements preceding the semicolon, and then arranging for the program to continue execution in this new world.
So John Backus not only suggested "Applicative State Transition" systems in his paper; he also participated in the creation of ALGOL, which influenced most contemporary languages.
We have this style of programming in our Pascal and Java programs.
Please no new names. Semicolon is semicolon.
Sorry, I can't find this blog or paper by Peyton Jones. Leaving that as an exercise for the reader :)
And of course, I may be misremembering something.
Author here. From the Algol viewpoint, you can think of a temporal program as one enormous statement followed by a semicolon, embedded in a while(1) loop.
Semicolon is semicolon, but in Pascal/Java you have lots of tiny semicolons and in a temporal program you have one giant semicolon. The practical implications of that are more wide-ranging than you think, and quite interesting.
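To make the "one giant semicolon" concrete, here's a minimal C++ analogy (mine, not actual MetroC syntax): the whole tick is a single state-to-state function, and the single assignment back to the state plays the role of the one semicolon:

    #include <cstdio>

    struct State { int a = 0, b = 0; };

    // The whole program body is one big "statement": it reads only the old
    // state and produces the entire next state.
    State step(const State& s) {
      State next;
      next.a = s.b + 1;   // a' = b + 1
      next.b = s.a + 1;   // b' = a + 1
      return next;
    }

    int main() {
      State s;
      for (int i = 0; i < 5; ++i) {   // while(1) in the real thing
        s = step(s);                  // the one giant semicolon: commit per tick
        std::printf("a=%d b=%d\n", s.a, s.b);
      }
      return 0;
    }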
I must be missing something. The Backus publication referenced by OP is from 1978, much later than Algol. Maybe the OP means some substantial difference, but I don't see what exactly.
And indeed, the Backus publication even has a section "14.2 The Structure of Algol Compared to That of AST Systems".
They don't use x', but rather `fby x`; otherwise the ideas seem very similar.
Of course, they called it dataflow, and Lucid inspired the "synchronous dataflow" languages like Esterel (imperative) and Lustre (functional). Which in turn inspired the non-Conal-Elliott variants of "FRP".
Author here. Yep, it's all related. My hope is that temporal programming can provide both a simple practical framework (in the form of MetroC and whatever else spins off of it) and a simple theoretical framework for future research into time-centric programming languages.
Isn't this how operational semantics works for formally describing a given programming language? You define what the state of execution of the program is made of, and how state transitions occur during one execution step.
Author here. Operational semantics are a way of "talking about" a programming language, whereas temporal programming is a way of expressing operational semantics more directly from within a programming language.
This sounds a lot like software transactional memory (STM) combined with static single assignment form (SSA).
The idea behind STM is to specify some block of code as being one big fat atomic operation, and then the runtime will magically make it so. The idea behind SSA form is to only assign variables once, as doing so makes imperative optimization as easy as functional.
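For readers who haven't met SSA, a tiny refresher (my example, not from the article): every value gets assigned exactly once, which makes the data flow as explicit as in a pure functional program:

    // Ordinary imperative code: `x` is reassigned, so an optimizer has to
    // track which write each read refers to.
    int imperative(int n) {
      int x = n;
      x = x + 2;
      x = x * 3;
      return x;
    }

    // SSA-style: each value is assigned exactly once, so every use names
    // its defining assignment directly.
    int ssa(int n) {
      const int x0 = n;
      const int x1 = x0 + 2;
      const int x2 = x1 * 3;
      return x2;
    }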
I wonder if the author knows about these things, or if they're coming in at a different angle.
Author here. You'd be hard-pressed to find an example in the literature that I _haven't_ already heard of, I've been researching this for a few years now.
STM certainly makes temporal programming easier, though it's not universally available and I'm not sure to what extent it will scale up to be performant enough for large-scale use - how many megabytes can I commit in a transaction before I hit some hardware limit?
SSA is a very valuable formalism for describing the interrelation of changing local variables inside an imperative function, but it's difficult to apply program-wide when you're dealing with large data structures and complex call stacks.
Author here. Thanks, that is a core goal of the whole project - take a bunch of existing concepts and represent them as implementation details of a simpler, more general framework.
There's a big and historical overlap. Now these are all distinct topics of study, but in the 80s when these ideas originated there were precious few CS departments. Most all of the software scientists knew each other and were kind of swimming in the same primordial soup of ideas.
To me, it sounds a lot like the part of functional reactive programming that nobody bothers to explain or specify.
But, well, I'm not really sure people mean that when they talk about it, because it's left unspecified. And when people implement FRP they always seem to take obvious "shortcuts" with significant downsides that avoid implementing something like this.
Author here. There is a _huge_ overlap between a _wide_ variety of programming terminology, which is why one of the motivations of temporal programming is to bring all these various ideas under one relatively simple umbrella.
Author here. I'll have to go look at Elm again - it's in my list of references, but I have so many at this point that I don't recall what exactly Elm does differently.
> Programs that adhere to these tenets should in theory be far more optimizable than what would be possible in a procedural language, as the compiler is essentially given the entire evaluation graph for the program up front and can completely reschedule evaluations in whatever order is most efficient. They should also be more amenable to formal TLA-type proofs since their computation model is much closer to what TLA+ expects.
This sounds a lot like graph computation but on an extremely granular level.
You want to try temporal programming? Try Clojure: immutable values, immutable & persistent data structures, software transactional memory.
If you need a database with full history (transaction time), try Datomic. And if you need full history AND domain time (bi-temporal) in your database, try XTDB.
I second this. Clojure gives you this both in memory and in the database via Datomic, and reasoning about immutable data and pure functions is such a dream. Stateful programming is the worst; once you live in Clojure for a while, everything else seems nuts - trying to debug what got passed to what and when, and what modified what. Such a disaster, relatively speaking.
Author here. One goal for MetroC is to have a codebase that can run equally well on a CPU or a FPGA. To do that efficiently, I need pretty detailed information about how exactly my program gets translated into x64 instructions or FPGA gates. I'm not sure I can do that in Clojure, but I haven't done more than a cursory skim of the language.
void is the unit type, not the uninhabited type. (I'm really not sure what Haskell was smoking when it messed that[0] up, but it's probably too late to fix now.)
0: Calling it Data.Void rather than something like Data.Noreturn or whatever.
In C void is the unit type. I'm not sure why we should respect C's choice of nomenclature, when the obvious thing to call it is Unit, and an obvious thing to call an uninhabited type is Void. Haskell's choice is fine.
The example you give is a statement, not an expression. There is no value returned, and the return-type is void, which can't be constructed. Only imperative languages, that can ignore the lack of a value from an expression (by simply moving on to the next statement), can deal with void returning functions. In Haskell this causes the end-of-the-program because it works solely with expressions, and an expression must have a value.
It's been a good 30 years since I've written any C, and I know its type-system is compromised at best, but I'm fairly sure you can't do something like this?
void x = new void();
Or, using your foo example:
void x = foo();
If you can then void is inhabited (and incorrectly named). I realise you could create void* and other elaborate means of claiming you have an inhabited void, but really can you legally construct one, not hack any underlying value to pretend to have one - there is a difference. If you can construct one and assign it to a variable, then it's inhabited.
Railing against Haskell when its implementation actually works correctly is a very strange position to take. The clue really is in the name empty-set = void, singleton-set = unit.
Author here. My 'native' language is C++, though I'm productive in a half-dozen others. You can express the same snippet in just about anything, void or no.
Author here, you are correct. Implementing non-blocking assignment semantics in C++ is hard and can't really be done cleanly from within the language. Metron (the C++ to Verilog translator) relies on static analysis to prove that C-style assignments to non-local variables in a C++ program have the same semantics as if they were non-blocking assignments. MetroC will extend this with the '@' operator to allow for real non-blocking assignments in code that otherwise looks like regular imperative C.
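For anyone unfamiliar with the distinction, here's a minimal illustration (my own sketch, not Metron's actual output) of why non-blocking assignment can't be expressed cleanly with plain C assignments - the classic swap:

    #include <cstdio>

    int main() {
      // Blocking (ordinary C) semantics: the second assignment sees the
      // first one's write, so the swap is lost.
      int a = 1, b = 2;
      a = b;          // a == 2
      b = a;          // b == 2  -- not a swap
      std::printf("blocking:     a=%d b=%d\n", a, b);

      // Non-blocking semantics (Verilog's `a <= b; b <= a;`): every
      // right-hand side reads the value from before the tick, and all
      // writes commit together at the end of the tick. Emulating that in
      // plain C++ requires explicit temporaries (or static analysis, as
      // Metron does, to prove the plain assignments happen to be safe).
      a = 1; b = 2;
      int a_next = b;
      int b_next = a;
      a = a_next;     // a == 2
      b = b_next;     // b == 1  -- a real swap
      std::printf("non-blocking: a=%d b=%d\n", a, b);
      return 0;
    }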
This is how I think about programming and how I design software.
You don’t need a functional language to do this. However it does require more discipline when the language doesn’t enforce purity.
And once I started thinking about software this way I realised that events are important and should be the core of any software design. Basically the application is a single function that takes an event and the current state, and computes a new state and zero or more events.
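A minimal sketch of that shape in C++ (all names here are mine, just to show the structure):

    #include <string>
    #include <utility>
    #include <vector>

    struct Event { std::string name; int payload = 0; };
    struct AppState { int counter = 0; };

    // The whole application as one function:
    //   (event, current state) -> (new state, zero or more follow-up events)
    std::pair<AppState, std::vector<Event>>
    update(const Event& ev, const AppState& s) {
      AppState next = s;
      std::vector<Event> out;
      if (ev.name == "increment") {
        next.counter += ev.payload;
        if (next.counter >= 10)
          out.push_back({"threshold_reached", next.counter});
      }
      return {next, out};
    }

    // A driver loop feeds events (from the user, network, timers, ...) into
    // update() and dispatches the returned events; all state lives in AppState.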
While you could describe just about anything with a monadic type (or, indeed, any "Turing-complete" programming language in any other), this isn't functional programming because the central idea of functional programming is that the meaning (denotation) of every term (expression) is a value which is an object of the program, while this is neither the case nor the point for the synchronous paradigm.
Author here. It's not "pure" functional programming in that state exists and is mutated in place, but only via atomic swap - you can interpret the sequence of states as a sequence of monads, but the evaluation semantics are different.
It's not, because the article is feeding several commands into the same RealWorld value before moving on to the next one, and then feeding in a lot of commands again.
Author here. I've commented on SSA and simulations (in the context of gamedev) elsewhere in this thread.
Regarding phi nodes, they represent a "merge" of two possible execution branches. In C++ you evaluate one side of the if() branch and ignore the other based on the predicate and the "phi" (sometimes called "phony") function then copies the result from the selected branch to the new SSA variable. In a language like Verilog, you evaluate _both_ branches and the phi node is effectively a "mux gate" that selects between the two branch results based on the predicate.
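A tiny illustration of the two readings (my sketch, not Metron output):

    #include <cstdio>

    int main() {
      bool cond = true;
      int a = 3, b = 4;

      // "CPU" reading of a phi node: evaluate only the taken branch, then
      // merge the result into one SSA name.
      int x_cpu;
      if (cond) x_cpu = a + 1;   // branch taken
      else      x_cpu = b * 2;   // never evaluated
      // x_cpu is the phi of the two branch results

      // "Hardware" reading: both sides are always computed (they are just
      // wires), and the phi is a mux that selects between them.
      int then_val = a + 1;      // always computed
      int else_val = b * 2;      // always computed
      int x_fpga = cond ? then_val : else_val;   // the mux

      std::printf("%d %d\n", x_cpu, x_fpga);
      return 0;
    }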
This is dependency-driven inference. Sequence-of-execution (i.e. "time") follows from dependencies. Think "make".
I call this "wave" calculation, since the algorithm that finds the evaluation sequence after each input change/stimulus spreads changes like a wave from the point of change outwards, in topological order.
There might be (direct or indirect) recursion allowed, up to some predefined level, or minimum-change-diff thresholds or whatever.
Writing a calculator for such a set of rules is relatively easy - from memory, it was something like 1000 lines in C++, 100 lines in Python, 200 lines in JS. It depends more on the "syntax" and conveniences for the programmers than on the actual engine.
Now, turning that into a full language will be really interesting - kind of turning inside-out the relation wavecalc <--> general-language-of-choice.
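For flavor, a minimal sketch of such a wave calculator in C++ (hypothetical names, and the topological order is hard-coded just to keep it short):

    #include <cstdio>
    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    // A tiny "wave" calculator: each cell has a formula over other cells.
    // When an input changes, we recompute cells in topological order, so
    // each cell is evaluated after everything it depends on.
    struct Cell {
      std::vector<std::string> deps;
      std::function<int(const std::map<std::string, int>&)> formula;
    };

    int main() {
      std::map<std::string, int> value = {{"x", 1}};
      std::map<std::string, Cell> cells;
      cells["y"] = Cell{{"x"},
        [](const std::map<std::string, int>& v) { return v.at("x") * 10; }};
      cells["z"] = Cell{{"x", "y"},
        [](const std::map<std::string, int>& v) { return v.at("x") + v.at("y"); }};

      // Hard-coded topological order for this tiny graph; a real engine
      // would derive it from `deps` and only revisit cells reachable from
      // the changed input.
      std::vector<std::string> order = {"y", "z"};
      auto propagate = [&] {
        for (const auto& name : order)
          value[name] = cells[name].formula(value);
      };

      propagate();
      value["x"] = 5;   // stimulus: an input changes
      propagate();      // the wave spreads outward from x
      std::printf("y=%d z=%d\n", value["y"], value["z"]);
      return 0;
    }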
Author here. Temporal programming overlaps with a lot of things, FRP included. I like to think of FRP as being stream-at-a-time evaluation, whereas TP is whole-state-at-a-time evaluation. That probably doesn't make a whole lot of sense without a diagram, I should draw one. :D
If you write your program this way, how does it interact with the environment (e.g user input, output, network…)? Something is missing.
And if it doesn’t interact with the environment, then your program is just one big pure function, in which case you get all of the benefits in OP’s proposal anyway, without having to worry about transforming old state to new state, you just gotta transform input to output.
We can then use that as a building block whereby we define an apparent variable v as a symbol macro which expands to (temporal vold vnew). All accesses to v get the value of vold, but assignments to v do not affect vold; they go to vnew.
Here is a let-like macro which sets up the specified variable as temporal. The initial values propagate into both the old and new location.
Then, before the last form of the body is evaluated, the new values are copied to the old values.
TXR Lisp's compiler performs frame elimination optimization whereby the v registers get reassigned to t registers, eliminating the frame setup. The t registers are subject to some decent data flow optimizations and can disappear entirely, like in this case.
Author here - If the transition takes place "before the last form of the body is evaluated", how does this work in the context of potentially deeply-nested function calls or assignments to sub-fields of some complex data structure?
It doesn't. This is about the minimum viable temporal construct one could come up with in under 30 minutes of coding, in a single let-like statement.
For more general packaging, I'm imagining something like a macro which establishes a temporal contour. The effects are settled when this contour terminates. Within that contour we have special constructs which indicate that certain variables have temporal semantics. We clearly need some way to enclose multiple temporal blocks so that they behave as a unit.
In a temporal language, this contour which binds the blocks together would disappear, because it would be part of the semantics of any block, such as a function body. We can make some dedicated helper constructs like defun-temporal, whose body is a temporal contour.
Assignments to subfields are an interesting problem. I made the simplifying assumption that a temporal place binds together two symbolic places: the old and new. I could quite easily redefine this place so that its arguments, or rather its left argument, is an arbitrary place. Reading it will access that place. Writing the place will go to the temporary variable, which eventually is committed to the underlying location. When the place is just a variable this will generate the same code as it does now.
I’ve done most my programming in the past several years in Elixir, where mutable variables aren’t an option, and it’s been a great dev experience. An entire class of bugs disappears and it makes concurrency very easy to reason about.
I’ve written a lot of Clojure and Clojurescript over the past ~12 years and I find it a very natural way to program to the point that it’s the single biggest thing I miss when I work in other languages like javascript and Python.
Author here. State is not immutable, it is mutate-in-place via atomic swap. That makes it somewhat different than both functional programming and imperative programming, but it's not an excessively difficult programming style to use in practice.