Coroutines and effects (without.boats)
248 points by todsacerdoti 5 months ago | 97 comments



Wow I’ve been thinking of something exactly like this! Sort of a supercharged, statically typed version of Go’s context.Context. It would allow you to describe the capabilities (or effects) of every function, including IO, memory allocation, cancellation, deadlines, concurrency, and whatever application-specific stuff you need on there (like logging).

Then you could implement something like Google capslock [0] just by looking at type signatures.

0. https://github.com/google/capslock


You already have something like this (production ready) through effect systems in Haskell (ex. https://hackage.haskell.org/package/effectful)


See also aspect-oriented programming (AoP): https://en.wikipedia.org/wiki/Aspect-oriented_programming Algebraic effects might be a little too strict and narrowly defined for something as generic as, e.g., injecting ad hoc logging. With algebraic effects you need to reify in the type system the specific control flow behaviors you care about, and then get code to use those types and operators accordingly. AoP seems like a higher-level (if looser) concept that isn't necessarily specifically bound to the particular details of the type system. That modeling freedom seems necessary unless you're creating a language from scratch or confining yourself to a very limited set of control effects that can be shoe-horned into the existing type system and control flow operators.


Do itttttt :)


Like clockwork, every few years, smart people will re-invent introspection.


I’m personally betting big on OCaml due to the effect system work. I think this is one of the next big advances in industrial programming.

- Lambdas (mainstream)

- Static types (mainstream)

- Pattern matching (getting there)

- Sum types (getting there)

- TCO (getting there)

- Global type inference (future)

- Functors (future)

- Effect systems (future)

- Expression oriented (future)

With OCaml, I get all of this today.
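
For a taste, here is a small illustrative snippet (not from the article) showing a few of these together: sum types, pattern matching, expression orientation, and full inference, with no annotations anywhere.

  (* Every construct below is an expression; no type annotations needed. *)
  type shape =
    | Circle of float
    | Rect of float * float

  let area = function
    | Circle r -> Float.pi *. r *. r
    | Rect (w, h) -> w *. h

  let total =
    List.fold_left (fun acc s -> acc +. area s) 0.0
      [ Circle 1.0; Rect (2.0, 3.0) ]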

The future is here, it’s just not evenly distributed…


Global type inference is not a positive in my book. In my experience it becomes very hard to understand the types involved if they are not explicit at systems boundaries.

I can also imagine that it must be hard to maintain, since the inferred types can accidentally change.


Being hard to maintain and having no static types at all did not stop Python from rising to conquer the world. Type inference allows us to at least give those users the succinctness they are used to, if not the semantics. Those who like explicit types can add as many annotations as they need in OCaml.


> Those who like explicit types can add as many annotations as they need in OCaml.

They cannot add it in other people's libraries.

> did not stop Python from rising to conquer the world

I wasn't talking popularity, I was talking maintainability. Python is not a stellar example of maintainability (source: maintained the Python API for a timeless debugger for 5 years).

Python's ubiquity is unfortunate; thankfully there seems to be a movement away from typeless signatures, both with Python's gradual typing (an underwhelming implementation of gradual typing, unfortunately) and TypeScript.


> They cannot add it in other people's libraries.

Does it matter that much how the internals of someone else's library are implemented? The tooling will tell you the types anyway and module interfaces will have type signatures.

> Python's ubiquity is unfortunate,

Well that we can agree on!


There’s a trade-off: was the mistake here or there? The type checker cannot know. But for those few cases you can add an annotation. Then the situation is, in the worst case, as good as when types are mandatory.


> But for those few cases you can add an annotation.

Not in other people's code. My main concern is that gradual typing makes understanding other people's code more difficult.

Idiomatic Haskell warns against missing signatures [1]; Rust makes them mandatory. Rather than global inference, local inference stopping at function boundaries is the future, if you ask me.

[1]: https://wiki.haskell.org/Type_signatures_as_good_style


Huh? If you are consuming code you can’t change from someone else, then I presume this is a published package? Then the IDE will tell you the types.


> the situation is, in the worst case, as good as when types are mandatory

The worst case is actually worse than when types are mandatory, since you can get an error in the wrong place. For example, if a function has the wrong type inferred then you get an error when you use it even though the actual location of the error is at the declaration site. Type inference is good but there should be some places (ex. function declarations) where annotations are required.
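
A contrived OCaml sketch of that failure mode (the names are made up): the mistake is in the declaration, but the compiler only complains at the call site.

  (* Meant to use (+), but (^) makes the inferred type string -> string. *)
  let add_one x = x ^ "1"

  (* The error is reported here ("This expression has type int but an
     expression was expected of type string"), far from the real bug. *)
  let answer = add_one 41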


The lack of function overloading was really the only thing holding OCaml back 15 years ago. Last I checked, there was no interest in changing this.


That can't really be the main reason, Rust is doing just fine without it.


Rust has overloading via traits.


That's an entirely different concept imo. Overloading in Rust would be to have separate `Vec::new()` and `Vec::new(capacity: usize)` functions, which is not allowed.

Case in point: Java does allow exactly this, while also allowing classes to implement interfaces very similarly to how Rust structs can implement traits. Rust code sometimes uses traits to achieve similar results, like the `slice::get(index: SliceIndex<[T]>)` function, but the example above can't be done.


I think the GP meant polymorphism. Rust traits being static ad hoc polymorphism. I thought it was possible, though, to achieve the same thing with Functors. Just the standard library deliberately didn't and chose to have things like 'print_int', 'print_string' and so on instead of a Print Functor.
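
Roughly what that looks like as a minimal sketch (module names invented): a functor gives you the polymorphism, but you still have to instantiate and name the module yourself.

  module type SHOW = sig
    type t
    val to_string : t -> string
  end

  module MakePrint (S : SHOW) = struct
    let print x = print_endline (S.to_string x)
  end

  module IntPrint =
    MakePrint (struct type t = int let to_string = string_of_int end)

  (* IntPrint.print 42 works, but nothing resolves a bare "print 42" to the
     right module for you, the way trait/typeclass resolution would. *)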


Yes, functors would be the way in OCaml, but there's no implicit context resolution that resolves such symbols within a functor the way you can in Rust or Haskell. This missing implicit resolution makes for bad ergonomics, particularly for any kind of numerical code, which tends to be the very first kind of thing that new programmers try in a new language. "Hey, why do I need to use + in one case and +. in another? This is dumb, OCaml is not a serious language."
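
Concretely, the annoyance is this (a minimal sketch):

  let square_int n = n * n       (* int arithmetic *)
  let square_float x = x *. x    (* floats need a separate operator *)

  (* 1 + 2.0 is a type error; you have to write float_of_int 1 +. 2.0 *)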


In OCaml the integer operations are implemented using LEA instructions, and the floats are boxed. The language is not really suitable for numerical code...


I'm not familiar with OCaml, but the LEA thing can reasonably be fixed by a better compiler, no? Is there no LLVM backend for OCaml?


I am not an expert on this either, although very interested. Note that delimited continuations, which are a superset of coroutines, are identical to effects in the sense of yielding control to the handler, not the caller. In languages like Racket/Scheme, which allow associating tags with handlers, you can also model the same function having multiple effects.

In general, coroutines seem vastly easier to understand/type compared to effects unless your language can do a lot of effect inference.


This classification seems somewhat Rust-centric. The terminology I'm more used to seeing for coroutines is this:

Symmetric vs asymmetric coroutines: symmetric coroutines are the original style, where the currently executing coroutine decides who to transfer control to. Asymmetric coroutines have a caller and a callee, and the callee always yields control back to the caller. (This is much more common today.)

Stackful vs stackless coroutines: in a stackless setting the language distinguishes between synchronous and asynchronous functions. This is what Python and Rust generators do. In a stackless setting (such as Lua), every function can potentially yield or call a sub-function that yields.

Exception handling is somewhat similar to stackless coroutines, especially in languages such as Common Lisp where the exception handler can choose to resume execution from the point that raised the exception, instead of unwinding the stack.


> symmetric coroutines are the original style

This isn't quite correct, but it's close. The original style was co-routines, as distinguished from subroutines; this was when programming was still heavily goto-oriented, and these were little more than patterns used to structure gotos.

Simula, if I recall correctly, was the first high-level language to have coroutines, and they were basically asymmetric. I say basically because there was a mechanism for a coroutine to replace itself with another coroutine, but it wasn't a general transfer mechanism. Not unlike a delimited continuation in fact.

Good summary of how it all works however! I believe this is a typo:

> In a stackless setting (such as Lua)

Since Lua is stackful, and you're contrasting it with stackless generators.


yup, was a typo >_<


I don't see how exception handling is similar to stackless coroutines, as it is very non-local.


Sorry, I meant stackful (was a typo)


Ah, yes, then I agree completely!


This touches on something I’d like to see in more mainstream languages, but it doesn’t quite get there.

> For example, Koka has a “diverging” effect, which means that an expression may diverge (that is to say, it may not finish evaluating). An expression containing a diverging expression is also diverging. So you can distinguish in the type system between a function that is guaranteed to finish and a function that may not finish (this is imperfect, of course, because of the undecidability of the halting problem; some functions that do not diverge will be marked diverging).

As I think about it (and I’m not a programming language theorist, nor have I done much serious work in any language with any sort of effect system), there are two vague categories of effect: control-flow effects like exceptions, yields, async waits (Pending, sleep, or however you feel like modeling it) and non-control-flow effects (divergence, various forms of unsafety, nondeterminism, impurity, reads or writes of global state, syscalls in models that don’t treat IO in and of itself as an effect), etc.

I would like to be able to run and write code that is definitely free of certain kinds of effects. Xz should not be unsafe or do IO, for example. Leftpad is an entirely pure, non-diverging function. And I should be able to ask my language to enforce that, ideally with trivial code. Maybe even by default.

But mainstream languages seem to mostly limit their use of effect-like systems on the control flow part, like this:

> Overall, coroutines strike me as the most promising way to handle many kinds of effectful functions because they seem to be in the design sweet spot: They are statically typed, lexically scoped, and unlayered.


D has a lot of these kinds of "effects", though they're called "attributes" [1] like "nothrow", "pure", "safe", "mustuse", "nogc"... there's also the "noreturn" type for functions that will never return [2] (infinite loop or throws).

[1] https://dlang.org/spec/attribute.html#return

[2] https://dlang.org/spec/type.html#noreturn


There are two aspects to effects as a language feature: first there is the runtime behavior, and then there is the type system. Ruling out categories of effects is the job of the type system, and you don't have to commit to (or avoid) any particular runtime approach to use it that way.

This is related to the vagueness of your two categories. While exceptions, yields, and awaits don't really make sense without a handler, all that matters to the type system is which operations an expression might perform in addition to producing a result of its primary type. Handlers only interact with the type system in the sense that they remove an effect from their handle-ee expression.

So even in a language that committed to coroutines as its approach to handling effects at runtime, the type system could still track and rule out effects like divergence or system calls. (For what it's worth, nondeterminism, state, etc. can all also be defined in terms of handlers. At the same time, though, unsafety is not an effect because it is not entirely captured by "can this expression perform operation X.")


This is one of the motivations of algebraic effects: you can have fine grained control over which effects are allowed.


Yeah, the whole blog post reads like "if you worked too much with a hammer, everything starts looking like a nail". In other words, it puts the cart before the horse.

Effect systems are a MUCH more general and powerful technique than coroutines. An effect system can help greatly with implementing a coroutine system (instead of dealing with the unfortunate type-level hack Rust uses), but trying to implement an effect system on top of a coroutine system will give you a very limited emulation in the best case and another incoherent mess in the worst.

It would've been really great if Rust had a proper effect system, but it's very hard to introduce one into an existing language, similarly to how you cannot easily introduce a borrow checker to C/C++. The messy "keyword generics" proposals only serve as a good demonstration of this. So we probably have to wait for Rust 2 or a different successor language.


I would love something like this where to write a function with side effects I'd have to annotate it as such.

Then the compiler could recursively understand whether any given function is pure.


I write code like this every day in Haskell.


Any recommended further reading on how these effects are handled in Haskell?



The Solidity language for smart contracts works like this.


> Leftpad is an entirely pure, non-diverging function.

`leftpad(string, int) -> string` needs to allocate, so it's not fully pure in that sense. You could pass in the resulting string, but then it would have to mutate, which is not really pure either.


It’s certainly pure in the Haskell sense: the inputs are two immutable values, and the output is another immutable value. It’s even pure in some Rusty senses: it could take String or &str as input and return a String; in a model where String is a value, it’s pure.

A lot of modern programming logically involves telling a computer what to compute and not particularly caring how it gets computed. Yet we mostly lack a language in which one can specify computations and then run the result portably and safely.
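
For concreteness, a leftpad sketch (OCaml here purely for illustration): same inputs always give the same output, and the only "effect" is allocating the result string.

  let leftpad s width pad =
    let n = width - String.length s in
    if n <= 0 then s else String.make n pad ^ s

  (* leftpad "42" 5 '0' = "00042"; no observable side effects. *)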


It would be pure in languages without manual memory management


Unless it can't allocate, and then the whole program terminates. There are unfortunately limitations everywhere.


Of course, as well as limiting the total amount of memory a running program is allowed to allocate, we can also limit how many CPU instructions it can execute and numerous other (OS-dependent) resource uses. Accounting for this, all operations are fallible, which no practical language copes with.

It's very normal on Unix to be allowed to limit total runtime, for example.


Is it really worth worrying about? This strikes me as pedantic. Once a program can no longer allocate memory, it's very likely toast anyway.

Edit: @naasking, all good points. Still, isn't the general strategy to ensure memory isn't exhausted, rather than handling such a situation "gracefully", whatever that means? :)


It depends what your program is for. Some programs need guarantees because of bounded resources, so you may want resource allocation to be a visible effect (like embedded systems), or because you're running potentially untrusted code (like JS or WASM in a browser), or because you want sane failure handling (like running modules in a web server).


Graceful is a misnomer, but there are better versions. If you are, say, a database server, and you run out of memory and die horribly, availability for all clients is compromised. If you begin rejecting queries or connections until memory is available, at least some availability is maintained.


I don't follow; it seems like you'd still be hosed in this scenario. What's the difference between stopping accepting connections and rejecting queries vs. crashing out? Meaningful work cannot make progress when a busy dynamic system is OOM, and a database is a prime example of one.

Best to avoid the condition, or design the client side to handle the possibility the resource could be unavailable.


If the whole program terminates, the purity of the function is still intact, the computer just can't compute it :D


The Hack programming language (Facebook's PHP++) has this. They are called coeffects and can range from "this is pure with no side effect" to "this can modify local members" to "this has I/O".


> Leftpad is an entirely pure, non-diverging function. And I should be able to ask my language to enforce that, ideally with trivial code. Maybe even by default.

I think you can do this in Idris with "total" functions.


Don’t forget that unbounded loops diverge: while(true){} and fn f() { f() }. Also impure are blocking “pure” computations like fib(50).


In proof languages where you want a proof that a function will succeed without running it, knowing that it doesn't diverge is important.

But if you're going to do the calculation, there's no difference between an infinite loop and a loop that finishes, but will take longer than you're willing to wait. Either way, it looks like the program hung.

So in practical programming languages, I'm not sure that the "diverge" effect is useful? You still need to kill it if it hangs. If it will take a long time, returning some kind of progress indicator would be nice. Maybe writing it as an iterator would help with that?


It may depend on what you mean by 'practical', perhaps, but in a Haskell-like language divergence (as opposed to induction) more or less takes the form "let x = x in x" with varying degrees of ceremony. If you can prove to the compiler that you don't have any such thing in your fragment of code, it can omit black-hole updates and checks during garbage collection for a small (possibly negligible or nil, if divergence still exists in the language anyway) performance boost.

As you suggest, explicitly adding a provably-reducing value like a maximum iteration limit lets you show progress, give up at a "reasonable" time, and avoid divergence. This is how unlifted programming languages permit general recursion - that, and allowing inductive types to merely prove they're productive within a finite time bound (eg, a stream of values only needs to show it produces the next value eventually, not that it produces all values eventually).


> but he made me realize that effect systems (like that found in Koka) and coroutines (like Rust’s async functions or generators) are in some ways isomorphic to one another.

It's a shame this was not explored before going down the async route. I believe effect handlers would have been a better fit for a systems language. IMHO a monadic approach really needs something expressive like a functional language in order to work well.


What’s the difference between an effect and a “coroutine” where calls in the function body to other coroutines are implicitly yielded? Or is that not a real coroutine?

That’s what Kotlin’s `suspend fun` does.


Both effects and coroutines can (and typically do) work implicitly across calls like that. The version with forwarding operators like in this post is something of a Rust-ism, or more generally an async/await-ism, or a stackless coroutine-ism.

The more fundamental difference between effects and coroutines is that a `yield` in a coroutine always goes to the one unique resumer, and carries a single type of value. On the other hand, an expression may have several effects, each of which may be handled in a different place, in the same way that distinct exception types may be caught in different places.


> The key difference between coroutines and effect handlers is that coroutines yield control to their caller, but effectful expressions yield control to their handler. The difference of affordance this implies is the materially significant advantage of coroutines over effect handlers.

Should that last bit be: "advantage of effect handlers over coroutines"?

EDIT: Oh, maybe I should have read the rest of the article first. It might be going the other way than that one example suggested. :)


> what if it was dynamically scoped…. but statically typed…………..?

That's the right insight. Just give me implicit parameters (i.e. the statically typed versions of dynamic scoping). I'll build my own effect system, coroutines, exception and what not as needed.


Indeed - I do not think it is a coincidence that a lot of production experiments in effect systems are happening in Scala right now; the language is flexible enough to conduct them. https://github.com/getkyo/kyo in particular looks interesting, as it explores a different space where the monadic nature is less exposed to the end user.


> understand when the effect occurs requires examining the type signature of every function that is called. Since this is meaningful control flow, it seems very valuable to be able to identify points at which an error occurs without examining the signatures of each function call.

This is currently the case in Rust. IO and other effects are frequently implicit. You don't have to use ? or await; they are sugar. I have frequently seen reinventions of exceptions, unwind nonsense, ad hoc interpreted tagged effects, etc.

Explicit syntax for effectful calls should not be a goal. We don't actually have that today.


I think it’s incredibly useful today to have both of these annotations.

I frequently scan for all instances of `?` or `.await` in functions (though, unfortunately, for various reasons this won’t show you everywhere these effects are produced).

I would rather not have to rely on an IDE to get that functionality.


I'm not really understanding the comparison (or isomorphism) between coroutines and effects. Feels like comparing Lists with functions, or promises with interfaces.


A coroutine is a computation that can `yield`, suspending itself and passing control back up to the caller. It can then be resumed at the caller's leisure.

A function with an effect (in this sense) is a function which can ask a handler to `perform` some effect for it. This suspends the function and passes control to whichever handler is in scope for that call, allowing that handler to resume the function at its leisure.
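
For a concrete picture, here is roughly what that looks like with OCaml 5's Effect module (a minimal sketch; the Ask effect and the handler are made up for illustration):

  open Effect
  open Effect.Deep

  type _ Effect.t += Ask : int Effect.t

  (* `perform Ask` suspends this computation and hands control to
     whichever handler is installed around it. *)
  let computation () = perform Ask + perform Ask

  let () =
    let result =
      try_with computation ()
        { effc = (fun (type a) (eff : a Effect.t) ->
            match eff with
            | Ask -> Some (fun (k : (a, _) continuation) ->
                (* The handler resumes the suspended computation with 21. *)
                continue k 21)
            | _ -> None) }
    in
    Printf.printf "%d\n" result   (* prints 42 *)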

I suspect that you're misunderstanding what is meant by effect, because despite buzz about them and backend support for them in OCaml 5, they aren't yet implemented with syntax and type-level support in any mainstream languages I'm aware of.


> A function with an effect (in this sense) is a function which can ask a handler to `perform` some effect for it.

Why does it need to ask a "handler" to do something, why can't it just call a function that does the "action" for it?


By making effects explicit, you reap the benefit of being able to write non-effectful code.

Depending on what your language tracked as an effect, you could make your business-logic always terminate, or perform no allocations, if you had effects for Mutation/GeneralRecursion/Allocation.

But no, I certainly don't understand the function->handler control flow here. It has to be handler->function, otherwise you've got two handlers!


A function will return to its call site (or diverge); a handler doesn't necessarily have to resume from where it was invoked. There is also (sort of) dynamic scoping, where you don't have to thread the handlers through calls.


I don't know your background, so I don't know at what level to pitch an explanation. Here's an attempt that assumes some knowledge of modern PLs.

Effects require 1) well-defined control flow (think of IO; you need to know in what order output occurs) and 2) manipulation of control flow (think of error handling or concurrency).

We can model effects as a back-and-forth between effect handlers, which carry out effects, and the user program. The user program passes control to the effect handler to carry out some effect, and the effect handler passes control back to the user program (potentially a different part of the user program; think error handling) when the effect has been performed. Continuations give complete control over control flow, so in their full generality effects require continuations (or some equivalent like monads). Coroutines are a slightly stilted form of continuations, with which you can model much, but not all, control flow.
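
To make the "control may go to a different part of the program" point concrete, a minimal OCaml 5 sketch (names invented): the handler simply never resumes the continuation, exception-style.

  open Effect
  open Effect.Deep

  type _ Effect.t += Abort : string -> unit Effect.t

  let risky () =
    perform (Abort "gave up");
    print_endline "never reached"

  let () =
    match_with risky ()
      { retc = (fun () -> print_endline "finished normally");
        exnc = raise;
        effc = (fun (type a) (eff : a Effect.t) ->
          match eff with
          | Abort msg -> Some (fun (_k : (a, _) continuation) ->
              (* Dropping the continuation means control never returns to
                 the perform site; real code would discontinue it so any
                 cleanup in the aborted computation still runs. *)
              Printf.printf "aborted: %s\n" msg)
          | _ -> None) }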


Aren't coroutines generally one-shot, whereas continuations could potentially be resumed multiple times? This seems to be a relevant difference between these concepts.


That's my understanding, and why you need full continuations to handle all effects. In my mental model a coroutine gives you an execution point that you can resume, but you are not allowed to resume execution points you have previously resumed. You cannot, for example, implement backtracking search with just coroutines, as you need to return to previous execution points. (Look, you can implement anything with anything. Turing completeness etc. This is about implementing it in a natural way using effect handlers.)


No, the whole point of coroutines is that you can resume them multiple times, otherwise it's just a simple function call.


That's not what "resume multiple times" is referring to here. You can typically only resume a coroutine once per yield, while a continuation generally allows you to return to the same place multiple times.


One lets you save and return to an execution state (program counter and local environment), the other lets you create and call an execution state that is saved between calls to it.

There are obvious implementation differences but I'm not sure it makes any difference here, in both cases you can return to the same execution state multiple times.


The distinction between coroutines and delimited continuations is one-shot vs. multi-shot. The delimited continuation crowd use different language, but imagine an ordinary stackful asymmetric coroutine wrapped around a function call, except instead of just yield and resume, you have yield, resume, and reset. Call the coroutine, it yields from A, call resume, it yields from B, call reset, resume, it yields from B again. You can do that as often as you'd like.

This can in fact be emulated with a coroutine generator and some fancy footwork, but it's a subtly different primitive.


The difference is that resuming a coroutine mutates it, so that the next time you resume the same object it starts from wherever the coroutine next yielded. This may or may not be the same yield point as the last time, depending on the definition of the coroutine.

A continuation is immutable in that way, so it is either an error to invoke it multiple times, or else it will always resume at the same place. Implementing coroutines in terms of continuations would mean capturing a new continuation each time you yield.


Roughly, coroutine = a type of continuation with all mutable state, and continuation = a type of immutable coroutine.


> Feels like comparing Lists with functions, or promises with interfaces

Functions and lists are technically isomorphic, you just replace the function with a list of (domain, codomain) pairs and function invocation then becomes list lookup. This is basically the set theory definition of a function. So yes, this comparison to the article is apt, the article is saying that you can encode effects via coroutines.
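
A toy OCaml illustration of that correspondence for a finite domain (names invented):

  let double x = 2 * x
  let double_as_list = [ (0, 0); (1, 2); (2, 4); (3, 6) ]

  (* "Application" becomes lookup: List.assoc 3 double_as_list = double 3 *)
  let apply table x = List.assoc x table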


This is like monads: it's better not to look at the definition and just to work through examples.

I like the way it is presented here: https://mikeinnes.io/posts/transducers/

The main gist is that an "effect" lets you define your own "except" part of a "try/except/finally".


I make use of monads as effects, and that's why I'm not getting this article.

I wrote "Feels like comparing Lists with functions" for a more general audience, but in my mind I was thinking:

  - Effects are Monads
  - Continuations are one particular Monad [1]
  - Coroutines are probably similar?
  - I would use monadic effects to allow/disallow a function from making use of Coroutines.
  - If my effect system *itself* uses Coroutines how do I use the effect system to forbid Coroutines?
[1] https://hackage.haskell.org/package/mtl-2.3.1/docs/Control-M...


> Overall, coroutines strike me as the most promising way to handle many kinds of effectful functions

This is basically what we do at Temporal, which is essentially deterministic coroutines with externalized effects. This gives another nice property: using coroutines to wrap effects lets the non-effect logic replay/resume durably.


> as far as I understand it is not possible to meaningfully handle the diverging effect

I don't know if that counts, but I think `call_with_timeout(duration, function_that_diverges, timeout_return_value, args...)` handles the diverging effect of its function argument.


The difference between this and an effect handler, fwiw, is that it doesn't handle it totally: it handles the divergence effect but then produces a partial function.


Why does it produce a partial function? If the argument is total other than divergence, call_with_timeout fills in all "holes" in the set of all possible returns with default values.


I'm assuming your language has types which don't have default values because default values for every type are a billion dollar mistake that no modern language should have.


I'm assuming `timeout_return_value` would be a user-provided value that serves as the default. But most effect systems also support a `return` effect that lets you change the return type of a function [1]. So you could make it return `Just<result>` when it succeeds or `Nothing` when it hits the timeout.

[1] https://koka-lang.github.io/koka/doc/book.html#sec-return


That'd almost be partial functions with extra steps. Take the Kleisli category with the Maybe monad, and you get partial functions.

Unless you are manually matching on the Maybe, and thus observing the timeout, that isn't the case. You'd probably also want a nondeterminism effect, which you cannot handle unless you specifically build your timeouts to be deterministic, which I think Lean 4 does, but you can't go from partial to total with it afaik.


Can anyone explain what exactly the author means by "dynamically scoped" vs. "lexically scoped"?


https://github.com/Chalarangelo/30-seconds-of-interviews/blo...

Exception handling, for example, uses dynamic scoping since you don't know what will be handling your exception when you write code which throws it.

Another way of thinking about it is, with dynamic scoping the value of the dynamic variable must always be on the stack and the closest one is the value that will be used. This is a really good behaviour for global variables since a common source of bugs is some global variables (and I'm considering class members "global" for this) getting changed unexpectedly. If the variable is lexical then it can be very hard to figure out what changed the value (especially when threads are involved) but if the variable is dynamic it's easy: the culprit is in the stack trace.
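
To make the exception example concrete (a tiny OCaml sketch): the handler that runs is determined by the dynamic call stack, not by where the raise appears in the source.

  exception Oops

  (* Lexically, there is no handler in sight here. *)
  let fail () = raise Oops

  let run_a () = try fail () with Oops -> "handled in run_a"
  let run_b () = try fail () with Oops -> "handled in run_b"

  (* Which handler catches Oops depends on who called fail at runtime. *)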


Just noticed your comment now: Thanks so much for the explanation! However, my question was more about the blog post in particular, i.e. coroutines (async/await) vs. effect handlers: I still fail to see why coroutines are lexically scoped. async/await yield control back to the calling function which might or might not define an event loop or yield control to another function further up the stack. How is this different from handling effects or exceptions? Whose scope does "lexically scoped" refer to here?

EDIT: Ah, reading this comment[0],

> The more fundamental difference between effects and coroutines is that a `yield` in a coroutine always goes to the one unique resumer

maybe I thought too much of Python where async/await are implemented via generators and, unless I'm mistaken, there need not be a unique resumer/event loop.

[0]: https://news.ycombinator.com/item?id=40108636


After scrutiny, this seems to be "Coroutines and Effects" in Rust, particularly.


Given that withoutboats is a long-time major contributor to Rust, I suspect they titled the blog post with the audience of regular readers of their blog in mind rather than a more general audience like Hacker News.


It's more than that: https://bsky.app/profile/without.boats/post/3kql3yr3goc23

> Btw this is the beginning of me trying to shift away from blogging about rust to blogging about PL design in general. I find that I have very little to say about Rust that I haven’t already said.


Right, this is a post which kinda assumes you know Rust, but AFAICT it isn't a post specifically about Rust.

In 2014 that would be extremely presumptuous or targeted at a very niche audience, however in 2024 a lot of people know Rust and so it seems much more reasonable.


Indeed. Rust is the language that the smart kids are using to think about higher-order programming concepts. It has begun to supplant Lisp and Haskell in that regard.


My reference point is Rust because I've worked on or in Rust for most of the past decade, but I think the relevance of Rust for programming language design more broadly is that it is the most well-typed widely deployed imperative programming language. In this way it has an advantage over Lisp or Haskell if you're trying to think about how we might statically analyze imperative programs, which is what I'm personally most interested in.


Interesting, I hadn't seen that context!


I just upvoted this because I love the domain name.



