Monads and Macros (johndcook.com)
107 points by ColinWright on Oct 3, 2021 | 35 comments



> Few people are excited about both monads and macros; only one person that I know comes to mind.

I don’t understand what the article is trying to convey. Everyone in the PL community knows about monads and macros, but they are tools, and people are rarely excited about tools. The article seems to have a strong Haskell user bias.

For example, using both macros and monads is very common in OCaml. Jane Street, the company that uses OCaml the most, uses both extensively. OCaml provides advanced macro functionality (called ppx rewriters) and let-binding overloading to make writing monadic code easier. There is no opposition between these two features.


> Everyone in the PL community knows about monads and macros, but they are tools, and people are rarely excited about tools.

I don't think that's true at all. Considering how much we hear about monads and macros compared to how much they are actually used, people clearly are excited about them. People are excited about their tools, as they should be. There's this narrative that some people push on Hacker News that we are all some kind of cool, rational engineers who don't act like other humans. This is absolutely wrong. People get emotional about tools all the time. It is this excitement that drives progress. Someone got excited about error messages, and now there is an industry-wide movement to improve them. This could have been done decades ago, but it wasn't, because people just learned to live with their shitty tools.

While in the day-to-day life of a project you need people capable of doing the less exciting stuff, usually at the beginning of something you need people excited to give birth to it.


> Considering how much we hear about monads and macros compared to how much they are actually used

That’s users. That’s not what I meant by the PL community. I was talking about people actually working on PL.

Monads and macros are both old news and not particularly exciting. I know they regularly crop up on HN. I find that a bit sad, because actually exciting things happening in PL, like effect systems, rarely do.


I've been hearing about effect systems for a long time (decades?) as well. I don't find them exciting at all. You kind of counter your own argument...


Agreed I'm completely sick of effect systems too. It's entirely the wrong framing of the problem.


I would actually be glad if you could enlighten us about why effect systems are the wrong framing and what you would consider to be the right one.


Sure. So I've spent a lot of time working with FRP things, e.g. with https://hackage.haskell.org/package/reflex and the github.com/obsidiansystems/obelisk/ web framework. While FRP was sort of deemed too hard / a dead end in the early 2020s, it works well for me, and I still consider it viable.

I dislike effects because superficially they are about similar problems ("my whole program needs side effects"), but no amount of effect-system stuff is ever going to get you to anything like FRP. In particular, the effects mindset seems to be pretty narrow, something like:

"I have a bunch of regular-seeming code that does some 'extra stuff', and I need to manage those extra capabilities in a modular way that respects the principle of least authority."

By contrast, the FRP approach rejects the premises that:

- The programs basically look more or less "normal", but with (good) restrictions on what you can do

- The effects are something "side" or "extra".

It's hard to say exactly what it is about instead, but some guidelines might be:

- Don't just decompose a program into terminating parts (like terminating callbacks + a magic event loop). Embrace non-termination (well, co-termination), and think of new ways to be modular that don't carry over from terminating whole programs.

- Think in terms of "flows"; be like a plumber. Try to directly program the steady state in terms of the large-scale architecture, rather than writing a regular functional/imperative program that "carries out" the flows as a spec.

- Actor-model-type stuff is wrong for focusing on the "nodes, not edges". We should have simple components but rich networks.

- Try to find structure not in the effects themselves, but in how you intend to use them. If the effects form sinks/sources, then have them be pipeline end pieces that compose just like the pure middle pieces.

- Calculus is good for programming, after all. Time derivatives / time integrals are at the very least good metaphors. We should all go learn more Control Theory for more things than "memory pressure".

I hope this helps.


Reflex in fact has a bunch of MTL-style classes, and while I recognize the ways in which that workflow isn't ideal, it simply has not been a large problem for me in practice. Effect systems are not useless, just low priority to me -- they become more important if one turns all their problems into questions of side effects -- but per the above I don't think that is a fruitful way to frame all of one's problems.


Would it be correct to say that FRP is about embracing effects while the regular effects systems are about trying to constrain them?


I see what you mean, but "embracing" effects also means, in a way, shifting the frame of reference so they are pure (from the new frame). That's a really hard requirement, and so in some sense it's more constrained. Certainly the difficulty relates to why FRP has been abandoned in so many places.


Research started more than twenty years ago, but actual applications in programming languages you might use are relatively recent. If you find monads exciting but effect systems boring, there is little I can do for you.


Lawvere theories have been around since the 1960s, so a lot older than 20 years.


That's fair. I've seen a few mentions of Koka, and OCaml 5.0 is going to have effects, though both are relatively niche. I was expecting to hear more about linear types too; I feel like one or two years ago (relatively) lots of people were talking about them, but I haven't seen much coming out of it.


I've done a lot of work with effect systems, continuations, and concurrency. I guess I'd like to rant about it a bit.

An effect signals an interruption of the call stack to do some work. Like an exception, it 'unwinds' to the nearest handler (think catch block), which executes the effect; after this point, execution can either resume where the effect was raised, or continue from the catching handler.

This ability to resume is exactly a continuation. In fact, a lot of effectful languages (Koka, Effekt) model it as exactly that. It's important that this continuation is only valid within the scope of the handler; this is equivalent to a delimited continuation.

Full (as opposed to delimited) continuations, on the other hand, can be invoked from any scope. At the least, this requires a copy of the stack, or that plus a subset of the heap. (Functional languages with immutable datatypes kinda get a free pass on this one, but only barely.)

Because delimited continuations can only be invoked in nondestructive contexts, they do not require making a copy of the stack (in fact, we don't even need to unwind the stack at all.)

Except in most cases, we only resume once, and this—a single-shot delimited continuation—is exactly a coroutine, a feature common in many languages.

But there's more: remember that to execute an effect, a handler has to be located. This handler is found by searching backward through the stack. If this sounds familiar, it's because this type of name resolution is known as dynamic scoping (as opposed to lexical scoping).

The ties here run deep. We all know that closures are a poor man's objects. That's easy to wrap your head around. But I think we'll soon realize that effects are just a poor man's coroutines are just a poor man's dynamic scoping are just a poor man's delimited continuations are just a poor man's resumable exceptions, and I'm not sure where the strange loop ends.

But there's a light at the end of the tunnel. What do effect systems have to offer above these other approaches? I'd say there are 3 things:

1. Static typing. Unlike coroutines, resumable exceptions, etc., the type of effects used in a function can be automatically inferred. A lot of these other systems operate under a dynamic assumption, or require an explicit annotation of types at some point. With algebraic effects, the row of used effects can be statically determined with no additional annotations by the programmer.

2. Row-based composition. Building off our last point, effects build a sort of open enumeration over the possible effects raisable at a given point, a row. This row can be generic over further effects, which means that effectful higher-order functions can be composed. This full row can be known at compile time, so the programmer can know the full set of potential effects in scope at each point in the program. Because these row-based constructions are usually built around a single monad (i.e. the free monad; see the sketch after this list), different effects can be composed without running into traditional monadic composition issues.

3. System injection. What happens when an effect does not have a handler in scope? This could just be an error at compile time, but it opens up another possibility: instead of raising an error, unhandled effects are handed off to the host runtime for evaluation. Quite sensibly, effect systems are a really neat way to model not only concurrency, but actual honest-to-goodness side effects. An effect-based virtual machine is really just a glorified effect generator. The host runtime can also expose additional APIs, like an FFI, IO, or access to threads. Most importantly, because these 'syscalls,' so to speak, are just effects, they can be overridden. You could create a handler that rolls native threading requests into a single thread, or redirects output destined for stdout to log files or the network.
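
To make the free-monad-plus-handler picture concrete, here is a rough, self-contained Haskell sketch (my own toy encoding, not any particular library's API): a single Ask effect reified as a functor, a free monad over it, and a handler that receives the continuation and chooses to resume it.

    {-# LANGUAGE DeriveFunctor #-}
    -- A toy free monad: Pure is a finished computation, Impure is an
    -- effect waiting for a handler.
    data Free f a = Pure a | Impure (f (Free f a)) deriving Functor

    instance Functor f => Applicative (Free f) where
      pure = Pure
      Pure g   <*> x = fmap g x
      Impure m <*> x = Impure (fmap (<*> x) m)

    instance Functor f => Monad (Free f) where
      Pure a   >>= k = k a
      Impure m >>= k = Impure (fmap (>>= k) m)

    -- One effect: ask for a String, then continue with it.
    newtype Ask next = Ask (String -> next) deriving Functor

    ask :: Free Ask String
    ask = Impure (Ask Pure)

    -- The handler plays the role of the catch block: it receives the
    -- continuation k and decides to resume it with a value.
    runAsk :: String -> Free Ask a -> a
    runAsk _   (Pure a)         = a
    runAsk env (Impure (Ask k)) = runAsk env (k env)

    greeting :: Free Ask String
    greeting = do
      name <- ask
      pure ("Hello, " ++ name)

    -- runAsk "world" greeting  ==>  "Hello, world"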

This has been quite the rambly rant. I've just been thinking about this a lot recently and needed to get it all out of my system.


People continue to study metaprogramming / representations of syntax especially with binding, and category theory applications to programming.

Yes, the blog post is bad and just rehashing old aphorisms, but that doesn't mean the broader research agenda stalled out.


As a compiler engineer, I get excited about tools. I think it's important to use mathematical foundations in a way that is both intuitive and transparent to the end user.

Aside from monads and macros, OCaml remains a great example in this area because of its support for type inference and type annotations; on the surface, the code you write is mostly 'untyped,' save for at function boundaries and data definitions. Much like how monads and macros ultimately compile, so will correct typed and untyped code.

I'm not sure how clear I'm being, but in essence: tools should be as lispy on the surface as possible (for ease of prototyping), while being built on haskelly* foundations (for correctness). There is no one silver bullet; it's about finding the right balance for the issue at hand, and as a tool, you should leave that decision to your end users.

*I use lisp and haskell to mirror the article, but this is true of any tool, imperative or functional, language or otherwise.


Speaking of OCaml too: OCaml throws away the types after typechecking, so in a way you could even argue that the types themselves are a form of macro over the untyped lambda form, which sounds pretty close to Lisp. The dichotomy of Lisp and Haskell is a false one.

Maybe the dichotomy would be better framed as "static" vs "dynamic", but about the language/runtime, not the typing. For example, in Lisp you have REPL-driven development, where you know errors will happen but you can easily correct them, while in Haskell you try to minimize errors with the help of the compiler, and once the system is done you usually have to recompile it.


To that end, there are type systems in some languages (Turnstile in Racket comes to mind) that are implemented entirely using macros.

Turnstile isn't enough to model certain type systems (for reasons too complex to reasonably cover here), but it's interesting to see that types-as-macros-over-the-untyped-lambda-form has been done to a reasonable degree of success.


> I don’t understand what the article is trying to convey.

Lispers are from Mars and Haskellers are from Venus. That’s it.


Exactly. Haskell's "do" notation is essentially defined as a built-in macro, so if you ever want to use "do" notation in a language that doesn't have it built in, you will need to use macros. (Let-binding overloading is essentially do notation, in case that's not clear.)


Macros are way more powerful than let-binding overloading, for better and for worse.

Yes, you can use macros to add do-notation to a language. But you can also use macros to add things like type checks and new conditional statements. You can't add those with do-notation.
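
To make that concrete in Haskell terms (a toy sketch of mine using Template Haskell, the closest thing Haskell has to macros; unlessE is a made-up name, not a library function): a macro can manufacture a whole new conditional form at compile time, which bind/do-notation cannot, because bind only sequences values at run time.

    {-# LANGUAGE TemplateHaskell #-}
    module UnlessMacro where

    import Language.Haskell.TH (Q, Exp)

    -- A compile-time rewrite: build a new conditional construct out of
    -- two quoted expressions.
    unlessE :: Q Exp -> Q Exp -> Q Exp
    unlessE cond action = [| if $cond then pure () else $action |]

    -- In another module (TH splices can't run where they're defined):
    --   $(unlessE [| x < 0 |] [| print x |])
    -- expands at compile time to:  if x < 0 then pure () else print x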


do is more syntactic sugar for >>= (monadic bind) than a macro. E.g.

    do
      s <- getLine
      putStr $ "Hello, " ++ s
reduces to

    readLine "Enter name:" >>= (\s -> putStr $ "Hello, " ++ s)
But I agree you can use macros to implement syntactic sugar.


"do" is also no longer hard-coded. There are multiple language extensions that allow you to change what it desugars to, which is really cool.


Honestly it's posts like this that make me think the HN lisp bump is out of control.

I have no idea what the hell the post is trying to say, and as someone who cares enough about both to repeatedly complain that Template Haskell needs to be more like Racket with proper phase separation, I didn't learn anything and just see vague, oft-repeated aphorisms juxtaposed.


Does the proposal to allow imports that only apply during Template Haskell evaluation improve things?


See my comments to https://github.com/ghc-proposals/ghc-proposals/pull/412, where I invoke Racket frequently.

It's the right sort of goal, but we have to go all the way to full phase separation to make it mean something.



For those who wonder why monads would be like burritos, read https://blog.plover.com/prog/burritos.html.

Though a large burrito as a joke to represent both monads and macros is kind of catering to a niche audience.


macros are compilers, monads are interpreters


Trying to wrap my head around this. How are monads like interpreters?


google "monadic interpreter"

One thing monad/bind can do that macros cannot is make arbitrary runtime decisions about what to do next, with runtime information. A macro is constrained by what is known statically in the AST. This is why compiled languages are far easier to optimize (it is merely algebraic rewrites of the AST), whereas dynamic languages have runtime eval.
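
A small Haskell sketch of that point (my own example, not from the post): the continuation handed to >>= can inspect a value that only exists at run time and pick a completely different next action, which a purely compile-time macro expansion cannot do.

    -- What happens after the bind depends on a runtime value; a macro
    -- only ever sees the static syntax tree.
    decideAtRuntime :: IO ()
    decideAtRuntime =
      readLn >>= \n ->
        if n < (0 :: Int)
          then putStrLn "negative: stopping here"
          else getLine >>= putStrLn   -- a different continuation entirely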


Let's get concrete and make a simple example State monad.

You have access to an extra variable S, with getS and setS, and each of those returns a value of our StateMonad.

Using these (and >>= aka bind) you can write something which increments the number in the state by one, yes?

    increment = do s <- getS
                   setS (s+1)
and this increment is a value of our StateMonad.

This doesn't run on its own though; it needs the "interpreter" to run the state monad, put in an initial value of the state, and maybe take it out at the end.

That's the way you can think of Monads as interpreters in a very rough sense.
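
Here's a rough but runnable version of that sketch (getS/setS are the names from this comment, not a library's; runS plays the role of the "interpreter" that feeds in the initial state):

    newtype StateM s a = StateM { runS :: s -> (a, s) }

    instance Functor (StateM s) where
      fmap f (StateM g) = StateM (\s -> let (a, s') = g s in (f a, s'))

    instance Applicative (StateM s) where
      pure a = StateM (\s -> (a, s))
      StateM f <*> StateM g = StateM (\s ->
        let (h, s') = f s; (a, s'') = g s' in (h a, s''))

    instance Monad (StateM s) where
      StateM g >>= k = StateM (\s -> let (a, s') = g s in runS (k a) s')

    getS :: StateM s s
    getS = StateM (\s -> (s, s))

    setS :: s -> StateM s ()
    setS s' = StateM (\_ -> ((), s'))

    increment :: StateM Int ()
    increment = do
      s <- getS
      setS (s + 1)

    -- The "interpreter" step: supply an initial state, get the result
    -- and final state back.
    -- runS increment 41  ==>  ((), 42)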

Now, you can do fancier things, where you can set up a "free monad", which is basically going to record everything into a syntax tree, and then you really have an interpreter.

http://blog.sigfpe.com/2006/08/you-could-have-invented-monad...


If you look at each as languages for staging computations, what you stage with macros happens before "normal" computation, so likely happens at compile time, while what you stage with monads happens after, so must be deferred to run time.


Alexis King had a talk about Hackett, a sadly abandoned, metaprogrammable Haskell: https://youtu.be/5QQdI3P7MdY


Macros ! Monads !! Monads via macros !!!



