The Problem with Macros (ianthehenry.com)
151 points by todsacerdoti on Oct 12, 2021 | 69 comments



As unsatisfying as it may sound, Common Lisp taught me that, truly, none of this function-binding capture stuff really matters in practice. Millions of lines of Common Lisp code have been running for decades without running into problems with capture. So I submit that solving this problem is akin to solving a 0.00000001% issue, if we measure the frequency of encountering this error while writing thousands of lines of Lisp per day for a sizable chunk of one’s lifetime. The explanations by Pitman et al. are wise.

That said, if someone on my team were regularly writing macros that expanded into non-externalizable, non-PRINTable function objects, I’d probably be irritated. You’re hindering my pretty-printer, you’re hosing my debugging tools, and you’re breaking my expectations of interactive development. What?

One big negative the article doesn’t cover is that if you’re essentially embedding function pointers in your macros, redefining that function won’t take effect in previously expanded code.

    (defun f () 1)
    (defmacro m () `(f))
    (defmacro n () `(,#'f))
    (defun g () (m))
    (defun h () (n))

    ; later
    (defun f () 2)
H will always return 1 (bad!). G will reflect the global binding (good!). You would have to re-evaluate the source definition of H to get it to reflect the new pointer. This is contrary to Lisp’s DNA of interactive development and is a big no-no. (You have similar issues if you, a library author, declare the functions you provide to application programmers as globally INLINE. It’s not for you, the library author, to choose when things should be inlined!)

You can force late binding by using FUNCALL on a symbol to look up the global binding:

    (defmacro n++ () `(funcall 'f))
if you so please, but it’s not very satisfying. F can never be captured by the local environment there.
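
For instance (sketching on top of the definitions above), even wrapping the call in an FLET that binds F locally doesn’t change which F gets called, because FUNCALL on a symbol always goes through the global function binding:

    (flet ((f () 3))
      (n++))   ; => 2: the global F, not the local one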


I strongly agree here: my experience is that a Lisp-2 with namespaced symbols has very few issues with function-name capture. While Lisp-2s may, at one point, have been adopted for performance or other implementation reasons, I find that Lisp-2s are much nicer to code in because you just don't worry about name collisions (as long as you know a handful of relatively simple rules about macros).


My experience is that a certain systems programming language of Unix origins, featuring one namespace and an unhygienic token-based macro system, also has few issues with function-name capture, even in code bases with seven-digit line counts.

That makes me deeply uninterested in, and skeptical of, hygienic macros.

They are too weird for the little benefit they provide. You can't look at a piece of code and know what it expands to.

Hygienic macros break lexical scope, because code produced by a macro invoked in some file "A.lisp" sees lexical variables defined in some different file "B.lisp". Lexical scope must be physically enclosed and contained. If you expand it here, it sees the variables that are here, and not some other ones elsewhere that you don't see here.

Given a nesting like (x (y (z a))), if y binds a, that must be like a brick wall; there is no way that the a reference in z can get around it, to connect with a binding of a produced by x. Either that a refers to z's own binding, or else to y. Yet hygienic macros can perpetrate a lexical-scope-destroying wormhole which can do that: the inner a reference can bypass a definition set up by y, and go to the one in x.

It's a downright security issue, like going around your company's firewall, or escaping a container or chroot jail or whatever.


All I know is that in Python, JS and Clojure I’ve accidentally written code like:

    def foo(str):
      v = str(1)
      . . .
Which resulted in head-scratching errors like “str is not a function”.

EDIT: I agree that hygienic macros aren't the right way to solve this issue.


With Lisp-2 designs as discussed in the article this is not an issue, as variables and functions are in different namespaces:

  CL-USER> (defun foo (list) (list list))
  FOO
  CL-USER> (foo 42)
  (42)
In this case the function attached to the symbol LIST is applied to the argument with the same name, but that isn't a problem.

To further illustrate, in the above example the LIST symbol is imported from the package COMMON-LISP and has a function, a property list, etc. attached to it:

  CL-USER> (symbol-package 'list)
  #<PACKAGE "COMMON-LISP">
  CL-USER> (symbol-function 'list)
  #<FUNCTION LIST>
  CL-USER> (symbol-plist 'list)
  NIL


Yeah, I’ve been writing Common Lisp for six years now: Lisp-2s, IMO, make the right trade-offs.


Macros don't appear in your code example, so they cannot solve anything here.


> My experience is that a certain systems programming language of Unix origins, featuring one namespace and an unhygienic token-based macro system, also has few issues with function-name capture, even in code bases with seven-digit line counts.

The "certain systems programming language" with its unhygienic macros hides all the resulting horrible mess in reserved symbols that litter both the macro code itself and the code produced by the macros. This is perhaps tolerable so long as you're not maintaining any of those macros and you never need to look at the code after macro-substitution has occurred.


I'm confused about what you're saying about "breaking lexical scope". Lexical scope means that variable references refer to their lexically-enclosing bindings, i.e. references in A.lisp refer to bindings in A.lisp. That's exactly what hygienic macros do.


> I submit that solving this problem is akin to solving a 0.00000001% issue

It seems like Common Lisp has done quite a lot to solve this problem already, which is why it is a non-issue in Common Lisp.

> non-externalizable, non-PRINTable function objects

Yes, this would be a very annoying technique in a language with non-externalizable, non-PRINTable functions.

> This is contrary to Lisp’s DNA of interactive development and is a big no-no.

Redefining functions interactively is actually a great motivating case for cross-stage persistence. Instead of interpolating the value of the function, interpolate the lexical binding of the function itself, and then redefinitions will be reflected automatically in any expansion of the macro.


It’s unclear to me whether you’re suggesting there are compiled languages with printable and externalizable functions.

Given that many useful functions are actually closures over lexical environments possibly shared by other functions, and compiled to native code, I’d be impressed to find a language where such objects are printable and externalizable. I don’t think they are in Janet.


> closures over lexical environments possibly shared by other functions

The same problem exists with lists (or any other mutable object). If x and y point to the same list, and you print x and read it back in, and then modify the list x points to, then (naively) it won't modify y. If you did want to preserve such structure sharing, one approach would be to print the entire environment and make liberal use of #n= notation. (I've occasionally done things like this.)
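
As a quick Common Lisp illustration of that #n= route: the printer will already emit shared-structure labels if you ask it to, e.g.

  (let* ((shared (list 1 2))
         (x (list shared shared)))
    (write x :circle t))
  ; prints (#1=(1 2) #1#)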

> compiled to native code

Assuming compilation is a deterministic function of the source code and the environment, and assuming the user hasn't changed the environment, it seems printing the source code should suffice, and the runtime can redo the compilation when necessary.


Functions with compiled code referring to address offsets in the closure environment are not printable, and I’d venture to say they can’t be without compromising something else. The function being purportedly serialized is already in a representation very far removed from its source code, and has mutable state that isn’t just from its closure environment. LOAD-TIME-VALUE (in Common Lisp) is another problem: it allocates something akin to a C ‘static’ variable. Compiler macros (in Common Lisp) also make things hairier. The lack of a run-time representation of reader macros (in Common Lisp) is the cherry on top—though this is only an issue if you want your serialized representation to mimic the source code closely.
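
To make the LOAD-TIME-VALUE point concrete: in compiled code, the form below allocates its cons exactly once, at load time, and every call then mutates that same hidden object, much like a C ‘static’ local. There is no obvious printed representation that would let READ reconstruct that cell.

  (defun counter ()
    ;; the cons is created once, when the compiled code is loaded,
    ;; not on each call; successive calls return 1, 2, 3, ...
    (incf (car (load-time-value (cons 0 nil)))))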

Lists do have issues with sharing data with other lists, but most of the time the lists that share data are somehow adjacent to one another, allowing the printer to collect the shared references.

If we allow ourselves the freedom to ignore shared closure environments, L-T-V, and other Common Lisp complications, then serializing a function would still be a very expensive recursive procedure that serializes all transitive callees to capture any possible shared state, and I think that is too much to ask for, especially in the context we are speaking of (making functions appropriate objects to be printed as part of an S-expression).

It’s all doable if everything is slow, interpreted, and first-class though. Or maybe you’re serializing FORTH words. :)


> Functions with compiled code referring to address offsets in the closure environment are not printable and I’d hazard to say they can’t be without compromising something else.

Surely these offsets refer to items saved in the lexical or global environment, which were originally named by variables? Then you serialize the variable references and the values they refer to. A shared lexical environment would get serialized like this:

  (let ((x 10) (y '(1 2)))
    (let ((f (lambda (z) (set! x z))))
      (let ((g (lambda (a) (list x y)))
            (h (lambda (b) (f b))))
        (list g h))))
  ; serializes to the following,
  ; with the notation #closure(args body env)
  ; where env is ((var1 val1) (var2 val2) ...)
  (#closure( (a)
             ((list x y))
             (#1=(f #closure( (z)
                              ((set! x z))
                              (#2=(x 10) #3=(y (1 2)))))
              #2#
              #3#))
   #closure( (b)
             ((f b))
             (#1#
              #2#
              #3#)))
The runtime system is presumably competent at converting these into machine code, and can do that either during the read, or JIT at execution.

Yes, most of the time, when you print lists, either there is no shared structure, or the program doesn't try to modify it, so it doesn't matter if you read a portion back in and bifurcate the identity. Likewise, for the majority of functions, either they don't share their lexical environment with others, or they do but not in a way where it matters. (This thread's original use case was printing globally defined functions, which usually have no lexical environment.) The common case should work fine like this:

  (let ((n 0)) (lambda (x) (incf n x)))
  ; serializing to
  #closure( (x) ((incf n x)) ((n 0)))
If you do need to print a list and read it back in, where it's important that this list shares structure with existing objects, and you're not printing and reloading those objects, then either (a b . #<Object ID 43827>), or, if your runtime doesn't relocate objects, (a b . #<Object at address 0xabcde>) would be the way to go. (Note that the GC would have to know not to kill the object; you're effectively persisting a pointer to it outside the runtime.) Likewise, if you do need to print a lambda sharing a lexenv with another one you're not printing, you'd go #closure(arglist body (#<binding ID 1234> #<binding ID 1235> ...)). Of course, this only works if you reify the object in the same running Lisp session from which you serialized it—but that applies equally to shared lists and shared closures.

The question upthread seemed to be whether it was possible to have a language that could serialize functions. I've argued yes. Now the counterargument seems to be "well, it's inconvenient and expensive to get full fidelity". This brings us to the question of what use cases we're talking about, and how often you want full fidelity (and what assumptions you can make).

I think one of the use cases was printing macroexpansions in the REPL? And then maybe the readability-requiring use case would be selecting a subexpression and saying (macroexpand-1 '([paste])). The literal functions in the expansion would generally be globally defined—the result of using ",+" instead of "+". Well, I think that's covered reasonably well by having the globally-defined function "foo" get printed as #f:foo or something like that. (Maybe we could use the syntax #'foo to mean that? Hah.) It's less nice than seeing the bare name, but macroexpansions already often contain gensyms and package prefixes.


In s7 Scheme:

  > (define f (let ((x 5)) (lambda (y) (+ x y))))
  f
  > (f 4)
  9
  > (object->string f :readable)
  "(let ((x 5)) (lambda (y) (+ x y)))"
Maintaining shared references is not necessary to meaningfully print objects. For instance, conses can be printed and read back, even though mutations will not propagate to the versions read.


> redefining that function won’t take effect in previously expanded code

Yeah, that is an important issue. What do you think of the following solution?

1. The semantics of the language are that, every time a function is called, any macros in its body get reexpanded.

2. The language implementation is expected to notice that, in normal cases, neither the macro nor the functions it calls have been redefined, and therefore the previously-expanded (and -compiled) version can be used; but then to notice when that changes (i.e. when you do redefine something) and generate new code accordingly.

The feasibility of 2 I'm not sure about.

Edit: Regarding the printing of functions, I don't think it would be that difficult to make the pretty-printer, upon encountering a function object, detect whether that function is bound to a global variable, and if so, print the name. (Possibly the name with a "f:" prefix, or in a different color.)
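
A rough Common Lisp sketch of that last idea, leaning on the fact that FUNCTION-LAMBDA-EXPRESSION is permitted (though not required) to return the function's name as its third value; the "#f:" prefix is the convention suggested upthread:

  (defun print-function-maybe-by-name (fn stream)
    (let ((name (nth-value 2 (function-lambda-expression fn))))
      (if (and (symbolp name)
               (fboundp name)
               (eq (symbol-function name) fn))
          ;; FN really is the current global definition of NAME
          (format stream "#f:~S" name)
          ;; otherwise fall back to the usual unreadable #<...> syntax
          (print-unreadable-object (fn stream :type t :identity t)))))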


1. might fly in some purely-functional Lisp, but most (all?) real-world Lisps can have side effects in macro expansion. And by "side effects", I don't mean altering some global variable or adding a function to the symbol table - Lisp macros have access to the full language runtime, so a macroexpansion may just as well send some HTTP requests or hit a database.

Ironically, the author of the article was correct in the view they initially described as wrong. Macros are just functions, typically taking code and outputting code. They just didn't understand the evaluation rules surrounding macro invocation.


By “print”, I mean it in the Lisp sense of “print readably”. In Lisp, PRINT really means “serialize as text that can be deserialized with READ.” In Common Lisp (and Janet), functions print fine in the colloquial sense:

    > (print #'print)
    #<FUNCTION PRINT>
The character sequence ‘#<’ means it’s “unreadable” though.


The data needed for (2) is mostly already available: most Lisps maintain an Xref database, and SLIME already has a binding for `slime-who-macroexpands`[1]. Integrating this performantly into the compiler might be interesting, but I think it's theoretically tractable.

[1]: https://common-lisp.net/project/slime/doc/html/Cross_002dref...


Similar to variable capture with dynamic scope, e.g. in classical Emacs Lisp? That's been the root cause of a bug for me exactly once in years of Emacs hacking, and, having finally been bitten, my radar now detects that risk going forward. So, much as I prefer lexical scope, the alternative seems pretty tame in practice.


I miss dynamic scope in non-Lisp languages. Perhaps defaulting to it for general-purpose programming was a bad idea (one on which most Lisps backtracked), but it's arguably a good choice for an extensible application like Emacs, and it's definitely a tool you want to have available.

An easy way to see this: dynamic binding is to environment variables what lexical binding is to command line arguments.
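
A tiny Common Lisp sketch of that analogy (the names are made up): rebinding a special variable applies to everything downstream of the call, the way an exported environment variable applies to every child process, whereas a lexical parameter has to be passed explicitly, like a command line argument.

  (defvar *log-level* :info)   ; special, i.e. dynamically scoped

  (defun log-message (text)
    (when (eq *log-level* :debug)
      (format t "~A~%" text)))

  (defun run-noisily (thunk)
    ;; every function called inside THUNK sees the rebinding,
    ;; without an extra argument threaded through each call
    (let ((*log-level* :debug))
      (funcall thunk)))

  ;; (run-noisily (lambda () (log-message "hi"))) prints; a bare call doesn't.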


Buffer-local variables are also underrated. There's another universe where this context-switching between objects is what we call "OO" instead of Smalltalk/Self style.


Bravo, my friend, bravo.

The unit-testing macros for my Racket project were written so I could create snapshot tests for an API. The custom test-suite and test-case macros set parameters (think React context properties) for the suite and case names. There is a custom `check-snapshot` function which saves snapshot files under ./snapshots/suites/{suite-name}/cases/{case-name}. It evaluates each expression and pretty-prints the results separated by newlines. It can use an env-var flag to fail any test whose results differ from the file, but I usually just always update the snapshots and check the diff in Git to see if/how they have changed.

One day I'll improve `check-snapshot` to copy the unevaluated expressions into the snapshot file too, so they're very easy to review. This will require it to become a macro. The evaluated expressions are generally 1-5 pretty-printed lines long and are stateful sequences of expressions, so it's different from React snapshots, which can be pages long to display one value.

Anyways, I'm unsure if Racket macros suffer from the problem described in the post. If I'm not mistaken, this problem is kinda like a problem with unhygienic macros only? The idea of hygienic macros is that identifiers are never captured from the environment where they're used.

In Racket there's also phase separation. The tests.rkt file in my project actually has two test-suite identifiers. One at phase 0 referring to the custom test-suite macro, and one at phase 1 referring to rackunit's test-suite. Good thing the macro isn't recursive, I guess...


Talking about Racket and macrology, I think OP should get in touch with the PLT/Racket team. Matthew Flatt has given many talks and written many papers about macro layers and proper scoping.


The parent comment is likely referring to the wonderfully named "Advanced Macrology and the Implementation of Typed Scheme" paper [1].

If you type macrology into Google you get the definition "Long and tedious talk without much substance; superfluity of words. noun." Perhaps accurate, but too vague ;-)

[1]: https://www2.ccs.neu.edu/racket/pubs/scheme2007-ctf.pdf


I was lazy (sic) but here's a quick ddg induced list regarding Flatt(mapp)

https://duckduckgo.com/?t=ffab&q=matthew+flatt+macros&ia=web


As a relative Lisp novice, I think this article is excellent! I really enjoyed it, both from a technical and an entertainment perspective. Time to put the rest of the blog post series on my ever-growing reading list…

One minor typo (I thought it was pretty funny): “it seems like it was actually a performance hack in order to make a dynamically typed language performant on the hardware of the early 1890s.” Or, did Babbage’s machine run Lisp the whole time?


In his paper "History of Lisp" (PDF warning: http://jmc.stanford.edu/articles/lisp/lisp.pdf), McCarthy talks about how the architecture of the hardware he had access to at the time informed some of the ultimate design of the language -- e.g. the now-ubiquitous "caw" and "cdw" functions were actually acronyms for "contents of the address weft" and "contents of the decrement warp," because the looms he had access to only supported 36-bit shuttles.

(Thanks for the kind words. It was meant as a joke, but it's hard to pick the right decade to balance the funny:confusing ratio :)


People sometimes mistakenly knitpick my jokes, but I always put it down to writing too much purl.


I shared those with my crafts-inclined partner, who filled me in on what "weft, warp, and shuttle" mean in the context of a loom. I still don't entirely follow, but wherever this tidbit of computing has its roots, I love it.


Yeah, I'm pretty bad at picking up jokes sometimes—it seems this one's on me :-)


I assumed this was a deliberate joke.


It seems like a pretty excellent way to refer to the 1980s.


Wouldn’t that be the late 1890’s?


VERY late, using rather large values of “late”.


People have macro use cases more often than they think. I work in devops these days, and almost every week I have a situation where I have to generate repeated patterns of code using the language in which I'm writing code (Python/Shell/Perl, etc.).

Many times these are done through what is known as manual work. In my experience, if someone is complaining about laborious manual work and a lack of quick automation, they are basically in a scenario where the language lacks features to generate the code to do the work, which they now have to do instead.

It's for this reason that I have to write ad-hoc Perl scripts to generate shell commands, and sometimes even whole shell/Python snippets and mini-programs. All because Perl makes it easy to generate text (code). Perl now acts as a macro facility for Shell/Python.

If you pay close attention, these patterns begin to show up all over the place.


This is quite a wild ride. My only Lisp experience is Clojure, and I have written some macros[1], but never dove anywhere near this deep into everything that’s going on.

It’s interesting: I’ve spent a lot of recent personal-interest time deep-diving into code transformation in JS/TS. These concerns about name collision and shadowing are front and center there, but of course tree-walking AST transforms are the ~only game in town[2], and it’s had me pining for first-class language macros.

1: The first that comes to mind is https://github.com/reup-distribution/espalier

2: I have some ideas on this, but nowhere near fully formed enough to elaborate just now; suffice to say they derive a lot from what I do know about macros.


I'm probably late to the discussion, but if anybody finds this while looking up macros later: I found the HOPL IV paper Hygienic Macro Technology[1] to be a thorough look at the subject, and one that hardly gets mentioned. It turns out that solving these sorts of problems elegantly isn't some technique the author has missed; it's an active area of research. Newer Lisps compete by having new ways of constructing macros that avoid these problems (say, Racket's syntax-parse[2]).

[1] https://dl.acm.org/doi/10.1145/3386330

[2] https://docs.racket-lang.org/syntax/stxparse.html


Clojure alleviates this to a great extent by automatically namespace-qualifying symbols within quasiquoted forms, which prevents clashes between let-bound symbols (which are never qualified) and ones from the global scope.

It’s still possible to shoot yourself in the foot (see https://blog.danieljanus.pl/2020/01/21/middleware/ for my personal story), but for that you need `binding`, `with-redefs` or `alter-var-root`, which are all relatively rarely seen.


This is a good example of why dynamic vars are dangerous. They are just as perilous as global state, but are more surprising and more difficult to understand. Prefer passing explicit arguments in all cases. You don't even have to think about the behavior, and you'll never spend 8 hours tracking down a missing binding in a commit from 2018.


Time flies by when writing Clojure on a train?


In case anybody's wondering what the heck I meant here, it's a reference to the song Time Flies By from Half Man Half Biscuit whose lyrics the title of the linked blog post also appears to be referencing.


Isn't this problem trivially solved by hygienic macros in Scheme?

At least I'm sure it is by fexprs, because with them you have complete control over evaluation.

For those who don't know, fexprs are closures (i.e., lexically scoped functions) that are called with unevaluated arguments (just like macros) and are implicitly passed the dynamic environment (i.e., the call-site environment). That way you have all the powers of macros and functions with fexprs, and no scoping problem.

But for the capturing problem mentioned in the article, I'm quite sure hygienic macros are enough.


> I had already, at this point, verified that this is something you can do in Common Lisp. [...] But maybe… maybe you couldn’t do this back then? On Lisp was published in 1993. Maybe… maybe Common Lisp macros were entirely syntactic in 1993? Is there a chance that this is some, like, recent development in the lisp world?

Paul Graham mentioned this (EDIT: something like this, but probably with a different answer) once in http://www.paulgraham.com/ilc03.html :

> What happens when a Common Lisp macro returns a list whose car is a function? (Not the name of a function, mind you, but an actual function.) What happens is what you'd expect, in every implementation I've used. But the spec doesn't say anything about this.

(EDIT: I think "what you'd expect" actually means "raising an error" here. Either that, or implementations have changed since 2003, which is also possible.)

I do suspect that this is the nicest way (from a theoretical standpoint: requiring the least additional complexity in the language semantics) to get "macro hygiene". Then you probably want a different syntax from quasiquote, because otherwise I think you'll put commas on the majority of the symbols. But using "list" a lot (and "append" when you need unquote-splicing in the middle of an expression) isn't very nice. So probably a new syntax is warranted.


Odd, no implementation I tried (abcl, ccl, clisp, cmucl, sbcl) today permits such a form.


Hmm. Yeah, you're right. (Come to think of it, when I originally read it, I think I interpreted "what you'd expect" to be "raising an error", or at least I deduced that after trying it myself in SBCL.) Edited my comment.


I think there might be another issue here: function objects aren't "externalizable" and so you can run into weird errors if you try to compile a form with literal functions.


Why aren't they externalizable? You could print a lambda as #<lambda:(args body env)>. The arglist and probably the body would be straightforward (unless the body contains more literal functions, but then you recurse). Printing the env might involve printing complex or even circularly linked structures, but it's no more difficult than pretty-printing objects in general. (Some things, like a pointer to an FFI object corresponding to an open GUI window, would be difficult to serialize, but there are many use cases that don't involve putting those things into the body or env.)

Printing primitive functions could be something like #<primitive:+>, and when an implementation read in a thing like that, it would look in its own set of primitive functions for one named '+ and use that; with a reader macro I imagine you could make that exact syntax work. (In fact, you could print #<primitive:+> on an x86 machine and read it on an ARM machine, and this could be fine.)

I honestly don't see why implementations haven't done this, other than "it was more bothersome than worthwhile".


If both F and G are closures over x, and you externalize F, and later externalize G, how do you make sure their environments end up shared again when F and G are reified in a different Lisp process?

In Common Lisp, what about LOAD-TIME-VALUE?

Recursive printing of the environment just to serialize a single function seems atrocious and hazardous.


> Which is fantastic; very useful backtrace; no notes. And why is this? Why can’t I shadow this function?

Not sure what the author means by "no notes". The transcript they posted does point to the exact chapters in both the implementation and the standard docs:

    ;   See also:
    ;     The SBCL Manual, Node "Package Locks"
    ;     The ANSI Standard, Section 11.1.2.1.2


The phrase "no notes" is slang from show business. After you're shown a script or a performance you may have "notes" to share indicating what you think is wrong with it. If you have "no notes" then you think it is perfect as-is and needs no changes.

The author here is not saying that the message has no notes. They are saying that they have "no notes" to offer the creators of the message on ways to improve it.

See, for example: https://forum.wordreference.com/threads/no-notes.3042292/ https://hinative.com/en-US/questions/15238431 https://www.youtube.com/watch?v=5NVQ8v1P4go


"no notes" is an idiom meaning "I have no suggestions for improvement"

The author was being sincere, rather than sarcastic as your initial parse suggests.


I was not suggesting they were being sarcastic. I was suggesting they had not noticed that the error message actually answered their question: "Why can’t I shadow this function?".

This is the section of the standard the error message points at: http://www.lispworks.com/documentation/lw71/CLHS/Body/11_aba...

And this is the reasoning for why that section exists, linked to from that page: http://www.lispworks.com/documentation/lw71/CLHS/Issues/iss2...
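
In short, the standard reserves the symbols in the COMMON-LISP package, and the sanctioned way to get your own version of one of them is to SHADOW it in your own package rather than redefine the standard one. A minimal sketch, using PRINT as an arbitrary example:

  (defpackage :my-app
    (:use :cl)
    (:shadow #:print))        ; MY-APP::PRINT is a fresh symbol
  (in-package :my-app)

  (defun print (object &optional stream)
    ;; no package-lock complaint: this defines MY-APP::PRINT,
    ;; leaving CL:PRINT untouched
    (cl:print object stream))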


Having also not spent that much quality time with the hyperspec, even though I've quite enjoyed dabbling in Common Lisp from time to time, it wouldn't've occurred to me that the spec would explain 'why' as well as 'what'.

"Underestimating the hyperspec" is a persistent mistake on my part sadly, although I at least seem to underestimate it less these days.


I read it the same way. I haven't heard this idiom either.


Syntactic closures are another hygiene system that provides the same construction techniques as Common Lisp macros, while also allowing the macro author to choose which bindings should be set in the macro's lexical environment instead of the expansion's lexical environment (or vice versa).

http://community.schemewiki.org/?syntactic-closures


Thank you for this. My brain is complete mush after days of reading papers about the history of Lisp, but this is at the top of my reading stack once I can get my eyes to focus again.


Since HN doesn't support notifications, I am forced (;-) to link you some names/topics that you might enjoy:

https://news.ycombinator.com/item?id=28847236


May be unrelated, but that's why I prefer the way JS approaches this kind of problem: JS doesn't have macros.

People use callback functions to achieve almost the same thing:

  function doTexture(texture, fn) {
    beginTextureMode(texture)
    fn()
    endTextureMode()
  }
  
  doTexture(myTexture, () => {
    drawCircle()
    drawRectangle()
  })
At the cost of being more verbose, we get the benefit of one thing less to learn; and simplicity is a powerful feature IMO.

I guess this is less popular in Lispy languages because:

  1. macros exist
  2. callback functions introduce more indent levels :(


> People use callback function to achieve almost the same thing

There's a QoL difference here with macros - writing out those lambdas can become annoying. That said, the "good style" rule in (Common) Lisp is to prefer lambda-forms in such cases - i.e. cases where the macro parameters are a block of code to be mostly run straight.

In fact, a common pattern for with-macros (of which doTexture would be an example) is the call-with pattern. An example from some random project of mine:

  (defmacro with-logging-conditions (&body forms)
    "Set up a passthrough condition handler that will log all signalled conditions."
    `(call-with-logging-conditions (lambda () ,@forms)))
Which is then used like:

  (with-logging-conditions
    (blah blah)
    (main game code))
All the macro does, upon expansion, is package the code block into a lambda and pass it to a function, call-with-logging-conditions, that does the actual work. So it's like your example, except I don't have to write the lambda myself. This is a trivial case; commonly, macros might accept additional arguments that they process, but eventually they'd still wrap their input body argument in a lambda and expand to a function call with said lambda as an argument.
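
For completeness, the function half of the pattern might look something like the sketch below (LOG-CONDITION stands in for whatever logging is actually done there):

  (defun call-with-logging-conditions (thunk)
    ;; handlers established by HANDLER-BIND that return normally simply
    ;; decline, so each condition is logged and then keeps propagating
    (handler-bind ((condition (lambda (c) (log-condition c))))
      (funcall thunk)))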

A better use of macros, which you can't replicate in JavaScript[0], would be if you wanted to do something like:

  (do-texture texture
    o O r R x2)
And have it expand - at compile time - to:

  ;; unwind-protect is Lisp's sorta-equivalent of try/finally in other languages.
  (unwind-protect
    (progn
      (begin-texture-mode texture)
      (draw-circle)
      (draw-circle :big)
      (draw-rectangle)
      (draw-rectangle :big)
      (draw-rectangle :big))
    (end-texture-mode))
However silly this looks, this kind of code generation is (one of the main reasons) why you need macros.

--

[0] - Well, you can if you have a toolchain. Babel is essentially a macro engine for JavaScript, but you can only use it at build time.


> (do-texture texture o O r R x2)

This might be another reason why I find macros less appealing: macros introduce DSLs in the form of normal s-expressions, but they don't actually behave like functions; macros introduce their own mini-language/syntax.

In the last example you provided, I bet the macro implementation would look like a little interpreter? If that's the case, having a function call like

  doTexture(myTexture, ['op1', 'op0', 'opR'])
and letting doTexture handle each case (op) might be able to achieve the same behavior, right?

I'm not trying to argue that macros are unnecessary; I really want to like them! Just, most of the time, I find functions sufficient.


Remember JavaScript before "async"? Lots of boilerplate there. You had to wait until some committee decided to incorporate it into the language.

So "modern" languages without macros have such patches every now and then. Still, it's easy to find boilerplate in programs.

The reason is that a lot of boilerplate is specific to the program's domain, and you're left with cumbersome syntactic patterns and no tools to abstract them.

So yes, you need to learn how to write macros. But, assuming good taste, the simplicity is in their use, not their definition. It's a good tradeoff because macros are used much more often than they are defined.

By the way, the language introduced by a macro often allows arbitrary Lisp code to be mixed with it. The example did not demonstrate this, and that's why you had an easy time thinking up a non-macro alternative (which still has some unfortunate implications, like needing to interpret at runtime).


I've picked a really silly example of a DSL just to illustrate the point in a few lines, but I feel the silliness is obscuring what I wanted to communicate. Next time I'll try to come up with something more useful.

> macros introduce DSLs in the form of normal s-expressions, but they don't actually behave like functions; macros introduce their own mini-language/syntax

That's the point. S-expressions are a structural notation; the semantics of code expressed as s-expressions is something else. There's the "default" semantics (as provided by #'eval), but macros allow you to work on the s-expressions as data structures. Ultimately, the macro expansion is still evaluated normally, but the expansion might be wildly different from the macro invocation.

This is a feature, not a bug. It gives you the power to add new abstractions to the language, as if they were part of that language in the first place. It's not something you need often, but there are things you can't do any other way. For example, since we're talking JavaScript - think of JSX. In JavaScript, it's a mind-bending innovation, though it commits you to using a big code-generation tool (Babel). In Lisp, you can do half of it with a macro[0].

A common use of macros is removing conceptual repetition in code. Imagine you have a concept in your codebase - say, a plugin. Creating a plugin involves defining a class extending a common base class, defining a bunch of methods that are identical in 90% of the cases, and a bunch of free functions. Conceptually, that whole ensemble is "a plugin". Lisp macros let you define that concept in code, and reduce your plugin definitions to just:

  (define-plugin some-name
    :some-specific-method (lambda () ...))
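A hypothetical sketch of what such a DEFINE-PLUGIN could expand into (PLUGIN-BASE, PLUGIN-NAME, SOME-SPECIFIC-METHOD and REGISTER-PLUGIN are all invented names for illustration):

  (defmacro define-plugin (name &key (some-specific-method '(lambda () nil)))
    `(progn
       ;; the boilerplate every plugin repeats
       (defclass ,name (plugin-base) ())
       (defmethod plugin-name ((plugin ,name)) ',name)
       ;; the one part that actually varies per plugin
       (defmethod some-specific-method ((plugin ,name))
         (funcall ,some-specific-method))
       ;; the "free function" side of the ensemble
       (register-plugin ',name)))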
> In the last example you provided, I bet the macro implementation would look like a little interpreter?

Such macros are more like compilers. Interpreters execute the code they read; compilers - like those macros - emit different code instead.
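
For instance, the silly DO-TEXTURE above could be written roughly like the sketch below. It walks the ops at macroexpansion time and just emits ordinary code; the op names and drawing functions are the assumptions from the earlier example (and with the standard readtable `o` and `O` read as the same symbol, so a real version would pick distinct op names):

  (defmacro do-texture (texture &body ops)
    (let ((forms '()))
      (dolist (op ops)
        (push (cond ((string= op "o") '(draw-circle))
                    ((string= op "O") '(draw-circle :big))
                    ((string= op "r") '(draw-rectangle))
                    ((string= op "R") '(draw-rectangle :big))
                    ((string-equal op "x2")
                     (or (first forms) (error "x2 with nothing before it")))
                    (t (error "Unknown op: ~S" op)))
              forms))
      `(unwind-protect
           (progn
             (begin-texture-mode ,texture)
             ,@(reverse forms))
         (end-texture-mode))))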

> might be able to achieve the same behavior, right?

Yes, except the macro does that at expansion time - i.e. ahead of execution. In practice, this is almost always "compilation time".

> Just, most of the time, I find functions sufficient.

Because they are! Even in Lisp, macros are not your default tool for solving problems. Functions are. Macros come out when the best way to do something involves code that writes code for you.

--

[0] - And the other half with a reader macro. Regular macros transform trees before they're evaluated. Reader macros alter the way text is deserialized into trees. Reader macros are very rarely used, because they're a bit hard to keep contained and tend to screw with editors, but if you really want to create a different syntax for your code, they're there for you.
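
For the curious, the classic toy reader-macro example is teaching the reader a bracket syntax, so that [1 2 (+ 1 2)] reads as (list 1 2 (+ 1 2)):

  (set-macro-character #\[
    (lambda (stream char)
      (declare (ignore char))
      (cons 'list (read-delimited-list #\] stream t))))

  ;; make #\] terminate tokens the same way #\) does
  (set-macro-character #\] (get-macro-character #\)))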


I debated mentioning this explicitly, but decided it was just noise next to the rest of the post. But you should note that, while that macro can be easily replaced with lambdas without much change in ergonomics, there are lots of much more interesting things you can do with macros that do not have equivalent substitutes in a language like JavaScript. (e.g. JSX exists as its own weird pre-processor thing, but you could theoretically just do it with macros instead, and then it could compose with other syntax extensions.)

> we get the benefit of one thing less to learn

But learning programming language theory is the best part :)


Didn't recognize you're the OP! So my comment may not be that off-topic then :D

> But learning programming language theory is the best part :)

That's very true :)


An older Macro, who also had his problems. https://en.wikipedia.org/wiki/Naevius_Sutorius_Macro


"So we’re supposed to be writing a game, right? But in order to make progress, we have to fix a bug. And in order to fix the bug, we have to write a test. And in order to write a test, we have to write a test framework. And in order to write a test framework, we have to understand a thing or two about macros."

You could just fix the bug?

Game development is a different architecture from 'regular' system development (especially if it is a solo project).

If you are testing to ensure the bug isn't reintroduced, then you haven't fixed the bug.


> If you are testing to ensure the bug isn't reintroduced then you haven't fixed the bug.

It’s called regression testing, and it’s a (fairly) common thing.


I know what it is - I don't think it's relevant in this project.



