>as a project grows and grows you end up inventing your own little dialect of the language which is opaque to any 3rd party reading your code unless they take the time to unravel your macros.
This is bad use of macros, or an ugly macro system.
Macros, at least in Lisp, make code even clearer to understand, because they let you create constructs that map the problem domain more directly and naturally onto the programming language.
So they reduce line count, and they reduce the need for kludges or workarounds. They allow more direct code.
But this is within the Land of Lisp, where writing a macro isn't "advanced", "complex", or "esoteric". In the Lisp world, writing a macro is 95% similar to writing a run-of-the-mill function.
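To make that concrete, here is a minimal sketch (with hypothetical names) of how close the two forms are: a macro body is ordinary Lisp code that happens to receive and return code instead of values.

```lisp
;; An ordinary function: receives values, returns a value.
(defun add-twice (x y)
  (+ x y y))

;; A macro: receives unevaluated forms, returns a new form.
;; MY-UNLESS is a hypothetical reimplementation of the standard UNLESS.
(defmacro my-unless (test &body body)
  `(if ,test nil (progn ,@body)))

;; (my-unless (> 1 2) :ran) expands to (IF (> 1 2) NIL (PROGN :RAN))
```

Both definitions are plain Lisp: the macro just uses backquote to assemble the form it returns.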
No true Scotsman would ever write macros in such a confusing manner!
But really, this is a recognized problem of Lisp, and has been called the Lisp Curse. [0] One is never programming in "just Lisp", but rather in Lisp plus some half-baked DSL haphazardly created by whoever wrote the program in the first place.
Also, don't confuse readability with understanding. Yes, DSLs are typically easier to read, but only after you come to understand the primitives of the language. When every program has its own DSL with its own primitives, even programs that do similar things... That becomes quite a burden.
> Also, don't confuse readability with understanding. Yes, DSLs are typically easier to read, but only after you come to understand the primitives of the language.
This is also true of functions. Without reading the body, you don't know if it's just going to return the sum of the two integers you passed to it, or if it's going to change some global variable, launch the missiles, and then return a random int.
Yes, macros are more powerful, and therefore you need to be more careful with them. But they are still much better than what ends up being used instead. With languages that don't have macros, you end up with complex frameworks that use runtime reflection, or code generators that run as part of the build system (which end up being an ad-hoc, messy macro system).
Or, some horrible solution where you embed a DSL by interpreting trees of objects, which effectively represent an AST. In this case, the embedded language doesn't follow the language rules, but it seems like it does, because you're looking at the implementation of the interpreter, instead of at the syntax of the embedded language.
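A minimal sketch (hypothetical names) of the pattern being criticized: the "program" is a tree of data that an ad-hoc interpreter walks at runtime, so the embedded language only appears to follow the host language's rules.

```lisp
;; A tiny arithmetic "DSL" encoded as nested lists -- effectively an AST.
;; EVAL-NODE is the hand-rolled interpreter that gives the tree its meaning;
;; none of the host language's evaluation rules apply inside the tree.
(defun eval-node (node)
  (if (numberp node)
      node
      (destructuring-bind (op a b) node
        (ecase op
          (:add (+ (eval-node a) (eval-node b)))
          (:mul (* (eval-node a) (eval-node b)))))))

(eval-node '(:add 1 (:mul 2 3))) ; => 7
```

To understand what `(:add 1 (:mul 2 3))` means, you have to read `eval-node`, which is exactly the complaint above.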
If I understand "the Lisp curse" correctly, the claim is that Lisp often winds up with a "half-baked DSL" because making DSLs in Lisp is so easy. You can do it without putting very much thought into it, so it's easy for the original author to just slap something together.
Note well: This is my understanding of the claim. I take no position on whether it is true.
Why does it need to be "half-baked"? Why do you assume that writing a good DSL is impossible for most Lisp users? Are you sure it's actually the case?
Writing and maintaining a good DSL is like writing and maintaining bug-free code. You always start with the best of intentions, but human fallibility and entropy are always pulling you in the wrong direction.
This doesn't mean that the attempt is not worthwhile. But it does mean that you should expect eventual failure.
As other commenters have replied: It's half-baked the same way Java tends towards half-baked enterprise design lasagna and/or design pattern bingo. It's easy to do, and most won't question it.
Also, yes, I would propose that a good DSL would be difficult to write for most programmers of any type. Not because of any inherent deficiency in the programmer, but rather because we tend not to spend enough time in a single domain to understand it well enough to write a good DSL.
>Yes, DSLs are typically easier to read, but only after you come to understand the primitives of the language.
Quoting user quotemstr here:
"Every program is a DSL. What do you think you're doing when you define types and functions except make a little DSL of your own? (...) Programming, in large part, is an exercise in language creation."
That's just wrong. A DSL defines a new language syntax. Most programs don't do that. You might have to learn what functions do but you don't need to learn an entirely new language when you read a Go program for example.
As IshKabab hints at, there's a huge difference between defining a domain within an existing language and defining a domain-specific language. Yes, types and functions do define a domain, by detailing the data compositions and operations available. But those data and operations work within the confines of the existing language.
Isn't loop essentially a macro (probably a special operator) that a lot of people hate, specifically because it is a sprawling DSL? Because Lisp has an expressive macro system, it can lead people to that. The existence of macros requires an at least above-average dev community: it's one thing when a decently designed but controversial DSL like loop ships with the language; it's another when every project can roll its own poorly considered and poorly implemented DSLs using macros, when simpler abstractions would have sufficed.
Most lisp docs will tell you to use macros only when necessary, because as great as they are, they have inherent issues that aren't fixed just by having a good macro system.
>This is bad use of macros, or an ugly macro system.
>Most lisp docs will tell you to use macros only when necessary
One also declares variables when necessary, and one also creates arrays when necessary, etc. But imagine a programming language that doesn't support arrays. It would be a nightmare if you needed to do certain scientific computations.
So in the same way, yes, not having a (proper, Lisp-like) macro system surely hurts a lot, once you realize how it makes certain problems become really easy.
And, by the way, one should write macros when necessary. In Lisp, we're using macros most of the time!
>Isn't loop essentially a macro (probably special op) that a lot of people hate?
And other Lisp programmers like the LOOP macro, since it allows very readable and concise code for doing something simple that should stay simple to read.
Example:

    (loop for x from 1
          for y = (* x 10)
          while (< y 100)
          do (print (* x 5))
          collect y)
There are some legitimate reasons to dislike loop. It's a high level construct, yet it has unspecified behaviors. A program can be nonportable on account of some manner of using loop. It's been the case in the past that Lisp applications ended up carrying their own private loop implementation which would behave the same way everywhere.
It certainly does not look like it is one way or the other; it is something you have to know. If you don't know anything about loop, but have a belief that it is doing a Cartesian product, or a belief that it is not doing one, your belief has no rational basis either way. You can infer which one is right from the output.
loop doesn't do cross-producting; if you know that, there is no mistaking it. All the clauses specify iteration elements for one single loop.
"for x from 1" means starting at 1, in increments of 1.
"for y = expr" means that expr is evaluated on each iteration, and y takes on that value.
y could be incremented on its own, but then that example wouldn't show the "for var = expr" syntax, how one variable can depend on a combination of others.
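A quick way to see that multiple clauses drive one single loop rather than a Cartesian product: two `for ... in` clauses step together on each iteration and stop with the shorter list.

```lisp
;; Both FOR clauses advance in lockstep within the same loop:
(loop for a in '(1 2 3)
      for b in '(10 20 30 40)
      collect (+ a b))
;; => (11 22 33) -- three results, not 3 x 4 combinations
```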
The LOOP macro has actually seen a lot of work in its design and implementation, including a syntax spec. Some people hate it not because it is a DSL, but because it is a bit different from a regular Lisp macro, in that it requires more parsing and its clauses are not grouped by parentheses/s-expressions. Plus, understanding the relationship of the clauses is not really that simple, since there is some implicit grouping and there are dependencies.
This is what a LOOP macro looks like in code:
    (loop for i from 0 below n
          for v across vec
          when (evenp v)
          collect (* i v))
This is what a typical Lisp programmer would prefer:
    (loop ((for i :from 0 :below n)
           (for v :across vec))
      (when (evenp v)
        (collect (* i v))))
The clauses would be grouped in a list and each clause would be a list. The body would then use the usual Lisp syntax and the WHEN and COLLECT features would look similar to normal Lisp macros.
The LOOP macro historically comes from Interlisp (1970s), where it was part of a language design trend called 'conversational programming'. The idea was to have more natural-language-like programming constructs, combined with tools like spell checkers and automatic syntax repair (Do What I Mean, DWIM). From there this idea and the FOR macro influenced the LOOP macro for Maclisp. The LOOP macro grew over time in capabilities and was then carried over to later Lisp dialects, like Common Lisp.
There are actually Lisp macros which are even more complicated to implement and even more powerful, but which create less resistance, since they are a bit better integrated in the usual Lisp language syntax. An example is the ITERATE macro: https://common-lisp.net/project/iterate/
Thus it is not the complexity or the functionality of the macro itself, but a particular style of macro and its implementation. I personally also prefer something like ITERATE, but LOOP works fine for me, too.
The advantage of something like ITERATE or even LOOP is that they live mostly at the developer level, not only at the implementor level. A developer or group of developers can develop such a complex macro and integrate it seamlessly into the language, making the language more powerful and allowing us to reuse much of our knowledge of, and infrastructure for, the language.
Implementing and designing something like ITERATE or LOOP requires above-average dev capabilities. Macros generally require some of that, since they make it necessary for the dev to use or program on the syntactic meta-level. That's where language constructs are reused, implemented, and integrated.
Lisp docs will tell you to use macros only when necessary? They tell you to WRITE macros only when necessary. Since many Lisp dialects have a lot of macros, you have to use them anyway. Most of the top-level definition constructs are already macros. If we use DEFUN to define a function, we are already using macros.
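This is easy to verify: MACROEXPAND-1 returns a second value that is true when its argument was a macro form, and a DEFUN form expands (the expansion itself is implementation-specific, so it isn't shown here).

```lisp
;; The second return value of MACROEXPAND-1 tells us whether an
;; expansion took place -- i.e., whether DEFUN is itself a macro.
(nth-value 1 (macroexpand-1 '(defun foo (x) (* x x))))
;; => true in a conforming Common Lisp
```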
In my experience actual Common Lisp uses a lot of macros. I also tend to write a fair number of macros.
But generally good macro programming style is slightly underdocumented, especially when we think of various qualities: robustness, good syntax, usefulness, readability, avoiding the obvious macro bugs, ...
Macros are very useful and I use them a lot, but at the same time one needs to put a bit more care/discipline into them and some help of the development environment is useful...
I wonder if in a parallel world we would have a library system similar to the one used in Node.js with npm, and many installable libraries would consist of a single macro.
>Macros, at least in Lisp, make code even clearer to understand, because they let you create constructs that map the problem domain more directly and naturally onto the programming language.
Macros are a tool for creating abstractions. In the mind of the author, abstractions are always clearer. Others who have to work with those abstractions legitimately may or may not agree.
In the case of authors who think that abstractions are an unmitigated good, I seldom agree with their abstractions. And I don't care whether they are implemented as deep object hierarchies, macros, or functions with a lot of black magic. If you are unaware of the cognitive load for others that is inherent in your abstractions, then you are unlikely to find a good tradeoff between conciseness and how much of your mental state the maintainer has to understand to follow along.
The best macros are the macros you don't even know exist. Take a look around [0], where things like if, def, the pipeline operator (|>), and common "base" entities are defined simply as constructions of the Kernel special forms (the parts that won't be expanded by macros). Using the language, you'd never know they were macros, because they're in the language of the problem and do not leak their abstractions. A macro is just a tool and, like other tools, should be used when it's the right one for the job, not because you contorted the problem to fit the tool.
>So they reduce line count, and they reduce the need for kludges or workarounds. They allow more direct code.
>But this is within the Land of Lisp, where writing a macro isn't "advanced", "complex", or "esoteric". In the Lisp world, writing a macro is 95% similar to writing a run-of-the-mill function.
I agree with you but I think there are two aspects to code readability: A/ is it clear what the code does (the intent) and B/ is it clear how it does it (the implementation).
I think macros can help massively with A (hiding redundant code and clunky constructs) at the cost of obfuscating B. The thing is that if you want to hack into the code at some point you'll need to understand B too.
To take a very simplistic example, imagine that you're reading some CL code and see something like:

    (let ((var 42))
      (magicbox var (format t "side effect~%"))
      (format t "~a~%" var))
Now for some reason you need to figure out what this does. Maybe instead of the 2nd "format" call there's a function you're currently working on and you want to know how it's called.
So, without running the code or looking at what "magicbox" does, if you assume that it's a function call, then you can expect that this code will print "side effect" when magicbox is called (since the parameter will be evaluated before the call) and then whenever it returns the 2nd format will display "42" since var is not modified between the let and there.
You run your code and you see that it only displays "magic" instead. Huh, what happened?
Well there you're lucky because there's only a single statement to consider, the magicbox invocation. You ask emacs to find the definition and you see:
    (defmacro magicbox (var unused)
      (list 'setq var "magic"))
Now you might say that it's a contrived example and a bad use of macros, and you'd be right, but I'm sure you could find real-world examples of macros with similar side effects that are well justified. Maybe it makes the code a whole lot nicer and more maintainable. It still doesn't change the fact that it makes it harder to figure out exactly what's going on. If this magicbox call were in the middle of dozens of statements, it might take you a while to figure out why some variable seems to magically change its value.
> If this magicbox call were in the middle of dozens of statements, it might take you a while to figure out why some variable seems to magically change its value.
Not really. Since the call explicitly names var you could easily narrow it down to that by going through all the places in the code where var is mentioned. If all the other expressions concerning var are non-destructive, you have your smoking gun: it must be magicbox. If var is mutated all over the place in that code then you have to work out whether it is magicbox or something else.
The thing that's nice about macros is that they happen at compile time. If you want to know what a macro is doing, just ask your editor or language implementation to expand the macro invocation. Then you will see exactly what code is being generated.
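Using the thread's magicbox as the example, the expansion is one editor command (or one REPL form) away:

```lisp
;; The same contrived macro from upthread:
(defmacro magicbox (var unused)
  (declare (ignore unused))
  (list 'setq var "magic"))

;; Expanding the mysterious call reveals exactly the generated code;
;; note the "side effect" argument vanishes -- it was never evaluated.
(macroexpand-1 '(magicbox var (format t "side effect~%")))
;; => (SETQ VAR "magic")
```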