Lisp: it's not about macros, it's about READ (2012) (jlongster.com)
154 points by lisper on July 17, 2016 | 99 comments



It's all about the read. But a different one. The human ability to read code.

You can write very dense code in Lisp (and the author makes the point several times about how few lines of code you need for this vs. that). However, who's going to read that and understand it?

Especially problematic with code that looks like a regular s-expression but has completely different semantics thanks to macros.

Lisp is great if you want to write code. Reading it... your mileage may vary.


> completely different semantics thanks to macros.

Most of the time, you use functions. Sometimes, you need to define "WITH-X" macros or "DO-Y" iterators, or a custom "DEFINE-Z" to shorten declarations (I tend to use macrolet for that).

    (defun rgb (r g b) (format nil "#~@{~2,'0x~}" r g b))

    (defparameter *background* (rgb 10 10 30))
    (defparameter *foreground* (rgb 90 90 255))

    (with-html-output-to-string (s)
      (:html
       (:head
        (:title "Hello")
        (:style :media "screen"
                :type "text/css"
                (str (cl-css:css
                      `((:body :background-color ,*background*
                               :color ,*foreground*)))))
        (:script :type "text/javascript"
                 (str
                  (ps:ps  
                    (setf (ps:@ window onload)
                          (lambda () (alert "Message")))))))

       (:body
        (:h1 "Heading"))))

The above is an example of what I consider a heavy usage of macros, and yet somehow I find it readable.

(the output is here: http://pastebin.com/raw/S5cb5ufC)


For me the #1 rule for macros is to never, ever use one if you can use a function instead. With this in mind, surely you could just write

    (html
      (head
        (title "Hello")
        (style :media "screen" :type "text/css" ........)))
What is the value of a macro in this context?


The macro is documented here: http://weitz.de/cl-who/#syntax

In particular, each list beginning with a keyword is transformed into an (X)HTML tag. It is an easy way to handle all current and future HTML elements without defining a function for each element and its attributes. You might be thinking "we could generate invalid HTML", and yes, that is true. Other libraries exist to generate HTML, but this one is quite popular.
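
For instance, a tag the library has never been taught about works out of the box. A small sketch using the same API as the example above (the exact quoting in the output depends on cl-who's settings):

    (with-html-output-to-string (s)
      (:main :role "main"          ; :MAIN needs no special support
        (:p "Hello")))
    ;; => "<main role='main'><p>Hello</p></main>"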

The other interesting thing is that by processing the tree at compile time, you can make some optimizations. For example, the following form:

    (:html (:body (:h1 "Hi")))
... is expanded as:

    (WRITE-STRING "<html><body><h1>Hi</h1></body></html>" *STANDARD-OUTPUT*)
... because it does not depend on external data. However, if instead of a constant string, you place (str (something)), you have instead:

    (WRITE-STRING "<html><body><h1>" *STANDARD-OUTPUT*)
    (LET ((CL-WHO::*INDENT* NIL))
      NIL
      (STR (SOMETHING)))
    (WRITE-STRING "</h1></body></html>" *STANDARD-OUTPUT*)


I think the idea is that the macro expands into an efficient template. Instead of having to always build the same constant strings for every request, you can expand into a bunch of WRITE-STRINGs, with the dynamic parts in between.


A slightly more interesting bit of JavaScript, in place of the above:

    (alert
      (list (ps:lisp *foreground*)
            (ps:lisp *background*)))


Making proclamations based on assumptions and projections without any practical experience is foolish, and I feel that is exactly what your post illustrates. You assume Lisp macros make Lisp code hard to read because you are projecting the experience of traditional IDEs/editors onto Lisp code, but this is not what takes place in reality.

Common Lisp (the language and its environments) is interactive and has extensive documenting, cross-referencing, macro-expanding, tracing and debugging capabilities. In fact, Symbolics Genera is still the state of the art in this domain and absolutely nothing today comes close to it.

In today's tech, Common Lisp and Smalltalk (e.g. Pharo) environments (note that I don't use the words IDEs or editors) are the kings of the hill when it comes to rapidly understanding huge codebases, since they are image-based and built for that exact purpose. When working with code in an image-based language such as CL or Smalltalk, you're working with a higher-level (and thus easier to understand and shape) representation of that code, similar to how someone works with clay. It's completely unlike working with source as text, edit/compile/test/rerun cycles, and so on. Even modern languages that have REPLs, like Python, Ruby, and JavaScript, completely miss this point, since they're not really interactive languages and thus the REPLs they have are gimmicks.


> You can write very dense code in Lisp (and the author makes the point several times about how few lines of code you need for this vs. that). However, who's going to read that and understand it?

Lisp macros do not depend on whole-program/file/module compilation. You can take any piece of Lisp code and macroexpand it at any time (for example, SLIME has shortcut keys to show the macroexpansion of the expression at the cursor: https://www.common-lisp.net/project/slime/doc/html/Macro_002...)
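
SLIME's commands call the standard expansion functions, so the same thing is one form at the REPL. A sketch (READY-P and LAUNCH are made-up names, and the exact expansion of WHEN varies by implementation):

    CL-USER> (macroexpand-1 '(when (ready-p) (launch)))
    (IF (READY-P) (PROGN (LAUNCH)))
    T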

This makes Lisp code much easier to understand than C++ with templates and operator overloading. Almost every time a Lisp discussion comes up on HN, or anywhere else, someone takes it upon themselves to troll "Lisp is hard to read!" (today, that winner is you). But for some reason that never seems to happen in discussions about C++.


Yeah, it really depends. Powerful language features let you write code that is extremely clear since you can shape the language to your domain. It's not just for making obfuscated abstractions or code golf.

Since this topic comes up a lot, I'm often curious about what experiences people have with reading and writing Lisp code... My own experiences tell me that almost all code, in all languages, is difficult to read and understand—and I often long for Lisp-like features to help me make programs more clear.


> You can write very dense code in Lisp (and the author makes the point several times about how few lines of code you need for this vs. that). However, who's going to read that and understand it?

> Lisp is great if you want to write code. Reading it...

When I read that, APL and Haskell came to mind. Lisp and Scheme code is pretty readable. The most readable languages are the verbose ones, Ada and Pascal for instance.


> So what are macros? All they are is read packaged up nicely into formal system.

Macroexpansion happens during a different phase altogether, and would work exactly the same even if programmers had no access to the reader at all. The reader transforms an unstructured character stream into structured data, while macroexpansion transforms already-structured data into different structured data. (Read-macros are another story, but it's clear they're not what the author had in mind.)

Too often these kinds of posts only add to the confusion they're attempting to clear up.


tl;dr: it's not about macros, it's about homoiconicity.


Surprised the OP didn't refer to it by name. We can handle big words.



How is this any different from `eval`, except that lisp's syntax is particularly well suited to being `eval`'d by virtue of being so easy to parse?


Eval executes the code, while read only parses it. (Incidentally, in Lisp, to get the same effect as eval in JS/Perl/whatever, you pass the string to read-from-string and then the result of that to the actual eval.)
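
In code, the distinction looks like this:

    (read-from-string "(+ 1 2)")         ; => (+ 1 2) and 7 -- just parsing
    (eval (read-from-string "(+ 1 2)"))  ; => 3 -- the JS/Perl eval equivalent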


As mentioned, `eval` executes code. While macros are generally about manipulating code for eventual evaluation, `read` enables more than that, for instance static analysis.

The distinction is also explicit in the term "REPL" (read eval print loop), though many REPLs just pass the input to an interpreter.
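
A sketch of the static-analysis point, using only standard CL (a toy: real analysis would also bind *READ-EVAL*, the package, etc.):

    ;; Count the top-level DEFUNs in a file without running any of it.
    (defun count-defuns (path)
      (with-open-file (in path)
        (loop for form = (read in nil nil)
              while form
              count (and (consp form) (eq (first form) 'defun)))))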


I think the main benefit comes from being able to build tools to transform, refactor, etc. your code in very little effort.


Yes, it raises the abstraction level so that 'meta' programming becomes the normal level. Brown University had a chapter in their old PLAI book saying that they deliberately skipped parsing because it's of no interest, which is why they used Scheme as the basis for their courses.


The difference is that in most languages eval() parses a character string as code and evaluates it, whereas in Lisp you evaluate language objects (e.g. lists of symbols) rather than strings.


This article helps explain the how, but doesn't really cover the why. Coming from a functional programming perspective (Ruby, Elixir, Clojure [yes I know]), I don't see where macros become useful or anything more than a curiosity outside of a couple situations:

1. Implementing a DSL, which is its own beast and is more often a bad idea than not; or

2. Actually, I can't think of anything else.

Granted, I might be spoiled by modern conveniences already implemented in Clojure, such as cond. It's just hard to see what macros can give me that I can't accomplish with functions. If code is data, then aren't functions, which operate on data, already what we're looking for?


"Implementing a DSL, which is its own beast and is more often a bad idea than not"

Those using languages supporting that disagree. Let's assume the DSL is syntax embedded in a language like LISP or Haskell, so you don't get stuck in the DSL. Essentially, there's not much difference between a DSL and a library in terms of what's required to use it. You still learn what to type to achieve a certain effect. You can still use the rest of the language. The main difference is that a DSL both (a) matches the problem domain more closely and (b) limits you to expressing just what you need to solve the problem.

These are why the BASIC-oriented 4GLs were quite successful. LISP, too, given it's also easy to process. REBOL and, more recently, Red take it further. Kay et al. are working wonders where a whole system is expressed in readable code that's a tiny fraction of the code in a non-DSL system.

The combo of an easy-to-modify language and a DSL toolkit (especially Racket or Red) also leads to methodologies like this one from sklogic, where it's easy to build components with a series of DSLs that are thrown together and many pieces autogenerated:

"The method is very simple:

* describe the problem in plain English (maybe with some diagrams)

* iterate it a few times until you have a syntax you think is unambiguous enough

* strip this syntax from all the sugar you just introduced in order to define an AST

* find DSLs in your toolbox that are potentially close to the one you're building and cherry-pick the necessary language components from them

* write a sequence of very simple transforms that would lower your source AST into a combination of the parts of the ASTs of the target DSLs of your choice

* Done. An efficient DSL compiler is ready, with a language designed as closely to your current view of the problem domain as possible.

* If you later find that your DSL is inadequate and your understanding of the domain was insufficient, then just start over again; this entire process is so cheap and simple that it does not really matter."

It also helps with verification. You can define DSLs for specific types of OS components or application software that make correct-by-construction easier. CertiKOS is doing that:

http://flint.cs.yale.edu/certikos/publications/ctos.pdf

http://flint.cs.yale.edu/certikos/certikos.html


Fantastic article, but I still don't quite understand what kind of real-world tasks on typical projects become easier with AST manipulation. After all, how often do you write a debugger?


> Fantastic article, but I still don't quite understand what kind of real-world tasks on typical projects become easier with AST manipulation. After all, how often do you write a debugger?

A good example comes from Python: decorators had to be added as a special extension in 2003 (https://www.python.org/dev/peps/pep-0318/) and the `with` statement in 2005 (https://www.python.org/dev/peps/pep-0343/). Both are very useful for web programming, but do not need PEP committees and years of work when you have macros (examples: https://github.com/vsedach/cliki2/blob/master/src/accounts.l... https://github.com/vsedach/cliki2/blob/master/src/wiki.lisp#...)
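
For comparison, a minimal sketch of a Python-with-style construct as a macro; WITH-RESOURCE and RELEASE-RESOURCE are made-up names, not any library's API:

    (defmacro with-resource ((var init) &body body)
      `(let ((,var ,init))
         (unwind-protect
              (progn ,@body)
           (release-resource ,var))))   ; hypothetical cleanup function

    ;; (with-resource (db (connect-db))
    ;;   (query db ...))   ; DB is released even if QUERY signals an error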


Theoretically every project is easier with AST manipulation, since you can create a high-level DSL that maps directly to your problem.

In practice, I think good DSLs are hard to make, and so most projects never get the full benefit. I don't have any production Lisp experience though (I'm a hobbyist Lisper), so I could definitely be wrong there.


It always struck me as odd that many programming languages have standard printers that will output any data structure expressible in the language, but they don't have accompanying readers; this forces one to roll one's own or use a completely separate serialization technique. Having a reader is useful for day-to-day tasks even outside a homoiconic language.
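
In Lisp the round trip is a given (for readably printable data):

    (let ((data '(1 2.5 "three" (:a . :b))))
      (equal data (read-from-string (prin1-to-string data))))
    ;; => T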


Okay, I'll put something heretical out there:

Lisp just isn't that much better than our current languages. Sorry.

Sure, when Lisp came out it was "advanced". Garbage collection, AST parsing and manipulation, macros, packages, a standard library, modules, homoiconic data structures, etc.

However, the good features of Lisp are now normal features of any modern language. And some of those features have gone into the dustbin for good reasons. And some of them (static vs dynamic typing) are personal taste.

I remember using Lisp in 1984. It was completely eye-opening relative to the BASIC, assembly, and C I was using up to that point. However, it didn't run well on small machines that we plebians were using back then. So, we plebians moved back to assembly, C and possibly Pascal.

Lisp STILL has the same problem on small processors. The embedded world is in desperate need of a good language for rapid development, and yet Rust seems to be the only new contender? No offense to Rust, but is that the best we can do?

Where is a good dynamic language that is always online and runs in <8K of flash/RAM?


> However, the good features of Lisp are now normal features of any modern language. And some of those features have gone into the dustbin for good reasons. And some of them (static vs dynamic typing) are personal taste.

I definitely agree that with each passing year, languages that are not recognizable as being related to Lisp gain more of the features that made Lisp special in the past.

Features I regularly use that are good but aren't "normal features of any modern language" (some are implementation details, but common to most CL implementations):

1. CLOS

2. Homoiconicity

3. A good interactive debugger with incremental compilation

4. Generalized references

5. An equivalent to reader macros

In addition, while some form of AST parsing and manipulation may exist in many modern languages, it is typically much less accessible than in LISP. I think this is largely because of the lack of homoiconicity, but also partly a culture issue.

There are probably other features that I don't use that others make good use of as well.

As a last comment, it baffles me that #3 is extraordinarily rare outside of the Lisp, Smalltalk, and Forth families, yet is regularly mentioned as a huge productivity boost by the programmers who use it.

There's actually no reason I can think of that non-pure statically typed languages couldn't have such tooling either, with the restriction that global variables (including functions) never change type.


Some more stuff:

1. The condition system including restarts to recover from errors.

2. A uniform syntax that can be fluently manipulated with paredit-style modes.

3. VM image saving/loading.

4. Runtime access to the compiler and very complete reflection.

5. A system-definition facility (ASDF) that lets you dynamically recompile and reload an entire system with all its dependencies just by calling a function (see the one-liner after this list).

6. First-class namespaces.

7. Well-defined multiple dispatch.

8. Metaobject protocol.

9. Language specification with conceptual clarity.

10. High-quality optimizing compilation despite extreme dynamicity.

I think the list could go on for a while...
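
As a one-liner for item 5 (the system name here is hypothetical):

    (asdf:load-system :my-app :force t)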


I think it's still debatable whether VM image saving/loading is good, assuming it's the primary way of generating a binary image, as it is in most Lisps.

A lot more work has to go into guaranteeing reproducible builds, and it tends to be a fairly non-compact representation of your executable. If CL were a little bit less customizable at load-time, that would allow for much faster FASL loading, which in turn would make the reliance on saved-images less important.

Obviously there are uses for that feature, and having a feature doesn't by itself make a language worse, but when that feature is a square peg that is still the best fit for a particular round hole, it causes problems.


I found it pretty easy to generate images in a reproducible way: just load your packages, run whatever setup you want, then save the image. Compactness hasn't been a problem for me but I suppose it could be. Admittedly I don't have much real experience with deploying Lisp applications in any serious way; I'm curious about what problems people run into.


Have you read any of the XCVB papers? They dig deeper into the issues with generating reproducible images, in particular with parallel builds.


An interesting case for #3: something similar exists in the Ruby world (https://www.youtube.com/watch?v=4hfMUP5iTq8), but I don't think it's gaining wide adoption; there are certainly people using it, but they aren't the majority.

A more extreme case lies in the Lua/OpenResty world: they used to have an interactive debugger, but since so few people were using it, the debugger just got abandoned. Everyone is accustomed to print-based debugging.

BTW: I actually agree that #3 can be quite useful; I use pry quite extensively when I'm working on Ruby projects. But I guess different communities just have different preferences.


I find with good unit tests a debugger is just not necessary. I haven't used a debugger in anger for many years now. In fact, if I run into a problem where I feel that I need a debugger, I refactor the code so that I don't need a debugger any more. In doing that, I usually find my problem ;-)

But you are right. Different strokes for different folks.


Honestly, I think unit tests and interactive debuggers serve different purposes: unit tests are used more to guard existing code that has been written, while a debugger can be more useful when tinkering with new libraries before writing anything production-ready.

This is especially true for Ruby (and maybe for Clojure?), since in many cases documentation for libraries is not so great, so you really have to play with the library a bit to get the results you want.


I know a lot of people in Lisp and Smalltalk communities that use unit tests still use the interactive debugging environment. In fact, many of them write their unit tests in the REPL, and then paste them into their test-code. There are even a couple of unit testing libraries for CL that are specifically designed for doing this.

[edit]

I also recommend that you watch the video xuejie posted (https://www.youtube.com/watch?v=4hfMUP5iTq8), as it's about using a debugger to write code more than using a debugger to fix code.


> 4 Generalised references

Sounds interesting, could you expand or point me in the right direction? Do you mean this https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node80.html ?


The basic idea is to have a uniform way of setting values to variables or calling a setter method. That's very helpful for writing macros that need to both read and set a value.

For example, the `+=`-operator found in many languages is implemented as a macro in CL (INCF). It needs to read the value of a place, increment it and then set the new value to the same place.

Doing that with a variable is easy. Using non-Lisp syntax,

  x += 5 
expands to

  x = x + 5
Doing that with object slots is easy too, if you don't mind accessing the slot directly. However, if you want to have getter and setter methods (which may not even correspond directly to a slot in the object) the expansion would look different.

  o.setFoo(o.getFoo() + 5)
or if the getter and setter can have the same name

  o.foo(o.foo() + 5)
Since the expansion is different, the macro would have to figure out whether it's dealing with a variable or an accessor method and behave differently. With generalized references you can use the accessor method as if it were a variable:

  o.foo()      // calls the getter
  o.foo() = 5  // calls the setter
Now you can easily expand

  o.foo() += 5
to

  o.foo() = o.foo() + 5
just the same as you would with a variable. Behind the scenes you still have separate getter and setter methods, but the macro doesn't need to worry about that.
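
In actual CL, the same example with a CLOS accessor (a small sketch):

    (defclass point () ((x :initform 0 :accessor point-x)))

    (let ((p (make-instance 'point)))
      (incf (point-x p) 5)   ; expands into a call to (SETF POINT-X)
      (point-x p))           ; => 5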


I feel like functional lenses have vastly improved upon this idea now by (a) being first class, (b) being completely composable, and (c) being general enough to include notions of access into sum types and having multiple targets.


Regarding 1, Perl 5 copied CLOS in Moose.

Perl 6 has that and a lot more of the goodies (multiple dispatch, etc.) mentioned in the parent and sibling comments.

But sure, I'd love to find a job where I can use Lisp or start with Scheme.


> Where is a good dynamic language that is always online and runs in <8K of flash/RAM?

Early in my career I wrote code for embedded systems (small autonomous mobile robots) that ran on 8-bit processors with memories around this size. The coding was done in Lisp-like DSLs with compilers written in Lisp. Because the DSLs were Lisp-like, there was no need to write parsers. READ was the parser. And from there writing the compiler itself was pretty easy.

BTW, the Lisp I was using (Macintosh Common Lisp) ran on a Macintosh II with 8 MB of RAM. (This was the late 80's, early 90's.) That was a huge amount of memory back then. Today, not so much. Before that, I ran Coral Common Lisp on a Mac Plus with 1 MB of RAM. I recently ported TinyScheme to run on an STM32. So yes, you can use Lisp in embedded systems.


Sorry, I can't agree with you on this one. Lisp has multiple dispatch; how many languages do you know that have multiple dispatch? Many Lisp implementations compile to machine code that is very efficient for a dynamically typed language; e.g. SBCL is way faster than other equally dynamic languages like Python. Common Lisp is ANSI-standardized; how many languages you use are ANSI-standardized? Lisps have full garbage collection that deals correctly with cyclic structures, while many other languages have only reference counting or some RAII-style management and falsely advertise those as advantages. How many other languages have metaobject protocols and can be extended arbitrarily? And since you mention it (I don't think this really deserves mentioning, though), how many other modern languages are homoiconic?
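
To make the multiple-dispatch point concrete, a minimal CLOS sketch (class and method names invented for illustration):

    (defclass ship () ())
    (defclass asteroid () ())

    (defgeneric collide (a b))
    (defmethod collide ((a ship) (b ship))     :bounce)
    (defmethod collide ((a ship) (b asteroid)) :boom)  ; chosen on BOTH classes

    (collide (make-instance 'ship) (make-instance 'asteroid)) ; => :BOOM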

The list could go on and on. I'm not a big Lisp fan, but you have to give credit where it's due. Most newer languages do not have even half of the features that modern Common Lisp implementations offer. You're right about memory consumption, of course, but using Lisp for embedded devices has always been a stupid idea.

The problem with Lisp is certainly not expressivity, and also not speed, memory consumption, or the syntax. It's not even the dynamic typing, as long as you use a good unit testing framework. The real problem is that every program becomes a DSL, and even seasoned Lisp hackers cannot read other programmers' code. Large Lisp programs too easily become a big hacky mess that nobody but the original inventor understands. And the community aggravates this problem, because they really are nasty, bearded hackers who'd just write their own OOP implementation over the weekend when they aren't in the mood for using one of the hundred already existing ones.

Lisp code is just not maintainable enough. My 2 cents.


Is this based on real world examples (like, Lisp projects that you know have failed because the code has turned into an unmaintainable mess), or just using your intuition to extrapolate along the lines of "uh, this language is very flexible, most people are idiots and use excessive flexibility to hang themselves ergo this language would lead to unmaintainable code"?

...because by the same reasoning you could reach the conclusion that all dynamic languages lead to unmaintainable code, therefore nobody should use them :)


To some extent, that is true.

Take some Python lib that does heavy use of reflection and peek at it's source code.

Those features do make code harder to maintain. But with Lisp you not only have more powerful metaprogramming features, you also have a community that embraces and recommends their use, instead of weighing their usefulness against their maintenance cost.


The unfathomable DSL is a problem with undocumented code in very important places, not with LISP. LISP is merely powerful enough that people can have this problem. You can force the "bearded hackers" to write documentation and not make a mess in any language.


Emacs is pretty easy to explore and extend.


> Sure, when Lisp came out it was "advanced". Garbage collection, AST parsing and manipulation, macros, packages, a standard library, modules, homoiconic data structures, etc.

The big advantage of Lisp is that it has homoiconic code, that it's easy to work with that code, and that it's easy to extend that code.

Lisp doesn't actually have a very good module system: but ASDF extends it so that it does. Lisp didn't actually have any OO features, but CLOS extended it so that it does.

That's the advantage of Lisp (and Lisp-like languages) which I've never encountered elsewhere: you can grow and expand the language. You don't have to wait for some committee or gatekeeper: you can do it, right now. Your way may not be the best, but you can do it.

I really miss that in other languages.


> I really miss that in other languages.

Elixir has a pretty good lisp-inspired macro system. And it's hygienic by default, too.

And Elixir's pattern matching actually makes manipulating the forms even easier.


LFE, Lisp Flavored Erlang, is much closer to Lisp, and not just in name. And, LFE is compatible with both Elixir and Erlang. [1]

To quote, Robert Virding, one of Erlang's creators and the creator of LFE:

I would say that one restriction with the type of macros that elixir has, irrespective of whether we call it homoiconic or not, is that they can only manipulate existing forms in the AST, they can't define new syntactic forms. In elixir this means that you basically work with function calls. There is syntactic support for making the function calls look less like function calls but the macros you define are basically function calls. In Lisp you are free to create completely new syntactic forms. Whether this is a feature of the homoiconicity of Lisp or of Lisp itself is another question as the Lisp syntax is very simple and everything basically has the same structure anyway. Some people say Lisp has no syntax. [2]

[1] http://stackoverflow.com/questions/35060080/erlang-vs-elixir... [2] https://news.ycombinator.com/item?id=7623991


Yes, that's true, but they were saying they wanted macros in non-lisp languages. I gave an example of a non-lisp language that has lisp-like macros.


Granted. I guess it boils down to the degree of 'Lisp-like' when talking macros. There's another article on HN today about Racket, Template Haskell, and macros. I think the idea is that Haskell doesn't really need them, since it has other ways of achieving similar ends, which makes TH macros a bad fit for Haskell.

I like Elixir a lot, but I have the luxury of not depending on coding for a living, so I am learning LFE because I like how it is as much a Lisp as it can be when constrained by the BEAM. Truthfully, I could stay away from all of the distributed languages like Erlang, Elixir, LFE, Pony and others, since I don't really have a use for them (yet - looking at ABM Agent-Based Modeling).

But I keep at LFE by trying to duplicate the book 'The Handbook of Neuroevolution Through Erlang'. There are at least a couple of other people who have started it, one in LFE and the other in Elixir. A good fit for ANNs.


> they were saying they wanted macros in non-lisp languages.

He; I don't have multiple personalities


Yes, I didn't mean to suggest you do :)

I was using the singular they: https://en.wikipedia.org/wiki/Singular_they


Ouch, HN stripped the smiley face I appended to that. So I'll add a textual one now: grin


This is actually one example I'd like to refer to when people claim that alternative languages with macros can achieve Lisp's flexibility: https://www.pvk.ca/Blog/2014/08/16/how-to-define-new-intrins...

Up to this point I haven't seen anything similar in languages other than Lisp. Maybe Elixir can do this, but I lack the knowledge to judge.

BTW: that does not mean I'm a Lisp zealot; been there, but I have been away for quite some time.


How's that better than inline assembler? It seems to me that it is highly compiler (i.e. SBCL) dependent.


I agree, it's essentially an inline assembler, but the fact that it integrates so well with SBCL tells us that the SBCL team really did an awesome job here.


Clojure has a pattern matching library: https://github.com/clojure/core.match. And I think Clojure is the most widely used Lisp dialect now.


There is one for Common Lisp too: https://github.com/m2ym/optima
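
A small example based on optima's documented patterns:

    (optima:match '(1 2 3)
      ((list 1 y z) (+ y z)))   ; => 5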


Yes, I've used it, and it's quite nice. But it's not quite the same. In Elixir (and of course Erlang), pattern matching is pervasive. You use it when defining functions, by defining multiple heads, each with a different pattern. You can also use it when binding variables. For example:

  {:ok, result} = compute_data(some_value, another_value)

If compute_data doesn't return a pair where the first value is :ok, it will throw an error.


Not by a long shot, Emacs Lisp is the most widely used Lisp dialect, by far. Clojure isn't even close.


Maybe, if "used" means "to create web applications".


You can implement pattern matching in Lisp.


> Where is a good dynamic language that is always online and runs in <8K of flash/RAM?

Forth immediately comes to mind with those constraints. 8K is an order of magnitude less storage than what the IBM 704 that lisp was first implemented on around 1960 had.


Forth is very cool and I agree with what you said, but it is arguable how useful it can be, although the OP didn't specify that he cared about that (he hinted at what would be a better replacement). I think there are a lot of things you'd have to implement from scratch, or attempt to find a collection of words for, which might not work on your Forth. So if you're Chuck Moore, then you're good. If not, I'm not sure the extreme advantages (tiny size, REPL, ~speed, high ceiling/low floor) make up for the fact that you're essentially working in a vacuum.


You can squeeze a FORTH kernel into 2K, so 8K really is quite a luxurious size. I actually wrote FORTH code very early in my career, and wrote lots of things you might not imagine doing in it. For example, I wrote a 3D star field animation system for a planetarium in FORTH in the late 80's.

I think programmers (especially these days) overestimate how much advantage they get from reusing other packages. Because it becomes so much more difficult to refactor your code when you can't change some interfaces, you may end up with more (or at least more complicated) code than if you wrote something tailor-made. In languages where you have a lot of facilities available, it is one of the hardest decisions to get right. When you are forced to write everything yourself, it's easier to get the choice right ;-)


At work we use Clojure, and I find it much better than Ruby, but not because it has more features. Actually, it's because it has fewer features, namely fewer "magic" features that were meant as a convenience but really end up making almost all Ruby code very convoluted, e.g. having tons of unnecessary implicit indirection. That, plus immutability, makes Clojure code much easier to keep clean over a long period of time. And editing s-expressions is actually easier and quicker for me than editing line-based code, now that my muscles have memorized paredit.


This is the biggest appeal for me as well. I find that Clojure hits the sweet spot between being simple and flexible.

It has small and consistent syntax coupled with pervasive immutability. I find that when I work with Clojure, I'm rarely thinking about the language itself.

Since the core language is very small, it's un-opinionated. This makes it easy to extend it in the way you need for your particular domain.

Many languages today have the same features and allow you to do everything Clojure does and more, however most of these languages are also far more complex.

I've come to realize that language complexity has a huge impact on productivity and code quality. It's incidental mental overhead that distracts you from the problem you're solving. The more syntax and rules you have the harder it becomes to tell whether the code is doing what you think it's doing.


We also use Clojure at work alongside Ruby, and I'm continually pleasantly surprised to find the benefits of using an immutable-by-default language. I like the functional programming aspects of programming in the large and small, but those advantages pale in comparison to the idea that data should be immutable except when absolutely necessary.

I dabble in Elixir in my spare time, and that language appears to yield similar benefits. Again, immutability is the key, not necessarily the functional focus. Immutability is more difficult to implement in OO languages, but I would be curious to see what that looks like.


This is more related to a mindset than a list of features. By the way, "modern" and "new" have different meanings[0]. Even if a good deal of Lisp's features have been incorporated into newer languages, what makes Lisp interesting is how everything is designed to form a cohesive dynamic environment. Trace, debug, macroexpand, change the readtable, change the pretty-printer, change evaluation rules, types, conditions and restarts, special variables, around/before/after CLOS methods, the MOP, etc.: all those little things contribute to making programs that can evolve, not in a hacky way but as part of the language philosophy. I also like the idea that programs should strive to be correct first and fast second, while effectively providing ways to profile and optimize code, tune or deactivate the GC, etc.

[0] I switched from Python to Lisp because I wanted a modern language (http://tapoueh.org/confs/2014/05/05-ELS-2014)


Red is a new "full stack" language that is pretty nifty. Think of it as imperative, but still homoiconic like Lisp (or Rebol). It comes with a complete binary interpreter that is less than a MB. It makes heavy use of DSLs. In fact, the language is written in a DSL called Red/System, a native AOT-compiled language similar to C but with Red syntax. Red programs can use pre-compiled binaries from Red/System, use the interpreter for the parts of the program without types, and a JIT for the parts where you declare types. All in all it is pretty awesome that it has a cross-compiler targeting Mac, Linux, Windows, Android, FreeBSD, and several other platforms.

Edit: It is still in a "beta"-ish phase, but the team is making rapid progress. Although I don't think the systems language itself will replace C, you get one language that can in theory do everything well (web pages, AI, embedded robots, parsing [see the Parse dialect, which is like regular expressions, but readable]).


It was never about the features -- it's about their integration. The fact that you can get 20 ad-hoc features piled onto a language doesn't mean they come as naturally as they do in LISP.

And "AST parsing and manipulation", "macros" and "homoiconicity" are very far from being "normal features of any modern language", except if by modern you mean "any language that has them". Most languages currently in vogue (by which I mean extensive commercial/IT use and made in the past 10-20 years, not COBOL or C++) don't have those.

>Lisp STILL has the same problem on small processors.

So? We have languages at all levels of the CPU-power spectrum, plus we use JavaScript and Java and Python on small processors all the time, and Lisp is either very near them (e.g. Java) in speed or flies past them.

Except if you mean really restricted embedded processors, but then again, nobody said LISP is the best option for those.


> Lisp just isn't that much better than our current languages. Sorry.

No need for apologies -- it weakens your argument and makes you sound unsure.

Lisp is that much better than most languages. The isomorphic syntax used to represent both data structures and program code is an oft-touted feature that few languages can lay claim to (so-called homoiconicity). The reason this concept is wonderful is that it allows us to use syntactic substitution with the same rigour as Leibniz and Tony Hoare. We can take any expression, substitute parts of it with other expressions, and give it a symbol. And because the rules of substitution in Lisp are well defined, it works all the way down.

Where you run into trouble is with the dynamic environment. CL decided it was enough to give programmers a function to generate unique symbols. Others, such as Scheme, insisted on a hygienic macro system. The difference is that one requires the programmer to be more careful in order not to restrict the forms one can write.

This is why macros are cool.

Other features I miss (that aren't unique to Lisp):

- Incremental compilation (Eiffel)

- Conditions and restarts (Multics)

- Circular data structures

Update: grammar.


> with the same rigour as Leibniz and Tony Hoare

I think that may be the first time that has ever been written (not least because Leibniz was no model of rigour …).


Correct and, now that I think about it, funny.

Leibniz of course turned out to be right (or had the right intuition) but he failed to provide rigorous definitions of limits, functions, etc.

I only made the connection as some models of assignment I've seen use a rule of syntactic substitution from Leibniz in the method of proof.

Rigorous if you're a programmer not used to using proofs.


Feature phones that Nokia released, and that are still being sold under its brand, are made with Lisp. Of course it's not the whole platform, mainly the UI and some glue components, but essentially all major work was written in Lisp with some extensions.


Oh wow, I didn't know that - which models/series specifically?


> Where is a good dynamic language that is always online and runs in <8K of flash/RAM?

Forth


You can do amazing things with Forth for its footprint ([compile-time] forms, etc.), but it's not dynamic, it's typeless.


> Where is a good dynamic language that is always online and runs in <8K of flash/RAM?

Forth? (EDIT: Sorry, aidenn0 (https://news.ycombinator.com/item?id=12112686) and jdmoreira (https://news.ycombinator.com/item?id=12114394) brought that up already. I guess this just shows that it's a very natural answer to the question.)


>some of those features have gone into the dustbin for good reasons

I'm curious what features you're thinking of. I know PHP put lexical scope into the dustbin, but languages since then haven't. Looks like the wrong decision in retrospect. Python put multi-statement lambdas in the dustbin but I don't think there's consensus that that was for good reasons.



Ha! LISP macros and read both work because of a very simple bug in the original definition of LISP. Don't even bother with it or Scheme or Racket --- they're all utterly broken and needlessly confusing!


Which bug are you referring to?


The definition of quote is broken. I personally explained it to John McCarthy. He agreed.


Care to elaborate?


OK, you asked! LISP was originally developed as a language for writing recursive functions of symbolic expressions (S-expressions). It was roughly based on lambda calculus, the language developed by Alonzo Church. But roughly is the key word. S-expressions are given by the context-free grammar:

S ::= A | [] | (S . S)

One can write data structures this way and lists by using the abbreviation:

(S1 . (S2 . ( ... ( SK . ()) ...))) == (S1 S2 ... SK)

The terms that manipulated these S-expressions were called M-expressions. They were first-order terms. McCarthy's key idea was to use a conditional in conjunction with a label form to define recursive functions (in the service of various AI applications). The M-expressions were defined roughly as follows:

M ::= S | x | if[M; M; M] | f[M; ...; M]

f ::= lambda[[x1; ...; xn]; M] | label[g; M]

(I'm not 100% confident that I remember the exact details on this syntax but the idea is correct!)

McCarthy wanted to show that his new language was Turing-complete. So he wanted to exhibit a universal function APPLY (derivable from f above) such that for any function f and arguments M1; ...; Mk where f[M1; ...; Mk] evaluates to S-expression S, given a representation of f[M1;...;Mk], let's call it hat(f[M1;...;Mk]),

APPLY[hat(f); hat(M1);...;hat(Mk)] would evaluate to hat(S). This is pretty much a standard formulation of the recursion-theoretic argument. In order to close the sale, McCarthy had to exhibit such an APPLY and also the hat(.) function. Sadly for all of us LISP lovers (!) he botched the definition of hat(.)! It left people utterly confused for 30+ years. Such a shame.

Fixed a couple of typos. Sorry! Fixed one other typo! It's been a while...


I still don't understand. You said the definition of quote is wrong, but didn't mention it at all in your elaboration.


Fair enough, I felt I was droning on but it's true that I didn't show the key mistake. Here it is.

If you want to represent an arbitrary M-expression, it's most natural to use S-expressions for the representation language; these are the -values- in M-expression LISP. (In lambda calculus we have more choices: normal forms or weak-head normal forms.) McCarthy defined hat(.), naturally enough, by induction on the structure of M-expressions. For each M-expression, we need an S-expression representation. (Note that we use uppercase symbols for the symbolic constants and lowercase symbols for identifiers.) Here goes:

hat(S) == (QUOTE S)

hat(x) == X

hat(if[M1; M2; M3]) == (IF hat(M1) hat(M2) hat(M3))

etc..

But HOLD ON! The S-expressions have inductive structure(!). The definition of hat(S) should have been:

hat(A) == (SYM A)

hat(()) == (NIL)

hat((S1 . S2)) == (PAIR hat(S1) hat(S2))

There are sensible mathematical properties that this latter representation has that the former doesn't. It's a bit of a long story. But the bottom line is that QUOTE, was defined erroneously. (And John McCarthy burst out laughing when I explained it to him.)

RM

apologies, more typos.


I've written a Lisp compiler that handles quotations this way, so that QUOTE is not part of the target language. (Not that this was my own idea.) I can sort of see why you'd call McCarthy's s-expression Lisp a mistake, in that it adds something (QUOTE) not in the original M-expression Lisp, which you can do without. It still seems to me like a strange thing to emphasize, but OK.

BTW, the M-expression syntax for IF was "test -> consequent; alternative" which got encoded as a COND expression. (Doesn't matter, I'm just pointing it out as long as I'm commenting.)


Thank you for reminding me, McCarthy also invented COND which eventually led to the great modern pattern matching forms.

An ironic side-story of this that may or may not be of interest:

Because QUOTE was mis-defined, McCarthy had to hack his definition of APPLY/EVAL to get it to work. One consequence of this hacking was that the S-expression LISP "defined" by his version of APPLY/EVAL was a higher-order language while the M-expression LISP that he was attempting to model was strictly first-order. So in his S-expression LISP he could write the MAP function (called "mapcar" back in the day) but the syntax of M-expressions leaves no way to express MAP.

I find it so ironic that it took this little representation error to lead to LISP having the essential property of lambda calculus. (Guy Steele fixed most of the trouble with the grammar and introduced proper lexical scoping in Scheme but he didn't catch the quote bug.) It's also fair to say that M-expression LISP wouldn't have changed the world as S-expression LISP did.

I don't know if Paul Graham reads HN but Paul once wrote a book on macros in LISP. As far as I know, he doesn't know this story about QUOTE. It doesn't seem to have slowed him down.


I would also like to read more about this.


I published a paper on this in the ACM Transactions on Programming Languages and Systems (TOPLAS) back in 1992. The title was "M-LISP: A Representation-independent Dialect of LISP with Reduction Semantics". No need to read it, the punch-line is above.

Fixed yet another typo.


Here's the paper, just in case others are, like me, unable to grok the punch-line: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.40.4...


Akkartik: the paper of mine that you cited has a bunch of theorems following the program laid out by Gordon Plotkin in the single best paper I ever read: "Call-by-Name, Call-by-Value and the Lambda Calculus" - a truly profound piece of work that is still worth careful study. But my TOPLAS paper on the topic of LISP can safely be skipped --- the punch-line, as I said, is that the amazing genius John McCarthy messed up the base-case for the definition of his hat(.) function. Stuff happens. In the process, he invented (the very buggy) LISP which was the essential bridge between the true source of sensible computation --- (typed) lambda calculus --- and modern and future software.

(And for what it's worth (ha!) the architect for my present residence was the amazing Terry Heinlein, nephew of Robert Heinlein (of "grok" fame.))


I can save an image with SBCL (a Lisp implementation) with a load of functions and libraries already loaded. I can't save an image with Python, Ruby, or JavaScript.
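
With SBCL that's one call; the filename is illustrative and MAIN is a hypothetical entry function:

    (sb-ext:save-lisp-and-die "my-app" :executable t :toplevel #'main)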


You can checkpoint at a lower level (https://en.m.wikipedia.org/wiki/Application_checkpointing) or by using a virtual machine.

But in practice it's not that useful to checkpoint a running program because of external state. You can't checkpoint sockets when they are connected to some other machine.


An article that mentions abstracting things and then immediately uses CAR and CDR...


Nope. Lisp is about extending itself with new special forms and embedding DSLs into itself (SETF, LOOP, DEFSTRUCT, even the whole of CLOS, to name a few).

With macros one can introduce new special forms, which differ from mere higher-order procedures by having their own evaluation rules, implemented as macros. In particular, some of the arguments can be left unevaluated before application, which gives one laziness and other nice things (for an ordinary procedure, according to the general evaluation rule, all the arguments are evaluated in order before the procedure is applied).
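
A classic illustration, using only standard CL: OR written as a macro, so the second argument is evaluated only when it's needed, which no ordinary function can arrange:

    (defmacro my-or (a b)
      (let ((tmp (gensym)))            ; avoid capturing names used in B
        `(let ((,tmp ,a))
           (if ,tmp ,tmp ,b))))

    ;; (my-or (gethash key cache) (expensive-recompute))
    ;; EXPENSIVE-RECOMPUTE runs only on a cache miss (made-up names).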

Lisp isn't about hipsters. It is about fundamentals of programming.

Here is a simple illustration:

http://karma-engineering.com/lab/wiki/Bootstrapping8



