State of the Common Lisp ecosystem, 2020 (lisp-journey.gitlab.io)
311 points by lelf on Feb 8, 2021 | 133 comments



I think this article (of sorts) is definitely helpful for onlookers to Common Lisp, but it doesn't provide the full "story" or "feel" of Common Lisp, and I want to offer HN my own perspective.

Disclaimer #1: I've been working professionally as a Common Lisp programmer---not as a contractor!---for the past decade. I have a vested interest in the language and hiring for it.

Disclaimer #2: I am going to ignore commercial implementations of Lisp here, which provide very useful and advanced features, like GUI development, a user-friendly IDE, paid support, etc. [1,2]

So let's get started. Common Lisp's best feature is that it allows you to be insanely productive at the "raw programmer" level. You can write, edit, and debug code very quickly and incrementally, and end up with safe & performant code.

There's a price to pay: currently the best-in-class experience is still Emacs and SLIME (which come nicely packaged here [3]). As an Emacs fan, I consider that the best news, but for PyCharm/VSCode/vim users it's terrible and alienating news. My colleagues who aren't Emacs users managed to learn just enough Emacs to be productive in a week, but they still frequently fired up their editor of choice in times of need.

It really is worth underscoring that the Emacs+SLIME experience truly fits Common Lisp development like a glove, and the experience, in my opinion, is better than almost every mainstream editor environment out there.

Common Lisp's worst feature is that it feels like just about everything imaginable has a catch. I don't mean "there's no free lunch", I mean that things just plainly don't feel cohesive or "100%" most of the time. To name a few examples:

1. GUIs: If you want to make a traditional, native GUI using open source solutions, you're stuck with really goofy libraries that are non-obvious to get working. As the article points out, you have options. Lisp actually has a renowned framework called CLIM, but I consider the open-source implementation McCLIM [4] currently useful principally to hobbyists and hackers.

2. Deploying applications: Almost every implementation of Lisp has some way to create an executable. But very important aspects that real people care about in production are missing, inconsistent, or poorly documented. For example, almost no open source implementation of Lisp has first-class support for signing binaries on macOS. Almost none has a "tree shaker" to remove unnecessary cruft from the executable. Almost none makes building a shared library practical.

3. Libraries: Many libraries don't do even the usual things people might want. The linear algebra library MAGICL [8], for example, doesn't at the time of writing have a way to solve the matrix equation Ax = B. This isn't due to laziness of the authors or lack of foresight, but rather that the library just isn't used by enough people to see regular, high-quality contributions as an open-source project. I'm sure MAGICL solves problems for the authors, but the authors haven't taken it upon themselves to make a general, useful, and quasi-complete library for matrix programming in Lisp.

These examples are just examples, maybe not even the top examples.

There are many things I wish for Common Lisp, but there are two I wish for most.

First, I wish Common Lisp implementations put in work so that they could play nice with other programming languages. Google [11] recently came out with support for protobufs in Lisp, which is nice, but I feel something deeper is needed. I think Common Lisp implementations supporting building C ABI-compatible shared libraries would be an insanely big step forward; it'd mean that Lisp could feasibly be used by every language out there. Right now, the closest we've got is Embeddable Common Lisp, an implementation of Lisp which makes embedding Lisp within C relatively painless, but as usual, it has many catches [12].
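
(To make the gap concrete: from inside a running image you can already hand C a function pointer, e.g. with CFFI, as in the hedged sketch below. The missing piece is packaging the image as an ordinary .so/.dylib that a C linker can consume.)

  ;; A minimal sketch, assuming the CFFI library is loaded: expose a
  ;; Lisp function through a C-compatible calling convention.
  (cffi:defcallback add-ints :int ((a :int) (b :int))
    (+ a b))
  ;; (cffi:callback add-ints) yields a C function pointer usable by
  ;; foreign code, but only while this Lisp image is running; that is
  ;; exactly why a real shared-library story would matter.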

The way I've coped is to produce stand-alone command-line applications, or to build servers with HTTP APIs. But it feels icky, especially if you're working with Python programmers who want to `import` stuff and not run servers just to get some code to work.

Second, I constantly hope for more "hyper productive" programmers to join the Lisp world, or programmers whose livelihood depends on it. Of course, since Lisp is used by hobbyists, you see tons of hobbyist code. To be sure, a lot of this hobbyist code is perfectly fine. Usually it works, but it's just a tad incomplete. However, in my opinion, the worst thing about hobbyist code is that it usually doesn't do something useful.

What does "useful" even mean? I won't claim to be able to define this term in a one-size-fits-all fashion, but "useful" to me is about getting practical computing work done. The further away from being concrete the library is, typically the less useful it is. For example, a typical Lisp programmer will have a penchant for writing a domain-specific language for parsing binary files (cool!), will open-source that code (cool!), but then nobody---including the author of said library---will actually use it to, say, write a parser for GIFs [5]. When somebody does come along to write a GIF parser, they're likely not going to use this general binary parsing framework, but hand-roll their own thing.

In Lisp, it seems popular to solve meta-problems instead of problems, which is partly due to the fact that Lisp lets you think about problems at very high levels of abstraction using its advanced object system, the meta-object protocol, and macros.

(One of my biggest "pet peeve" projects in Lisp, second only to "utility libraries", is documentation generator libraries. As soon as somebody figures out that documentation strings can actually be programmatically queried in Lisp, they invariably write a baroque "generator" that spits out HTML. I've never, not a single time, ever, used a documentation generator for doing real, paid work. One Lisp programmer I know who uses them nicely is Nicolas Hafner, aka Shinmera, who uses a documentation generator simply to augment his long-form documentation writing. Staple [9] is one example library of his, where you can see some generated documentation at the bottom.)

"Useful" also has to do with how a library is consumed. In the Common Lisp, a library like this [6] is typical. It's a bare page (be it on GitHub or otherwise) that provides no examples, no indication of dependencies, etc. Not all libraries are like this, but you run into it frequently enough.

The Common Lisp ecosystem lacks a certain "go-getter" philosophy, needed to forge through "boring" work, that some other language ecosystems seem to have. To cherry-pick one example, though I don't use it, Zig [7] comes out with interesting stuff all the time that's genuinely useful. Andrew Kelley, its main developer, is putting tons of hours into getting details around deployment right (e.g., cross-compilation). Little about Common Lisp prevents a motivated person from making equally productivity-enhancing strides with the language, but I find that either (a) the interest isn't there or (b) the interest is there, but it's interest in developing weird, esoteric stuff in Lisp.

(My favorite example of a "productive stride" that happened in Lisp is the following. For context, people talk all the time about how difficult it would be to port a Lisp compiler to a new architecture. I myself have clamored for documentation on how to do it with SBCL. But, out of nowhere, some grad student named Charles Zhang came out with a port of SBCL to RISC-V. Not only did he port it, he has maintained it with hundreds of new commits, making it more performant and less buggy [10].)

Common Lisp is an amazing language purely from a practical point-of-view. As I said, to me, it's bar-none the best and most productive language to use if you want to "sit down and write code". The implementations of Lisp, like SBCL, are marvels. Lisp code, once you write it, will work forever (seriously, decades). The #lisp channel on Freenode is nice and helpful, and there are so many amazing people in the community. In Lisp, it's seamless to inspect assembly code and work with the world's most high-level, meta-object systems all at the same time. But the ecosystem mouthfeel is still off, and Common Lisp would greatly benefit from programmers obsessed with making the language more useful to themselves and others today.

[1]: LispWorks: http://www.lispworks.com/

[2]: Allegro CL: https://franz.com/products/allegro-common-lisp/

[3]: Portacle: https://portacle.github.io/

[4]: McCLIM: https://common-lisp.net/project/mcclim/

[5]: There is a GIF parser though called SKIPPY! https://www.xach.com/lisp/skippy/

[6]: MIDI: http://www.doc.gold.ac.uk/isms/lisp/midi/

[7]: Zig: https://ziglang.org/

[8]: MAGICL: https://github.com/rigetti/magicl

[9]: Staple: https://shinmera.github.io/staple/

[10]: Charles Zhang's SBCL commits: https://github.com/sbcl/sbcl/commits?author=karlosz

[11]: CL-PROTOBUFS: https://github.com/qitab/cl-protobufs

[12]: Poorly documented, performance isn't very good, it's maintained by essentially one person, the rituals needed to use ECL-built libraries are more extensive than necessary, build times are insanely slow, ...


> Google [11] recently came out with support for protobufs

Not only that:

> Doug Katzman talked about his work at Google getting SBCL to work with Unix better. For those of you who don’t know, he’s done a lot of work on SBCL over the past couple of years, not only adding a lot of new features to the GC and making it play better with applications which have alien parts to them, but also has done a tremendous amount of cleanup on the internals and has helped SBCL become even more Sanely Bootstrappable. That’s a topic for another time, and I hope Doug or Christophe will have the time to write up about the recent improvements to the process, since it really is quite interesting.

> Anyway, what Doug talked about was his work on making SBCL more amenable to external debugging tools, such as gdb and external profilers. It seems like they interface with aliens a lot from Lisp at Google, so it’s nice to have backtraces from alien tools understand Lisp. It turns out a lot of prerequisite work was needed to make SBCL play nice like this, including implementing a non-moving GC runtime, so that Lisp objects and especially Lisp code (which are normally dynamic space objects and move around just like everything else) can’t evade the aliens and will always have known locations.

https://mstmetent.blogspot.com/2020/01/sbcl20-in-vienna-last...

Also, ASDF's main author spent his career at Google.

> the best-in-class experience is still Emacs and SLIME

Really, I want to mention again that Atom's SLIMA is getting very good. The Sublime plugin and a VSCode one are coming close.


dougk is one of the few people currently and immediately capable of such "long strides" I spoke of. He is continually improving SBCL in remarkable ways, especially under-the-hood work that just makes all Lisp applications better without additional work on the programmers' parts.

I attended the Vienna conference where he discussed such matters, and he has great ideas, but many of his play-nice-with-UNIX tools are still very bleeding edge and require you to be SBCL-developer-level in-the-know to use them.

I hope later this year that I'll be able to open up a full-time role for SBCL development as a part of my own job, in a similar way that Google employs dougk to work on SBCL. The only trouble will be finding a qualified applicant who can do the full-time work...


Just a correction: I'm the Charles Zhang you mentioned, and I'm an undergrad graduating this semester (looking for full-time work after!), not a grad student. Also, I don't recognize your username, but I probably know you if you were in Vienna...


Whoops, my apologies for the mistake!


> Common Lisp's worst feature is that it feels like just about everything imaginable has a catch.

You've put into words something I've struggled with saying for a while, and the exact reason I could never really enjoy working with Common Lisp.

The example I always return to is the lack of pervasive use of CLOS throughout the standard. Something like generic-cl[0] feels like it should be a baseline part of the language. As it is, it's a great library. But people are hesitant to even use it, because the result of making everything generic is a loss in performance!
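
(For the flavor of the seam: each comparison below is a separate, non-extensible operator in the standard, whereas generic-cl exposes this family as generic functions you can specialize on your own types.)

  (= 1 1)                  ; numbers only
  (char= #\a #\a)          ; characters only
  (string= "ab" "ab")      ; strings only
  (equal '(1 2) '(1 2))    ; structural, but its behavior is fixed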

Of course, [0] talks about optimizations they've done. I don't know how the result compares to using the methods in the standard as opposed to the library. Regardless, these libraries only help when writing code, not when reading it. Though don't get me wrong, that's a big deal! It's very helpful! But you wish everyone would play by those rules.

[0] https://github.com/alex-gutev/generic-cl


This seems like a consequence of the language never getting another standard. If it did, then given the way Common Lisp systems work, I think there could've been something like cl2 and cl2-user packages, which could've pushed more towards that generic style that CLOS promotes, and maybe eliminated some of the elements that were redundant or didn't quite fit with everything else (is it `(function sequence index)` or `(function index sequence)`? compare ELT, which takes the sequence first, with NTH, which takes the index first). Those conflicts make sense given the history of the language (it was created to help connect several branches of the Lisp family). And the older standard would've been easily preserved in the cl and cl-user packages.
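
(A minimal sketch of that idea using only standard package machinery; the CL2 package name and the choice of LENGTH are hypothetical:)

  (defpackage :cl2
    (:use :cl)
    (:shadow #:length)
    (:export #:length))
  (in-package :cl2)

  ;; A user-extensible LENGTH; code that uses CL:LENGTH is untouched.
  (defgeneric length (thing))
  (defmethod length ((s sequence)) (cl:length s))
  (defmethod length ((h hash-table)) (hash-table-count h))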

But the language is essentially frozen in the existing standard, and can only be extended by community consensus right now.


If you want to regain performance, add in type declarations and generic-cl will inline its functions: https://github.com/alex-gutev/generic-cl/#optimization
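
(A hedged sketch going by that optimization section; the exported names are assumptions about the generic-cl package:)

  ;; Under a SPEED declaration, and with argument types declared,
  ;; generic-cl aims to dispatch statically and inline, recovering
  ;; the cost of the generic call in hot code.
  (defun add2 (a b)
    (declare (optimize (speed 3))
             (type fixnum a b))
    (generic-cl:+ a b))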


If I know the types beforehand at compile time, then I don't really _need_ the generic functions per se.

Is there a CLOS implementation with inline caches for generic method dispatch?


Almost all of them have something like that: Robert Strandh has a technique that looks promising[1] and is being implemented as a library, I believe. CLOS’s customization hooks allow you to switch out the implementation of “generic function” on a case-by-case basis which can be used to implement this sort of optimization.

[1]: http://metamodular.com/SICL/generic-dispatch.pdf


Do you mean something like this? https://github.com/guicho271828/inlined-generic-function

Even if you know all the types, you still have polymorphism that can’t be efficiently monomorphized without an exponential space penalty.


I don't think that's what I was looking for. That talks about dispatching generic functions at compile time.

> you still have polymorphism that can’t be efficiently monomorphized without an exponential space penalty.

Wouldn't it be linear at each call site in the number of types that actually reach that call site? Plus you don't have to cache _all_ types at the call site, only the most frequent ones.

In Java and JS for instance, dynamic dispatch is JITed away at hot spots, and can be un-jitted later to reclaim space if the spot is no longer hot.

I generally don't care if a dynamic dispatch is slow if I'm only calling it a few times, but in a tight loop I want it to be inlined as much as possible.


Perhaps one could write a type-inference macro. That would leave you with something like Julia, broadly speaking.


Thank you for this write-up. Mostly agree; I've been using Common Lisp seriously (almost every day as part of my work). At our company, almost all NEW back-end work is done in Common Lisp as the first choice.

FWIW, in my computing career there have been two crucial pivot points. Exposure to Unix (SunOS/HP-UX and later Linux) and its powerful mode of working was the first epiphany. The second was Common Lisp, and the degree of freedom it enables in expression of code and ideas, as well as its speed and interactivity model: REPL/SLY, Emacs, etc. I have become a much better programmer because of my exposure to this mode of working.

When I was first starting, though, my frustration with the language was large, and it was compounded by learning Emacs (which I love and curse at on a daily basis). Libraries seemed poorly and very inadequately documented for someone new to the language, especially compared to Python, for example. But the newish culture of hosting open source projects on GitHub is improving things, and READMEs/project descriptions are getting better. And doc strings are in the code themselves, mostly...

Common Lisp gets criticized for not changing. Sometimes it can be a bit verbose: (setf (gethash mykey my-hash-table) my-value). A lot of programmers will instantly gravitate to "that's a lot of words to set a value in a hash table". But it turns out not to be a problem in real life, because you learn to read it pretty damn quickly, and you save keystrokes elsewhere (for example using loop or CLOS). So it balances out.

However, I have come to learn that having a rock underneath that has no problem running years-old code is a HUGE benefit. Things generally continue to work with updated compilers, and that reduces the cost of maintaining the code base. I trust the language to stay solid underneath. You can extend it, but you generally don't need to use reader macros and other features to solve most problems. So while the world is changing around you, you aren't updating your python2 to python3. Instead you can stay focused on app-level changes.


Setf is a fun example of an amazing freedom in naming. You name how to address something, and setf is how you set things at that address. No longer do you have to have two names for one address depending on whether you are getting or setting it.
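
(A quick illustration; HEAD and its (setf head) companion are made-up accessors to show the user-defined case:)

  (defvar *h* (make-hash-table))
  (setf (gethash :k *h*) 42)      ; write through the reader's name
  (gethash :k *h*)                ; read => 42

  ;; The same naming freedom extends to your own accessors:
  (defun head (cell) (car cell))
  (defun (setf head) (new cell) (setf (car cell) new))
  (defvar *cell* (list 1 2))
  (setf (head *cell*) 99)         ; *cell* is now (99 2)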

And completely agreed on stability. I love that all of the code samples from all of my lisp books still work. Seems that number approaches zero for all the other languages. :(


Counterpoint: Lisps are productive even if you always save all your work into files and rebuild an image, as if you were working in C++.

The result of working in Lisp is nice code, and that code continues to be nice as the months and years go by, long after the interactive session in which that code had been developed has been forgotten.


Thank you for this overview -- the article itself doesn't seem super useful unless you already have this context.

I glanced at CLIM, and it's very much still traditional verbose, imperative GUI code. Someone should really take a look at doing a Lisp version of the SwiftUI concept. It seems to be made for s-expressions.


Note that declarative syntax for specifying GUI properties is actually old hat, too. (Even if we ignore things like HTML and CSS).

I mean, for instance, oh, Windows resource files.

Glancing through CLIM, I also don't see anything like that sort of separation between GUI logic and form.


Sure, the lineage of declarative UI is pretty long. I'm aware of Windows resource files and the Mac's resource files that inspired them; there may be earlier antecedents on 8-bit micros.

SwiftUI provides a fundamentally different way of thinking about UI composition and managing data flow in your app. Much of the GUI logic simply disappears. Syntactically and functionally, it seems like something structured like it would be a great fit for a cross-platform Lisp UI framework.


> Common Lisp would greatly benefit from programmers obsessed with making the language more useful to themselves and others today.

It's never going to happen.

Lispers were singing this same tune back when I was last using Common Lisp in 2003.

Even back then there were better Lisps than Lisp. Perl is a better Lisp than Lisp. Perl gave the world CPAN and, I believe, the first real mainstream usage of a garbage collector (this is before Java and JavaScript, which I guess one could argue were more mainstream but possibly only due to the fact the internet was a much smaller place compared to today).

Since then we've gone through PHP, Python, Ruby, JavaScript (ES6/node), Clojure, Scala, C#, and many other languages slightly off my radar (Rust, Go, Swift, Kotlin, etc.). There is simply no niche Common Lisp could occupy today.

The last standard for Common Lisp is ANSI Common Lisp that came out in 1994. With no new movement in modernizing the language, there is simply no point. It's dead, and the effort isn't worth it. Why modernize a crufty Lisp when there is Clojure? Or many other Lisp-ish languages. There is even R6RS/R7RS Scheme today.

> First, I wish Common Lisp implementations put in work so that they could play nice with other programming languages.

Same song, 20 years later. I remember painfully hacking together "alien" calls (CMUCL's foreign function interface) to use Gtk+. Yes. There really were no Common Lisp bindings for the most popular open source GUI toolkit in 2003, during the era that was probably Common Lisp's last great chance at making it out alive. CL was going through a resurgence period and you still couldn't get Lisp to talk to anything. My god, it sucked. And the only documentation was the HyperSpec.

If anyone is not aware of HyperSpec, I implore you to Google it now and see what the state of Common Lisp was around 2001-2005. I have nothing against Kent Pitman. But the HyperSpec was not doing Lisp any favors.

The other thing that burned Lisp was that everything was a secret society. The HyperSpec was free-ish. Sort of. But not really. It was not a community document. If you were just getting into Lisp, it was already recommended to get the proprietary Allegro Lisp or LispWorks. CLISP and CMUCL were the only open source game in town, and they weren't receiving much love. Even SBCL was not going anywhere fast. A good segment of the old Common Lispers were against open source at the precise time they needed to embrace open source. They were stuck in a proprietary world.

I could go on, but I'm going to end this rant here.

I suspect I'm not alone in being burned by Common Lisp.


I vehemently disagree with your premise. Lisp and its ecosystem have been improving through thousands of hours of volunteer work. The Lisp ecosystem is VERY different than it was 15 years ago and is MUCH better. But the standard for what counts as good, in my opinion, has also increased.

I've never been burned by Common Lisp, and I don't know how you could be burned by it either. I can see how canceling Python 2 could burn someone, but I don't see how an evergreen standard like that of Common Lisp could. I'm not hurt by there not being a new standard. Functionality is de facto standardized just fine and we live with it just fine. Lisp is about the only language with a useful web-accessible standard. If you ask a Lisper how frequently they visit Stack Overflow or random tutorial websites, you'll find it's much less than for others.

Lisp isn’t a lost cause. It has outlived just about every other thing out there. It just doesn’t outlive these other things with tons of fervent enthusiasm.


You can not like the language, but frankly:

'Why modernize a crufty Lisp when there is Clojure?'

is a strange comment. Those two languages have more differences than they have common ground. "It's got parentheses, and all languages with parentheses are equal" is a poor basis for critique. Now, you also speak about Perl in the same way, so ... ?!


At the request of John McCarthy, Lisp's creator, no single language that is a member of the Lisp family is intended to be the definitive dialect; that is, none is to be called just "LISP."

The CL folks have done exactly this.

A serious way to measure the strata layers is to write exit(2) portably without using external packages.


We can call whatever we want Lisp. And everyone does. McCarthy, wisely, didn't want people to bicker over this. But it would be a reasonable assumption that by Lisp, one meant Common Lisp.

I'm not sure what you're getting at with exit(2). The CL standard does not include an exit function, because it would be limiting. For example the meaning of exit for an operating system is unclear. But most implementations include a simple exit function. You can do what exit does, more simply in CL than in C, using unwind-protect. You can use the FFI built in to every implementation to write an exit that exactly duplicates what the C stdlib exit(3) and _exit(2). With a very small set of macros, it is portable to all CL implementations. You can even execute the processor abort instruction if it has one. But of course you can't do that portably. But exit written in C is not actually portable, but only looks that way due to a bunch macros.


Another thing I don't understand is how threads can be a library in CL.

Threads cannot be implemented as a library: https://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf https://dl.acm.org/doi/10.1145/1065010.1065042


If you're referring to bordeaux-threads, that's a compatibility library that gives a consistent interface to the various CL implementations' threading interfaces.


I think those kinds of detail-oriented "practical strides" come when people are using a language for their real jobs. So then the question that should be asked is, "why aren't people using CL for their jobs?"

Of course there's also a chicken-and-egg element


I find it a bit the opposite: "on the job" code is often the ugliest, 80%-done code I've dealt with, with minimal docs, unless a very rare environment not just allowed but pushed for completion.

Things written for public consumption from the outset are cleaner.


I think you're over-projecting your personal experience

> The Common Lisp ecosystem lacks a certain "go-getter" philosophy, needed to forge through "boring" work, that some other language ecosystems seem to have.

"Boring" work that creates something actually useful almost always happens in the crucible of real, pressing problems. Hobby projects tend to focus on what's fun or interesting. People don't want to take the time to bring something the rest of the way into "useful" territory if they don't actually need to use it for some purpose


A lot of Common Lisp code that ends up open-sourced consists of 80% solutions for things that were necessary for some project (sometimes commercial); in the rare cases where things really got polished, IMO they got polished later on with intent to publish.

On-the-job code, whether Lisp or not... I'm more used to managers preventing anything more complete or documented from being written, except for once every few months forcing everyone to write BS low-quality docs because "documentation" is a line item on the contract.

My pleas to set aside time to document and polish things are usually ignored, my pleas for a technical writer on staff to help us keep docs current and well written have never been listened to so far.


My own anecdote is that I've never worked at a company where code quality was chronically devalued; that's only ever been a temporary state of affairs for a specific push, after which time is made available to pay down some of the debt

We're both speaking from limited sets of experiences, and neither is likely to be representative of the whole industry


Thanks for clearly articulating the problems I see with the Lisp ecosystem as well. You beautifully show why having a good programming language is only a small part of what makes a programming language successful.


Portacle looks cool, but I was unable to make it run under Big Sur's draconian unsigned binary policy. (I got an emacs server running, but not CL, git, etc.)


Thanks for the amazing write up!

> There's a price to pay: currently the best-in-class experience is still Emacs and SLIME

Does viper-mode work well with this setup for vim users?


I'm a full-time vim user (for over 20 years) that uses emacs for SLIME only.

viper-mode is terrible for me, as it's a "vi-like" layer rather than a true vim emulation; it hits a sort of "uncanny valley" where my fingers try to use vim keystrokes and then fail horribly.

The good news is evil-mode actually is a vim emulation layer and I can switch back-and-forth between evil-mode and vim with very few issues. There are only 3 things that ever trip me up, and none of them are major:

1. Find and replace is different (but IMO much better) in evil-mode, since it preserves case when doing case-insensitive matching (:s/foo/bar will change "Foo" to "Bar"). Since emacs shows the substitution live, and I like this better, I haven't bothered to fix it.

2. C-a and C-x don't increment/decrement numbers in evil-mode like they do in vim. I mainly use this in macros; I suspect they may interfere with other keybindings, and sometimes it doesn't "do what I want" in vim.

3. Yanking to the default register also yanks to the clipboard (+ register) and primary selection (* register). This behavior was changed after I adopted evil-mode, and I hate it when it matters, but it doesn't matter too often.

I'm sure there are obscure vim features that are corner cases similar to the above 3, but I still give evil-mode a firm "recommend" for vim users wanting to do lisp dev.


I'm not really an Emacs user, but I'm lisp-curious. I just assumed viper-mode was the best Vim plugin for Emacs. Now it looks like that's not the case.


I hear yes but I don’t have experience. The particular editing commands aren’t what make SLIME nice, it’s the interop. Viper should be fine.


As another commenter said: evil-mode is very good. Once I “got” emacs, I systematically translated my vim config and have never looked back. evil-mode still has a couple annoying differences, but it’s complete enough that they’re mostly things you wouldn’t notice unless you’re looking for them.


Thank you very much for this detailed and insightful post. Much appreciated


What you are saying is "use Lisp if you don't want any dependencies". It's a practical use case.


I’m not saying that. I use Lisp and my Lisp programs have dependencies and my other programs depend on Lisp. I’m just saying that (1) Lisp can still stand to integrate better in today’s OS/language scene, and (2) Lisp could benefit from people writing additional high quality libraries. (Lisp has many high quality libraries, but it needs more.)


I really appreciate this effort. I'm coming back to Common Lisp after a long absence and I'm very interested in understanding what the most relevant projects are these days.

Can be hard with Lisp due to the long history and strong backwards compatibility. I've started using Screamer for the first time today and that was written around 30 years ago...


I built LibHunt recently, and it could help you (potentially) with discovering relevant lisp projects.

Specifically, I'm tracking all links posted on Hacker News and Reddit. With that data in place, you can find the projects people are linking to and their alternatives: https://www.libhunt.com/l/common-lisp


Some of the happiest days of my programming life were during a brief stint in grad school doing AI programming in Common Lisp (circa 1992). The purity of functional programming and the rapid implementation of algorithm PoCs was pure joy. The pragmatics of life have kept me from using CL since then; I wish this effort the best success.


Nice write up. A small extra note for the community section, I've really appreciated Planet Lisp's (http://planet.lisp.org/) RSS feed. It aggregates a bunch of other feeds so you can more easily keep track of what's going on in the Lisp world.


Off topic, but as somebody who is interested in Lisp generally, I have a question: which dialect would you suggest starting with? I am a little bit lost. I am considering Clojure, Racket, and Common Lisp. At the moment I am reading SICP and doing the exercises with the `racket sicp package`, which is again another dialect. After that I want to start with something modern. What was your way to Lisp?


Clojure - Learn this if you want a job programming in a Lisp family language. It's the most reliable one for achieving that goal. Good library support due to being hosted on top of other languages (like Java). It's an opinionated language, generally pushing the functional style and immutability.

Scheme - Great amount of good learning materials. Smaller language, gets to the core of writing in both a lisp and functional style quickly. Pure Scheme code should trivially work across implementations. If you ever deal with external libraries your choice of implementation becomes more critical.

Racket - You're using this already. Also a good language, and you can reuse the Scheme resources on top of it because it has language modes that are compatible with Scheme. Like with Clojure, you're dealing with a singular implementation so at least that choice is gone for you. Libraries should "just work".

Common Lisp - Like with Scheme, implementation choice does matter somewhat. Multiparadigm, so you're a bit freer to choose how to implement your program (you can adopt a more procedural, functional, or OO style to suit your needs or wants at the moment). It's less opinionated than Scheme and Clojure. Some good learning resources out there, some of the Scheme materials translate well (in that they work) but don't teach good CL style.

To answer your specific question:

I first learned Scheme helping college mates who were a year or two behind me as GT switched to Scheme as its first CS course language after I took it. I really learned the lisp family with Common Lisp in grad school from an AI course that was using Norvig's Paradigms of AI Programming, and I kept using it as a hobby language ever since.


I think you should also add to Scheme: great for embedding a lisp DSL in another app. Scheme's minimalism means there are a number of excellent options for working this way. I use S7 myself.


Probably, but it's not an option at this point. There's a 2 hour edit window on comments.


Apologies, what I meant was "I think this ought to be added", not "hey, go do this thing I want!" :-)


To me, the entire point of Lisp, the thing that sets it apart from other languages, is unhygienic macros. Since that is my view, I can only recommend Common Lisp. CL is the opposite of modern, but just because something is modern doesn't make it good, nor does old make bad.


As someone who has written a lot of CL and now deals mostly in Scheme, why is an unhygienic macro system important? I don't think it matters much, and frankly I think something like vanilla syntax-case isn't hygienic enough. Either I don't want to care about accidentally capturing identifiers (and be very explicit when I do want it, say like in SRFI-72), or I want a system where I deal with it explicitly (defmacro).

Which I am using matters very little.


The unhygienic part isn't as important as the fact that CL's macros (and other macro systems) are not simple pattern matching and substitution. But the lack of hygiene completes the circle for the concept that the program is simply a data structure, and macros are just imperative functions from syntax to syntax; they can do anything that a normal function can do, like make HTTP requests.
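
(A hedged sketch of that last point; it assumes the Dexador HTTP client is loaded, and EMBED-URL is a made-up name:)

  ;; The macro function runs at expansion time, so it can do arbitrary
  ;; I/O. The fetched body (a string, hence self-evaluating) becomes
  ;; the expansion, baking the page's contents into the compiled code.
  (defmacro embed-url (url)
    (check-type url string)
    (dex:get url))
  ;; (defvar *page* (embed-url "https://example.com")) fetches at
  ;; compile time, not at run time.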

The destruction of the distinction between your code and the compiler, and the recursive relationship between the reader (and CL has reader macros, another dimension), macroexpander, and evaluator, is what is so mind-opening about lisp to me, and CL embodies that trifecta with the most ideological purity (imo).


If you are arguing defmacro vs. syntax-rules you are indeed correct. The lowest common denominator of the scheme macro world is indeed a pattern substitution one (with hygiene). No procedure application at expansion time allowed.

However, all schemes include a lower level macro system that allows you to break hygiene when you want to, most of them by asking explicitly (explicit renaming macro systems being an exception).

r6rs, a standard that was disliked by some, standardises a lower level macro system called syntax-case that superficially looks the same as syntax-rules but gives you all the power of unhygienic macro systems (in Guile, for example, the old unhygienic defmacro is implemented using syntax-case; a very trivial thing to do, I might add).

So, to re-hash: syntax-case macros are procedural. They allow for the full power of scheme at expansion time, and allows you to introduce bindings unhygienically using datum->syntax.


As an attempt at an answer: unhygienic macros offer you more "power". With care, or another macro, you can express exactly the things you want from hygienic macros in a system like CL, which lacks them, but the reverse isn't true. I cannot think of anything off the top of my head for which unhygienic macros would be particularly useful, but there probably is a use-case. My best guess (and just a guess) might be the SERIES and SCREAMER packages, which redefine defun, but I could see it going both ways. A hygienic macro within the SERIES package would know the local definition and use it, but one from without would use cl:defun.

With either hygienic or unhygienic macros this can create problems as in either case you may think you're using a particular defun when you're really using the other.

NB: My CL macros are written carefully so they'd be roughly equivalent to the hygienic macros of Scheme, but with more boilerplate. At least when it comes to most variables, but I don't take care to ensure the functions used in my macros are not replaced in the local context where they're used.


I don't believe there is anything that can be written with unhygienic macros that can't be written in a hygienic macro system. Macros written using implicit renaming are written just like CL macros, and with enough library support you can do something similar with syntax-case.

Heck, implementing an unhygienic defmacro over a regular lambda is something like 7 lines of syntax-case. In guile you could even use lambda* to get optional and keyword arguments.


So, again, I don't think I know for certain why I'd want to do this, but here's an example of different behavior with the two:

  (defmacro my-not (x) `(not ,x))
  (flet ((not (x) x))
    (my-not 't))
In a hygienic macro system, that local definition of not will have zero impact on the execution. In an unhygienic macro system, that local definition is used instead of what the macro writer probably intended. That's a stupid, trivial example that would hopefully never be written. My best guess for wanting something like this, per my previous comment, would be a system like SCREAMER or SERIES which could make use of this local redefinition to produce a different-than-standard behavior.

How would you write a hygienic macro which would make use of the local redefinition of not without also having to pass the redefinition to the macro?


I would use syntax case and introduce unhygienic bindings. That is the deal with all low-level hygienic macro systems: hygiene by default. Break hygiene explicitly.

In syntax-case that would be (datum->syntax syntax-object-where-i-want-to-introduce-binding 'not)


This is right, but shadowing symbols in the CL namespace like this is usually not allowed: the compiler is free to optimize a lot of these things by inlining definitions and such, so this may not do what you think (unless you shadow CL:NOT)


"Usually not allowed": rather the behavior is undefined, it is not a conforming program.

(and in practice the error is "Lock on package COMMON-LISP violated when binding NOT as a local function", you can lock your own packages too if you want).

Also, you generally mind your own symbols; you don't rebind symbols from other packages (hygienic macros are more useful when you only have one namespace).


In a lisp-1 like Scheme it's a lot easier to shoot yourself in the foot with unhygienic macros.


I don't think that matters. Either you write a macro that is correct, one that won't break because of unwanted interaction with the place of expansion, or your macro is wrong.

Edit: barring code-walking macros, of course. Once you start using those, all bets are off.

Edit2: of course, I don't think overwriting core forms count either. That is poor bedside manners :)


Rather than rehash the ancient argument once again, I defer to this post:

https://www.xach.com/naggum/articles/3225161536947499@naggum...

(KMP was on both Common Lisp and Scheme standard committees; Erik Naggum was an eloquent, if controversial, Lisp expert)


They both (in this case):

1. are at least a little bit wrong.

2. never use FLET or LABELS.

I think we can rule out #2. Function capture is a legitimate problem that does (admittedly rarely) happen in real-world code.

The thing that makes writing hygienic macros possible in CL today is the package system. It's disallowed to rebind functions from the CL package, so if your macros only rely on non-exported symbols from your package and symbols in the CL package, then you are free from any hygiene problems that aren't caused by code that is clearly bad locally (that is code that binds internal symbols of other packages; any "::" in use should be immediately flagged by a code review.)


> if your macros only rely on non-exported symbols from your package and symbols in the CL package, then you are free from any hygiene problems that aren't caused by code that is clearly bad locally

Not quite true, recursive macro invocations that create bindings can step on themselves and force you to use GENSYM, even if your criteria are satisfied.


The "interesting" macro hygiene problems can't be solved with GENSYM. I was assuming it was understood that proper macros should use GENSYM for generated bindings.


First of all, KMP argues that the problem does not exist in a Lisp-2. This is obviously wrong, even though the risk is smaller. I would argue that a macro that acts correctly 99% of the time is still wrong. Lisp-2 doesn't solve this; gensym and packages do.

Erik says that gensym and packages solve the same issue that hygiene solves - something I said all along. His main gripe seems to be with the single namespace, which most people don't see as a problem. If you really, really need to call your variables `list`, then by all means: use multiple namespaces.


Correct me if I'm wrong, but Lisp-2 vs. Lisp-1 has nothing to do with hygiene, it just splits Scheme's single problem (lexically binding values in macros) into two problems (lexically binding values in macros, and lexically binding functions in macros).

The real problem, which Naggum includes, is the lack of GENSYM (if scheme indeed lacks it), and lack of first class symbols, as he mentioned.


> it just splits Scheme's single problem (lexically binding values in macros) into two problems (lexically binding values in macros, and lexically binding functions in macros).

Bingo. Lisp-2 and GENSYM are attempting to solve the same issue in two different use cases. And, if I may add, GENSYM looks a bit lazy and half-assed next to the scorch-the-earth multiple namespaces of Lisp-2. It's like they blew up Lisp with dynamite and then sat down and said "whatever" when the same problem appeared in macros.

> The real problem, which Naggum includes, is the lack of GENSYM

Guile Scheme has both GENSYM and DEFMACRO (unhygienic). I think quite a few Scheme systems err on the side of practical concerns.

But that doesn't erase that fact that GENSYM is an ugly hack to get macros to work.


> Lisp-2 and GENSYM are attempting to solve the same issue in two different use cases.

Now this is… post-hoc. Lisp-2 is the original form of LISP and predates the concept of macros.

> And, if I may add, GENSYM looks a bit lazy and half-assed next to the scortch-the-earth multiple namespaces of Lisp-2.

I would agree. However in reality we have a system that is half-assed vs a system that nobody likes to use.


> predates the concepts of macros

Though LISP macros are quite old, from around 1962...


It's somewhat orthogonal, yes, but a lisp-1 just has more ways to run into the name collision.


Most Schemes have a gensym as part of their lower level macro facilities. It is a part of R6RS (which at least has generate-temporaries; I don't know if gensym itself is actually standard, but all Schemes I have used have it).


I miss Erik Naggum. His rants were absolutely epic. I was terrified of inadvertently offending him on comp.lang.lisp, so I always thought extremely carefully before posting anything :)


I think he was a bully. An intelligent bully, but still a bully. For example: he told me I should get a late abortion (as in suicide) when I, as a 14 year old, disagreed with him about guaranteed tail recursion. Now, it sounded a bit better in Norwegian, but it was still a pretty horrible thing to say.

His spirit lived (lives?) on for a long time, and it made starting Common Lisp a much shittier experience.


There are various hybrid schemes that support CL style macros. I'm using S7 myself, because embedding in a C host is what I'm after. I love it, it has support for CL macros with gensym and first class environments, and embeds easily in C.


What's S7? I've never heard of it.


A reference to this Scheme implementation:

https://ccrma.stanford.edu/software/snd/snd/s7.html

And from GP's own site (presumably, name is the same):

https://iainctduncan.github.io/scheme-for-max-docs/s7.html


It's a minimal embeddable Scheme created by Bill Schottstaedt at CCRMA, the Stanford computer music centre. It's similar in scope to Guile or TinyScheme (from which it was originally forked), and is linguistically similar to Clojure and Janet in that it is fundamentally Scheme but borrows heavily from Common Lisp (keywords, environments, CL macros). It's mostly used in computer music projects (Common Music, Snd, Radium, and others), but not only; there are some folks on the mailing list who use it as an embeddable DSL for "regular" engineering projects too. I built Scheme For Max on S7, and have been very happy with the choice for my constraints.


A note on Clojure: it’s a lisp but the language has some important differences from (all?) other lisps that are inherent to its design. The two key changes are immutability by default and the sequence abstraction[1], with syntax differences being a close third. Sequences in particular mean that some pretty fundamental lisp functions don’t operate the same way.

There are two ways to look at this. One is that they encourage better coding practices, the other is that they are restrictions that diminish the power of the language and make you jump through unnecessary hoops to get stuff done. Basically, “guard rails” vs “training wheels.”

I can’t say much more since Clojure is the only lisp I have played with (and I don’t see this changing, I quite like the language design). Code written in Clojure is not trivially portable to other lisps, and vice versa. I think there are differences in what you can do with macros too.

No value judgment here, I just think it’s important to know that these non-trivial differences exist when choosing which language to explore.

[1] https://clojure.org/reference/sequences


I started with SICP to understand Lisp conceptually (it also taught me a lot about abstraction). Then I worked through Practical Common Lisp to learn the ecosystem a bit better and programmed some tools for my own tasks. Finally I worked through “Write yourself a Scheme in Haskell” which deepened my understanding and actually ended up with me spending more time with Haskell than with Lisp (it was a gateway drug to more structured functional programming). Overall this sequence was a very satisfying experience and I now find myself spending most of my time with Erlang which feels a bit like middle ground.


What kind of applications do you want to build? What non-lisp languages do you usually work with?

I think you will find that Clojure has the largest community and the widest use for commercial applications but, depending on your specific interests, either racket or common lisp could be a better fit.

In summary, I think we need more information about your goals.


Thanks, good points. I am working primarily with nodejs / typescript. So, yeah, Lisp is a totally different paradigm for me. In terms of what I want to build, backend and CLI tools would be a good start for me.


I'm going to say Clojure is probably what you want. Via ClojureScript, it has very good interoperability with the tools you're already using, and it's possible that you might even choose to use it on the front end in the browser. That, coupled with the size of the community, suggests to me that it is the winner here.

For the (browser) front-end, there are lots of neat idiomatic-clojure libraries like https://reagent-project.github.io/

For command-line scripting, your program will ultimately mostly access the filesystem or other OS functionality via interop with the host platform (either Java or Node).


In that case, Clojure, ClojureScript, and Babashka ( https://github.com/babashka/ ) are perhaps the best set of options for you. These are three implementations of the same language.

Otherwise, and especially if you want to use C library interop, I'd suggest Racket. Racket is also relatively close to Clojure in philosophy.


I second that.


Common Lisp with Emacs and SLIME if you want to do "ordinary programming" and want to really "feel" what Lisp is all about.


I like this as a general idea for users (and potential users) of programming languages. See also Sergey Tihon's tireless efforts with F# Weekly: https://sergeytihon.com/category/f-weekly/.


The best programming book I have read so far is "On Lisp". Reading that book and coding Lisp for some time was the best time I spent programming. Unfortunately, the whole thing was purely academic.

Thanks for the effort and the information you put together.


Note that PG has made "On Lisp" available for free: http://www.paulgraham.com/onlisptext.html


It looks like someone else found and re-added the diagrams: http://www.lurklurk.org/onlisp/onlisp.html


A question: how does CLOS rate in terms of "functional purity"? Or does it matter? The biggest selling point of Haskellers seems to be that Haskell is a "pure" functional language. Are there practical problems caused by mutable data in CLOS?


I'm not sure I understand the question. You can write CLOS code in a pure-functional manner, or you can choose not to.

Generic-functions are, in general, more useful for some types of functional programming than the typical single-dispatch style used by most other languages.

Haskell and common lisp are at nearly opposite ends of the spectrum on many design decisions (Haskell is a Lazy, curried, typed language with lots of syntax and custom infix operators; Common Lisp is a (usually) eager, mostly[1] untyped language with little syntax and only prefix operators)

1: I say mostly because CL does have type declarations, but it's not defined how (or even if) the declarations are enforced in the CL standard. SBCL is arguably a typed implementation of CL, but even then, the fact that you can't in any useful manner[2] describe a type that is "A list of items of type X" in CL puts it in strong contrast to Haskell's rich type system.

2: You can use a "satisfies" type, but satisfies type declarations are ignored at compile time, so even in SBCL they are largely just syntax-sugar for assertions.
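
(To make footnote 2 concrete, a minimal sketch with made-up names:)

  (defun list-of-integers-p (x)
    (and (listp x) (every #'integerp x)))

  ;; Legal CL, but the SATISFIES part is opaque to the compiler: it can
  ;; be checked at runtime, yet the compiler can't reason about element
  ;; types at compile time the way Haskell's [Int] allows.
  (deftype list-of-integers ()
    '(and list (satisfies list-of-integers-p)))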


I think everybody agrees that immutable data structures are preferable to mutable ones if it is possible to get by without mutation in practice. In Haskell it seems difficult to mutate data, whereas in Lisp it is not.

If immutability is good then it would seem it is best if you can enforce immutability at the language level.

I'm just wondering whether mutability is a downside of Common Lisp?


CLOS doesn’t require the use of mutation and in fact has explicit options to make data read-only. So it’s a matter of personal discipline whether you mutate or not, though I’d say it’s entirely idiomatic to mutate.
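
(One common pattern, as a sketch: give slots readers but no writers, and "mutate" by constructing a new instance:)

  (defclass point ()
    ((x :initarg :x :reader point-x)   ; :reader only, no writer
     (y :initarg :y :reader point-y)))

  ;; Functional update: build a new point instead of mutating one.
  (defun move (p dx dy)
    (make-instance 'point
                   :x (+ (point-x p) dx)
                   :y (+ (point-y p) dy)))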


CLOS is about generics, so is about the type system, and is in practice pretty much orthogonal to purity. CLOS was originally implemented using macros, which are semantically pure.

CLisp supports but, unlike Scheme, does not really encourage programming based on functional purity. Only the CLisp implementations that support full tail recursion are OK choices for pure functional programming.


> CLOS was originally implemented using macros

it was also implemented originally with variables, functions, types, system specific data types, numbers, symbols and a bunch of other things, including itself.

> CLisp supports but

Note that CLISP is an implementation of Common Lisp. The language itself is abbreviated as CL, not CLISP. If one says CLISP, then one only means CLISP, the specific implementation. The name CLISP comes from the implementation being implemented using C.


There was a recent article with snippets in Haskell and Lisp: http://www.lispology.com/show?3FY9 "Haskell-like patterns in Lisp"


Enlightening, thanks


One thing that really bugs me about Common Lisp is that docs for the supposedly popular libraries are often broken because they relied on a 3rd party service, QuickLisp. A great example of this is that the "Clack Getting Started Guide" linked in the article isn't even referenced in the clack library repo. This guide then proceeds to link to the broken QuickLisp docs. It's madness, especially for people that WANT to help move this language forward and increase adoption.


Do you mean Quickdocs? That was a passion project of someone who decided to stop maintaining it. It’s open source and can regenerate docs to be up-to-date. The author simply decided he doesn’t want to maintain it anymore due to other life/health circumstances.

Quicklisp has nothing to do with documentation. That’s a distribution mechanism.


I wrote the Clack getting started guide. I fixed the broken link you pointed out.


I recently tried learning CL and was dismayed at the tooling. While Emacs + SLIME seems to be the recommended way to go, yielding a great experience, I could not realize that benefit because I could not get SLIME to work with Emacs, and couldn't get any help in resolving it.

This was after having it work on a previous MacBook. Then I changed laptops, tried installing it, and got stuck. That led me to give up on learning it at all.

I'd like to add better onboarding to the list of things to improve


The article does mention Portacle, which is very easy to install on MacOS. I'd recommend you try it to see if there's any trouble.


I agree that was an issue. Now the SLIMA plugin for Atom is very good and getting very close to SLIME. The ones for Sublime and VSCode are getting close too. Dandelion for Eclipse lets you get started very easily (but is less complete).


Maybe you just need a Linux laptop, like some old ThinkPad with Ubuntu. Install time is 15 minutes or so. Emacs is included by default.


Interestingly, not much about ML. Surprising for Lisp, which, if I understand correctly, has roots in AI...


The AI of the first Winter had nothing to do with ML, rather expert systems and symbolic processing.

Python is starting to look like the Lisp of the second Winter.

https://norvig.com/python-lisp.html


> The AI of the first Winter had nothing to do with ML, rather expert systems and symbolic processing

Between 1988 and 1992 I worked for a UK company participating in a multinational project to use Common Lisp to build an expert system building tool. After creating the thing, we worked with clients (internal and external) to try to solve real customer projects. Our conclusions matched others, and contributed to the AI winter:

* the rule-based expert systems were extremely brittle in their reasoning

* once you got beyond toy problems with small rule sets, you needed some programming skills in addition to the domain expertise that you were supposedly encoding

* you sometimes spotted the actual algorithms that could be coded in a conventional language rather than rules + inference engine.

We eventually abandoned the AI side and kept going with the underlying Lisp, and started to get real leverage in the company, rapidly becoming a prototyping / risk reduction group who delivered usable functionality.

[Edit] We were using Lisp processors embedded in Macintosh hardware, with outstanding (for the time) IDEs and thanks to the Mac interface, we could create some really slick apps for the end users. One of our Lisp systems that got rave reviews internally was a drag-and-drop network modelling tool that replaced a frightening mass of Fortran and data entry spreadsheets. No AI/ML at all, but it really improved the throughput of our network modelling group. As we were a comms company, this got positive reaction from senior management, offsetting the non-return on investment in the rule system.


Are you referring to the NuBus boards mentioned here [0], i.e. the TI MicroExplorer or the Symbolics MacIvory? Those look interesting!

https://en.m.wikipedia.org/wiki/NuBus


It was the TI MicroExplorer. We also used Symbolics machines. By the time I left we had switched to conventional Macs running Procyon Common Lisp [0], which had a stunningly good native IDE.

[0] http://www.edm2.com/index.php/Procyon_Common_Lisp


I don't think Python has the same issues. It was very popular as a scripting language long before it became good at numerical and data science work. It has always been free/open source and never required special hardware to run. Even if ML tanks, the impact to Python would be minimal.


Are we not heading for a third AI winter? The first was in the 1970s and the second in the 1980s and 1990s (which I experienced).


I don't think there was a post-70s winter. There was just not enough 'there' to over-hype; it was toy problems that did not even try to masquerade as real solutions other than in sci-fi and popular culture. Luminaries in the field definitely made a name by pumping out speculative paper after speculative paper, but IMHO there was more of an ember waiting to spark than there was a fire consuming all of its fuel.

By the late 80s and early 90s you had venture-backed companies, big institutional efforts, grifters who had honed their pitch in academic tenure-track positions before moving to richer waters, and the first real claims being made about just-around-the-corner deliverables that would change everything. Maybe I am jaded from experiencing that same winter, but from what I recall the prior decades were consumed more with people making broad claims to establish intellectual primacy than with claims about what could be delivered.


The term showed up in the 80s, but there was an earlier (70s) failure in AI. There were a lot of grand ideas and promises; people really did anticipate AI moving much faster in the 60s and early 70s than it did. Since a lot of ideas (in both AI and CS in general) were in their infancy, the fundamental limits of computers weren't yet fully recognized, and the hardware itself imposed limits (non-fundamental to the field) that weren't escaped until the 80s and 90s. See the chess AIs of the 90s finally "solving" the problem, which was technically conceivable (how to do it) in the 70s but totally unrealizable then (unless, maybe, you hooked every computer of the era together).


Doubt it's going to happen this time. The current crop of AI research is producing a lot of real results in all kinds of fields: voice transcription, recommender systems, translation, object detection -- all cases where neural networks have gone from research to widespread commercial deployment.

There might be some kind of contraction when people realize they're not going to get HAL 9000, or that you can't use deep learning to replace every white-collar employee. But the results this time are much more "real".

I don't think we will get another AI winter where the entire world completely gives up on the research area.


By the 1980s and 1990s, are you referring to the Japanese 5th Generation project?


No, there was a general failure of AI due to overhype in the 1980s, hitting hard in the early 1990s, with some marking a big chunk of the hype as starting after the (non-hype) papers about Digital's XCON system.

Some of it was due to non-delivery of hyped predictions; some was due to the sudden disappearance of the military funding that had helped spread the hype.

The 5th Generation project was part of the hype but, at least in the West, a lesser part of the winter.

The original AI winter was, IIRC, related to a UK report in the 1970s (the Lighthill report).

The second AI winter killed a lot of advancement in computing in general, combined with new waves of programmers coming from very limited environments who were never exposed to more advanced, capable techniques -- by the time they got to play with them, the AI winter was in full swing and those techniques were dismissed as "toys".

While expert systems in naive form were definitely not the answer to many generic problems, they are often part of the answer, and the number of times I hit cases of "this could be done as a rule system and be clearer", or even "why can't I depend on features of 1980s XCON in 2020??? It would have saved me so much money!", is simply depressing (see the sketch below).

(N.B. I suspect a significant part of why many modern shops don't even look toward features like XCON -- which ensured correct configuration in computer orders -- is that the norm became that the customer pays for anything like missing necessary pieces.)
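To make the rule-system point concrete, here is a toy sketch in plain Common Lisp of the kind of configuration check XCON automated (a hypothetical example, nothing like actual XCON syntax):

    ;; Hypothetical rule: an order with a disk drive but no
    ;; controller gets a controller added automatically.
    (defun complete-order (order)
      (if (and (member :disk-drive order)
               (not (member :disk-controller order)))
          (cons :disk-controller order)
          order))

    (complete-order '(:cpu :disk-drive))
    ;; => (:DISK-CONTROLLER :CPU :DISK-DRIVE)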


Interesting. Thanks for the detailed answer.


The focus of "AI" has shifted over time from symbolic processing (something Lisps excel at) to neural-network machine learning, which requires more brute-force power.


I think the issue isn't "brute-force power" (which Python doesn't really have compared to CL, if you look at the language itself), but rather the quality and completeness of numerical routines and GPU support. Matlab was very popular for early ML because it had the former.


Python is doing the actual lifting with C or Fortran, so I'd say brute force still applies quite a bit.


Same for the Lisp libraries, which rely on CUDA, OpenBLAS, etc.
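For the curious, this is roughly what such a binding looks like with CFFI -- a minimal sketch, assuming an installed OpenBLAS exposing the standard cblas_ddot entry point (the library name/path is an assumption for your platform):

    ;; Locate and load the shared library.
    (cffi:define-foreign-library openblas
      (:unix "libopenblas.so")
      (t (:default "libopenblas")))
    (cffi:use-foreign-library openblas)

    ;; C prototype: double cblas_ddot(int n, const double *x,
    ;;                                int incx, const double *y, int incy)
    (cffi:defcfun ("cblas_ddot" cblas-ddot) :double
      (n :int) (x :pointer) (incx :int) (y :pointer) (incy :int))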


I would say AI's roots and research are in Lisp, as 99% of all early research work was done in Lisp.

Lisp was the language people like Richard Stallman and John McCarthy (the inventor of Lisp) used at the MIT AI Laboratory:

https://en.wikipedia.org/wiki/MIT_Computer_Science_and_Artif...

Everybody used Lisp there, and young people who learned from the masters learned Lisp too.

But that was AI 1.0. Then came the AI winter, and the 2.0 spring arrived with GPUs, which were programmed in C dialects and gave incredible levels of raw power.

So, as high-level access to low-level C code, Python was picked by most people.


It would be great if Python had a PEP to automatically compile down to a binary executable and resolve all of its dependencies.
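For comparison, on the Lisp side SBCL does this with a single call; a minimal sketch, assuming you've defined a main function as the entry point:

    ;; Dumps the running image as a standalone executable that
    ;; calls #'main on startup; loaded dependencies are baked in.
    (sb-ext:save-lisp-and-die "myapp"
                              :toplevel #'main
                              :executable t)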


I just don't get the appeal of Lisps. They look so ugly and the syntax seems crazy.

((((((((((((((((( by the way


That much stacking of opening parentheses is never seen, only ))))))).

Typing )))... is easy: just hold down the ) key to repeat, and watch the cursor jump back and forth due to parenthesis matching. When it flicks back to the correct target, release, and backspace over any overshot parens.


Tastes differ. I'm a lot more averse to:

   end
  end
 end
end

or in JS:

   });
  });
 });
});


Yes, both are ugly, which is why Python is so great. Go is also nice.


Agreed. Though I'm mostly a Scheme and C hacker these days, I did Python professionally for about 13 years, and that is a very nice side effect of the whitespace thing. On the other hand, Python has no decent equivalent of Lisp's let, and complex or nested closures of anonymous functions are butt-ugly in Python, so now I'm happy with my parens.
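For instance, an expression-scoped binding plus a closure over it reads naturally in Common Lisp:

    ;; SCALE exists only within this expression; the lambda
    ;; closes over it.
    (let ((scale 10))
      (mapcar (lambda (x) (* scale x)) '(1 2 3)))
    ;; => (10 20 30)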


They look crazy to fresh eyes, but I've found it to be a non-issue the more Lisp code I write. With structural editors like parinfer, paredit, smartparens, etc., it becomes a whole lot better.


Practice quickly gets you past that point. You read Lisp by looking at the indentation. You write Lisp using tools that modify the code structurally, rather than on a character-by-character basis.
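For example, with paredit in Emacs a single command rebalances the structure for you:

    ;; Before, with point inside (bar):
    (foo (bar) baz)
    ;; After paredit-forward-slurp-sexp, baz is pulled into
    ;; the inner list and the parens stay balanced:
    (foo (bar baz))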


Use C macros long enough and you'll see one of the reasons why Lisp has merit.
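The classic example is C's #define SQUARE(x) ((x)*(x)), which evaluates its argument twice. A Lisp macro can introduce a fresh binding so the argument is evaluated exactly once -- a minimal sketch:

    ;; GENSYM yields a fresh, uncapturable variable name, so the
    ;; argument form is evaluated once and only once.
    (defmacro square (x)
      (let ((g (gensym)))
        `(let ((,g ,x))
           (* ,g ,g))))

    (let ((n 3))
      (square (incf n)))  ; => 16, and N is incremented once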



