I wanted to love Common Lisp, but as a Vim user every day was a struggle. One typically uses plugins (Slimv, Vlime) that contort buffers in bizarre ways in order to simulate the SLIME Emacs REPL; without them, you lose out on the interactive development experience that is so central to CL.
Being tied to either Emacs or an enterprise solution like LispWorks to get the full language experience was ultimately a non-starter. I'd love for someone to build an alternative CL development experience that could work in a wider range of text editors and IDEs.
There is a lot to learn from CL, but I think it can be hard to access for most developers.
As a vim user, I’ve been using doom emacs for the past 3.5 years and haven’t looked back yet. I really enjoy the Common Lisp experience while using doom as well.
I should also note that I’m looking forward to CLOG being ready as an in-browser IDE for Common Lisp soon. It’s a really neat open source tool for developers if you haven’t heard of it yet:
https://github.com/rabbibotton/clog
I appreciate the message you're conveying (I also switched from using vim for years to emacs for years, probably for good) but man, we have to stop attaching tools to our identities.
You're not only more than a vim user, you don't even use vim!
I would argue Evil mode (which Doom Emacs includes) is an implementation of Vim. Only instead of being an implementation in C, it's an implementation in Elisp.
I equate Vim not with the weird configuration language or source code of the original Vim project, but with the interaction language it uses – and Evil uses the same one.
Evil is more than "a vim compatibility layer" -- it is a reimplementation of Vim closer in spirit to nvim than anything else.
Some tools are closer to being identities than others. Being a 'vim user' is more like being a Dvorak user, it lives in muscle memory. 'vim user' is overloaded here, it colloquially means "user of the vim text editing language" rather than (just) "user of the text editor, vim".
I'm about intermediate at wielding vim, and the VSCode plugin implements enough of the language for me. But I'm fluent enough that being deprived of it is unpleasant, and I won't willingly edit text using a program which doesn't implement a decent vim mode.
It's not like computer languages or applications: the right number of text editing command-languages to be good at, like the right number of keyboard layouts, is one. Typing on a keyboard, and editing text, is my job. I don't want to waste time and productivity on learning several ways to do it, these are means to an end.
This being HN, someone might come up with some valid reasons to master more than one keyboard layout, particularly a "weird" one while retaining fluency in a standard one. Granted, call it a concession to an imperfect world.
Funnily enough the main keyboard I use is a split, columnar, Dvorak board. I also use a lot of tools that people like to attach to their identity: SwayWM (tiling wm user), Lisp and Scheme (Lisper and Schemer), GNU/Linux... I do not see these at all as a part of my identity. I think making them a part of my identity creates an aversion to change, and therefore to progress.
Historically I have even used distros that people heavily identify with and call "end game distros", like Arch Linux, Gentoo, and NixOS. The former two I used for years each. I eventually have landed on GNU Guix, since for me it has worked better than anything else I've tried for my needs.
I really mean it when I say we should identify with our tools less. You may consciously identify with tools that you want to solidify as a part of yourself even with the potential aversion to progress which that may include. I can't think of any tools I would do that for though.
Think of vim-navigation as an idea without conceptual attachment to any concrete editor. Vim-navigation can be used even without any editor at all. Some people (like me) use vim-navigation in their browsers, terminals and their WMs. I control my music, windows, apps, etc., with h/j/k/l. My Neovim config is minimal because I rarely use it. Yet in Emacs, I use it extensively. Perhaps even more broadly than when I wasn't an Emacs user.
When people say "I'm a Vim user," they usually mean the concrete editor Vim/Gvim/Neovim because it is really difficult to replicate Vim outside of Vim. Every single plugin, editor, or IDE is just an emulation, with rare exceptions like Helix (which from the get-go was designed to have good support for it) and Emacs, which, due to its nature of being malleable and super-hackable, allows very nice integration of Vim features. Seasoned vimmers know that there's pretty much no vimming unless you're using Vim/Neovim https://x.com/garybernhardt/status/902956444596617216?lang=e... The caveat there is that they never tried Emacs with Evil. Evil is surprisingly very good. Probably the best implementation of vim.
And when you combine these two ideas - Vim navigation and Lisp, you get something truly amazing. Anyone who fervently detests one concrete implementation in favor of the other probably has a shallow understanding of either of these ideas. Write enough Lisp, and instead of getting angry at people trying to "vimify" your beloved Emacs, you'd be saying: "Cool, another testament to how awesome Emacs and Lisp are..."; learn enough Vim, and you would want to have it everywhere, not just in your favorite editor.
Common Lisp in Doom uses Sly. If you want to stick to Slime and prefer vim ergonomics to those of Emacs there is also Spacemacs, which I just learned even has its own Wikipedia page: https://en.m.wikipedia.org/wiki/Spacemacs
Take your time and choose one. The world is not limited to Vim and Emacs. However, I prefer Emacs these days (I happily switched from Vim a decade ago).
The author of “Let Over Lambda” dislikes Emacs and does not use it. What’s more, he didn’t use Slimv or anything like it or even Vim at all. He used a more basic vi, maybe nvi.
So perhaps the interactive experience is not essential. You can just edit and compile like you would in C. That's not what I do (I use Slime), but it is possible. Also, a lot of the interactivity is required by the standard to be built in to your Lisp's REPL, so you can do quite a bit if your REPL isn't primitive. SBCL doesn't even have readline, but you can use rlwrap.
> The author of “Let Over Lambda” dislikes Emacs and does not use it.
I have no idea what Hoyte likes to type in, but why does it matter what text editor he uses? Einstein didn't have any computer, not even a calculator. Do we have to use paper and pencil for all our calculations just because Einstein did? Our physics teacher in gymnasium forced us for 4 years to do all calculations on tests by hand, to four decimal places, with that exact excuse: Einstein didn't have a mini calc. None of us became Nobel prize winners in physics :).
> Also, a lot of the interactivity is required by the standard to be built in to your Lisp’s REPL, so you can do quite a bit if your REPL isn’t primitive.
Mnjah; not so much really. Using at least SBCL from the plain command line really sucks. If you mistype something you have to retype everything, no history, etc.
> SBCL doesn’t even have readline
If you are on some *nix OS, you can get a long way by just using SBCL with built-in sb-aclrepl + linedit. Aclrepl gives you "command-like" stuff, similar to ":" in Vi (or M-x in Emacs), and linedit adds cursor motion, history and some basic completion. I would still not type entire programs in repl, but for running and testing the code it is a basic survival kit. For me personally it is enough.
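For reference, a rough sketch of the relevant part of an ~/.sbclrc that wires this up. It assumes Quicklisp is already loaded earlier in the file, and the linedit install call is from my reading of linedit's README, so double-check the exact options there:

    (when (interactive-stream-p *terminal-io*)
      ;; Allegro-style colon commands at the SBCL prompt (built-in contrib).
      (require :sb-aclrepl)
      ;; Line editing, history and basic completion from the linedit library.
      (ql:quickload "linedit" :silent t)
      (funcall (intern "INSTALL-REPL" :linedit)))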
There is also cl-repl package which gives you native bindings and some extras if you want to go all-in readline from within the lisp itself.
It compares pretty well: it has all the essential features, and some more. It only lacks a couple keybindings in my eyes (shortcut to call the function at point in the REPL, shortcut to "change-package" from a lisp file). It has:
- the interactive debugger
- the Lisp REPL
- so we can have a full-featured Lisp REPL on the terminal with: alias ilem='lem --eval "(lem-lisp-mode:start-lisp-repl t)"'
- the same compilation, evaluation, code navigation keybindings and error reporting
some more:
- when we evaluate an expression, it will show a loading spinner during that time and then the result in an overlay.
- a built-in LSP client that is known to work with other languages (and syntax highlighting for many languages)
- some tools, still more rudimentary than Emacs: directory mode, find file in project, project tree side view, Git tool (shows status, does interactive rebase)…
- it is in the process of having co-editing in Lem itself. The developer(s) are beta-testing a collaborative web-based version of Lem: https://github.com/sponsors/cxxxr
This works real well. I honestly don't see the point of slime, it feels like it was written so that people didn't have to use the terminal, but the terminal works just fine for me.
I even wrote a :make plugin for it, which works well enough:
Understandable, but it also seems like there is a disconnect between the philosophy of vi and the philosophy of Lisp. Vi is designed to be purely a text editor, not an environment for building text-based applications. Common Lisp, being so inherently interactive, seems to require a dynamic, interactive text editing environment. Like Emacs.
But that is old. Vim can easily interact with a Lisp in real time. It isn't, and probably never will be, the same as Emacs, which is a Lisp environment. Still, it is good enough.
Neither GNU Emacs nor XEmacs can behave like ZMACS on Lisp machines: there's just as much disconnect as between (n)Vim and a Lisp image. It's always a remote, RPC-like relation.
In fact, I'd say that a certain ancient Erlang mode for Emacs resulted in a closer relationship between Emacs and Erlang, as it made Emacs into a process in an OTP cluster.
Back in my CL days I loved Slimv. It had its warts, but was miles beyond having a shell with a REPL open. I imagine it has only gotten better since (I haven't touched lisp in almost 10 years now). That said, I never gave Emacs a serious shake so I was probably missing out quite a lot on what a good interactive experience could be.
I really like it for other lisps, but I haven’t used it for Common Lisp. I wouldn’t say it “contorts buffers in a bizarre way,” although in my experience, different Vim users have their own take on that.
I would be very interested to read about the limits of conjure. In my mind it is a fabulous tool, with vim-sexp it is very productive to write clojure code with and a pleasure to use.
I had similar issues trying to get used to Emacs when I was playing with CL. Add to that the fact that I tend to use Windows at least half of the time, and I wanted an alternative.
Emacs is an ethereal substance. You cannot "use" it. Just like with magic - there are no users of Emacs; you can be a skilled practitioner or a beginner, but you don't "use" magic - you apply it to create or to destroy. To slay dragons, to amaze and terrify the uninitiated.
Seriously though, Emacs is hard. Especially these days, when the constant flow of distractions is intense. People these days do not have the patience for learning anything that takes them longer than an hour to grok. They'd rather duct tape things with "left-pad" solutions and call it "it just works"™, and if [insert fav editor here] doesn't support something, they lose any incentive to even try things.
I think I should've tried harder to emphasize the sarcasm. Emacs is an incredibly powerful tool. And honestly, I don't understand people who try to learn it only to complain shortly after how hard it is compared to VSCode, and that they don't want to "waste their time." Dude, you chose to be in this field. A text editor is the single most important, most consequential tool of a programmer's professional lifetime. LIFETIME!!! Why the heck not spend some of that time learning it?
You still need to tinker all the time - that's part of our jobs. You need to configure and use various tools, for note-taking, Zettelkasten, GTD, TODOs, Pomodoro, habits, Anki cards, etc. You need to tinker with your terminal, write various scripts, in awk, bash, perl, python, etc. You need to manage your Postman collections, nix files, dotfiles, PDF annotations, gpg and ssh keys, shell history, browser history, browser settings, git settings, package management, jupyter notebooks, etc.
Now, imagine if you could really learn just a single tool that could help you manage all that - one integrated environment where you can take notes, organize tasks, manage your schedule, write code and annotations, write scientific papers, automate workflows, handle version control and secure communications. Some versatile, extensible, and powerful editor that can be customized to fit all your needs, where you turn endless tinkering into a streamlined, cohesive experience, making productivity feel natural and effortless. Where, if you embraced it, you'd discover how it can transform your workflow into a harmonious symphony of efficiency.
That tool does exist, but of course, why would you screw around with it endlessly? After all, you have to write code instead...
Emacs is essentially a Lisp-machine. Yes, I understand that this deviates from the traditional notion of a "Lisp-machine" since it's not hardware-based; nonetheless, it still fits the bill.
If you're eager to learn Lisp (not just Common Lisp, but any dialect), why not take advantage of a Lisp machine for that purpose? When you say, "I don't want to learn Emacs," you're basically saying, "I don't want to learn Lisp." You need to know Lisp to grok Emacs, you need to write it, you need to embrace it; otherwise, you're not using Emacs, you're just a passenger.
I wrote several projects in CL and overall disagree. And that's ok!
I prefer Racket's batteries-included library system, nice local docs, IDE from this century (albeit with basic editing capabilities), compilation options, etc., for my personal projects.
I certainly understand the appeal of CL and even use LispWorks in embedded work. But some tools have better ergonomics for me depending on time/effort/whatever.
Given that you said you had lots of friction with the IDE (equating development in Common Lisp with using Emacs), I'm curious what Common Lisp projects you managed to complete in that state. I also use Racket, and I think it is a fine language, but for my use cases it doesn't really come close to the experience with Common Lisp. Still, a lot of interesting projects in that community. Also, curiously, you previously said you totally gave up on Common Lisp, but now you say that you use LispWorks, and for embedded work of all things. How interesting! I would love it if you shared what you do.
One thing I can do in Common Lisp which I can't do (almost) anywhere else (but including Racket) is proper REPL-based interactive development. Specifically, I can update state without needing to recompile my program. For my use case this is unmatched for prototyping.
As regards low-level stuff, this is very implementation specific (which is OK for low-level programming). Specifically, SBCL allows me to write much faster code than Racket. In particular, there does not need to be a speed cap, or even a penalty, for using a high-level language.
Yeah, I read this article with interest up until I got to the part where he admits you're gonna have to use either vim or emacs. I have no desire to subject myself to the pain of either of those editors, so I guess CL isn't for me.
Curious why lisp's REPL is frequently touted as an incredible language feature e.g.:
> Support for this style of interactive development doesn't just come from some fancy editor plugins — it's baked into the bones of the language.
> So how do you actually get this wonderful interactive experience?
I've only ever programmed in interpreted languages (R, ruby), so I can't really understand how or why a REPL is so great since to me a (console|REPL|interpreter) is a standard feature (nothing extraordinary). Perhaps because I haven't had to work in a language without the convenient and immediate ability to execute arbitrary user inputs (as a REPL or interpreter can), for example a compiled language.
Had you worked in Fortran, Pascal or Algol or C, and been forced to think linearly toward a deck of cards and a job queue, and to do all the marshalling of the IO into that job queue through some horrendous JCL-like syntax, then the experience of being in a REPL might be more momentous. Instead you're a fish swimming in clear water, not understanding why water is so unbelievably amazing, because you've never been without it.
Of course LISPians want to make their REPL very meta compared to any other REPL, and it is: its degree of self-introspection, and the potential to modify the REPL itself, make it a REPL-on-steroids experience. But just being a REPL is pretty damn amazing if you had to uplift from the write-compile-assemble-marshall-coordinate-queue-run-cleanup world before.
(I did learn on punched cards in the 70s. Bugs hurt at a 20 min production-to-run cycle, some people were a drive away from the batch queue, and a 1 day turnaround was good.)
The REPL can be much more than a prompt where you execute code. In fact, with many editors you can evaluate blocks of Lisp code directly. Networked REPLs are amazing, especially when you're fixing code on machines that are difficult to physically access, like in space. The ability to query the current value of a symbol (function, macro, variable, record) and replace it at runtime without clobbering the runtime state is a great boon to the development process.
Also Common Lisp and a few Schemes allow you to interpret and compile a symbol's value (to speed up execution).
more like "batch compiled language". In many Lisps one would incrementally compile code. Common Lisp has already three functions built-in: LOAD, COMPILE and COMPILE-FILE. How they work is depending on the implementation, but LOAD typically can load source and compiled files, COMPILE compiles a single function in memory, and COMPILE-FILE compiles a source file to a compiled file (native machine code or some byte-code). SBCL for example compiles with these functions every thing to machine code, which it does anyway, by default.
One of the things which make this useable for incremental work is the symbol table, which holds all symbols and its values&functions. One can change the function of a symbol at runtime and global function calls go through this symbol table (-> late binding). So an update to a function has effect on its next call.
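A minimal sketch of that effect (the names are made up for illustration):

    ;; WELCOME calls GREET through the symbol GREET, i.e. late-bound.
    (defun greet (name) (format nil "Hello, ~a" name))
    (defun welcome (name) (greet name))

    (welcome "world")   ; => "Hello, world"

    ;; Redefine GREET -- via DEFUN at the REPL, COMPILE, or by reloading a
    ;; COMPILE-FILE'd file with LOAD -- and WELCOME picks it up on its next call.
    (defun greet (name) (format nil "Welcome aboard, ~a!" name))
    (welcome "world")   ; => "Welcome aboard, world!"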
When using Python, the REPL does not feel remotely integral to the experience. For example, my Django project, when an exception occurs in debug mode, has a nice page with a backtrace et cetera. However, I cannot resume, or poke around using a REPL, or even view details of local variables. I feel like if Django was implemented in Common Lisp this would not be a problem at all, if not for technical reasons then for cultural reasons.
If you install the django-extensions package and use runserver_plus instead of runserver, when you hit an exception you get a live REPL in your browser where you can poke around and inspect at will. It's not CL-level power, but it's mighty useful.
For non-Django code, the IPython REPL with the %pdb magic enabled drops you into an ipdb debugger on an exception. It doesn't allow resuming, but it is still very useful.
The difference, as I understand it, is that the REPL is the running Lisp, and all of its code. You can inspect or redefine any function inside of Lisp itself, live, no matter how integral, in the same way you input any of your own code.
The interactive mode of interpreted languages is more of a sandbox, from that perspective.
The REPL is often also the default debugger. On error one gets a debug REPL at the point of the error (no unwinding), one debug level deeper. The debug REPL is basically a normal REPL, but with some debug features (stack movement, ...) and the ability to call predefined restarts in the code.
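For instance, a predefined restart in ordinary code (a small sketch, nothing implementation-specific):

    ;; PARSE-INTEGER signals an error on junk input; RESTART-CASE wraps it
    ;; with a named way to recover.
    (defun config-int (string)
      (restart-case (parse-integer string)
        (use-default ()
          :report "Use the default value 0 instead."
          0)))

    ;; (config-int "oops") lands you in the debug REPL at the error, one level
    ;; deeper.  From there you can inspect the stack, evaluate expressions, and
    ;; invoke the USE-DEFAULT restart to resume with 0.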
> Curious why lisp's REPL is frequently touted as an incredible language feature e.g.
Another language that's great in that regard is Smalltalk/Pharo. It's the Minecraft of programming languages. Your program starts as an IDE and you morph it into what you want. It feels like building a car while it's running.
Yes, touting the REPL of Lisps is a bit misleading. It's better to say that Lisps and Smalltalk(s) are deeply interactive programming experiences, in ways that most languages aren't yet.
It's not that misleading, though. True, both provide various interactive development environments, with the Smalltalk versions very much GUI-tool based (with tools like the System Browser).
Typically when one starts a Lisp development system, it will always start some kind of visible REPL. Either it runs it in a Terminal (most UNIX Lisps have a terminal REPL), runs it as a tool in its own GUI-based IDE (MCL, CCL, Allegro CL, LispWorks, Corman Lisp, ..., Symbolics Genera, CL + McCLIM) or runs the REPL frontend in a connected IDE (-> SLIME + GNU Emacs is a prominent IDE for CL, earlier this was done using subprocesses called "inferior Lisps" like in ILISP+GNU Emacs).
The conversational REPL interface (typing expressions, evaluating the expression and getting a response) is widely used.
Smalltalk has the "Workspace" or a Playground (-> Pharo) as similar tools.
Uhh, yes, I'm familiar. I write Clojure for a living.
What's misleading is touting the acronym "REPL" instead of touting why it's important. Everyone who's used to the weaker "REPLs" of other languages doesn't understand what the big deal is, because they think they already know, so it's important to move past the acronym when explaining its value.
I can think of a REPL use case I don't know how to do easily in another language. Doable yes, but easily and conveniently, no.
I want to build a little tool for the command line: meaning it should be runnable at any time I pull down my terminal, easily started and stopped. In it, I want to listen to a Telegram room. Basically the idea is, in this Telegram room, things are announced, and I want to look up some of them. It could be anything: sneakers, toys, concert tickets. But let's say crypto projects, as there is a running stream of them all day. (Doesn't have to be if you don't like crypto but let's go with it for now).
So I'm printing this list of crypto projects to the screen. Every so often I want to hover over one and get more information about it. For example, pretend there's a project called PGPalooza, one of many that's being printed in this list that's spilling out in real time on my terminal screen. I scroll up to PGPalooza and run a command on it, and then it runs some commands and opens up a few browser windows that I can look at to learn more about it.
This is easy to do in Lisp, in an intuitive natural way, that I don't think I could do with the same ease in other languages. You could think of this as demonstrating the benefits of interactivity in action. I can scroll up and modify the PGPalooza text to read lookup("PGPalooza") and then press Ctrl-E at the end to return one type of data, and financials("PGPalooza") to see another. It's also very easy to modularly add commands to it too, to expand on this.
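To make that concrete, here is a hypothetical sketch of the kind of helpers I mean. LOOKUP and FINANCIALS are names I made up for this example, and the browser command will differ per OS:

    (defun lookup (project)
      "Open a couple of browser tabs with background info on PROJECT."
      (dolist (url (list (format nil "https://duckduckgo.com/?q=~a" project)
                         (format nil "https://duckduckgo.com/?q=~a+crypto" project)))
        ;; "xdg-open" on Linux; use "open" on macOS.
        (uiop:run-program (list "xdg-open" url))))

    (defun financials (project)
      "Placeholder: print whatever numbers you care about for PROJECT."
      (format t "~&(no data source wired up yet for ~a)~%" project))

    ;; In the listener I scroll up, wrap a printed name in a call, e.g.
    ;; (lookup "PGPalooza"), and evaluate it in place.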
What most answers here are missing is the unique difference between a LISP REPL and an interpreter console/shell kind of thing, which is that it fits seamlessly into the language's design. I often see JavaScript devs quite unimpressed by a REPL - "I have a console, too". And while JavaScript admittedly comes close, it's just not the same.
What makes LISP REPLs special is precisely that there is nothing special about them.
There are no extra ceremonies, caveats, or layers that turn a LISP into a REPL. It's conceptually a four-liner, merely replacing the compiler's input of parsing text files with reading user input, while providing access to a modified environment after each step. This is possible not because of the REPL, but because of how LISP is designed. It understands going into packages, importing stuff, and inspecting values all naturally, not via some "interactive" mode or layer. You are speaking to the compiler directly and it understands your language. For this reason it is much more reliable and natural to embed a REPL in your development process in LISP than in other languages.
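The proverbial sketch, ignoring error handling and printing niceties:

    ;; Read an expression, evaluate it, print the result, repeat.
    (loop
      (format t "~&> ")
      (print (eval (read))))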
> you can even modify library code, maybe even the runtime
this would be difficult (and not recommended) but not strictly impossible in languages other than lisp (e.g. read library files, edit them, re-write them, re-load the library).
But to play devil's advocate, isn't the language feature that makes this easy in lisp its homoiconicity, as opposed to its REPL?
* The environment for top-level definitions in the file is almost identical to REPL. I.e. there's not much special about a file, thus you don't need a context of a file to submit a new definition.
* Symbols provide an indirection mechanism which isn't tied to a specific location.
* CLOS actually put a lot of effort into doing something reasonable on redefinition.
Python's module system is based on dicts. While it would seem like it would be easy to manipulate at runtime, a module's dicts are constructed by module-level import statements, and re-constructing the right state of all modules after an update might be tricky.
OTOH if you update a symbol's function in CL, every user of that symbol will pick up the new definition. It's also more performant than dicts: the caller has a pointer to a symbol and needs to do just one indirection; there's no lookup at runtime.
Another feature of CL, specifically CLOS, is that it actually has a formal protocol for changing classes.
A formal protocol you can tap into. While hardly omniscient, I don’t know of another language that does offer this. I don’t even think Smalltalk has this. (Smalltalk has a fundamental primitive related to this called “become:”, which can be used for this purpose, but that’s less formal than what CLOS provides.)
What does this mean? It means that you can change the structure of classes, AND their instances, in a live, running system. Not just their structure, but how the system transmutes from the old structure to the new structure. How the conversion is done.
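A sketch of that protocol, adapted from the standard's own example: a class stored as x/y is redefined to store polar coordinates, and the protocol lets you say how old instances get converted.

    (defclass point ()
      ((x :initarg :x) (y :initarg :y)))

    (defvar *p* (make-instance 'point :x 3.0 :y 4.0))

    ;; How should existing instances move from the old layout to the new one?
    ;; PLIST carries the values of the discarded slots.
    (defmethod update-instance-for-redefined-class :before
        ((p point) added discarded plist &key)
      (let ((x (getf plist 'x))
            (y (getf plist 'y)))
        (setf (slot-value p 'rho)   (sqrt (+ (* x x) (* y y)))
              (slot-value p 'theta) (atan y x))))

    ;; Redefine the class; *P* is converted lazily, on its next access.
    (defclass point ()
      ((rho :initform 0) (theta :initform 0)))

    (slot-value *p* 'rho)   ; => 5.0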
What does this have to do with the REPL? It’s part and parcel of the kind of environment and functionality of the system that is exposed by the REPL.
The REPL is not just a console that you can type into, and use backspace, and what not. It’s the door to the very rich world underlying the system.
And this is the key point. In CL, the REPL is not an afterthought. It’s not an add-on. It’s a core competency. Much of this no doubt came from the Lisp Machine experience, where all you had was a running image that, like a surgeon, you had your arms elbow deep into. A system where you could not trivially just stop and restart for every little change. A system where you had to have the ability to change the tires on a running vehicle.
The vast majority of modern REPL environments don’t have that burden, so when things get tough, they can punt. “Eh, just restart and do over.”
It's absolutely fair to consider whether that quality is actually still germane in the modern era. It's hard to imagine a scenario where you might make a change like this on a running server, especially in our age of "cattle, not pets".
But the legacy is still there from that past time. A time when not only were they pets, they were coddled and spoiled. So the folks back then had to think this stuff through.
That is why I always give Lisp environments and Smalltalk as examples when someone comes up with the common excuse that Python cannot have good JITs because it is too dynamic.
If AWS had an R&D Labs division with zero expectation of profit I think you could sell them an idea of Elastic Lisp Machine where a deployment is not a restart but whatever you need to do in the lisp world. (Patch an image…? No idea but sounds cool!)
The REPL is not an add-on in the sense that it comes with the implementation, but it is in the sense that if it were not provided, it could be implemented using only things available standardly in a Common Lisp implementation.
The language features that make this possible are a) late-binding and b) resident development tools (compiler, debugger, interpreter, inspector, trace, ... are all included in the runtime).
Not all kinds of late binding are good. Python uses dicts, which have more runtime overhead than symbols and also make it hard to develop code interactively. Consider
foo.py
def foo1(x): return x+1
bar.py
from foo import foo1
def bar1(x): return foo1(x) * 2
I can modify foo.foo1 in REPL:
foo.foo1 = lambda x: x+3
but bar.bar1 will still use the old definition!
bar1 looks up foo1 via a dict which is constructed when bar is imported, so it won't be updated unless I reload all the files.
So dicts are just wrong for this; there are too many of them. Symbols are basically perfect for this, as they provide an easily manageable point of indirection and are also much more performant.
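For contrast, the Common Lisp counterpart of the example above (same made-up names):

    ;; "foo.lisp"
    (defun foo1 (x) (+ x 1))

    ;; "bar.lisp"
    (defun bar1 (x) (* (foo1 x) 2))

    ;; At the REPL, redefine FOO1 ...
    (defun foo1 (x) (+ x 3))

    ;; ... and BAR1 sees it immediately: the call goes through the symbol
    ;; FOO1, not through a dict captured at import time.
    (bar1 1)   ; => 8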
Not really. The runtime plus your code is a virtual machine. It's similar to a Docker container in that regard. The REPL is your shell, and you can poke around everything. Another language that has that is Smalltalk/Pharo. In comparison, the Python REPL feels like the OS part was burned into ROM.
An important thing to remember is that both R and Ruby directly and wholly imported the REPL approach (not just having an interpreter you can type at, like Python) from Lisp/Common Lisp - with R arguably fitting as a Lisp-family language, if with different syntax (Ruby's other inspirations were Smalltalk, obviously, and, less known, being a Perl replacement).
It's blindingly obvious to me that Ruby's basically "put smalltalk and perl in a blender with a tab of good acid" (and I mean that as a compliment).
The main reasons I bounced off ruby and stayed with perl were
- variables popping into existence on first assignment
- lack of block based scoping (i.e. perl's 'my', or ES6 'let')
- the OO model is ... restrictive ... and trickier to bend [1]
- less amenable to syntax plugins (e.g. you can implement async/await as a library in perl, not so much in other things)
I'd note that Elixir kinda manages to steal good parts from both ruby -and- perl and I find it much more comfortable.
Also that other than the two scoping related ones, my complaint mostly boils down to "can't bend the language in insane ways" and it's fair to think that letting people do that is more trouble than it's worth.
(i.e. please take this comment in a spirit of being why -I- ended up still preferring perl, rather than a claim that anybody else should ... but after 'let' in lisp and 'my' in perl, I'm spoiled for any scoping model that doesn't do that ;)
[1] single inheritance would be survivable if it had based itself on a trait-based smalltalk but modules are ... not really enough, and the achievable aesthetics when building stuff on top yourself aren't quite my thing ... this is my fuzziest claim and you're welcome to file it under 'personal taste'
There's a lot of language semantics which make the REPL convenient to write interesting amounts of code in, Gilad Bracha details some in <https://blog.bracha.org/primordialsoup.html?snapshot=Amplefo...> (with the message reflected in the medium, embedding the Newspeak IDE in the document for examples). It's unrelated to compiling or interpreting; many Common Lisp implementations will compile code from the REPL too.
I really like Common Lisp! Unfortunately, I can’t really find a use case for it. I usually develop web services and there Elixir and Erlang are winners in my book. For command line tools I use Rust or C. Can someone recommend types of applications where Common Lisp shines? I guess one can always write the business logic in CL and expose it via Unix socket.
As far as use cases are concerned, I think of Common Lisp as a higher level C with much better ergonomics. My particular use case is prototyping low level computations for performance tuning. Common Lisp excels at prototyping. I guess if you need to implement something for which there already exist 20 other libraries there is not much point, which is also the reason I'm not a huge fan of Rewrite in Rust movement.
CL is very general purpose. Where it shines IMO is: the development experience (interactive, good compile-time type warnings with SBCL), the stability, the delivery of applications (compile to a self-contained binary), the overall feature set.
I use it for web back-ends and little scripts. Everything's stable, my Sentry dashboard is empty. [edit: sorry, just had a stupid production error in my Python app] Some friends love Elixir and do great things with it, but I'm in your reverse situation: I don't have enough incentives to switch. Elixir's REPL is good but it's only a REPL, not an image-based experience. Its Emacs modes are rudimentary. Phoenix looks great, but it's very opinionated (and very much code-generator oriented, strange). I can't copy-paste a binary to my servers (or other users). The Torch admin dashboard looks cool, but I just made my own in a week-end (and again, I'm afraid by code generation). LiveView looks awesome but it's its own little world and I might approach it with HTMX and HTMX websockets.
TLDR; I can do both web apps and CLI tools with CL and switching must be worth it too.
I mean code generation that writes code to files. I'm afraid they bit rot fast and make upgrading harder. Macros generate code on the fly, not in my source files.
I tend to regard code generation as being something that should -feel- like a macro to use, just one that optionally materialises the results so you can ship the code without a dependency on the generation logic.
I am very conscious that most code generation in the wild is ... Not That.
Unlike Scheme's, CL macros are glorified template-based code generators. If you're not careful to use facilities like gensym and packages, it's easy to accidentally capture already-bound names at the call site.
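A classic sketch of the capture problem and the gensym fix:

    ;; Naive version: the expansion binds the literal name RESULT, so any
    ;; RESULT at the call site gets shadowed.
    (defmacro my-or2-bad (a b)
      `(let ((result ,a))
         (if result result ,b)))

    (let ((result 5))
      (my-or2-bad nil result))   ; => NIL, not 5 -- RESULT was captured

    ;; With GENSYM the temporary can't collide with call-site names.
    (defmacro my-or2 (a b)
      (let ((tmp (gensym "OR")))
        `(let ((,tmp ,a))
           (if ,tmp ,tmp ,b))))

    (let ((result 5))
      (my-or2 nil result))       ; => 5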
Thank you! I read some time ago that compilation to binaries is not really a solved problem. Also the ecosystem was very lean. At the same time you need to include a dependency for everything because the standard library is also lean.
You’re mostly right about Elixir but Phoenix is not opinionated, at all. It’s code-generation oriented only if you want to use the generators. I don’t use them.
ack, thanks. I still want to see what I'm missing, so I'll keep trying.
Compilation to binaries works fine. An SBCL binary will weigh about 35MB (compressed; 120MB uncompressed). This includes everything (compiler, debugger, etc.). A bigger app with dozens of deps doesn't grow too fast, say about 40MB for a web app. I think that's in the ballpark of a growing Go app. LispWorks allows you to shrink the binary size to about 4MB (but then you can't connect to it remotely and live-reload the running image). Start-up times are very fast.
It's true you need to choose and include a dependency for common tasks: HTTP client, JSON and CSV… As for "lean", well, it depends. See awesome-cl. CL's ecosystem is richer for some tasks than other emerging and trendy languages, poorer in some areas. Interfacing with Java or Python is possible (ABCL and LispWorks, py4cl2).
In fact, some of the presently annoying warts can be derived from late decisions on the theme of "we already have a bloated standard, let's not add more".
Over time a lot of them became de facto extensions (MOP, gray streams), or there are de facto extensions that fill the same niches in a different way (ASDF).
This is an odd question. Web services are great with CL. But you already have languages you prefer. So yes, you must be interested in CL to want to replace one of those.
It's a great article. Since 2018 though, we have more tools and resources so we can enhance it. (I copy/edit a comment of mine from last thread)
## Pick an Editor
The article is right that you can start with anything. Just `load` your .lisp file in the REPL. But even in Vim, Sublime Text, Atom/Pulsar, VSCode, the Jetbrains suite or Jupyter notebooks, you can get pretty good to very good support. See https://lispcookbook.github.io/cl-cookbook/editor-support.ht...
> if anyone is interested in making a Common Lisp LSP language server, I think it would be a hugely useful contribution to the community.
He doesn't mention this list, what a shame: https://github.com/CodyReichert/awesome-cl => the CL ecosystem is probably bigger than you thought. Sincerely, only recently, great packages appeared: CLOG, sento (actors concurrency), 40ants-doc, official CL support on OVH through Platform.sh, great editor add-ons (Slite test runner, Slime-star modules…), Coalton 1.0 (Haskell-like ML on top of CL), April v1.0 (APL in CL), a Qt 5 "library" (still hard to install), many more… (Clingon CLI args parser, Lish, a Lisp Shell in the making, the Consfigurator deployment service, generic-cl)…
His list is OK, I'd pick another HTTP client (Dexador instead of Drakma) and another JSON library (jzon or shasht), new ones since 2018 too, but that's a detail.
While I'm at it, my last shameless plug: after my tutorials written for the Cookbook and my blog, I wanted to do more. Explain, structure, demo real-world Common Lisp. I'm creating this course (there are some free videos): https://www.udemy.com/course/common-lisp-programming/?coupon... You'll learn CL efficiently and support an active Lisper.
## Web Development
See the Cookbook, and the awesome list. We have many libraries, you still have to code for things taken for granted in other big frameworks. I have some articles on my blog. I have a working Django-like DB admin dashboard, I have to finish the remaining 20%…
We got a dynamic library delivery tool for SBCL (sbcl-librarian). There's a rumor from the European Lisp Symposium that a feature beginning with "co" and ending in "routine" is coming to SBCL.
Allegro CL (proprietary) got a new version running in the browser…
I love the concept of the REPL and live editing and such, but my everyday work experience using a debugger sounds like it’s about 80% of the way there?
I can break, inspect, run expressions and generally figure out what’s going on by experimenting. Maybe not edit code directly, but I honestly never feel the need. Looking and poking around is in the vast majority of cases all I need.
I suggest: do you find yourself often quitting the debugger and re-running the same test, the same task from the start? You can avoid that in CL. Just leave the debugger open, go to the buggy line ("v" on a stacktrace in Slime), fix the function, re-compile it on the fly (C-c C-c in Slime), come back to the debugger, resume the program from the stackframe you want, now it passes.
If the first steps were taking 10 minutes, you just saved yourself some time.
You don't need special needs for that; it's a fast workflow I use every day.
Another important point: we debug everything while keeping the current state in memory (in the image). We don't have to re-create our test data. All the program state is kept live, until we restart the image.
In a lot of cases... probably not in my experience. I think it can make a big difference in some cases though.
What REPL were you using though? Something to think about is that the Lisp and Smalltalk REPLs are a lot more powerful than Python in that I think when an expression fails, the program itself can be edited and it will start where it left off. You can't do that with Python.
Edit: the article does a good job of explaining this as "It's like if you could run your code in a debugger with a breakpoint at every single line that only activates if something goes wrong!"
Modern (proprietary) tooling comes really close to that experience, but I can see how it is liberating compared to editing text files, waiting for compilation and praying it works. I guess modern tools learned from the best.
You have probably never even considered writing a function that calls another, not-yet-defined function, and just moving on because your train of thought is headed elsewhere at that very moment. Fifteen minutes later you are testing your new function and the debugger breaks on the call to your still-undefined function. "Oh right," you think, "I need to transmogrify the foo here." So you define the transmogrify-foo function and continue debugging right there.
This isn't contrived or exotic in any way. It's just second nature when writing lisp in an interactive fashion.
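Roughly what that looks like in practice (TRANSMOGRIFY-FOO is of course a made-up name):

    ;; Written in one sitting; TRANSMOGRIFY-FOO doesn't exist yet.
    (defun process-foo (foo)
      (transmogrify-foo foo))

    ;; Calling (process-foo 42) later drops you into the debugger with an
    ;; UNDEFINED-FUNCTION error.  Define the missing function, compile it
    ;; (C-c C-c in Slime), pick the restart that retries the call, and the
    ;; original call continues as if nothing had happened.
    (defun transmogrify-foo (foo)
      (list :transmogrified foo))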
I'd recommend studying some existing macros and spending time recreating them. Starting with simple use-cases of the macro and working up. Repeat this with a few macros and you'll eventually learn how they operate. Let Over Lambda is also a good (opinionated) book on macros (most of the book is available for free), and of course there's PG's On Lisp (jump to page 95 in the PDF if you're already comfortable with Lisp).
Are you simply staring intently at the macros, trying to make sense of them, or are you actually utilizing the macro-expanding feature in your editor? Or perhaps you're grappling with writing macros yourself? Macros can be tricky, which is why Clojurists often avoid creating new ones unless there's a compelling reason to do so. There's an old book called "Mastering Clojure Macros"; it's tiny - fewer than a hundred pages. It helped me back when I was learning Clojure.
JSON is a messy format, you'll always have to know your requirements well to be able to pick a fitting parser for it.
Are you going to parse a simple, small key-value-structure? Pretty much any library will solve it for you or you could invent your own simple parser.
If you need to stream gigabytes of complicated JSON and do sophisticated transformations back into JSON or something like that, then you'll have to evaluate several libraries and have a look at how they translate into CL datatypes. Some might reduce false, null, and the empty list all to NIL, which could lead to information loss and surprises.
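A small illustration of why that mapping is lossy: in CL, NIL is simultaneously boolean false and the empty list.

    (eq nil '())   ; => T  -- false and () are the same object
    ;; So if a parser maps JSON false, null and [] all to NIL, you can no
    ;; longer tell them apart when serializing back out.  Some libraries
    ;; work around this with distinct sentinel values for null and false.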
This piece is enthusiastic but a bit subjective. I personally had zero trouble with JSON in CL; I just chose one library and use it. I would also disagree with some of his recommendations, e.g. Drakma has a lacking interface for error handling and Dexador is nicer for use in prod.
I recently posted my observations about taking an interest in Common Lisp and a few people kindly pointed me to this resource, which was very, very helpful.
If you, like me, are drawn to Lisp because of its reputed elegance, just bear in mind that Common Lisp is probably the least elegant of the Lisps, though probably the most mature. It uses dated mnemonics that won't be intuitive to someone coming from modern languages, and in many ways it's surprisingly hacky. I've been pointed at Scheme, Clojure, and Racket for examples of more elegant Lisps, with Clojure probably being the most day-to-day useful of the three (albeit sadly it runs on the JVM - talk about the Beauty and the Beast).
The two big (but very basic) observations that made Lisp click for me: (A) Those nested parentheses directly indicate the AST of a Lisp program - Lisp syntax doesn't just feature lists, Lisp syntax is those lists (Lisp = LISt Processing). A Lisp program is just a single list made up of other lists. There's almost no other syntax you have to learn. Operators are just functions, which are just lists. (B) Working with a REPL allows you to climb inside of a program in a way that working with an 'Algolian' language doesn't. All the functions are there for you to play with. Using a REPL seems to be iterative; programmers will work in text files (in e.g. emacs) and then jump into a REPL every now and again to test and prod and poke at the code.
Cumulatively, those two insights I think define what is special about Lisps. Since everything is a list, you program by defining more complex lists around simpler lists, which - like Legos - you can freely manipulate with a REPL. This allows you to build programs from the inside out, compared to Algolian languages. You can build all the bits and pieces well before there's any main() method to unite them. It's not theoretically impossible to work with C or Rust like this, but the tooling around Lisp makes this easy and fundamental.
On the flip side of this, how one is meant to distribute finished Lisp programs is very unclear to me. It appears that to ship a Lisp program you basically ship the entire development environment, plus all your source code, and point the former at the entry point to the latter. I'm almost certainly missing something obvious here though, and I'm happy to be enlightened on this point. I honestly don't know how one would distribute a closed-source binary where they don't necessarily want the end user to be able to use a REPL on the code. There appears to be a proprietary compiler called LispWorks that can do this, but with prices in the four figures that's far too dear for a merely intellectual curiosity about Lisp. (There's a hobbyist edition of LispWorks, but my policy is not to spend time learning tools I couldn't turn to professional use.)
> Lisp syntax doesn't just feature lists, Lisp syntax is those lists (Lisp = LISt Processing)
Lisp syntax is on top of lists. Lisp has as basic forms functions, built-in operators and macros. Each new macro implements syntax.
> On the flip side of this, how one is meant to distribute finished Lisp programs is very unclear to me.
In SBCL you would compile all the code to machine code. A program is then a single file with the Lisp runtime (providing threads, memory, garbage collection, OS, ...) and a snapshot of the Lisp heap (all the Lisp objects in memory), which is called a Lisp image. That's an executable. If you start the executable, it starts the Lisp runtime, restores the heap from the file and then hands over control to a defined entry point in the heap.
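A minimal sketch of that with SBCL (MAIN is a function you'd define yourself):

    (defun main ()
      (format t "Hello from a delivered image!~%"))

    ;; Dumps the whole heap plus the runtime into a self-contained executable
    ;; named "my-app", with MAIN as its entry point.  Typically run from a
    ;; fresh, single-threaded session rather than from inside Slime.
    (sb-ext:save-lisp-and-die "my-app"
                              :toplevel #'main
                              :executable t)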
ECL (another Common Lisp implementation) compiles all CL code to C and then uses a C compiler to create loadable compiled C code. The runtime then loads the compiled code on startup.
LispWorks does something similar to SBCL, but LispWorks can remove unneeded functionality from the heap and can optimize the needed space. Thus applications can be more static and also smaller. Imagine one wrote, say, a CAD application in LispWorks. It could start into a GUI application and the user might never see any Lisp REPL. But a REPL could be a tool for the user, where she/he can script the CAD system.
> Clojure probably being the most day-to-day useful of the three (albeit sadly it runs on the JVM - talk about the Beauty and the Beast)
If you persist with it, later you find out that Clojure kisses the JVM, and it turns out that the JVM has been a beautiful prince all along under the Java curse.
I think it's a combination of reasons: people aren't used to the syntax, so it turns some of them off from giving Lisp a try; Lisp machines didn't take off, and their competition didn't have great Lisp support; when computers became more powerful and better suited to Lisp, it was seen as older and not as hot as newer things like Perl and Java; and there's still the perception that because Emacs offers a truly first-class Lisp experience, other editors won't work so well for it.
And of course because of those various reasons the Lisp community is rather small so it creates the vicious cycle of 'the community is too small and there are no libraries so I won't use it' into 'there are too few people using it so there's a small community and less manpower to make various libraries'.
> if Lisp is so good, why isn’t its use more widespread?
There are over 39 thousand McDonald's restaurants worldwide, yet there's no general consensus on whether that's really good food. Opinions on McDonald's food vary. Some people enjoy it for its taste, convenience, and affordability, while others criticize it for its nutritional value and health impact.
Popular doesn't necessarily mean good, and vice-versa.
Consider classical music as another example. It's praised for its intricacy, artistry, and emotional richness, yet it lacks the widespread popularity of genres like pop or rock. I'm not a musician, but I believe any aspiring artist would gain immensely from studying classical music. Similarly, any programmer would find significant value in learning Lisp. The concept of Lisp is genuinely captivating (don't confuse it with Common Lisp, I'm talking about the general idea of Lisp, not a concrete implementation). Good ideas have enduring relevance. Much like classical music, which, though never mainstream, perpetually retains its allure, Lisp remains perpetually attractive, even if it never reclaims mainstream status.
The other way around: its use might be more widespread than one thinks, because it is still used in industry and it is still chosen by new companies. https://github.com/azzamsa/awesome-lisp-companies/ (nothing official here) For instance, CL seems to be a cornerstone for quantum computing these days.
And this comment is asked every time a Common Lisp submission comes up. Perhaps go and read prior answers?
> If Lisp is so good, why isn’t its use more widespread?
What does "good" and "widespread" have to do with each other? Mildly orthogonal concerns. As a tech junkie, one experience that has been consistent all my life is "the best stuff is almost never widespread". When I was in to digital cameras, the "best" camera was rarely the widespread one - both pre-DSLR and after. The best file management utility is not the widespread one. The best radio is not the widespread one. The best TV is not the widespread one. The best SW for a task is often not the most widespread one.
I know I'm not answering the question. Instead, I want you to ponder the faulty premise in your question. Instead of tying your question to "widespread", why not simply ask why people like Common Lisp so much?
It's like asking "If that job is so awesome, why doesn't it pay well?"
The "Lisp Curse" is a concept suggesting that the power and flexibility of Lisp lead developers to endlessly customize their development environment rather than focusing on building end-user applications. While this notion has some anecdotal merit, implying that Lisp's versatility can sometimes divert focus, it's not a universal truth. Successful Lisp projects exist, and many developers thrive using it productively. The alleged "curse" is more a commentary on the potential for distraction with highly flexible tools, not a definitive outcome.
It's an essay, not an academic paper. The entire "Lisp Curse" essay can basically be counteracted with Big Lebowski's "Yeah, well, you know, that's just, like, your opinion, man."
The author first says that CL people usually avoid dependency hell:
>When programming applications in Common Lisp people will often depend on a small(ish) number of stable libraries, and library writers often try to minimize dependencies by utilizing as much of the core language as possible.
But then he tries to expound on CL's extensibility using libraries:
> No one has been clamoring for a new version of the specification that adds features because Common Lisp's extensibility allows users to add new features to the language as plain old libraries
Very contradictory, and these two paragraphs are in two adjacent sections.
As others pointed out, there's no contradiction. I would like to add that CL often avoids dependency hell thanks to:
- Standardized Specification: Common Lisp has a stable and comprehensive standard that reduces the need for external libraries.
- Load Time Flexibility: Its dynamic nature allows loading and reloading of code at runtime, facilitating easier management of dependencies.
- Isolation through Packages: The Lisp package system provides a way to encapsulate code and manage namespaces effectively, reducing conflicts.
- Backward Compatibility: Common Lisp places a strong emphasis on backward compatibility, which helps in maintaining stability across versions.
- Mature Ecosystem: Many Common Lisp projects are long-lived and stable, leading to a mature ecosystem with less frequent breaking changes.
I can't claim to be a very experienced CL coder, but I wrote enough Clojure, and similarly, rarely ever see dependency problems, even though Clojure heavily relies on the JVM ecosystem, inheriting both its strengths and complexities. Clojure emphasizes interoperability with Java and uses many Java libraries, which can introduce complex dependency trees and conflicts common in the Java ecosystem, yet at the same time, Clojure emphasizes small, composable libraries (often referred to as the "small libraries" philosophy), reducing the likelihood of large, monolithic dependencies causing issues. The community prioritizes modularity and ease of composition, leading to the prevalence of smaller, more focused libraries. Common Lisp in contrast tends to favor more extensive libraries or systems that provide a broad set of features within a single package.
The thing is that once you've cast most external inputs to Lisp data structures, you don't really need anything else other than utility algorithms (crypto, …). And with metaprogramming, it's easier to do stuff without special classes and decorators. You can visualize the code as data being transformed, as a chain of transformers, or, when you do metaprogramming, as data generating code and code generating data. It's all organic and you can do a lot without external code, because very soon you're coding with a language adapted to your problem domain.
To GP: Think about how simple the HTTP protocol is (the core), so if you want a web framework, you want something that will map the headers and the body to CL data structures, and then you can go on to solve smaller problems like routing, auth, response generation, … Then you notice boilerplate and you macro it out. Same for most client libraries. It's really easy to add an ad hoc library that solves your problems. So you do that instead of reaching for others' code.
The lisp world has a rather different idea of extensibility via library than you are used to.
For example, the Common Lisp Object System (CLOS, object oriented programming in lisp) originated as a library. Lisp's main looping mechanism (loop) also originated as a macro (although long before CL standardization).
You just don't do JS levels of dependencies when they're adding big features like that. You don't need 100,000 programming paradigms in your code base.
There's some serious cognitive dissonance going on in this section:
As an open-source project, Racket is positioned at a happy medium. The core development team has been working together for years, and the commits remain fast & furious. But they’re friendly scientists, not Shire-dwelling egotists, and remain receptive to improvements across the whole system. If you have a better idea, they’ll listen; if you code it up to their standards and make a pull request, they’ll take it.
[2021 update: I no longer contribute to Racket due to abuse & bullying by the project leadership. Everyone in the broader Racket community, however, has always been helpful and kind.]
That's not cognitive dissonance, that's "I wrote a thing when I held one view. Instead of changing it entirely here is an additional comment that revises my previous one. I'm not going to edit the original."
The order of things matters. "I hold this view which contradicts my earlier, no longer held, view." is not cognitive dissonance. "I hold this view despite it contradicting my other view." is cognitive dissonance.
This article fails to explain why lisp. People talk about it on HN sometimes. This article says "here's how to learn it".
Why should I learn it? There's a million languages out now that I can learn instead. The author gives a few ways I can use lisp. Why would I choose lisp over any of the other languages that have stronger ecosystems that solve those same problems and are built specifically for them?
The syntax looks arcane and it seems that the most prevalent modern use for lisp is emacs, so if I don't use emacs, why lisp?
In brief, Lisp's S-expressions (the parentheses) are a straightforward instantiation of the abstract syntax trees to which all other languages parse.
You can manipulate that tree easily and directly in Lisp, whereas other languages (that are not variations of Lisp) hide parts of it, to varying degrees.
All other languages are either equivalent to Lisp, perhaps with a different superficial syntax (there are very few of these; one is Mathematica) or expose only a strict subset of this capability (most other non-Lisp languages.)
In particular, Lisp makes it trivial to do (real) metaprogramming -- programs that write other programs, including the introduction of entirely novel syntactic constructs -- via macros. Lisp macros can create new language syntax that controls when and how code is evaluated, perhaps rewriting it.
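A tiny sketch of what "controlling when code is evaluated" means: an UNLESS-style operator written as an ordinary macro.

    ;; BODY arrives as unevaluated code (a list); the macro decides how to
    ;; arrange it, so the body only runs when TEST is false.
    (defmacro my-unless (test &body body)
      `(if ,test
           nil
           (progn ,@body)))

    (my-unless (probe-file "/tmp/already-done")
      (format t "doing the work~%"))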
Non-Lisp languages can't do that fully, only subsets of it. Ruby metaprogramming, for example, is clever syntactic sugar. You can give methods clever names and use clever calling conventions so that they read more like English, which is what Ruby calls "metaprogramming"; but you don't have universal control of the parser from the level of ordinary Ruby code, and you can't completely control when and how code evaluation happens.
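As a minimal illustration of "controlling when code is evaluated": the classic textbook macro below receives its body unevaluated and decides whether it ever runs, something an ordinary function could not do, since its arguments would already have been evaluated by the time it is called:

    ;; MY-UNLESS rewrites its call site into an IF before anything runs.
    (defmacro my-unless (test &body body)
      `(if ,test
           nil
           (progn ,@body)))

    (my-unless (> 3 5)
      (print "runs: 3 is not greater than 5"))
    (my-unless (< 3 5)
      (print "never evaluated at all"))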
Now, how practical or useful is all that? Opinions vary, from hardcore "you should invent a full DSL with unique syntax for each particular application" Lispers, all the way to those who say macros are a fun but dangerous toy with limited practical value.
Thanks! As someone who hates reading metaprogrammed Ruby code, I imagine that since Lisp gives you even more power, trying to understand what's happening in the code is even harder. I know some people enjoy writing this style of code, but I personally do not.
From the expansion above one can see that the code gets transformed into very low-level Lisp code, with a GO operator (i.e. a goto). It's a low-level loop, built by the LOOP macro out of a simple goto operator. One can look at the expansion and try to figure out what it is doing. One is not guessing what the expanded code looks like; one can generate the expanded code and see it.
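If the expansion being discussed isn't in front of you, you can reproduce the exercise yourself; the exact output is implementation-specific (SBCL, CCL, etc. differ), but the shape is the same:

    ;; Ask the implementation what LOOP turns into, instead of guessing.
    (pprint (macroexpand-1 '(loop for i from 1 to 3 do (print i))))
    ;; Prints a BLOCK/LET/TAGBODY form whose (GO ...) jumps back to the loop's
    ;; start tag: the high-level LOOP bottoms out in plain gotos.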
The issue is that many Lisp programmers are .. not great at choosing and naming abstractions. Even when they are, the goals of the program will shift over time, leading to leaky abstractions and macros with names and semantics that no longer exactly match.
In such cases, there's a lot of work required to essentially learn the entire new mini programming language built within Lisp.
That said, there are cases where nothing else but a macro will do. These tend to be abstractions related to the language itself more than DSLs. Where to draw the line is an art rather than a science, and it can work well -- or go horribly wrong -- on both sides.
That's neat, better than Ruby in that regard. I still hate the () syntax, especially when you have a bunch of nested functions. I bet you get used to it, though.
Most use an IDE such as Emacs (paredit-mode and rainbow-parens) that handles automatic paren matching for you, so it's really a non-issue after the first week. The editor takes care of indentation, matching (so you don't have any dangling parens ever), and optionally colors matching pairs in the same color so you can easily tell what level they're at.
After a few years, I turned off rainbow-parens-mode indefinitely. I realized that I no longer see parentheses; instead I see the structure and order, or, in the case of badly-written code, chaos and anarchy.
It is sad that the usual first reaction of people seeing Lisp code is distaste. I myself wasted years of my life because of that. I had many opportunities to learn Lisp, but I dismissed them multiple times until I tried Clojurescript. Stupid me.
> especially when you have a bunch of nested functions
In non-lispy langs like JS, deeply nested statements can lead to callback hell or a pyramid of doom, making the code harder to read, understand, and debug.
In Lisps, nested expressions are common and idiomatic due to Lisp's code-as-data philosophy. Code is composed of expressions, each of which may itself be an evaluation of other sub-expressions. This doesn't lead to a pyramid of doom, since the emphasis is usually on data transformation rather than orchestration of side effects. Nested expressions let you build complex behaviors from simpler ones in a very direct and composable manner, making the code concise and easy to reason about.
Deeply nested statements in JavaScript often hurt readability and maintainability, while nested expressions in, e.g., Clojure typically enhance them, thanks to the emphasis on code/data composition and transformation. It is the difference between chaotic complexity and structured, intentional composition.
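A small Common Lisp example of the kind of nesting being defended here (the numbers are arbitrary); read inside-out, each layer is a pure data transformation rather than a scheduled side effect:

    ;; Square the numbers, keep the even squares, sum them.
    (reduce #'+
            (remove-if #'oddp
                       (mapcar (lambda (x) (* x x))
                               '(1 2 3 4 5))))
    ;; => 20  ; from the even squares 4 and 16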
> trying to understand what's happening in code is even harder
It's different in Lisp because you basically program a thing from the inside out, meaning that you don't just read the code from top to bottom. Instead, you constantly evaluate expressions and expand macros while going through it. It's practically like playing a game, and there's a good amount of fun in it. People often dismiss Lisps (Clojure, CL, Fennel, Racket, etc.) after reading Lisp code without a connected REPL and never grasp what makes it so awesome. It's like disliking a dish after only seeing it on a TV cooking show. A good, relevant talk: "Stop Writing Dead Programs" by Jack Rusher: https://www.youtube.com/watch?v=8Ab3ArE8W3s
The real reason to learn SBCL is that it is an existence proof that garbage-collected, multi-paradigm (procedural, functional, object-oriented) dynamic languages don't have to suck performance-wise, in either speed or memory usage. I'm going to go off the rails and say that defmacro isn't the reason one should use a Lisp: ~90% of defmacro instances exist just to prevent evaluating expressions, which could also have been done with a lighter syntax for lambda (a reader macro, say, or Smalltalk-style blocks).

Why didn't Lisp catch on? Some 30-40% of people really hate parentheses with a passion; there is no accounting for taste, I guess. Lisp advocates also spent too many decades comparing Lisp to C instead of to Python or Haskell or Java. And it seems like there was at one time a faction that looked down on free software and the GPL, so there wasn't as much activity that people (especially college students) could see and engage with. The standard library is sparse and could use a CLOS-ified refresh, but everyone disagrees on how that should look. Too many lofty promises of 10x or greater productivity gains, and not enough wins showing it.

Who knows, I haven't looked at Common Lisp in a while; maybe there are now vectorized/parallel array libraries targeting GPUs that everyone uses because things "just work" out of the box, or a polished-up McCLIM or other Lispy GUI instead of wrapping Tk. All that, combined with the usual network effects.
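A hedged sketch of the "most defmacro uses only delay evaluation" claim; with-retry-once, call-with-retry-once, and flaky-operation are made-up names for illustration:

    ;; Macro version: exists purely so BODY isn't evaluated before we wrap it.
    (defmacro with-retry-once (&body body)
      `(handler-case (progn ,@body)
         (error () (progn ,@body))))

    ;; Function version: the caller passes an explicit lambda, no macro needed.
    (defun call-with-retry-once (thunk)
      (handler-case (funcall thunk)
        (error () (funcall thunk))))

    ;; (with-retry-once (flaky-operation))
    ;; (call-with-retry-once (lambda () (flaky-operation)))
    ;; With a lighter lambda syntax (a reader macro or Smalltalk-style blocks),
    ;; the second form would be nearly as terse as the first.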
The article is not trying to sell you on Lisp; in its first sentence it clarifies that it is written for people who were "asking for advice on how to learn Common Lisp"...
> for 30 years I thought I don't want to learn lisp, it's a dumb language, and I read (Structure and Interpretation of Computer Programs) and it changed my mind... I thought I'm going to get (clojure) and dither around with it.. and I just fell in love with the language and I've been using it ever since