Writing Small CLI Programs in Common Lisp (2021) (stevelosh.com)
144 points by mooreds on Sept 6, 2023 | 66 comments



I decided to port all my old Perl scripts to Guile Scheme recently and got to add a bunch of niceties (like properly cleaning up on ctrl-c). Not because I really needed to, but because I haven't written Perl seriously since 2010 and updating the scripts has started to become hard.

The Guile manual is pretty nice, but having something like this will always help people get started. For people who don't know about things like how non-local exits work, getting started and doing it correctly can really take some time to figure out.


How is Guile for scripting? How well does it interact with other programs (running them, getting their output, piping to them, etc.)?


tbh it's no janet-sh, but of course you can make your own macros.

I'd love to see either provide native support for additional file descriptors, bash `<()` style. I often wrap a shell around commands just for that.

https://www.gnu.org/software/guile/manual/html_node/Pipes.ht...

https://github.com/andrewchambers/janet-sh


I think Gauche is better than Guile. Its manual is easier to navigate, its standard library is huge, and it's fast enough for anything I'd ever need it to do. Both tools are fine, but personally I'd choose Gauche over Guile every time.


It is definitely nicer to do shell scripts in, but I have other things written specifically for Guile using delimited continuations and guile-fibers. Since Concurrent ML is the only style of concurrency that I like, I am pretty much stuck with Guile, which has what is probably the best implementation of it.

Guile is noticeably faster as well, but that is not really surprising considering the implementation differences.


The nice thing about Guile is that on GNU systems it is pretty much installed by default, so scripts depending on it would be easy to distribute.


I don’t think that’s true for any mainstream Linux distros like Debian, Fedora or Arch.


Well, at least any machine used for development.

When I try to remove guile on my Fedora machine, the following get removed too:

    abrt
    akmods
    cargo
    clang
    cmake
    gcc
    gdb
    golang
    kernel-devel
    make
    rpm-build
    rpmdevtools
    rust
    vcpkg
Also several python3 packages, and many others that were installed as dependencies of the above; I didn't check whether those too depend on something that depends on guile.

Strictly speaking, it is just make and gdb that depend on guile, but gcc depends on make and many other packages depend on gcc.

It appears that on Debian, make does not depend on guile.

But since I mainly use Fedora, all my machines are likely to have guile installed.


I believe Guile scripting is an optional (compile-time) GNU Make extension, so whether or not it's a dependency will depend on how GNU Make is packaged. The others I have no idea about, but I do wonder if these are mostly transitive dependencies via Make.


Did you post an example gist someplace, perchance?


Awesome, but there is kind of a deal-breaker here:

AFAIK, there isn't an official and hassle-free way to generate a statically-compiled binary with SBCL the way you can with Go and Rust. Like you develop on a Debian 12 system, then move the binary to your Debian 11 server, only to be confronted with the notorious 'incompatible glibc version' error. It's super annoying.

In ECL, I think it's possible to use musl and bundle libecl and musl libc into the final binary, but I'm not sure.

The caveat is that ECL's output has a startup time of around 500ms (as opposed to the much faster SBCL output).


> only to be confronted with the notorious 'incompatible glibc version error'. It's super annoying.

I started making my own freestanding Linux Lisp because of this exact issue. It's nowhere near as performant as something like SBCL but it's small and has zero dependencies. Once compiled it will literally run on any Linux of the same architecture.

https://github.com/lone-lang/lone

I'm taking a break from this project at the moment but eventually I'm gonna add a feature that lets me put a Lisp script into the ELF itself so I can just copy it with the scripts included and have a totally self-contained freestanding Lisp executable.


Your deal-breaker applies to C programming on GNU/Linux.

There is no official, hassle-free way to generate a statically-compiled binary with C for GNU/Linux.

A C program compiled on Deb 12 might not work on 11, if it requests a new version of some function, or a new function.

Go and Rust can, but they are silos.

Glibc dropped support for static linking years ago. You can't deploy a new library to close a library security hole, for programs that have statically linked it.

Rust and Go are doing a stupid thing.


> You can't deploy a new library to close a library security hole, for programs that have statically linked it.

Of course you can. You just recompile affected programs. Just like you need to do in the case of header-only C/C++ libraries like much of widely-used Boost. And if the library is not strictly header-only, but still has significant code in headers, as is often the case in C++, if you only swap the dynamic library, the program is still broken, or much more likely: it becomes broken, even though it originally wasn’t.


Programs on GNU/Linux linked against glibc can be proprietary.


Indeed. This patch worked for a previous version of SBCL: https://www.timmons.dev/posts/static-executables-with-sbcl-v...

I heard that projects (like the Kandria game) build their releases on an old glibc (but with the latest SBCL and libraries), so they can be shipped to any more recent system.


Can you confirm the ECL part of my comment? I mean is it possible to do it with ECL?


> Like you develop on a debian 12 system, then move the binary to your debian 11 server only to be confronted with the notorious 'incompatible glibc version error'. It's super annoying.

I get that with Go too. I now make all my release builds on the oldest version of Linux I have.


It sounds like you’re using cgo.


Not as far as I know :-/

I got the glibc errors when I moved a binary I compiled on the newest Linux Mint to my VPS running some ancient debian.


This is a Linux problem, not an SBCL one. You could move your SBCL binary between different versions of a BSD variant.


This is not a Linux problem. The Linux system call binary interface is documented as stable. It's user space libraries like glibc that break backwards compatibility. There's nothing Linux can do about that.

I've written more about this here:

https://www.matheusmoreira.com/articles/linux-system-calls


> It's user space libraries like glibc that break backwards compatibility.

s/backward/forward/

The complaint here is that the older glibc on Debian 11 doesn't run programs built against a newer glibc under Debian 12.

glibc has very good backward compatibility, with careful symbol versioning and whatnot.

Backward compatibility means that a newer glibc (like in Debian 12) will handle programs requesting the older glibc (e.g. from Debian 11).


> glibc has very good backward compatibility, with careful symbol versioning and whatnot.

Not in my experience. Working through exercises in Practical Binary Analysis, the first problem I needed to solve was running programs compiled against an older glibc on an older Ubuntu on a newer Ubuntu version with a newer glibc, which boiled down to hunting down the glibc .so from the older Ubuntu. Without it, the programs crashed on startup. That problem wasn't even planned by the book author, but it sure was a good thematic fit. :)


How come it's not a Go/Rust problem?


Go makes syscalls directly; it doesn't call through the system libc. This does cause support problems on at least NetBSD; I don't know what happens on FreeBSD or OpenBSD.

The NetBSD backwards-compatibility mechanism is to not change the arguments to an existing syscall; if there is a need to change something, a new syscall is created and aliased to the old name in the system include files. Old binaries use the old syscall; those compiled after the new one has been added will use that. Languages that don't parse the system .h files, like Go, need conditional code for NetBSD.


Kind of a deal breaker sure, but I don't think in that many cases. Is it really that hard to launch an older VM and make a build there? Or just decide you don't need to support anything older than your dev machine? Or just always run with source code? For my little scripts (which admittedly are much less organized than those in the article) they're mostly just on my own dev machine. Occasionally I build binaries and ship them over to another server without any tooling on it, but it's never very far out of date. Such scripts are also fairly long-lived; it's nice that some binary I made in 2021 (to replace a perl script that had been stable for ~10 years) continues to do its job.


Currently there is no build flag to statically link libc manually (it is planned though). Regarding startup time, it all comes down to ASDF:

    $ time ecl --eval "(quit)"
    0.052 secs

    $ time ecl --eval "(require 'asdf)" --eval "(quit)"
    ;;; Loading #P".../lib64/ecl-21.2.1/asdf.fas"
    0.301 secs


> it is planned though

This is a bad plan for target systems that use glibc, because static linking is not recommended and not supported.


It is planned to allow static linking against libc, not necessarily glibc (I've been thinking about musl). Dynamic linking is still the default, but different strokes for different folks, I guess.


Sure there is, as always: load all the code into the running SBCL, then call (SB-EXT:SAVE-LISP-AND-DIE "your-program" :EXECUTABLE T :TOPLEVEL #'YOUR-ENTRY-POINT). The downside, as always, is that this kills the Lisp process, and the resulting binary weighs tens of megabytes even for a simple hello world. I guess one of these disqualifies it from being hassle-free.
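For reference, a minimal sketch of that call ("my-tool" and MAIN are placeholders); if your SBCL was built with core compression support, the :COMPRESSION option can shrink the resulting binary considerably:

    ;; Run inside the SBCL image after loading your code.
    ;; "my-tool" and #'main are placeholder names.
    (sb-ext:save-lisp-and-die "my-tool"
                              :executable t
                              :toplevel #'main
                              ;; Only works if this SBCL was built with core compression.
                              :compression t)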


No, I mean the dumped image depends on system glibc (not portable)


Depending on glibc can be portable enough if you build against the oldest glibc version that you need. Running on newer systems works.


How do you get the latest toolchain and compiler but an old glibc? The tools have their own dependency on libc


You build them. Or if the latest compilers don't build against the older glibc (which is rare in my experience, but may be possible), then you just don't support the older glibc any more.

But I've never had trouble building against older glibcs.


Even shops that shipped static binaries still shipped containers at the end of the day to run in production.

I find a lot of the static-binary hand-wringing boils down to: did you ever solve how to ship more than one file to production? Did you ever need to virtualize any resource on your compute?

Then you're at a minimum using a chroot and containers.

If you aren't shipping to production, if you're really just copying files around between your laptops and you want them to run - you have glibc.


There's a big gap between what you're talking about (shipping a Dockerfile or snap or whatever to make your big ol' service distribution-portable) and what this article was about: writing CLI programs.

The moment we end up in a world where a "small CLI program" requires docker.. I give up. Time to retire from software and go raise sheep or something instead.


Thus my last sentence: if you aren't shipping to production... you are really just copying files around, and you have a glibc.

I think I just use Linux enough that I don't experience all the macOS bugs.


No, it sounds like you just have a mostly "I write code that goes on servers" type of job, and haven't been burned by the issue of older binaries that won't work with new libcs.

This is a real problem with Linux (not macOS or other BSDs, from what I understand) because it doesn't have a stable ABI (the ABI is basically glibc). It's a design choice, and a pain in the ass for people who want to distribute things as binaries. Which, well, it's not something I do, but it has burned me using other people's things.

Could docker/containerization solve this problem? Yes, in the same way that a shotgun would serve to eliminate mosquitoes...


Yup, definitely write code that goes on servers.

I agree with you about containers.

The article was about writing small CLI programs and compiling them yourself. I guess I assumed that, in this discussion, you'd trivially be able to rebuild - you wrote the program.


This is one of the reasons that developing on your production OS is something of a best practice. Not specific to SBCL at all, but this general problem of dependencies not quite matching can bite you in a lot of ways.

People forgot this lesson when macbooks got popular for development, then had to relearn it by upending the ecosystem into containers.


It's also a reason for distributing software under a Free license. If people who use your software want to try running against a different libc or different versions of other dependencies, why not let them? You can always say you only support binaries that you distribute.


This reminds me of something I thought about taking back up:

A while back I wrote a number of Emacs scripts so that I could listen to Discord messages, as they were streaming in basically, and then 'do Lisp' on them. For example, take the data that was streaming in and format it into a web page with a few links I could then investigate. I may get back into that.


Do you still have that script? I would be interested in checking that out.


I replied a couple of comments below with a fuller explanation, but here are the relevant Emacs functions, code only.

    (defun opyn (arg)
      "Open local file in Firefox"
      (interactive)
      (let ((filearg (concat "file:///home/julian/Lang/Python/Something/" arg ".html")))
        (shell-command (concat "chromium " filearg)))
      (message "opened file"))

    (defun refresh (arg)
      "Rerun local file creation & open in Firefox"
      (interactive)
      (let ((filearg (concat "ruby project_refresh_emacs.rb " arg)))
        (shell-command filearg))
      (message "refreshed file for you"))


I would, as well. Very neat idea and I'm in a Discord server where I'd get some utility from this sort of thing.


Here's the meat of it.

First, you have an API or something which is piping text into your terminal - in this case, into eshell.

Like for example let's say it's churning out lines like this:

"big vote in berlin" https://cnn.com/vote-in-berlin

I move my cursor over the text - remember, this is in emacs in eshell, I can modify the text there - and change the line to read:

"big vote in berlin" (opyn https://cnn.com/vote-in-berlin)

I move my cursor after the parenthesis and type C-x C-e.

In my .emacs file I've defined this function:

    (defun opyn (arg)
      "Open local file in Firefox"
      (interactive)
      (let ((filearg (concat "file:///home/julian/Lang/Python/Something/" arg ".html")))
        (shell-command (concat "chromium " filearg)))
      (message "opened file"))

Now it opens that file.

But probably what you want to do is process the data in the text. So here's my Emacs function for that (not the code for the data processing itself, which was done separately in Python - but you can see how to combine those together).

    (defun refresh (arg)
      "Rerun local file creation & open in Firefox"
      (interactive)
      (let ((filearg (concat "ruby project_refresh_emacs.rb " arg)))
        (shell-command filearg))
      (message "refreshed file for you"))

That's pretty much it. Hope that helps.


Is this something using Bitlbee? I remember doing fun stuff when I used IRC inside Emacs.


I wasn't using Bitlbee, no. I just showed my code in the comment above, FYI.


Very good article, nice and practical.

An alternative is also Babashka, which is excellent for this! https://github.com/babashka/babashka


And for CL: https://github.com/ciel-lang/CIEL/ (pre-alpha), CL with many batteries included (JSON, CSV, HTTP, a CLI parser…) so that scripts start fast.


Babashka has very fast startup, solid filesystem bindings via babashka.fs, an HTTP client and server, awyeah for AWS, Jackson or pure-Clojure JSON, and access to GraalVM-friendly Clojure and Java libraries. Well maintained and mature. Go Borkdude!


Borkdude's output pace for code and projects is nuts!



Thanks! Macroexpanded:

Writing Small CLI Programs in Common Lisp - https://news.ycombinator.com/item?id=26493588 - March 2021 (61 comments)


Amusingly, I was just wrangling with how to do this yesterday, while writing a CLI Common Lisp file-watching test runner.

I ended up heading toward something engineered to run a simple REPL function call via SWANK, because I'm increasingly settled on having a single, long-running SBCL image rather than starting up individually compiled or interpreted Lisp scripts. Feels more Lispy somehow -- and definitely better for TDD.


Yeah, that’s definitely where I’ve ended up: I have a lot of lisp code, but it’s more of a toolbox for my shell (REPL) than standalone programs.

However, I’ve settled on a pattern that works pretty well for the few small tools I write: https://github.com/fiddlerwoaroof/dotfiles/blob/18cecfc93bcf...


If you find this article interesting and are curious about Nim then you would probably also be curious about https://github.com/c-blake/cligen

That lets you add just one line to a module to get a pretty complete CLI, and then a string per parameter to properly document options (assuming an existing API using keyword arguments).

It's also not hard to compile & link a static ELF binary with Nim. I do it with musl libc on Linux all the time. I just toss this into my ~/.config/nim/nim.cfg:

    @if musl:  # make nim c -d:musl .. foo static-link `foo` with musl
      cc            = gcc               # --opt:size trades more speed
      gcc.exe       = "musl-gcc"        # NOTE: This also works as a ..
      gcc.linkerexe = "musl-gcc"        #..per-module foo.nim.cfg
      passL         = "-static -s"
    @end
Lisp is often cited as one of Nim's inspirations.

EDIT: https://nim-lang.org/ has much more detail, and there are over 50 small CLI programs at https://github.com/c-blake/bu as rather extended examples. Using that musl approach, I get tiny little 100 kB ELF executables that work on practically any Linux you can `scp` them to, with none of the boilerplate/language limits/hassles or speed limits of, say, Go, and run-time start-up times on the order of 100 microseconds. There are all kinds of great libs in Nim, too, like @V1ndaar's https://github.com/SciNim/Measuremancer or all sorts of goodies over at https://nimble.directory/


I've used cligen before, and it's very elegant. But if you are used to more declarative ways of writing your cli, it can be a bit of a learning curve. Best advice I can give is look at what you are actually trying to do, not "how do I make it work like <other tool>"


The author has a nice idea there with generating man pages from the help data entered into the option parser. But he doesn't take it to its natural conclusion: he still works with Makefiles that generate actual man page files.

If your program can generate its own man page from the option data, you don't need an actual man page file. man takes input from a pipe, e.g.

  # useful use of cat: shows man is not accessing a file
  cat foo.1 | man -l -
This could be built into your utility:

  $ util --man
could spin up the man pager, and pipe the nroff code into it to be rendered.

If you still need man util, that could be a proper, verbose man page.
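A minimal Common Lisp sketch of that `util --man` idea (SBCL-specific; GENERATE-MAN-PAGE is a hypothetical function returning the nroff source as a string):

    ;; Sketch: render our own nroff through man(1) without writing a man page file.
    ;; GENERATE-MAN-PAGE is hypothetical; it returns the man page source as a string.
    (defun show-man-page ()
      (sb-ext:run-program "man" '("-l" "-")
                          :search t      ; find man on PATH
                          :input (make-string-input-stream (generate-man-page))
                          :output t      ; inherit the terminal so the pager works
                          :error t))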


As someone else noted, SBCL's `SB-EXT:SAVE-LISP-AND-DIE` ends up producing hilariously large executables. The solution I've settled on for bags of small, sharp tools is to simply have an omnibus program that includes a variety of functionality as subcommands, and that can dispatch to the appropriate function based on the name it was run as (i.e., the value of argv[0]). One of those commands will pepper a directory of your choice with soft links to the omnibus program, each named for a subcommand.
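A minimal sketch of that argv[0] dispatch under SBCL (HELLO and GOODBYE are hypothetical subcommands):

    ;; Sketch: pick the subcommand from the name the program was invoked as (argv[0]),
    ;; so symlinks named after subcommands select the corresponding function.
    (defun hello (args) (format t "hello ~{~a~^ ~}~%" args))
    (defun goodbye (args) (format t "goodbye ~{~a~^ ~}~%" args))

    (defparameter *commands*
      (list (cons "hello" #'hello)
            (cons "goodbye" #'goodbye)))

    (defun main ()
      (let* ((argv sb-ext:*posix-argv*)
             (name (pathname-name (pathname (first argv)))) ; basename of argv[0]
             (entry (assoc name *commands* :test #'string=)))
        (if entry
            (funcall (cdr entry) (rest argv))
            (format *error-output* "unknown command: ~a~%" name))))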


Recently, I read about someone who basically had a single running image and it would dispatch to the right function based on the command names sent to it, and the image was running swank, which sounded like a neat idea. I saw it in a comment, either here or on reddit ...


Before TypeScript and Node > about 16, I'd reach for Python or more organized bash for these things. Now it's TypeScript from the client down to my system scripts.

The ability to use the same language all the way up and down my stack, to import code I use in my program from my script, and to get type checking means that more often than not I've got a hashbang with npx ts-node.

Especially useful for testing server code while skipping the HTTP nonsense.


I do the same thing but in Elixir. I don’t want to touch anything related to JS.


I love .exs scripts, and I've largely replaced Postman with Livebook notebooks. Elixir all day, every day.


I hate that I'm largely living in a JS world, but I'm fast and effective with it, and the standard library, ecosystem, and tooling have matured enough that I can deal with the quirks.



