I decided to port all my old Perl scripts to Guile Scheme recently and got to add a bunch of niceties (like properly cleaning up on Ctrl-C). Not because I really needed to, but because I haven't written Perl seriously since 2010 and updating the scripts had started to become hard.
The Guile manual is pretty nice, but having something like this will always help people get started. For people who don't know about things like how non-local exits work, getting started and doing it correctly can take real time to figure out.
I think Gauche is better than Guile. Its manual is easier to navigate, its standard library is huge, and it's fast enough for anything I'd ever need it to do. Both tools are fine, but personally I'd choose Gauche over Guile every time.
It is definitely nicer to do shell scripts in, but I have other things written specifically for Guile using delimited continuations and guile-fibers. Since Concurrent ML is the only style of concurrency I actually like, I'm pretty much stuck with Guile, which has what is probably the best implementation of it.
Guile is noticeably faster as well, but that is not really surprising considering the implementation differences.
also several python3 packages, and many others that were installed as dependencies of the above; i didn't check whether those in turn depend on something that depends on guile.
strictly speaking it is just make and gdb that depend on guile, but gcc depends on make and many other packages depend on gcc.
it appears that on debian make does not depend on guile.
but since i mainly use fedora, all my machines are likely to have guile installed.
I believe Guile scripting is an optional (compile-time) GNU Make extension, so whether or not it's a dependency will depend on how GNU Make is packaged. The others I have no idea about, but I do wonder if these are mostly transitive dependencies via Make.
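You can check a particular GNU Make build directly rather than digging through package metadata: Make lists its compiled-in features in the `.FEATURES` variable, and `guile` appears there only when the extension was built in. A quick sketch (the `checkfeat.mk` filename is just a placeholder):

```shell
# Probe a GNU Make build for Guile support: .FEATURES lists compiled-in
# features, and "guile" appears in it only when Make was built with
# Guile scripting enabled.
printf 'all:\n\t@echo $(.FEATURES)\n' > checkfeat.mk
make -f checkfeat.mk
```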
Awesome, but there is kind of a deal-breaker here:
AFAIK, there isn't an official, hassle-free way to generate a statically compiled binary with SBCL the way Go and Rust can. Say you develop on a Debian 12 system and then move the binary to your Debian 11 server, only to be confronted with the notorious 'incompatible glibc version' error. It's super annoying.
In ECL, I think it's possible to use musl and bundle libecl and musl libc into the final binary, but I'm not sure.
The caveat is that ECL's output has a startup time of around 500 ms (as opposed to the much faster binaries SBCL produces).
> only to be confronted with the notorious 'incompatible glibc version error'. It's super annoying.
I started making my own freestanding Linux Lisp because of this exact issue. It's nowhere near as performant as something like SBCL but it's small and has zero dependencies. Once compiled it will literally run on any Linux of the same architecture.
I'm taking a break from this project at the moment but eventually I'm gonna add a feature that lets me put a Lisp script into the ELF itself so I can just copy it with the scripts included and have a totally self-contained freestanding Lisp executable.
Your deal-breaker applies to C programming on GNU/Linux.
There is no official, hassle-free way to generate a statically-compiled binary with C for GNU/Linux.
A C program compiled on Deb 12 might not work on 11, if it requests a new version of some function, or a new function.
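The version requirement is visible in the binary itself: every glibc symbol the program links against carries a version tag like GLIBC_2.34, and the dynamic loader refuses to start the program if the target system's glibc is older than the highest tag it finds. A sketch of how to check, assuming binutils is installed and `./app` is a hypothetical dynamically linked binary:

```shell
# Print the highest GLIBC_* symbol version ./app requires; if the target
# machine's glibc is older than this, the loader aborts at startup.
objdump -T ./app \
  | grep -o 'GLIBC_[0-9.]*' \
  | sort -Vu \
  | tail -n 1
```

`sort -V` does a version-aware sort, so GLIBC_2.2.5 correctly sorts before GLIBC_2.17.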
Go and Rust can, but they are silos.
Glibc effectively dropped support for full static linking years ago. And there's a reason: you can't deploy a new library to close a library security hole for programs that have statically linked it.
> You can't deploy a new library to close a library security hole, for programs that have statically linked it.
Of course you can: you just recompile the affected programs, exactly as you have to do for header-only C/C++ libraries like much of the widely used Boost. And if a library is not strictly header-only but still has significant code in its headers, as is often the case in C++, then swapping only the dynamic library leaves the program broken anyway, or, much more likely, makes it broken even though it originally wasn't.
I've heard that projects (like the Kandria game) build their releases against an old glibc (but the latest SBCL and libraries), so the result can be shipped to any more recent system.
> Like you develop on a debian 12 system, then move the binary to your debian 11 server only to be confronted with the notorious 'incompatible glibc version error'. It's super annoying.
I get that with Go too. I now make all my release builds on the oldest version of Linux I have.
This is not a Linux problem. The Linux system call binary interface is documented as stable. It's user space libraries like glibc that break backwards compatibility. There's nothing Linux can do about that.
> glibc has very good backward compatibility, with careful symbol versioning and whatnot.
Not in my experience. Working through the exercises in Practical Binary Analysis, the first problem I needed to solve was running programs that had been compiled against an older glibc on an older Ubuntu on a newer Ubuntu version with a newer glibc, which boiled down to hunting down the glibc .so from the older Ubuntu. Without it, the programs crashed on startup. That problem wasn't even planned by the book's author, but it sure was a good thematic fit. :)
Go makes syscalls directly, it doesn't call through the system libc. This does cause support problems on at least NetBSD, don't know what happens on FreeBSD or OpenBSD.
The NetBSD backwards compatibility mechanism is to not change the arguments to an existing syscall, if there is a need to change something then a new syscall will be created and aliased to the old name in the system include files. Old binaries use the old syscall, those compiled after the new one has been added will use that. Languages that don't parse the system .h files like Go need conditional code for NetBSD.
Kind of a deal breaker sure, but I don't think in that many cases. Is it really that hard to launch an older VM and make a build there? Or just decide you don't need to support anything older than your dev machine? Or just always run with source code? For my little scripts (which admittedly are much less organized than those in the article) they're mostly just on my own dev machine. Occasionally I build binaries and ship them over to another server without any tooling on it, but it's never very far out of date. Such scripts are also fairly long-lived; it's nice that some binary I made in 2021 (to replace a perl script that had been stable for ~10 years) continues to do its job.
It is planned to allow static linking against a libc, not necessarily glibc (I've been thinking about musl). Dynamic linking is still the default, but different strokes for different folks, I guess.
Sure there is, as always: load all the code into the running SBCL, then call (SB-EXT:SAVE-LISP-AND-DIE :EXECUTABLE T :TOPLEVEL your-entry-point). The downside, as always, is that this kills the Lisp process, and the resulting binary weighs tens of megabytes even for a simple hello world. I guess one of these disqualifies it as not hassle-free.
You build them. Or if the latest compilers don't build against the older glibc (which is rare in my experience, but may be possible), then you just don't support the older glibc any more.
But I've never had trouble building against older glibcs.
Even the shops that shipped static binaries still ended up shipping containers to run them in production at the end of the day.
I find a lot of the static-binary hand-wringing boils down to: did you ever solve how to ship more than one file to production? Did you ever need to virtualize any resource on your compute?
Then you're at a minimum using a chroot and containers.
If you aren't shipping to production, if you're really just copying files around between your laptops and you want them to run - you have glibc.
There's a big gap between what you're talking about: ship a dockerfile or snap or whatever to make your big ol' service distribution-portable vs what this article was about: writing CLI programs.
The moment we end up in a world where a "small CLI program" requires docker.. I give up. Time to retire from software and go raise sheep or something instead.
No, it sounds like you just have a mostly "I write code that goes on servers" type job, and haven't been burned by the issue of older binaries that won't work with new libcs.
This is a real problem with Linux (not macOS or the BSDs, from what I understand) because it doesn't have a stable userspace ABI (the de facto ABI is basically glibc). It's a design choice, and a pain in the ass for people who want to distribute things as binaries. Which, well, isn't something I do, but it has burned me when using other people's things.
Could docker/containerization solve this problem? Yes, in the same way that a shotgun would serve for eliminating mosquitoes...
The article was about writing small CLI programs and compiling them yourself. I guess I assumed that in this discussion you'd trivially be able to rebuild - you wrote the program.
This is one of the reasons that developing on your production OS is something of a best practice. Not specific to SBCL at all, but this general problem of dependencies not quite matching can bite you in a lot of ways.
People forgot this lesson when macbooks got popular for development, then had to relearn it by upending the ecosystem into containers.
It's also a reason for distributing software under a Free license. If people who use your software want to try running against a different libc or different versions of other dependencies, why not let them? You can always say you only support binaries that you distribute.
This reminds me of something I thought about taking back up:
A while back I wrote a number of Emacs scripts so that I could listen to Discord messages, as they were streaming in basically, and then 'do Lisp' on them. For example, take the data that was streaming in and format it into a web page with a few links I could then investigate. I may get back into that.
But probably what you want to do is process the data in the text. So here's my Emacs function for this (not the code for the data processing itself, which was done separately in Python - but you can see how to combine the two).
(defun refresh (arg)
  "Rerun local file creation & open in Firefox."
  (interactive "sArg: ")  ; prompt for ARG when called interactively
  (shell-command
   (concat "ruby project_refresh_emacs.rb " (shell-quote-argument arg)))
  (message "refreshed file for you"))
Babashka has very fast startup, has solid filesystem bindings via babashka.fs, http client and server, awyeah for AWS, Jackson or pure Clojure JSON, access to GraalVM friendly Clojure and Java libraries. Well maintained and mature. Go Borkdude!
amusingly i was just wrangling with how to do this yesterday, while writing a CLI common lisp file-watching testrunner.
i ended up heading toward something more engineered to run a simple repl function call via SWANK, because i'm increasingly settled on having a single, long-running SBCL image rather than starting up individually compiled or interpreted lisp scripts. Feels more lispy somehow - and definitely better for TDD.
If you find this article interesting and are curious about Nim then you would probably also be curious about https://github.com/c-blake/cligen
That lets you add just one line to a module to get a pretty complete CLI, plus a string per parameter to properly document the options (assuming an existing API using keyword arguments).
It's also not hard to compile & link a static ELF binary with Nim - I do it with musl libc on Linux all the time. I just toss this into my ~/.config/nim/nim.cfg:
@if musl:                    # `nim c -d:musl foo` statically links `foo` with musl
  cc = gcc                   # add --opt:size to trade speed for size
  gcc.exe = "musl-gcc"       # NOTE: this also works as a
  gcc.linkerexe = "musl-gcc" # ..per-module foo.nim.cfg
  passL = "-static -s"
@end
Lisp is often cited as one of Nim's inspirations.
EDIT: https://nim-lang.org/ has much more detail, and there are over 50 small CLI programs at https://github.com/c-blake/bu as rather extended examples. Using that musl approach I get tiny little 100 kB ELF executables that will work on practically any Linux you can `scp` them to, with none of the boilerplate, language limits, hassles, or speed limits of, say, Go, and with run-time start-up on the order of 100 microseconds. There are all kinds of great libs in Nim too, like @V1ndaar's https://github.com/SciNim/Measuremancer, or all sorts of goodies over at https://nimble.directory/
I've used cligen before, and it's very elegant. But if you are used to more declarative ways of writing your cli, it can be a bit of a learning curve. Best advice I can give is look at what you are actually trying to do, not "how do I make it work like <other tool>"
The author has a nice idea there with generating man pages from the help data entered into the option parser. But he doesn't take it to its natural conclusion by then working with Makefiles that generate actual man page files.
If your program can generate its own man page from the option data, you don't need an actual man page file. man takes input from a pipe, e.g.
# useful use of cat: shows man is not accessing a file
cat foo.1 | man -l -
This could be built into your utility:
$ util --man
could spin up the man pager, and pipe the nroff code into it to be rendered.
If you still need man util, that could be a proper, verbose man page.
As someone else noted, SBCL's `SB-EXT:SAVE-LISP-AND-DIE` ends up producing hilariously large executables. The solution I've settled on for bags of small, sharp tools is to simply have an omnibus program that includes a variety of functionality as subcommands, and that can dispatch to the appropriate function based on the name it was run as (i.e., the value of argv[0]). One of those commands will pepper a directory of your choice with soft links to the omnibus program, each named for a subcommand.
Recently, I read about someone who basically had a single running image and it would dispatch to the right function based on the command names sent to it, and the image was running swank, which sounded like a neat idea. I saw it in a comment, either here or on reddit ...
Before TypeScript and Node newer than about v16, I'd reach for Python or more organized Bash for these things. Now it's TypeScript from the client down to my system scripts.
The ability to use the same language all the way up and down my stack, to import code from my program into my script, and to get type-checking means that more often than not I've got a hashbang with npx ts-node.
Especially useful for testing server code while skipping the HTTP nonsense.
I hate that I’m largely living in a JS world, but I’m fast and effective with it, and the standard library, ecosystem, and tooling has matured enough that I can deal with the quirks.